diff --git a/spaces/0xSynapse/Segmagine/README.md b/spaces/0xSynapse/Segmagine/README.md deleted file mode 100644 index 654fcd2e6b7ccf0f9b7ac221bf0b66bdeb0e766b..0000000000000000000000000000000000000000 --- a/spaces/0xSynapse/Segmagine/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Segmagine -emoji: 🚀 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: lgpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Comment utiliser Markzware PDF2DTP-torrent.rar pour importer des PDF dans InDesign.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Comment utiliser Markzware PDF2DTP-torrent.rar pour importer des PDF dans InDesign.md deleted file mode 100644 index a2b9d9a547538423b18d81182646787aaf453fab..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Comment utiliser Markzware PDF2DTP-torrent.rar pour importer des PDF dans InDesign.md +++ /dev/null @@ -1,134 +0,0 @@ -
-

Markzware PDF2DTP-torrent.rar: What Is It and How to Use It?

-

If you are looking for a way to convert your PDF files to InDesign files, you might have come across a file named Markzware PDF2DTP-torrent.rar. But what is this file and how can you use it? In this article, we will explain what Markzware PDF2DTP is, what a torrent file is, how to download and use Markzware PDF2DTP-torrent.rar, how much it costs, and where to get it.

-

What is Markzware PDF2DTP?

-

Markzware PDF2DTP is a plugin for Adobe InDesign that allows you to convert any PDF file to an editable InDesign file with a single click. It is developed by Markzware, a leading provider of software solutions for the printing, publishing, and graphic design industries.

-

Markzware PDF2DTP-torrent.rar


Download ❤❤❤ https://byltly.com/2uKvs0



-

What is a PDF file and why do you need to convert it to InDesign?

-

A PDF file (Portable Document Format) is a file format that preserves the layout, formatting, and quality of a document across different platforms and devices. It is widely used for viewing and printing documents, but not for editing them.

-

If you work with Adobe InDesign, you might need to convert a PDF file to InDesign for various reasons, such as:

- -

What are the benefits and features of Markzware PDF2DTP?

-

Some of the benefits and features of Markzware PDF2DTP are:

- -

How does Markzware PDF2DTP work?

-

Markzware PDF2DTP works by analyzing the structure and content of the PDF file and converting it into an equivalent InDesign file. It uses advanced algorithms and techniques to recreate or transfer all the elements of the PDF file into an editable format within InDesign.

-

                          Markzware PDF2DTP can handle virtually any type of PDF file, regardless of the application that originally created it.
                          

-

What is a torrent file and why do you need to download it?

-

What is a torrent file and how does it work?

-

A torrent file (or .torrent) is a small file that contains information about a larger file that can be downloaded from other users on a peer-to-peer network. A peer-to-peer network is a system where users share files directly with each other without relying on a central server.

-

A torrent file works by using a software program called a torrent client (such as BitTorrent or uTorrent) that connects you with other users who have the same or parts of the same file that you want. The torrent client then downloads small pieces of the file from different sources until it completes the whole file. This way, you can download large files faster and more efficiently than from a single source.
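                          A detail worth knowing is how a client can safely accept pieces from strangers: the .torrent metadata also stores a hash for every piece, and each downloaded piece is checked against it before it is kept. The sketch below only illustrates that piece-checking idea in Python; the file name, piece size, and hash list are made-up placeholders, not how a real torrent client is implemented.

                          ```python
                          import hashlib

                          PIECE_SIZE = 256 * 1024  # the real piece length comes from the .torrent metadata; 256 KiB is just an example


                          def verify_pieces(path, expected_piece_hashes):
                              """Check each fixed-size piece of a downloaded file against its expected SHA-1 hash."""
                              with open(path, "rb") as f:
                                  for index, expected in enumerate(expected_piece_hashes):
                                      piece = f.read(PIECE_SIZE)
                                      if hashlib.sha1(piece).hexdigest() != expected:
                                          print(f"Piece {index} is corrupt and would be requested again from another peer.")
                                          return False
                              return True


                          # Hypothetical usage: the hash list would come from the .torrent file, not be typed by hand.
                          # verify_pieces("large-download.rar", ["3f786850e387...", "89e6c98d9288..."])
                          ```
                          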

-

What are the advantages and disadvantages of using torrent files?

-

Some of the advantages of using torrent files are:

-

Markzware PDF2DTP converter torrent download
-How to use Markzware PDF2DTP to edit PDF files
-Markzware PDF2DTP free trial rar file
-Markzware PDF2DTP crack serial keygen
-Markzware PDF2DTP for InDesign CC torrent
-Markzware PDF2DTP review and tutorial
-Markzware PDF2DTP alternative software
-Markzware PDF2DTP license activation code
-Markzware PDF2DTP system requirements and compatibility
-Markzware PDF2DTP discount coupon code
-Markzware PDF2DTP vs PDF2ID comparison
-Markzware PDF2DTP for QuarkXPress torrent
-Markzware PDF2DTP installation and troubleshooting guide
-Markzware PDF2DTP features and benefits
-Markzware PDF2DTP customer support and feedback
-Markzware PDF2DTP for Mac OS X torrent
-Markzware PDF2DTP for Windows torrent
-Markzware PDF2DTP online demo and webinar
-Markzware PDF2DTP testimonials and case studies
-Markzware PDF2DTP update and upgrade information
-Markzware PDF2DTP best practices and tips
-Markzware PDF2DTP pros and cons analysis
-Markzware PDF2DTP FAQ and help page
-Markzware PDF2DTP video tutorial and walkthrough
-Markzware PDF2DTP blog and news articles
-Markzware PDF2DTP forum and community discussion
-Markzware PDF2DTP affiliate program and commission rate
-Markzware PDF2DTP refund policy and guarantee
-Markzware PDF2DTP for Adobe Illustrator torrent
-Markzware PDF2DTP for Microsoft Word torrent
-Markzware PDF2DTP for Photoshop torrent
-Markzware PDF2DTP for CorelDraw torrent
-Markzware PDF2DTP for Publisher torrent
-Markzware PDF2DTP for PowerPoint torrent
-Markzware PDF2DTP for Excel torrent
-Markzware PDF2DTP for HTML torrent
-Markzware PDF2DTP for ePub torrent
-Markzware PDF2DTP for Kindle torrent
-Markzware PDF2DTP for XML torrent
-Markzware PDF2DTP for RTF torrent
-Markzware PDF2DTP for CSV torrent
-Markzware PDF2DTP for TXT torrent
-Markzware PDF2DTP for JPEG torrent
-Markzware PDF2DTP for PNG torrent
-Markzware PDF2DTP for GIF torrent
-Markzware PDF2DTP for BMP torrent
-Markzware PDF2DTP for TIFF torrent
-Markzware PDF2DTP for PSD torrent
-Markzware PDF2DTP for AI torrent

- -

Some of the disadvantages of using torrent files are:

- -

How to download a torrent file safely and legally?

-

To download a torrent file safely and legally,

                          - You should use a reputable torrent client that has security features such as encryption.
                          - You should use a reliable VPN service that can hide your IP address.
                          - You should scan your downloaded files with an antivirus program before opening them; a small checksum-verification sketch follows below.
                          - You should only download legal content that does not infringe on any copyrights or laws.
                          
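                          As promised above, here is a minimal Python sketch of the checksum idea: before opening a download, compare its SHA-256 digest with a value published by the original vendor. The file name and the published hash below are placeholders, not real values from Markzware.

                          ```python
                          import hashlib


                          def sha256_of(path):
                              """Hash the file in chunks so even a large download does not have to fit in memory."""
                              digest = hashlib.sha256()
                              with open(path, "rb") as f:
                                  for chunk in iter(lambda: f.read(1 << 20), b""):
                                      digest.update(chunk)
                              return digest.hexdigest()


                          published_hash = "replace-with-the-hash-published-by-the-vendor"  # placeholder
                          if sha256_of("Markzware-PDF2DTP.rar") == published_hash:
                              print("Checksum matches; the archive was not altered in transit.")
                          else:
                              print("Checksum mismatch; do not open the archive.")
                          ```
                          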

How to use Markzware PDF2DTP-torrent.rar?

-

How to install and activate Markzware PDF2DTP?

-

To install and activate Markzware PDF2DTP,

                          - You should first extract the .rar file using a software program such as WinRAR or 7-Zip; a scripted example follows below.
                          - You should then run the installer for Markzware PDF2DTP.
                          - You should then follow the instructions on the screen.
                          - You should then enter your license key.
                          - You should then restart your computer.
                          
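                          If you would rather script the extraction step than click through the WinRAR or 7-Zip interface, the command-line 7-Zip tool can unpack the archive. This is only a sketch: it assumes the 7z executable is installed and on your PATH, and the archive and folder names are placeholders.

                          ```python
                          import subprocess

                          # "x" extracts with the folder structure kept; "-o" sets the output directory (no space after -o).
                          subprocess.run(
                              ["7z", "x", "Markzware-PDF2DTP.rar", "-oPDF2DTP_installer"],
                              check=True,  # raise an error if extraction fails instead of continuing silently
                          )
                          print("Archive extracted; run the installer found in the PDF2DTP_installer folder.")
                          ```
                          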

How to choose and convert a PDF file to InDesign using Markzware PDF2DTP?

-

To choose and convert a PDF file to InDesign using Markzware PDF2DTP,

                          - You should first launch Adobe InDesign.
                          - You should then choose the “Convert PDF…” menu item from the “Markzware” menu in Adobe InDesign.
                          - You should then navigate to and choose the PDF document that you would like to open in Adobe InDesign.
                          - You should then click the “Open” button.
                          

How to edit and save the converted InDesign file using Markzware PDF2DTP?

                          To edit and save the converted InDesign file using Markzware PDF2DTP,
                          

                          - You can edit the content of the InDesign file as you would normally do with any InDesign document.
                          - You can access and modify the text, images, fonts, colors, styles, and other elements of the PDF file in InDesign.
                          - You can create new InDesign documents from existing PDF files or merge multiple PDF files into one InDesign file.
                          - You can save the InDesign file in any format that InDesign supports.
                          

How much does Markzware PDF2DTP cost and where can you get it?

-

How much does Markzware PDF2DTP cost and what are the subscription plans?

-

The price of Markzware PDF2DTP depends on which subscription plan you choose. There are two subscription plans available:

                          - Annual Subscription Plan: This plan costs $199 per year. It gives you access to all updates and upgrades for one year.
                          - Perpetual Subscription Plan: This plan costs $399. It gives you access to all updates and upgrades for life; a quick break-even comparison follows below.
                          
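                          As noted above, a quick way to compare the two plans is the break-even point: at $199 per year, the $399 perpetual plan costs less once you use the plugin for more than about two years. The arithmetic, spelled out:

                          ```python
                          annual_cost = 199      # dollars per year
                          perpetual_cost = 399   # one-time payment

                          break_even_years = perpetual_cost / annual_cost
                          print(f"The perpetual plan pays for itself after about {break_even_years:.1f} years.")  # ~2.0
                          ```
                          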

Where can you get Markzware PDF2DTP and how to contact the support team?

-

You can get Markzware PDF2DTP from Markzware's website. Here is the download link:

- https://markzware.com/products/pdf2dtp/ -

If you have any questions or issues regarding Markzware PDF2DTP,

                          - You can contact Markzware's customer support team by filling out an online form, sending an email to sales@markzware.com or support@markzware.com, or calling a phone number (+1 949 929 1710 for sales or +1 949 756 5100 for support).
                          - You can also check out Markzware's product documentation, online store support, video tutorials, industry news, product articles and news links, press releases, mailing list, media kit, partners, resellers, affiliate program, etc.
                          

Conclusion

-

In conclusion,

                          - Markzware PDF2DTP-torrent.rar is a file that contains a plugin for Adobe InDesign that can convert any PDF file to an editable InDesign file with a single click.
                          - A torrent file is a file that contains information about a larger file that can be downloaded from other users on a peer-to-peer network.
                          - To use Markzware PDF2DTP-torrent.rar, you need to download and install the plugin, choose and convert a PDF file to InDesign, then edit and save the converted InDesign file as needed.
                          - Markzware PDF2DTP costs $199 per year or $399 for life, depending on the subscription plan you choose. You can get it from Markzware's website or contact their support team for any questions or issues.
                          

FAQs

-

Here are some frequently asked questions about Markzware PDF2DTP-torrent.rar:

-
                          1. Q: Is Markzware PDF2DTP-torrent.rar safe and legal to download?
                             A: Yes, Markzware PDF2DTP-torrent.rar is safe and legal to download if you use a reputable torrent client, a reliable VPN service, an antivirus program, and only download legal content.
                          2. Q: Does Markzware PDF2DTP work with Windows or other versions of InDesign?
                             A: No, Markzware PDF2DTP only works with macOS and InDesign CC 2020 through InDesign CS6. If you need to convert PDF files to other versions of InDesign or other formats, you can check out other products from Markzware such as OmniMarkz, PDFMarkz, QXPMarkz, or IDMarkz.
                          3. Q: Does Markzware PDF2DTP preserve all the elements of the PDF file in InDesign?
                             A: Yes, Markzware PDF2DTP preserves all the elements of the PDF file in InDesign as much as possible. However, some elements may not be converted exactly due to differences between the formats or limitations of the software. For example, some fonts may not be available or some images may lose quality. You can always edit and adjust the converted InDesign file as needed.
                          4. Q: How long does it take to convert a PDF file to InDesign using Markzware PDF2DTP?
                             A: The conversion time depends on several factors such as the size and complexity of the PDF file, the speed and performance of your computer, and the settings and options you choose for the conversion. Generally, it takes only a few minutes to convert a typical PDF file to InDesign using Markzware PDF2DTP.
                          5. Q: Can I try Markzware PDF2DTP for free before buying it?
                             A: Yes, you can try Markzware PDF2DTP for free for 15 days by downloading the free trial version from Markzware's website. You can also get a full refund within 30 days of purchase if you are not satisfied with the product.
                          
                          -
                          

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dcouvrez Gta Mumbai City Pc Game 18 le nouveau titre de la saga Grand Theft Auto.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dcouvrez Gta Mumbai City Pc Game 18 le nouveau titre de la saga Grand Theft Auto.md deleted file mode 100644 index 954d072215f105dfb8308494d5c2bf9dc3cd0bb8..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dcouvrez Gta Mumbai City Pc Game 18 le nouveau titre de la saga Grand Theft Auto.md +++ /dev/null @@ -1,109 +0,0 @@ -
-

GTA Mumbai City Pc Game 18: A Review

-

If you are a fan of Grand Theft Auto (GTA) series, you might have heard of GTA Mumbai City Pc Game 18, a popular game that is set in the city of Mumbai, India. This game is not an official release by Rockstar Games, but a mod created by fans who wanted to experience the thrill of playing GTA in a different setting. In this article, we will review GTA Mumbai City Pc Game 18 and see what it has to offer.

-

Gta Mumbai City Pc Game 18


Download Zip ->->->-> https://byltly.com/2uKwHP



-

Gameplay

-

GTA Mumbai City Pc Game 18 follows the same gameplay mechanics as other GTA games. You can explore the open world of Mumbai, drive various vehicles, complete missions, fight enemies, and interact with other characters. The game also features some unique elements that reflect the culture and lifestyle of Mumbai, such as Bollywood music, local food, rickshaws, slums, and landmarks. You can also customize your character's appearance, clothes, weapons, and skills.

-

Graphics

-

GTA Mumbai City Pc Game 18 is based on GTA Vice City, which was released in 2002. Therefore, the graphics are not very impressive by today's standards. However, the game does a good job of recreating the atmosphere and scenery of Mumbai, with realistic textures, colors, and lighting. The game also runs smoothly on most PCs, as long as you have the minimum system requirements. You can also adjust the graphics settings to suit your preferences.

-

Sound

-

GTA Mumbai City Pc Game 18 has a great soundtrack that features songs from Bollywood movies and Indian pop artists. The songs match the mood and theme of the game, and add to the immersion. The game also has voice acting for some of the main characters, but not all of them. The voice actors have Indian accents and use some Hindi words, which adds to the authenticity. The sound effects are also decent, but not very realistic.

-

Story

-

GTA Mumbai City Pc Game 18 has a story that revolves around a young man named Raju, who comes to Mumbai from a small village to pursue his dreams. He gets involved in the criminal underworld of Mumbai, and works for various gangs and bosses. He also meets some friends and enemies along the way, who help or hinder his progress. The story is not very original or engaging, but it provides some motivation and context for the gameplay.

-

Pros and Cons

-

GTA Mumbai City Pc Game 18 has some pros and cons that you should consider before playing it. Here are some of them:

                          | Pros | Cons |
                          | --- | --- |
                          | A different and interesting setting for GTA fans. | Not an official game by Rockstar Games. |
                          | A lot of content and variety in gameplay. | Outdated graphics and sound quality. |
                          | A fun and catchy soundtrack. | A weak and clichéd story. |
                          | A free download for PC users. | A potential risk of viruses or malware. |
                          | A creative and impressive mod by fans. | A possible violation of intellectual property rights. |
                          
-

Conclusion

-

GTA Mumbai City Pc Game 18 is a game that offers a new and exciting experience for GTA fans who want to explore a different city and culture. The game has a lot of content and features that make it enjoyable and entertaining. However, the game also has some drawbacks that might disappoint some players who expect high-quality graphics, sound, and story. The game is also not an official product by Rockstar Games, but a mod created by fans who might have infringed on some copyrights. Therefore, you should play this game at your own risk and discretion.

-

Gta Mumbai City Pc Game 18 download
-Gta Mumbai City Pc Game 18 free
-Gta Mumbai City Pc Game 18 full version
-Gta Mumbai City Pc Game 18 gameplay
-Gta Mumbai City Pc Game 18 cheats
-Gta Mumbai City Pc Game 18 mods
-Gta Mumbai City Pc Game 18 system requirements
-Gta Mumbai City Pc Game 18 review
-Gta Mumbai City Pc Game 18 trailer
-Gta Mumbai City Pc Game 18 release date
-Gta Mumbai City Pc Game 18 online
-Gta Mumbai City Pc Game 18 multiplayer
-Gta Mumbai City Pc Game 18 crack
-Gta Mumbai City Pc Game 18 patch
-Gta Mumbai City Pc Game 18 torrent
-Gta Mumbai City Pc Game 18 iso
-Gta Mumbai City Pc Game 18 highly compressed
-Gta Mumbai City Pc Game 18 rar
-Gta Mumbai City Pc Game 18 zip
-Gta Mumbai City Pc Game 18 setup
-Gta Mumbai City Pc Game 18 exe
-Gta Mumbai City Pc Game 18 cd key
-Gta Mumbai City Pc Game 18 serial number
-Gta Mumbai City Pc Game 18 activation code
-Gta Mumbai City Pc Game 18 license key
-Gta Mumbai City Pc Game 18 steam key
-Gta Mumbai City Pc Game 18 epic games key
-Gta Mumbai City Pc Game 18 rockstar games key
-Gta Mumbai City Pc Game 18 origin key
-Gta Mumbai City Pc Game 18 ubisoft key
-Gta Mumbai City Pc Game 18 buy
-Gta Mumbai City Pc Game 18 price
-Gta Mumbai City Pc Game 18 amazon
-Gta Mumbai City Pc Game 18 flipkart
-Gta Mumbai City Pc Game 18 snapdeal
-Gta Mumbai City Pc Game 18 ebay
-Gta Mumbai City Pc Game 18 walmart
-Gta Mumbai City Pc Game 18 best buy
-Gta Mumbai City Pc Game 18 target
-Gta Mumbai City Pc Game 18 gamestop
-Gta Mumbai City Pc Game 18 steam store
-Gta Mumbai City Pc Game 18 epic games store
-Gta Mumbai City Pc Game 18 rockstar games store
-Gta Mumbai City Pc Game 18 origin store
-Gta Mumbai City Pc Game 18 ubisoft store
-Gta Mumbai City Pc Game 18 official website
-Gta Mumbai City Pc Game 18 wiki
-Gta Mumbai City Pc Game 18 reddit
-Gta Mumbai City Pc Game 18 youtube
-Gta Mumbai City Pc Game 18 facebook

-

FAQs

-

Here are some frequently asked questions about GTA Mumbai City Pc Game 18:

-

Q1: Is GTA Mumbai City Pc Game 18 an official game or a mod?

-

A1: GTA Mumbai City Pc Game 18 is not an official game by Rockstar Games, but a mod created by fans who used GTA Vice City as a base.

-

Q2: Where can I download GTA Mumbai City Pc Game 18 for free?

-

A2: You can download GTA Mumbai City Pc Game 18 for free from various websites that host mods for GTA games. However, you should be careful about downloading files from unknown sources, as they might contain viruses or malware that can harm your PC.

-

Q3: How can I install GTA Mumbai City Pc Game 18 on my PC?

-

A3: To install GTA Mumbai City Pc Game 18 on your PC, you need to have GTA Vice City installed first. Then, you need to extract the files from the downloaded zip file into your GTA Vice City folder. After that, you can run the game from your desktop shortcut or from your start menu.
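                          If you want to script that step, Python's standard library can unpack the archive straight into the game folder. This is only an illustrative sketch: the archive name and the Vice City install path are placeholders you would adjust, and backing up the original game folder first is a good idea.

                          ```python
                          import zipfile

                          mod_archive = "gta_mumbai_city_mod.zip"  # placeholder name for the downloaded mod archive
                          vice_city_dir = r"C:\Program Files (x86)\Rockstar Games\GTA Vice City"  # adjust to your install path

                          with zipfile.ZipFile(mod_archive) as archive:
                              archive.extractall(vice_city_dir)  # copies the mod files over the Vice City installation

                          print("Mod files extracted; start the game from its usual shortcut.")
                          ```
                          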

-

Q4: What are the minimum and recommended system requirements for GTA Mumbai City Pc Game 18?

-

A4: The minimum system requirements for GTA Mumbai City Pc Game 18 are:

- -

The recommended system requirements for GTA Mumbai City Pc Game 18 are:

- -

Q5: Is GTA Mumbai City Pc Game 18 suitable for children?

-

A5: No, GTA Mumbai City Pc Game 18 is not suitable for children under 18 years old. The game contains violence, blood, gore, profanity, drugs, alcohol, sex, nudity, gambling, crime, and other mature themes that are inappropriate for minors.

-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/3DMGAME-OMSI.2.Cracked-3DM.md b/spaces/1gistliPinn/ChatGPT4/Examples/3DMGAME-OMSI.2.Cracked-3DM.md deleted file mode 100644 index ad78250899999ec5a54c1d20517b1a906a16bbab..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/3DMGAME-OMSI.2.Cracked-3DM.md +++ /dev/null @@ -1,6 +0,0 @@ -

3DMGAME-OMSI.2.Cracked-3DM


Download Zip 🗸 https://imgfil.com/2uxWU6



- -RAGE 2 v1.0-20210219 [Trainer +20] FLiNG [Feb,27 2021] Kingdoms Reborn v0.7-v0.14 ... of my Heart [cheats] [Dec,27 2020] OMSI 2 [cheats] [Dec,27 2020] 1fdad05405
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 8 Ball Pool with All Cues Unlocked - No Hack No Root.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 8 Ball Pool with All Cues Unlocked - No Hack No Root.md deleted file mode 100644 index 57b8dbd0b7c2732077260b32d59b5b7783472f7e..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 8 Ball Pool with All Cues Unlocked - No Hack No Root.md +++ /dev/null @@ -1,116 +0,0 @@ - -

Download 8 Ball Pool Unlock All Cues: How to Get the Best Cues in the Game

-

If you are a fan of pool games, you have probably heard of or played 8 Ball Pool, one of the most popular and addictive online multiplayer games. In this game, you can challenge your friends or other players from around the world in different game rooms and tournaments. You can also customize your profile, your table, and most importantly, your cue.

-

download 8 ball pool unlock all cues


Download ✦✦✦ https://urlin.us/2uSVRg



-

But what is a cue and why do you need it? How can you get the best cues in the game and improve your performance? And how can you download 8 ball pool unlock all cues and enjoy unlimited access to all the cues available in the game? In this article, we will answer these questions and more. So, keep reading and learn how to master the game of 8 ball pool with the best cues.

-

What are Cues in 8 Ball Pool?

-

Cues are the tools that you use to hit the balls on the table. They are not just for decoration, they actually have a significant impact on your gameplay. Each cue has four stats that determine its quality and performance: force, aim, spin, and time.

- -

As you can see, cues are very important for your gameplay. They can make the difference between winning and losing a match. That's why you should always choose a cue that suits your style and skill level.

-

What are the different types of cues and how to get them?

-

There are many types of cues in 8 ball pool, each with its own design, stats, and price. You can get cues in different ways:

- -

As you can see, there are many cues to choose from in 8 ball pool. But which ones are the best and why?

-

How to get legendary cues in 8 ball pool
-8 ball pool all table exclusive cues
-8 ball pool golden shots unlock cues
-8 ball pool legendary cues hack
-8 ball pool allclash legendary cues guide
-8 ball pool gaming with k cues
-8 ball pool new cues 2023
-8 ball pool best cues for beginners
-8 ball pool free legendary boxes
-8 ball pool cue stats and upgrades
-8 ball pool cue collection rewards
-8 ball pool cue pieces exchange
-8 ball pool cue recharge trick
-8 ball pool cue of the week
-8 ball pool cue shop offers
-8 ball pool cue spin and force
-8 ball pool cue time and aim
-8 ball pool cue level max
-8 ball pool cue power bar
-8 ball pool cue customization
-8 ball pool cue codes and cheats
-8 ball pool cue reviews and ratings
-8 ball pool cue comparison and ranking
-8 ball pool cue tips and tricks
-8 ball pool cue challenges and achievements

-

What are the best cues in 8 ball pool and why?

-

The answer to this question depends on your personal preference and budget. However, some cues are generally considered to be the best in the game because of their stats, features, and popularity. Here are some of them:

- -

These are just some examples of the best cues in 8 ball pool. There are many more to discover and try out. But how can you get them without spending a lot of money or time?

-

How to Download 8 Ball Pool Unlock All Cues?

-

If you want to get all the cues in the game without spending a dime or waiting for hours, you might be tempted to download 8 ball pool unlock all cues. This is a modded version of the game that claims to give you unlimited access to all the cues available in the game. Sounds too good to be true, right? Well, it is.

-

What are the benefits of downloading 8 ball pool unlock all cues?

-

The only benefit of downloading 8 ball pool unlock all cues is that you can use any cue you want in the game without paying or earning it. You can enjoy playing with different cues and see how they affect your gameplay. You can also impress your friends or opponents with your collection of cues.

-

What are the risks of downloading 8 ball pool unlock all cues?

-

The risks of downloading 8 ball pool unlock all cues are far greater than the benefits. Here are some of them:

- -

As you can see, downloading 8 ball pool unlock all cues is not worth it. It is risky, illegal, and unethical. So, how can you download 8 ball pool unlock all cues safely and legally?

-

How to download 8 ball pool unlock all cues safely and legally?

-

The answer is simple: you can't. There is no safe and legal way to download 8 ball pool unlock all cues. The only way to get all the cues in the game is to play fair and square, earn coins and cash, buy or win cues, and collect pieces of cues. This is how the game is meant to be played and enjoyed.

-

How to Use 8 Ball Pool Unlock All Cues?

-

If you have downloaded 8 ball pool unlock all cues, you might be wondering how to use it. Well, here are some tips on how to use 8 ball pool unlock all cues:

-

How to select and customize your cue in the game?

-

To select your cue in the game, you need to go to the Pool Shop and tap on Cues. There you will see all the cues that you have unlocked or bought. You can scroll through them and tap on the one that you want to use. You can also customize your cue by changing its color or adding stickers.

-

How to use your cue effectively in different game modes and situations?

-

To use your cue effectively in the game, you need to know how to adjust your aim, power, and spin according to the game mode and situation. Here are some tips on how to do that:

- -

How to improve your skills and strategy with your cue?

-

To improve your skills and strategy with your cue, you need to practice a lot and learn from your mistakes. Here are some tips on how to do that:

- -

Conclusion

-

8 Ball Pool is a fun and challenging game that requires skill, strategy, and luck. One of the most important aspects of the game is the cue, which can make or break your performance. There are many cues to choose from in the game, each with its own stats and features. Some of them are better than others, but none of them are free or easy to get.

-

If you want to get all the cues in the game without spending money or time, you might be tempted to download 8 ball pool unlock all cues. This is a modded version of the game that claims to give you unlimited access to all the cues available in the game. However, this is not a safe or legal way to play the game. It can expose you to viruses, malware, bans, or account loss. It can also ruin your gaming experience or interest in the game by having everything unlocked without any challenge or reward.

-

The best way to play the game is to play fair and square, earn coins and cash, buy or win cues, and collect pieces of cues. This is how the game is meant to be played and enjoyed. You can also improve your skills and strategy with your cue by practicing a lot and learning from your matches and opponents. This way, you can have fun and satisfaction with the game.

-

We hope this article has helped you understand how to download 8 ball pool unlock all cues and how to use them in the game. If you have any questions or comments, feel free to leave them below. And if you liked this article, please share it with your friends or fellow players. Thank you for reading!

-

FAQs

-

Q: How can I get free coins and cash in 8 ball pool?

-

A: There are several ways to get free coins and cash in 8 ball pool. You can:

- -

Q: How can I upgrade my cue in 8 ball pool?

-

A: You can upgrade your cue by using Pool Cash or Cue Pieces. Pool Cash is a premium currency that you can buy with real money or earn by playing the game. Cue Pieces are fragments of cues that you can collect by opening Victory Boxes or Legendary Boxes. To upgrade your cue, go to the Pool Shop, tap on Cues, select your cue, and tap on Upgrade.

-

Q: How can I change my cue in 8 ball pool?

A: You can change your cue by going to the Pool Shop, tapping on Cues, and selecting the cue that you want to use. You can also change your cue before or during a match by tapping on the cue icon on the bottom left corner of the screen.

-

Q: How can I get Legendary Cues in 8 ball pool?

-

A: You can get Legendary Cues by opening Legendary Boxes or by collecting pieces of cues in the Pool Pass. Legendary Boxes are special boxes that contain pieces of Legendary Cues. You can buy them with Pool Cash or win them in some events or promotions. Pool Pass is a seasonal feature that allows you to earn rewards by completing challenges and leveling up. Some of the rewards are pieces of Legendary Cues.

-

Q: How can I contact the support team of 8 ball pool?

-

A: You can contact the support team of 8 ball pool by going to the Settings, tapping on Help and Support, and choosing the option that suits your issue. You can also visit the official website or social media pages of 8 ball pool and send them a message or feedback.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Sigma Battle Royale APK for Android - Enjoy Creative and Stylized Survival Shooter Game.md b/spaces/1phancelerku/anime-remove-background/Download Sigma Battle Royale APK for Android - Enjoy Creative and Stylized Survival Shooter Game.md deleted file mode 100644 index 24f25fef2c99d723929545cc50d1e0fd0410e55d..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Sigma Battle Royale APK for Android - Enjoy Creative and Stylized Survival Shooter Game.md +++ /dev/null @@ -1,96 +0,0 @@ - -

Download Game Sigma APK: A Stylized Survival Shooter Game for Mobile Phones

-

If you are looking for a new and exciting game to play on your mobile phone, you might want to check out Game Sigma APK. This is a stylized survival shooter game that offers two different modes: Classic Battle Royale and 4v4 Fight Out. In this article, we will tell you what Game Sigma APK is, what features it has, how to download and install it, and some tips and tricks for playing it.

-

download game sigma apk


Download ===== https://jinyurl.com/2uNMdw



-

What is Game Sigma APK?

-

Game Sigma APK is a game developed by Studio Arm Private Limited, a company based in India. It is a survival shooter game that combines elements of action, strategy, and creativity. The game is available on Android devices and can be downloaded from various websites, such as APKCombo. The game has been updated recently, with the latest version being 1.0.113 as of January 14, 2023.

-

Features of Game Sigma APK

-

Game Sigma APK has many features that make it stand out from other survival shooter games. Here are some of them:

-

- Stylized graphics

-

The game has a unique and creative art style that immerses you into a stylized survival world. The game uses vibrant colors, cartoon-like characters, and dynamic effects to create a visually appealing experience. The game also runs smoothly on most devices, thanks to its optimized performance.

-

download sigma battle royale apk
-download sigma game android apk
-download sigma apk latest version
-download sigma apk for free
-download sigma apk from apkcombo
-download sigma apk mod
-download sigma apk offline
-download sigma apk obb
-download sigma apk xapk
-download sigma apk full version
-download sigma game for android
-download sigma game free
-download sigma game mod apk
-download sigma game offline
-download sigma game online
-download sigma game update
-download sigma game hack
-download sigma game cheats
-download sigma game tips and tricks
-download sigma game guide
-how to download sigma apk
-how to download sigma game on android
-how to download sigma game for free
-how to download sigma game mod apk
-how to download sigma game offline
-how to download sigma game online
-how to download sigma game update
-how to download sigma game hack
-how to download sigma game cheats
-how to download sigma game tips and tricks
-where to download sigma apk
-where to download sigma game for android
-where to download sigma game free
-where to download sigma game mod apk
-where to download sigma game offline
-where to download sigma game online
-where to download sigma game update
-where to download sigma game hack
-where to download sigma game cheats
-where to download sigma game tips and tricks
-best site to download sigma apk
-best site to download sigma game for android
-best site to download sigma game free
-best site to download sigma game mod apk
-best site to download sigma game offline
-best site to download sigma game online
-best site to download sigma game update
-best site to download sigma game hack
-best site to download sigma game cheats

-

- Unique survival shooter experience

-

The game has easy-to-use controls that promise an unforgettable survival experience on mobile. You can move, aim, shoot, jump, crouch, and interact with the environment using simple gestures and buttons. You can also customize your controls and settings according to your preferences.

-

- Classic Battle Royale mode

-

In this mode, you will compete against 49 other players in a fast-paced and lite gameplay. You can choose your starting point with your parachute, and then explore the vast map to find weapons, items, and vehicles. You have to stay in the safe zone as long as possible, while avoiding or eliminating other players. The last one standing wins the match.

-

- 4v4 Fight Out mode

-

In this mode, you will team up with three other players to fight against another squad in a tense and strategic battle. You have to allocate resources, purchase weapons, and outlast your enemies in various creative maps. You have to fight for your faith and lead your team to victory.

-

How to download and install Game Sigma APK?

-

If you want to play Game Sigma APK on your Android device, you have to download and install it from a third-party source, such as APKCombo. Here are the steps to do so:

-

Steps to download Game Sigma APK from APKCombo

-

- Visit the APKCombo website

-

Go to https://apkcombo.com/sigma/com.studioarm.sigma/ using your browser. This is the official page of Game Sigma APK on APKCombo.

-

- Search for Game Sigma APK

-

Type "Game Sigma APK" in the search bar and hit enter

- Choose the version and device compatibility

-

On the APKCombo page, you will see different versions of Game Sigma APK, along with their file size, update date, and device compatibility. Choose the version that suits your device and click on the download button.

-

- Download the APK file

-

Wait for the download to finish. You will see a notification on your device when the APK file is downloaded. You can also check the download progress in your browser or in your file manager.

-

- Enable unknown sources on your device

-

Before you can install the APK file, you have to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources. Turn on the toggle to enable unknown sources.

-

- Install the APK file

-

Now you can install the APK file on your device. Locate the file in your file manager and tap on it. You will see a prompt asking you to confirm the installation. Tap on install and wait for the process to complete. You will see a notification when the app is installed.
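                          If tapping the file on the phone is awkward, the same install can also be done from a computer over USB with Android's adb tool. The snippet below just wraps that command in Python; it assumes adb is installed, USB debugging is enabled on the device, and the APK file name is a placeholder for whatever you downloaded.

                          ```python
                          import subprocess

                          # "adb install -r" installs the APK on the connected device ("-r" replaces an existing install).
                          subprocess.run(["adb", "install", "-r", "sigma-1.0.113.apk"], check=True)
                          print("APK installed; the game should now appear in the app drawer.")
                          ```
                          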

-

Tips and tricks for playing Game Sigma APK

-

Now that you have downloaded and installed Game Sigma APK, you are ready to play it. Here are some tips and tricks to help you enjoy the game more:

-

- Customize your controls and settings

-

Before you start playing, you should customize your controls and settings according to your preferences. You can access the settings menu from the main screen of the game. Here you can adjust the sensitivity, sound, graphics, language, and other options. You can also customize your controls by dragging and resizing the buttons on the screen.

-

- Choose your landing spot wisely

-

In Classic Battle Royale mode, you have to choose your landing spot with your parachute. You should choose a spot that has good loot, but also has less enemies. You can use the map to see where other players are landing, and avoid crowded areas. You can also use the markers to communicate with your teammates and coordinate your landing.

-

- Loot and equip the best weapons and items

-

Once you land, you have to loot and equip the best weapons and items you can find. You can loot from buildings, crates, vehicles, and dead enemies. You can equip up to two primary weapons, one secondary weapon, and one melee weapon. You can also equip armor, helmets, backpacks, grenades, medkits, and other items. You should always look for better loot as you play.

-

- Use cover and stealth to your advantage

-

The game is not only about shooting, but also about survival. You have to use cover and stealth to your advantage. You can use buildings, trees, rocks, vehicles, and other objects as cover from enemy fire. You can also use crouch and prone positions to reduce your visibility and noise. You should always be aware of your surroundings and avoid exposing yourself too much.

-

- Communicate and cooperate with your teammates

-

The game is more fun and easier when you play with your teammates. You can communicate and cooperate with them using voice chat or text chat. You can also use gestures, markers, pings, and other tools to convey information. You should always stick with your teammates, share loot, revive them when they are downed, and support them in combat.

-

Conclusion

-

Game Sigma APK is a stylized survival shooter game that offers two different modes: Classic Battle Royale and 4v4 Fight Out. It has many features that make it stand out from other survival shooter games, such as stylized graphics, unique survival shooter experience, easy-to-use controls, and optimized performance. You can download and install Game Sigma APK from APKCombo, following the steps we have provided in this article. You can also use our tips and tricks to improve your gameplay and have more fun.

- FAQs - Q: Is Game Sigma APK safe to download? - A: Yes, Game Sigma APK is safe to download from APKCombo, as it is verified by VirusTotal and does not contain any malware or viruses. - Q: Is Game Sigma APK free to play? - A: Yes, Game Sigma APK is free to play, but it may contain ads and in-app purchases. - Q: What are the minimum requirements to play Game Sigma APK? - A: The minimum requirements to play Game Sigma APK are Android 5.0 or higher, 2 GB of RAM, 1 GB of storage space, and a stable internet connection. - Q: How can I update Game Sigma APK? - A: You can update Game Sigma APK by visiting the APKCombo website and downloading the latest version of the game. You can also check for updates from within the game settings. - Q: How can I contact the developers of Game Sigma APK? - A: You can contact the developers of Game Sigma APK by visiting their official website, Facebook page, or Instagram account. You can also send them an email at studioarm@gmail.com.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/MDXNet.py b/spaces/AI-Hobbyist/Hoyo-RVC/MDXNet.py deleted file mode 100644 index 99780afb2266a058a172e13c74e63c92b115e8c2..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/MDXNet.py +++ /dev/null @@ -1,274 +0,0 @@ -import soundfile as sf -import torch, pdb, time, argparse, os, warnings, sys, librosa -import numpy as np -import onnxruntime as ort -from scipy.io.wavfile import write -from tqdm import tqdm -import torch -import torch.nn as nn - -dim_c = 4 - - -class Conv_TDF_net_trim: - def __init__( - self, device, model_name, target_name, L, dim_f, dim_t, n_fft, hop=1024 - ): - super(Conv_TDF_net_trim, self).__init__() - - self.dim_f = dim_f - self.dim_t = 2**dim_t - self.n_fft = n_fft - self.hop = hop - self.n_bins = self.n_fft // 2 + 1 - self.chunk_size = hop * (self.dim_t - 1) - self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to( - device - ) - self.target_name = target_name - self.blender = "blender" in model_name - - out_c = dim_c * 4 if target_name == "*" else dim_c - self.freq_pad = torch.zeros( - [1, out_c, self.n_bins - self.dim_f, self.dim_t] - ).to(device) - - self.n = L // 2 - - def stft(self, x): - x = x.reshape([-1, self.chunk_size]) - x = torch.stft( - x, - n_fft=self.n_fft, - hop_length=self.hop, - window=self.window, - center=True, - return_complex=True, - ) - x = torch.view_as_real(x) - x = x.permute([0, 3, 1, 2]) - x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape( - [-1, dim_c, self.n_bins, self.dim_t] - ) - return x[:, :, : self.dim_f] - - def istft(self, x, freq_pad=None): - freq_pad = ( - self.freq_pad.repeat([x.shape[0], 1, 1, 1]) - if freq_pad is None - else freq_pad - ) - x = torch.cat([x, freq_pad], -2) - c = 4 * 2 if self.target_name == "*" else 2 - x = x.reshape([-1, c, 2, self.n_bins, self.dim_t]).reshape( - [-1, 2, self.n_bins, self.dim_t] - ) - x = x.permute([0, 2, 3, 1]) - x = x.contiguous() - x = torch.view_as_complex(x) - x = torch.istft( - x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True - ) - return x.reshape([-1, c, self.chunk_size]) - - -def get_models(device, dim_f, dim_t, n_fft): - return Conv_TDF_net_trim( - device=device, - model_name="Conv-TDF", - target_name="vocals", - L=11, - dim_f=dim_f, - dim_t=dim_t, - n_fft=n_fft, - ) - - -warnings.filterwarnings("ignore") -cpu = torch.device("cpu") -if torch.cuda.is_available(): - device = torch.device("cuda:0") -elif torch.backends.mps.is_available(): - device = torch.device("mps") -else: - device = torch.device("cpu") - - -class Predictor: - def __init__(self, args): - self.args = args - self.model_ = get_models( - device=cpu, dim_f=args.dim_f, dim_t=args.dim_t, n_fft=args.n_fft - ) - self.model = ort.InferenceSession( - os.path.join(args.onnx, self.model_.target_name + ".onnx"), - providers=["CUDAExecutionProvider", "CPUExecutionProvider"], - ) - print("onnx load done") - - def demix(self, mix): - samples = mix.shape[-1] - margin = self.args.margin - chunk_size = self.args.chunks * 44100 - assert not margin == 0, "margin cannot be zero!" 
- if margin > chunk_size: - margin = chunk_size - - segmented_mix = {} - - if self.args.chunks == 0 or samples < chunk_size: - chunk_size = samples - - counter = -1 - for skip in range(0, samples, chunk_size): - counter += 1 - - s_margin = 0 if counter == 0 else margin - end = min(skip + chunk_size + margin, samples) - - start = skip - s_margin - - segmented_mix[skip] = mix[:, start:end].copy() - if end == samples: - break - - sources = self.demix_base(segmented_mix, margin_size=margin) - """ - mix:(2,big_sample) - segmented_mix:offset->(2,small_sample) - sources:(1,2,big_sample) - """ - return sources - - def demix_base(self, mixes, margin_size): - chunked_sources = [] - progress_bar = tqdm(total=len(mixes)) - progress_bar.set_description("Processing") - for mix in mixes: - cmix = mixes[mix] - sources = [] - n_sample = cmix.shape[1] - model = self.model_ - trim = model.n_fft // 2 - gen_size = model.chunk_size - 2 * trim - pad = gen_size - n_sample % gen_size - mix_p = np.concatenate( - (np.zeros((2, trim)), cmix, np.zeros((2, pad)), np.zeros((2, trim))), 1 - ) - mix_waves = [] - i = 0 - while i < n_sample + pad: - waves = np.array(mix_p[:, i : i + model.chunk_size]) - mix_waves.append(waves) - i += gen_size - mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(cpu) - with torch.no_grad(): - _ort = self.model - spek = model.stft(mix_waves) - if self.args.denoise: - spec_pred = ( - -_ort.run(None, {"input": -spek.cpu().numpy()})[0] * 0.5 - + _ort.run(None, {"input": spek.cpu().numpy()})[0] * 0.5 - ) - tar_waves = model.istft(torch.tensor(spec_pred)) - else: - tar_waves = model.istft( - torch.tensor(_ort.run(None, {"input": spek.cpu().numpy()})[0]) - ) - tar_signal = ( - tar_waves[:, :, trim:-trim] - .transpose(0, 1) - .reshape(2, -1) - .numpy()[:, :-pad] - ) - - start = 0 if mix == 0 else margin_size - end = None if mix == list(mixes.keys())[::-1][0] else -margin_size - if margin_size == 0: - end = None - sources.append(tar_signal[:, start:end]) - - progress_bar.update(1) - - chunked_sources.append(sources) - _sources = np.concatenate(chunked_sources, axis=-1) - # del self.model - progress_bar.close() - return _sources - - def prediction(self, m, vocal_root, others_root, format): - os.makedirs(vocal_root, exist_ok=True) - os.makedirs(others_root, exist_ok=True) - basename = os.path.basename(m) - mix, rate = librosa.load(m, mono=False, sr=44100) - if mix.ndim == 1: - mix = np.asfortranarray([mix, mix]) - mix = mix.T - sources = self.demix(mix.T) - opt = sources[0].T - if format in ["wav", "flac"]: - sf.write( - "%s/%s_main_vocal.%s" % (vocal_root, basename, format), mix - opt, rate - ) - sf.write("%s/%s_others.%s" % (others_root, basename, format), opt, rate) - else: - path_vocal = "%s/%s_main_vocal.wav" % (vocal_root, basename) - path_other = "%s/%s_others.wav" % (others_root, basename) - sf.write(path_vocal, mix - opt, rate) - sf.write(path_other, opt, rate) - if os.path.exists(path_vocal): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path_vocal, path_vocal[:-4] + ".%s" % format) - ) - if os.path.exists(path_other): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path_other, path_other[:-4] + ".%s" % format) - ) - - -class MDXNetDereverb: - def __init__(self, chunks): - self.onnx = "uvr5_weights/onnx_dereverb_By_FoxJoy" - self.shifts = 10 #'Predict with randomised equivariant stabilisation' - self.mixing = "min_mag" # ['default','min_mag','max_mag'] - self.chunks = chunks - self.margin = 44100 - self.dim_t = 9 - self.dim_f = 3072 - self.n_fft = 6144 - self.denoise = True 
- self.pred = Predictor(self) - - def _path_audio_(self, input, vocal_root, others_root, format): - self.pred.prediction(input, vocal_root, others_root, format) - - -if __name__ == "__main__": - dereverb = MDXNetDereverb(15) - from time import time as ttime - - t0 = ttime() - dereverb._path_audio_( - "雪雪伴奏对消HP5.wav", - "vocal", - "others", - ) - t1 = ttime() - print(t1 - t0) - - -""" - -runtime\python.exe MDXNet.py - -6G: -15/9:0.8G->6.8G -14:0.8G->6.5G -25:炸 - -half15:0.7G->6.6G,22.69s -fp32-15:0.7G->6.6G,20.85s - -""" diff --git a/spaces/AI-Zero-to-Hero/06-SL-AI-Image-Music-Video-UI-UX-URL/Article.md b/spaces/AI-Zero-to-Hero/06-SL-AI-Image-Music-Video-UI-UX-URL/Article.md deleted file mode 100644 index c7f042e4c9c0f401731f009842a325e2d1386bf5..0000000000000000000000000000000000000000 --- a/spaces/AI-Zero-to-Hero/06-SL-AI-Image-Music-Video-UI-UX-URL/Article.md +++ /dev/null @@ -1,51 +0,0 @@ - -# Image Generation for Art, Marketing, Ideation, Design, and Use in Business - -A number of multiple AI pipeline element strategies have evolved on the open market which allow you to generate images using a combination of image prompts and word prompts. This brief analysis gives an idea of the prompting capabilities as well as image rendering techniques that are used in the strategy to generate art from human understanding of images and text used to describe a scene. - -First a top five list on state of the art generators both free and paid is worth consideration. - -1) Midjourney - a Discord server based chatboat AI that allows /imagine prompts which can generate multiple images at a time. This is best at parallel creation, high accuracy even photo real creations. -2) Artbreeder - A multiple capability tool which now features a Collager which assists in starting image composition. By far the most innovative approach which does great to combine the right partial elements in a scene. -3) Dreamstudio - A Huggingface derived art program in beta which uses stable diffusion to create highly accurate art and images. -4) Nightcafe - A credit based creation AI app that can do generation of video dives into an AI art piece which can produce some of the best experiences in Video. -5) RunwayML - a quintessential tool in processing morph audio and video tracks which rival most high end video edit tools. - -These 5 tools make up some of the best AI pipeline programs that are cloud based that allow anyone to begin easily building their portfolio of art. - -The prompting capabilities often involve having a set of text based prompts to get started. Most also feature a starter image which could be an example of what you would like to create. - -URL Links: -1) Collager: https://www.artbreeder.com/beta/collage -2) NightCafe: https://creator.nightcafe.studio/explore -3) Midjourney: https://www.midjourney.com/app/users/779773261440614430/ -4) Dreamstudio: https://beta.dreamstudio.ai/dream -5) RunwayML: https://app.runwayml.com/ - -## Getting Started and Organizing Your AI Pipeline and Process - -Any great strategy has a number of steps that combine all capabilities at your disposal. It is useful to note how you can easily fir these together into a process that works for you. - -The techniques worth noted are listed below. Consider how you will use them will make your pipeline easier and more automated to allow you to spend the majority of your time curating what you have made, and ideating what you want to create next. 
- -1) Source materials: Since prompting requires text and text examples can quickly help you compose good input, its worth considering and documenting some effective prompts. Nightcafe with its integration into email, sends you a copy of your creation plus the prompting text so one option is to use your email account to keep a record of which prompts work for which outputs. -2) Source materials: Discord since its a public chat format allows you to easily see what others are using for prompts in bulk. There are a number of chat channels designed for people new to the platform and often you can copy and paste if you see very effective prompts with material you are looking for. -3) Source materials: Collager is unique in its ability to add additive parts and then dial in the percent of AI you would like with that. This allows you to add a few image elements which help start out your generation. -4) Source materials: Since images and prompts are going to be your mainstay for inputs its worth considering an open standard for storing and retrieving these from anywhere. Github is a good place since markdown language can involve text in table or list format and includes a capability to reference uploaded images within markdown. This is also a good form for portability since you can later fork and download your repository with a few clicks from anywhere. -5) Source materials: Google drive is integrated into the Artbreeder Collager workflow which allows you easily expand your work and even compose albums of the ones you like to place in Google photo albums. The portfolio you save on different sites have different degrees of ease when aggregating your collections. Collager for instance allows right click save for instant saving of your creation. Dreamstudio features a history. Midjourney features a profile site for you to store and review creations even triggering Upscales which important to use to get the highest resolution output for your creations. - -## Social Media integration - -Depending on your target "safe for work" exports of your work, it is sometimes important to know your accepted social media outlets that you can integrate. Cloud based interactions are the key to successful audiences if you want to scale and share your process with others. - -The key social media outlets supported for these tools are here in a sorted link list which start with public open source first: - -1) Github - Github is open at most companies and allow creation of a free space to share your content. -2) LinkedIn - LinkedIn is acceptable use at nearly every company. -3) Twitter - Twitter is supported as a social media outlet at most companies yet can also be used with security restrictions which might limit posting but allow read access. -4) Facebook - Meta's Facebook is a good outlet since it allows creation of large folios of your images along with stories. This venue however is locked down at many organizations. -5) Instagram - Instagram is supported as an output channel for many tools yet has decreased in popularity due to high frequency of ads and pay for likes models. While it can still be one of the best places for domain specific arrangements of images it is likely locked down in most secure organizations. -6) Youtube - For video uploads with automated captioning and long term storage of short and long form video this is an essential for any creation you compose as video. 
It is also useful to review and compose playlists of videos here for yourself that speed up your learning - Spend some time at Youtube university and keep a record of keyword searches there sometimes along with your playlists to accelerate learning. -7) Gmail - With the baility to move email in and out its useful to create and wrap up details within email. Most email policies come with a content limitation (for example no files larger than 25MB. For this reason get used to creating pproject wrap up archives with winzip or compression software. With the convenience of keyword searching you can usually use this as a base. -8) Last a worth mention is Huggingface.com. Like github as you become more sophisticated in your public open source capabilities, HuggingFace can allow you to wrap up using one of three software development kits which are gadio, streamlit, and HTML5 each with unique AI and UI integration components and features. If you want to create your own AI pipelines this one also has the open source code and models ready to go to help you on your journey. - diff --git a/spaces/AICODER009/Food101_Detection/model.py b/spaces/AICODER009/Food101_Detection/model.py deleted file mode 100644 index 52c2696c874740179528f0bdae8ce87b774a138f..0000000000000000000000000000000000000000 --- a/spaces/AICODER009/Food101_Detection/model.py +++ /dev/null @@ -1,36 +0,0 @@ -import torch -import torchvision - -from torch import nn - - -def create_effnetb2_model(num_classes:int=3, - seed:int=42): - """Creates an EfficientNetB2 feature extractor model and transforms. - - Args: - num_classes (int, optional): number of classes in the classifier head. - Defaults to 3. - seed (int, optional): random seed value. Defaults to 42. - - Returns: - model (torch.nn.Module): EffNetB2 feature extractor model. - transforms (torchvision.transforms): EffNetB2 image transforms. - """ - # Create EffNetB2 pretrained weights, transforms and model - weights = torchvision.models.EfficientNet_B2_Weights.DEFAULT - transforms = weights.transforms() - model = torchvision.models.efficientnet_b2(weights=weights) - - # Freeze all layers in base model - for param in model.parameters(): - param.requires_grad = False - - # Change classifier head with random seed for reproducibility - torch.manual_seed(seed) - model.classifier = nn.Sequential( - nn.Dropout(p=0.3, inplace=True), - nn.Linear(in_features=1408, out_features=num_classes), - ) - - return model, transforms diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/utils.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/utils.py deleted file mode 100644 index de59fd2746a13742197ecdeac671d61ece3f79ba..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/utils.py +++ /dev/null @@ -1,361 +0,0 @@ -import numpy as np -import torch -from torch import nn as nn -from torchvision.ops.misc import FrozenBatchNorm2d -import logging -# import h5py -from tqdm import tqdm -import random -import json -import os -import pathlib - -# TODO: (yusong) this not a good place to store those information and does not scale. Need to be fixed later. 
-dataset_split = { - "audiocaps": ["train", "valid", "test"], - "audioset": ["balanced_train", "unbalanced_train", "eval"], - "BBCSoundEffects": ["train", "test"], - "Clotho": ["train", "test", "valid"], - "free_to_use_sounds": ["train", "test"], - "paramount_motion": ["train", "test"], - "sonniss_game_effects": ["train", "test"], - "wesoundeffects": ["train", "test"], - "MACS": ["train", "test"], - "freesound": ["train", "test"], - "FSD50K": ["train", "test", "valid"], - "fsd50k_class_label": ["train", "test", "valid"], - "esc50": ["train", "test"], - "audiostock": ["train", "test"], - "freesound_no_overlap_noesc50": ["train", "test"], - "epidemic_sound_effects": ["train", "test"], - "VGGSound": ["train", "test"], - "urbansound8k_class_label": ["train", "test"], - "audioset_t5": ["balanced_train", "unbalanced_train", "eval"], - "epidemic_sound_effects_t5": ["train", "test"], - "WavText5K": ["train", "test"], - "esc50_no_overlap": ["train", "test"], - "usd8k_no_overlap": ["train", "test"], - "fsd50k_200_class_label": ["train", "test", "valid"], -} - - -def freeze_batch_norm_2d(module, module_match={}, name=""): - """ - Converts all `BatchNorm2d` and `SyncBatchNorm` layers of provided module into `FrozenBatchNorm2d`. If `module` is - itself an instance of either `BatchNorm2d` or `SyncBatchNorm`, it is converted into `FrozenBatchNorm2d` and - returned. Otherwise, the module is walked recursively and submodules are converted in place. - - Args: - module (torch.nn.Module): Any PyTorch module. - module_match (dict): Dictionary of full module names to freeze (all if empty) - name (str): Full module name (prefix) - - Returns: - torch.nn.Module: Resulting module - - Inspired by https://github.com/pytorch/pytorch/blob/a5895f85be0f10212791145bfedc0261d364f103/torch/nn/modules/batchnorm.py#L762 - """ - res = module - is_match = True - if module_match: - is_match = name in module_match - if is_match and isinstance( - module, (nn.modules.batchnorm.BatchNorm2d, nn.modules.batchnorm.SyncBatchNorm) - ): - res = FrozenBatchNorm2d(module.num_features) - res.num_features = module.num_features - res.affine = module.affine - if module.affine: - res.weight.data = module.weight.data.clone().detach() - res.bias.data = module.bias.data.clone().detach() - res.running_mean.data = module.running_mean.data - res.running_var.data = module.running_var.data - res.eps = module.eps - else: - for child_name, child in module.named_children(): - full_child_name = ".".join([name, child_name]) if name else child_name - new_child = freeze_batch_norm_2d(child, module_match, full_child_name) - if new_child is not child: - res.add_module(child_name, new_child) - return res - - -def exist(dataset_name, dataset_type): - """ - Check if dataset exists - """ - if dataset_type in dataset_split[dataset_name]: - return True - else: - return False - - -def get_tar_path_from_dataset_name( - dataset_names, dataset_types, islocal, dataset_path, proportion=1, full_dataset=None -): - """ - Get tar path from dataset name and type - """ - output = [] - for n in dataset_names: - if full_dataset is not None and n in full_dataset: - current_dataset_types = dataset_split[n] - else: - current_dataset_types = dataset_types - for s in current_dataset_types: - tmp = [] - if islocal: - sizefilepath_ = f"{dataset_path}/{n}/{s}/sizes.json" - if not os.path.exists(sizefilepath_): - sizefilepath_ = f"./json_files/{n}/{s}/sizes.json" - else: - sizefilepath_ = f"./json_files/{n}/{s}/sizes.json" - if not os.path.exists(sizefilepath_): - continue - sizes = 
json.load(open(sizefilepath_, "r")) - for k in sizes.keys(): - if islocal: - tmp.append(f"{dataset_path}/{n}/{s}/{k}") - else: - tmp.append( - f"pipe:aws s3 --cli-connect-timeout 0 cp s3://s-laion-audio/webdataset_tar/{n}/{s}/{k} -" - ) - if proportion != 1: - tmp = random.sample(tmp, int(proportion * len(tmp))) - output.append(tmp) - return sum(output, []) - - -def get_tar_path_from_txts(txt_path, islocal, proportion=1): - """ - Get tar path from txt path - """ - if isinstance(txt_path, (list, tuple)): - return sum( - [ - get_tar_path_from_txts( - txt_path[i], islocal=islocal, proportion=proportion - ) - for i in range(len(txt_path)) - ], - [], - ) - if isinstance(txt_path, str): - with open(txt_path) as f: - lines = f.readlines() - if islocal: - lines = [ - lines[i] - .split("\n")[0] - .replace("pipe:aws s3 cp s3://s-laion-audio/", "/mnt/audio_clip/") - for i in range(len(lines)) - ] - else: - lines = [ - lines[i].split("\n")[0].replace(".tar", ".tar -") - for i in range(len(lines)) - ] - if proportion != 1: - print("Sampling tars with proportion of {}".format(proportion)) - lines = random.sample(lines, int(proportion * len(lines))) - return lines - - -def get_mix_lambda(mixup_alpha, batch_size): - mixup_lambdas = [ - np.random.beta(mixup_alpha, mixup_alpha, 1)[0] for _ in range(batch_size) - ] - return np.array(mixup_lambdas).astype(np.float32) - - -def do_mixup(x, mixup_lambda): - """ - Args: - x: (batch_size , ...) - mixup_lambda: (batch_size,) - Returns: - out: (batch_size, ...) - """ - out = ( - x.transpose(0, -1) * mixup_lambda - + torch.flip(x, dims=[0]).transpose(0, -1) * (1 - mixup_lambda) - ).transpose(0, -1) - return out - - -def interpolate(x, ratio): - """Interpolate data in time domain. This is used to compensate the - resolution reduction in downsampling of a CNN. - - Args: - x: (batch_size, time_steps, classes_num) - ratio: int, ratio to interpolate - Returns: - upsampled: (batch_size, time_steps * ratio, classes_num) - """ - (batch_size, time_steps, classes_num) = x.shape - upsampled = x[:, :, None, :].repeat(1, 1, ratio, 1) - upsampled = upsampled.reshape(batch_size, time_steps * ratio, classes_num) - return upsampled - - -def pad_framewise_output(framewise_output, frames_num): - """Pad framewise_output to the same length as input frames. The pad value - is the same as the value of the last frame. 
- Args: - framewise_output: (batch_size, frames_num, classes_num) - frames_num: int, number of frames to pad - Outputs: - output: (batch_size, frames_num, classes_num) - """ - pad = framewise_output[:, -1:, :].repeat( - 1, frames_num - framewise_output.shape[1], 1 - ) - """tensor for padding""" - - output = torch.cat((framewise_output, pad), dim=1) - """(batch_size, frames_num, classes_num)""" - - -# def process_ipc(index_path, classes_num, filename): -# # load data -# logging.info("Load Data...............") -# ipc = [[] for _ in range(classes_num)] -# with h5py.File(index_path, "r") as f: -# for i in tqdm(range(len(f["target"]))): -# t_class = np.where(f["target"][i])[0] -# for t in t_class: -# ipc[t].append(i) -# print(ipc) -# np.save(filename, ipc) -# logging.info("Load Data Succeed...............") - - -def save_to_dict(s, o_={}): - sp = s.split(": ") - o_.update({sp[0]: float(sp[1])}) - return o_ - - -def get_data_from_log(txt_path): - """ - Output dictionary from out.txt log file - """ - with open(txt_path) as f: - lines = f.readlines() - val_data = {} - train_data = {} - train_losses = [] - train_losses_epoch = [] - for i in range(len(lines)): - if "| INFO |" in lines[i]: - if "Eval Epoch" in lines[i]: - if "val_loss" in lines[i]: - # float(regex.sub("", lines[310].split(" ")[-1]).replace(" ", "")) - line = lines[i].split("Eval Epoch: ")[-1] - num_epoch = int(line.split(" ")[0].split(" ")[0]) - d = { - line.split(" ")[0] - .split(" ")[1] - .replace(":", ""): float(line.split(" ")[0].split(" ")[-1]) - } - for i in range(1, len(line.split(" "))): - d = save_to_dict(line.split(" ")[i], d) - val_data[num_epoch] = d - elif "Train Epoch" in lines[i]: - num_epoch = int(lines[i].split("Train Epoch: ")[1][0]) - loss = float(lines[i].split("Loss: ")[-1].split(" (")[0]) - train_losses.append(loss) - train_losses_epoch.append(num_epoch) - for i in range(len(train_losses)): - train_data[i] = { - "num_epoch": train_losses_epoch[i], - "train_loss": train_losses[i], - } - return train_data, val_data - - -def save_p(obj, filename): - import pickle - - try: - from deepdiff import DeepDiff - except: - os.system("pip install deepdiff") - from deepdiff import DeepDiff - with open(filename, "wb") as file: - pickle.dump(obj, file, protocol=pickle.HIGHEST_PROTOCOL) # highest protocol - with open(filename, "rb") as file: - z = pickle.load(file) - assert ( - DeepDiff(obj, z, ignore_string_case=True) == {} - ), "there is something wrong with the saving process" - return - - -def load_p(filename): - import pickle - - with open(filename, "rb") as file: - z = pickle.load(file) - return z - - -def save_json(data, name="data.json"): - import json - - with open(name, "w") as fp: - json.dump(data, fp) - return - - -def load_json(name): - import json - - with open(name, "r") as fp: - data = json.load(fp) - return data - - -from multiprocessing import Process, Manager -from multiprocessing import Process, Value, Array -from ctypes import c_wchar - - -def load_class_label(path): - # https://stackoverflow.com/questions/48004243/how-to-share-large-read-only-dictionary-list-across-processes-in-multiprocessing - # https://stackoverflow.com/questions/45693949/storing-strings-in-a-multiprocessing-sharedctypes-array - out = None - if path is not None: - if pathlib.Path(path).suffix in [".pkl", ".pickle"]: - out = load_p(path) - elif pathlib.Path(path).suffix in [".json", ".txt"]: - out = load_json(path) - elif pathlib.Path(path).suffix in [".npy", ".npz"]: - out = np.load(path) - elif pathlib.Path(path).suffix in [".csv"]: - 
import pandas as pd - - out = pd.read_csv(path) - return out - # if out is None: - # return None - # else: - # key = Array(c_wchar, '\n'.join(list(out.keys())), lock=False) - # val = Array('i', out.values(), lock=False) - # return (key, val) - - -from torch import optim - - -def get_optimizer(params, lr, betas, eps, momentum, optimizer_name): - if optimizer_name.lower() == "adamw": - optimizer = optim.AdamW(params, lr=lr, betas=betas, eps=eps) - elif optimizer_name.lower() == "sgd": - optimizer = optim.SGD(params, lr=lr, momentum=momentum) - elif optimizer_name.lower() == "adam": - optimizer = optim.Adam(params, lr=lr, betas=betas, eps=eps) - else: - raise ValueError("optimizer name is not correct") - return optimizer diff --git a/spaces/AIWaves/SOP_Generation-single/Memory/__init__.py b/spaces/AIWaves/SOP_Generation-single/Memory/__init__.py deleted file mode 100644 index 56f3aa09d927077ebc7f1a925f956dee78cb1c26..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/SOP_Generation-single/Memory/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .base_Memory import Memory \ No newline at end of file diff --git a/spaces/AchyuthGamer/ImMagician-Gradio/README.md b/spaces/AchyuthGamer/ImMagician-Gradio/README.md deleted file mode 100644 index e73afd176139da62cd897bdff1881944b77ab0c8..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/ImMagician-Gradio/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ImMagician Gradio -emoji: 🪄 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/PerplexityAi.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/PerplexityAi.py deleted file mode 100644 index fc0fd48c573375551b9c0d08b9b7132c6a2f2178..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/PerplexityAi.py +++ /dev/null @@ -1,87 +0,0 @@ -from __future__ import annotations - -import json -import time -import base64 -from curl_cffi.requests import AsyncSession - -from .base_provider import AsyncProvider, format_prompt - - -class PerplexityAi(AsyncProvider): - url = "https://www.perplexity.ai" - working = True - supports_gpt_35_turbo = True - _sources = [] - - @classmethod - async def create_async( - cls, - model: str, - messages: list[dict[str, str]], - proxy: str = None, - **kwargs - ) -> str: - url = cls.url + "/socket.io/?EIO=4&transport=polling" - async with AsyncSession(proxies={"https": proxy}, impersonate="chrome107") as session: - url_session = "https://www.perplexity.ai/api/auth/session" - response = await session.get(url_session) - - response = await session.get(url, params={"t": timestamp()}) - response.raise_for_status() - sid = json.loads(response.text[1:])["sid"] - - data = '40{"jwt":"anonymous-ask-user"}' - response = await session.post(url, params={"t": timestamp(), "sid": sid}, data=data) - response.raise_for_status() - - data = "424" + json.dumps([ - "perplexity_ask", - format_prompt(messages), - { - "version":"2.1", - "source":"default", - "language":"en", - "timezone": time.tzname[0], - "search_focus":"internet", - "mode":"concise" - } - ]) - response = await session.post(url, params={"t": timestamp(), "sid": sid}, data=data) - response.raise_for_status() - - while True: - response = await session.get(url, params={"t": timestamp(), "sid": sid}) - response.raise_for_status() - for line in 
response.text.splitlines(): - if line.startswith("434"): - result = json.loads(json.loads(line[3:])[0]["text"]) - - cls._sources = [{ - "title": source["name"], - "url": source["url"], - "snippet": source["snippet"] - } for source in result["web_results"]] - - return result["answer"] - - @classmethod - def get_sources(cls): - return cls._sources - - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ("proxy", "str"), - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" - - -def timestamp() -> str: - return base64.urlsafe_b64encode(int(time.time()-1407782612).to_bytes(4, 'big')).decode() \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Wewordle.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Wewordle.py deleted file mode 100644 index 090d0bf3ab2e1f3851880393d43662edfbe9d984..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Wewordle.py +++ /dev/null @@ -1,75 +0,0 @@ -import os -import requests -import json -import random -import time -import string -from ...typing import sha256, Dict, get_type_hints - -url = "https://wewordle.org/gptapi/v1/android/turbo" -model = ['gpt-3.5-turbo'] -supports_stream = False -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - base = '' - for message in messages: - base += '%s: %s\n' % (message['role'], message['content']) - base += 'assistant:' - # randomize user id and app id - _user_id = ''.join(random.choices( - f'{string.ascii_lowercase}{string.digits}', k=16)) - _app_id = ''.join(random.choices( - f'{string.ascii_lowercase}{string.digits}', k=31)) - # make current date with format utc - _request_date = time.strftime("%Y-%m-%dT%H:%M:%S.000Z", time.gmtime()) - headers = { - 'accept': '*/*', - 'pragma': 'no-cache', - 'Content-Type': 'application/json', - 'Connection': 'keep-alive' - } - data = { - "user": _user_id, - "messages": [ - {"role": "user", "content": base} - ], - "subscriber": { - "originalPurchaseDate": None, - "originalApplicationVersion": None, - "allPurchaseDatesMillis": {}, - "entitlements": { - "active": {}, - "all": {} - }, - "allPurchaseDates": {}, - "allExpirationDatesMillis": {}, - "allExpirationDates": {}, - "originalAppUserId": f"$RCAnonymousID:{_app_id}", - "latestExpirationDate": None, - "requestDate": _request_date, - "latestExpirationDateMillis": None, - "nonSubscriptionTransactions": [], - "originalPurchaseDateMillis": None, - "managementURL": None, - "allPurchasedProductIdentifiers": [], - "firstSeen": _request_date, - "activeSubscriptions": [] - } - } - response = requests.post(url, headers=headers, data=json.dumps(data)) - if response.status_code == 200: - _json = response.json() - if 'message' in _json: - message_content = _json['message']['content'] - message_content = message_content.replace('**assistant:** ', '') - yield message_content - else: - print(f"Error Occurred::{response.status_code}") - return None - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/AgentVerse/agentVerse/app.py b/spaces/AgentVerse/agentVerse/app.py deleted file mode 100644 index 
41d0927e73d66bf22e7f1552e01a584bf36c4527..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/app.py +++ /dev/null @@ -1,599 +0,0 @@ -import base64 -import openai -import itertools -import json -from typing import Dict, List, Tuple - -import cv2 -import gradio as gr - -from agentverse import TaskSolving -from agentverse.simulation import Simulation -from agentverse.message import Message - - -def cover_img(background, img, place: Tuple[int, int]): - """ - Overlays the specified image to the specified position of the background image. - :param background: background image - :param img: the specified image - :param place: the top-left coordinate of the target location - """ - back_h, back_w, _ = background.shape - height, width, _ = img.shape - for i, j in itertools.product(range(height), range(width)): - if img[i, j, 3]: - background[place[0] + i, place[1] + j] = img[i, j, :3] - - -class GUI: - """ - the UI of frontend - """ - - def __init__( - self, - task: str = "simulation/nlp_classroom_9players", - tasks_dir: str = "agentverse/tasks", - ): - """ - init a UI. - default number of students is 0 - """ - self.messages = [] - self.task = task - self.tasks_dir = tasks_dir - if task == "pipeline_brainstorming": - self.backend = TaskSolving.from_task(task, tasks_dir) - else: - self.backend = Simulation.from_task(task, tasks_dir) - self.turns_remain = 0 - self.agent_id = { - self.backend.agents[idx].name: idx - for idx in range(len(self.backend.agents)) - } - self.stu_num = len(self.agent_id) - 1 - self.autoplay = False - self.image_now = None - self.text_now = None - self.tot_solutions = 5 - self.solution_status = [False] * self.tot_solutions - - def get_avatar(self, idx): - if idx == -1: - img = cv2.imread("./imgs/db_diag/-1.png") - elif self.task == "simulation/prisoner_dilemma": - img = cv2.imread(f"./imgs/prison/{idx}.png") - else: - img = cv2.imread(f"./imgs/{idx}.png") - base64_str = cv2.imencode(".png", img)[1].tostring() - return "data:image/png;base64," + base64.b64encode(base64_str).decode("utf-8") - - def stop_autoplay(self): - self.autoplay = False - return ( - gr.Button.update(interactive=False), - gr.Button.update(interactive=False), - gr.Button.update(interactive=False), - ) - - def start_autoplay(self): - self.autoplay = True - yield ( - self.image_now, - self.text_now, - gr.Button.update(interactive=False), - gr.Button.update(interactive=True), - gr.Button.update(interactive=False), - *[gr.Button.update(visible=statu) for statu in self.solution_status], - gr.Box.update(visible=any(self.solution_status)), - ) - - while self.autoplay and self.turns_remain > 0: - outputs = self.gen_output() - self.image_now, self.text_now = outputs - - yield ( - *outputs, - gr.Button.update( - interactive=not self.autoplay and self.turns_remain > 0 - ), - gr.Button.update(interactive=self.autoplay and self.turns_remain > 0), - gr.Button.update( - interactive=not self.autoplay and self.turns_remain > 0 - ), - *[gr.Button.update(visible=statu) for statu in self.solution_status], - gr.Box.update(visible=any(self.solution_status)), - ) - - def delay_gen_output( - self, - ): - yield ( - self.image_now, - self.text_now, - gr.Button.update(interactive=False), - gr.Button.update(interactive=False), - *[gr.Button.update(visible=statu) for statu in self.solution_status], - gr.Box.update(visible=any(self.solution_status)), - ) - - outputs = self.gen_output() - self.image_now, self.text_now = outputs - - yield ( - self.image_now, - self.text_now, - 
gr.Button.update(interactive=self.turns_remain > 0), - gr.Button.update(interactive=self.turns_remain > 0), - *[gr.Button.update(visible=statu) for statu in self.solution_status], - gr.Box.update(visible=any(self.solution_status)), - ) - - def delay_reset(self, task_dropdown, api_key_text, organization_text): - self.autoplay = False - self.image_now, self.text_now = self.reset( - task_dropdown, api_key_text, organization_text - ) - return ( - self.image_now, - self.text_now, - gr.Button.update(interactive=True), - gr.Button.update(interactive=False), - gr.Button.update(interactive=True), - *[gr.Button.update(visible=statu) for statu in self.solution_status], - gr.Box.update(visible=any(self.solution_status)), - ) - - def reset( - self, - task_dropdown="simulation/nlp_classroom_9players", - api_key_text="", - organization_text="", - ): - openai.api_key = api_key_text - openai.organization = organization_text - """ - tell backend the new number of students and generate new empty image - :param stu_num: - :return: [empty image, empty message] - """ - # if not 0 <= stu_num <= 30: - # raise gr.Error("the number of students must be between 0 and 30.") - - """ - # [To-Do] Need to add a function to assign agent numbers into the backend. - """ - # self.backend.reset(stu_num) - # self.stu_num = stu_num - - """ - # [To-Do] Pass the parameters to reset - """ - if task_dropdown == "pipeline_brainstorming": - self.backend = TaskSolving.from_task(task_dropdown, self.tasks_dir) - else: - self.backend = Simulation.from_task(task_dropdown, self.tasks_dir) - self.agent_id = { - self.backend.agents[idx].name: idx - for idx in range(len(self.backend.agents)) - } - - self.task = task_dropdown - self.stu_num = len(self.agent_id) - 1 - self.backend.reset() - self.turns_remain = self.backend.environment.max_turns - - if task_dropdown == "simulation/prisoner_dilemma": - background = cv2.imread("./imgs/prison/case_1.png") - elif task_dropdown == "simulation/db_diag": - background = cv2.imread("./imgs/db_diag/background.png") - elif "sde" in task_dropdown: - background = cv2.imread("./imgs/sde/background.png") - else: - background = cv2.imread("./imgs/background.png") - back_h, back_w, _ = background.shape - stu_cnt = 0 - for h_begin, w_begin in itertools.product( - range(800, back_h, 300), range(135, back_w - 200, 200) - ): - stu_cnt += 1 - img = cv2.imread( - f"./imgs/{(stu_cnt - 1) % 11 + 1 if stu_cnt <= self.stu_num else 'empty'}.png", - cv2.IMREAD_UNCHANGED, - ) - cover_img( - background, - img, - (h_begin - 30 if img.shape[0] > 190 else h_begin, w_begin), - ) - self.messages = [] - self.solution_status = [False] * self.tot_solutions - return [cv2.cvtColor(background, cv2.COLOR_BGR2RGB), ""] - - def gen_img(self, data: List[Dict]): - """ - generate new image with sender rank - :param data: - :return: the new image - """ - # The following code need to be more general. This one is too task-specific. 
- # if len(data) != self.stu_num: - if len(data) != self.stu_num + 1: - raise gr.Error("data length is not equal to the total number of students.") - if self.task == "simulation/prisoner_dilemma": - img = cv2.imread("./imgs/speaking.png", cv2.IMREAD_UNCHANGED) - if ( - len(self.messages) < 2 - or self.messages[-1][0] == 1 - or self.messages[-2][0] == 2 - ): - background = cv2.imread("./imgs/prison/case_1.png") - if data[0]["message"] != "": - cover_img(background, img, (400, 480)) - else: - background = cv2.imread("./imgs/prison/case_2.png") - if data[0]["message"] != "": - cover_img(background, img, (400, 880)) - if data[1]["message"] != "": - cover_img(background, img, (550, 480)) - if data[2]["message"] != "": - cover_img(background, img, (550, 880)) - elif self.task == "db_diag": - background = cv2.imread("./imgs/db_diag/background.png") - img = cv2.imread("./imgs/db_diag/speaking.png", cv2.IMREAD_UNCHANGED) - if data[0]["message"] != "": - cover_img(background, img, (750, 80)) - if data[1]["message"] != "": - cover_img(background, img, (310, 220)) - if data[2]["message"] != "": - cover_img(background, img, (522, 11)) - elif "sde" in self.task: - background = cv2.imread("./imgs/sde/background.png") - img = cv2.imread("./imgs/sde/speaking.png", cv2.IMREAD_UNCHANGED) - if data[0]["message"] != "": - cover_img(background, img, (692, 330)) - if data[1]["message"] != "": - cover_img(background, img, (692, 660)) - if data[2]["message"] != "": - cover_img(background, img, (692, 990)) - else: - background = cv2.imread("./imgs/background.png") - back_h, back_w, _ = background.shape - stu_cnt = 0 - if data[stu_cnt]["message"] not in ["", "[RaiseHand]"]: - img = cv2.imread("./imgs/speaking.png", cv2.IMREAD_UNCHANGED) - cover_img(background, img, (370, 1250)) - for h_begin, w_begin in itertools.product( - range(800, back_h, 300), range(135, back_w - 200, 200) - ): - stu_cnt += 1 - if stu_cnt <= self.stu_num: - img = cv2.imread( - f"./imgs/{(stu_cnt - 1) % 11 + 1}.png", cv2.IMREAD_UNCHANGED - ) - cover_img( - background, - img, - (h_begin - 30 if img.shape[0] > 190 else h_begin, w_begin), - ) - if "[RaiseHand]" in data[stu_cnt]["message"]: - # elif data[stu_cnt]["message"] == "[RaiseHand]": - img = cv2.imread("./imgs/hand.png", cv2.IMREAD_UNCHANGED) - cover_img(background, img, (h_begin - 90, w_begin + 10)) - elif data[stu_cnt]["message"] not in ["", "[RaiseHand]"]: - img = cv2.imread("./imgs/speaking.png", cv2.IMREAD_UNCHANGED) - cover_img(background, img, (h_begin - 90, w_begin + 10)) - - else: - img = cv2.imread("./imgs/empty.png", cv2.IMREAD_UNCHANGED) - cover_img(background, img, (h_begin, w_begin)) - return cv2.cvtColor(background, cv2.COLOR_BGR2RGB) - - def return_format(self, messages: List[Message]): - _format = [{"message": "", "sender": idx} for idx in range(len(self.agent_id))] - - for message in messages: - if self.task == "db_diag": - content_json: dict = message.content - content_json[ - "diagnose" - ] = f"[{message.sender}]: {content_json['diagnose']}" - _format[self.agent_id[message.sender]]["message"] = json.dumps( - content_json - ) - elif "sde" in self.task: - if message.sender == "code_tester": - pre_message, message_ = message.content.split("\n") - message_ = "{}\n{}".format( - pre_message, json.loads(message_)["feedback"] - ) - _format[self.agent_id[message.sender]][ - "message" - ] = "[{}]: {}".format(message.sender, message_) - else: - _format[self.agent_id[message.sender]][ - "message" - ] = "[{}]: {}".format(message.sender, message.content) - - else: - 
_format[self.agent_id[message.sender]]["message"] = "[{}]: {}".format( - message.sender, message.content - ) - - return _format - - def gen_output(self): - """ - generate new image and message of next step - :return: [new image, new message] - """ - - # data = self.backend.next_data() - - return_message = self.backend.next() - - data = self.return_format(return_message) - - # data.sort(key=lambda item: item["sender"]) - """ - # [To-Do]; Check the message from the backend: only 1 person can speak - """ - - for item in data: - if item["message"] not in ["", "[RaiseHand]"]: - self.messages.append((item["sender"], item["message"])) - - message = self.gen_message() - self.turns_remain -= 1 - return [self.gen_img(data), message] - - def gen_message(self): - # If the backend cannot handle this error, use the following code. - message = "" - """ - for item in data: - if item["message"] not in ["", "[RaiseHand]"]: - message = item["message"] - break - """ - for sender, msg in self.messages: - if sender == 0: - avatar = self.get_avatar(0) - elif sender == -1: - avatar = self.get_avatar(-1) - else: - avatar = self.get_avatar((sender - 1) % 11 + 1) - if self.task == "db_diag": - msg_json = json.loads(msg) - self.solution_status = [False] * self.tot_solutions - msg = msg_json["diagnose"] - if msg_json["solution"] != "": - solution: List[str] = msg_json["solution"] - for solu in solution: - if "query" in solu or "queries" in solu: - self.solution_status[0] = True - solu = solu.replace( - "query", 'query' - ) - solu = solu.replace( - "queries", 'queries' - ) - if "join" in solu: - self.solution_status[1] = True - solu = solu.replace( - "join", 'join' - ) - if "index" in solu: - self.solution_status[2] = True - solu = solu.replace( - "index", 'index' - ) - if "system configuration" in solu: - self.solution_status[3] = True - solu = solu.replace( - "system configuration", - 'system configuration', - ) - if ( - "monitor" in solu - or "Monitor" in solu - or "Investigate" in solu - ): - self.solution_status[4] = True - solu = solu.replace( - "monitor", 'monitor' - ) - solu = solu.replace( - "Monitor", 'Monitor' - ) - solu = solu.replace( - "Investigate", - 'Investigate', - ) - msg = f"{msg}
{solu}" - if msg_json["knowledge"] != "": - msg = f'{msg}
{msg_json["knowledge"]}' - else: - msg = msg.replace("<", "<") - msg = msg.replace(">", ">") - message = ( - f'
' - f'' - f'
' - f"{msg}" - f"
" + message - ) - message = ( - '
' - + message - + "
" - ) - return message - - def submit(self, message: str): - """ - submit message to backend - :param message: message - :return: [new image, new message] - """ - self.backend.submit(message) - self.messages.append((-1, f"[User]: {message}")) - return self.gen_img([{"message": ""}] * len(self.agent_id)), self.gen_message() - - def launch(self, single_agent=False, discussion_mode=False): - if self.task == "pipeline_brainstorming": - with gr.Blocks() as demo: - chatbot = gr.Chatbot(height=800, show_label=False) - msg = gr.Textbox(label="Input") - - def respond(message, chat_history): - chat_history.append((message, None)) - yield "", chat_history - for response in self.backend.iter_run( - single_agent=single_agent, discussion_mode=discussion_mode - ): - print(response) - chat_history.append((None, response)) - yield "", chat_history - - msg.submit(respond, [msg, chatbot], [msg, chatbot]) - else: - with gr.Blocks() as demo: - with gr.Row(): - task_dropdown = gr.Dropdown( - choices=[ - "simulation/nlp_classroom_9players", - "simulation/prisoner_dilemma", - ], - value="simulation/nlp_classroom_9players", - label="Task", - ) - api_key_text = gr.Textbox(label="OPENAI API KEY") - organization_text = gr.Textbox(label="Organization") - with gr.Row(): - with gr.Column(): - image_output = gr.Image() - with gr.Row(): - reset_btn = gr.Button("Build/Reset") - # next_btn = gr.Button("Next", variant="primary") - next_btn = gr.Button("Next", interactive=False) - stop_autoplay_btn = gr.Button( - "Stop Autoplay", interactive=False - ) - start_autoplay_btn = gr.Button( - "Start Autoplay", interactive=False - ) - with gr.Box(visible=False) as solutions: - with gr.Column(): - gr.HTML("Optimization Solutions:") - with gr.Row(): - rewrite_slow_query_btn = gr.Button( - "Rewrite Slow Query", visible=False - ) - add_query_hints_btn = gr.Button( - "Add Query Hints", visible=False - ) - update_indexes_btn = gr.Button( - "Update Indexes", visible=False - ) - tune_parameters_btn = gr.Button( - "Tune Parameters", visible=False - ) - gather_more_info_btn = gr.Button( - "Gather More Info", visible=False - ) - # text_output = gr.Textbox() - text_output = gr.HTML(self.reset()[1]) - - # Given a botton to provide student numbers and their inf. 
- # stu_num = gr.Number(label="Student Number", precision=0) - # stu_num = self.stu_num - - if self.task == "db_diag": - user_msg = gr.Textbox() - submit_btn = gr.Button("Submit", variant="primary") - - submit_btn.click( - fn=self.submit, - inputs=user_msg, - outputs=[image_output, text_output], - show_progress=False, - ) - else: - pass - - # next_btn.click(fn=self.gen_output, inputs=None, outputs=[image_output, text_output], - # show_progress=False) - next_btn.click( - fn=self.delay_gen_output, - inputs=None, - outputs=[ - image_output, - text_output, - next_btn, - start_autoplay_btn, - rewrite_slow_query_btn, - add_query_hints_btn, - update_indexes_btn, - tune_parameters_btn, - gather_more_info_btn, - solutions, - ], - show_progress=False, - ) - - # [To-Do] Add botton: re-start (load different people and env) - # reset_btn.click(fn=self.reset, inputs=stu_num, outputs=[image_output, text_output], - # show_progress=False) - # reset_btn.click(fn=self.reset, inputs=None, outputs=[image_output, text_output], show_progress=False) - reset_btn.click( - fn=self.delay_reset, - inputs=[task_dropdown, api_key_text, organization_text], - outputs=[ - image_output, - text_output, - next_btn, - stop_autoplay_btn, - start_autoplay_btn, - rewrite_slow_query_btn, - add_query_hints_btn, - update_indexes_btn, - tune_parameters_btn, - gather_more_info_btn, - solutions, - ], - show_progress=False, - ) - - stop_autoplay_btn.click( - fn=self.stop_autoplay, - inputs=None, - outputs=[next_btn, stop_autoplay_btn, start_autoplay_btn], - show_progress=False, - ) - start_autoplay_btn.click( - fn=self.start_autoplay, - inputs=None, - outputs=[ - image_output, - text_output, - next_btn, - stop_autoplay_btn, - start_autoplay_btn, - rewrite_slow_query_btn, - add_query_hints_btn, - update_indexes_btn, - tune_parameters_btn, - gather_more_info_btn, - solutions, - ], - show_progress=False, - ) - - demo.queue(concurrency_count=5, max_size=20).launch() - # demo.launch() - - -GUI().launch() diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/dots/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/dots/Factory.d.ts deleted file mode 100644 index b7cd9933e65a0ca8be20d2be655d6f2a34144cc9..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/dots/Factory.d.ts +++ /dev/null @@ -1,6 +0,0 @@ -import Dots from './Dots'; -import Base from '../base/Base'; - -export default function Factory( - config?: Base.IConfig -): Dots; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/Factory.js deleted file mode 100644 index 869c845863e41ff695a184dc9a6b10e67b202078..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import InputText from './InputText.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('inputText', function (config) { - var gameObject = new InputText(this.scene, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.InputText', InputText); - -export default InputText; \ No newline at end of file diff --git a/spaces/Akhil-77/Toxicity_Detector/app.py 
b/spaces/Akhil-77/Toxicity_Detector/app.py deleted file mode 100644 index 36440551ea010364f15c2a9e2ce111748e11068d..0000000000000000000000000000000000000000 --- a/spaces/Akhil-77/Toxicity_Detector/app.py +++ /dev/null @@ -1,15 +0,0 @@ -import gradio as gr -from transformers import pipeline - -sentiment = pipeline('sentiment-analysis') - -def get_sentiment(input_text): - return sentiment(input_text) - -front_end = gr.Interface(fn = get_sentiment, - inputs = "text", - outputs = ["text"], - title = "Toxicity Detector", - description = "A simple web-app to find out that text is toxic or not") - -front_end.launch(inline=False) \ No newline at end of file diff --git a/spaces/Akmyradov/chatbot_testing/README.md b/spaces/Akmyradov/chatbot_testing/README.md deleted file mode 100644 index cf1848811d735a8b15240ee9cf720aebb3329fdc..0000000000000000000000000000000000000000 --- a/spaces/Akmyradov/chatbot_testing/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chatbot Testing -emoji: 🔥 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AlekseyKorshuk/gai-project/modules/models.py b/spaces/AlekseyKorshuk/gai-project/modules/models.py deleted file mode 100644 index 8f6a4b649b50c09112fdc405feb2f9290e86ea68..0000000000000000000000000000000000000000 --- a/spaces/AlekseyKorshuk/gai-project/modules/models.py +++ /dev/null @@ -1,73 +0,0 @@ -import requests -import gradio as gr - -from config import GUANACO_DEVELOPER_KEY, MODELS - - -class ModelConfig(): - def __init__(self, config): - self.name = config['name'] - self.endpoint = config['endpoint'] - self.generation_params = config.get('params', {}) - self.author_id = config.get('author-id') - - -class ChaiBot(): - def __init__(self, bot_config): - self.messages = [] - self.config = bot_config - self.bot_label = bot_config.get("botLabel", "Character") - self.user_label = bot_config.get("userLabel", "User") - self.add_bot_message(bot_config.get("firstMessage")) - - def add_user_message(self, message): - self.messages.append((self.user_label, message.strip())) - - def add_bot_message(self, message): - self.messages.append((self.bot_label, message.strip())) - - def get_conversation(self): - conversation = [] - for label, value in self.messages: - role_type = "user" if label == self.user_label else "bot" - message = { - "from": label, - "value": value, - "role_type": role_type - } - conversation.append(message) - return conversation - - -class BaseModel: - def __init__(self, model_config): - self.config = model_config - - def generate_response(self, chaibot): - raise NotImplemented - - -class GuanacoModel(BaseModel): - def generate_response(self, chaibot): - model_inputs = self._get_model_input(chaibot) - return self._get_response(model_inputs) - - def _get_model_input(self, chaibot): - model_inputs = { - "bot_name": chaibot.bot_label, - "memory": chaibot.config.get('memory', ""), - "prompt": chaibot.config.get('prompt', ""), - "chat_history": [{"sender": sender, "message": message} for sender, message in chaibot.messages], - "user_name": "You" - } - return model_inputs - - def _get_response(self, inputs): - headers = {"Authorization": f"Bearer {GUANACO_DEVELOPER_KEY}"} - model_id = MODELS[self.config] - url = f'https://guanaco-submitter.chai-research.com/models/{model_id}/chat' - try: - response = requests.post(url=url, json=inputs, headers=headers, timeout=20) - except requests.ReadTimeout: 
- raise gr.Error("Generating response took too long, please try again in new conversation.") - return response.json()["model_output"] diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/utils.py b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/utils.py deleted file mode 100644 index 9794e0fc3463a5e8fad05c037cce64683059a6d3..0000000000000000000000000000000000000000 --- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/utils.py +++ /dev/null @@ -1,226 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.ERROR) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = 
[line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() \ No newline at end of file diff --git a/spaces/Amrrs/DragGan-Inversion/gui_utils/imgui_window.py b/spaces/Amrrs/DragGan-Inversion/gui_utils/imgui_window.py deleted file mode 100644 index 5937788f2e8e51772677ab12c67038f5ccd37b42..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/gui_utils/imgui_window.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
-# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import imgui -import imgui.integrations.glfw - -from . import glfw_window -from . import imgui_utils -from . import text_utils - -# ---------------------------------------------------------------------------- - - -class ImguiWindow(glfw_window.GlfwWindow): - def __init__(self, *, title='ImguiWindow', font=None, font_sizes=range(14, 24), **glfw_kwargs): - if font is None: - font = text_utils.get_default_font() - font_sizes = {int(size) for size in font_sizes} - super().__init__(title=title, **glfw_kwargs) - - # Init fields. - self._imgui_context = None - self._imgui_renderer = None - self._imgui_fonts = None - self._cur_font_size = max(font_sizes) - - # Delete leftover imgui.ini to avoid unexpected behavior. - if os.path.isfile('imgui.ini'): - os.remove('imgui.ini') - - # Init ImGui. - self._imgui_context = imgui.create_context() - self._imgui_renderer = _GlfwRenderer(self._glfw_window) - self._attach_glfw_callbacks() - # Disable creating imgui.ini at runtime. - imgui.get_io().ini_saving_rate = 0 - # Improve behavior with imgui_utils.drag_custom(). - imgui.get_io().mouse_drag_threshold = 0 - self._imgui_fonts = {size: imgui.get_io().fonts.add_font_from_file_ttf( - font, size) for size in font_sizes} - self._imgui_renderer.refresh_font_texture() - - def close(self): - self.make_context_current() - self._imgui_fonts = None - if self._imgui_renderer is not None: - self._imgui_renderer.shutdown() - self._imgui_renderer = None - if self._imgui_context is not None: - # imgui.destroy_context(self._imgui_context) # Commented out to avoid creating imgui.ini at the end. - self._imgui_context = None - super().close() - - def _glfw_key_callback(self, *args): - super()._glfw_key_callback(*args) - self._imgui_renderer.keyboard_callback(*args) - - @property - def font_size(self): - return self._cur_font_size - - @property - def spacing(self): - return round(self._cur_font_size * 0.4) - - def set_font_size(self, target): # Applied on next frame. - self._cur_font_size = min((abs(key - target), key) - for key in self._imgui_fonts.keys())[1] - - def begin_frame(self): - # Begin glfw frame. - super().begin_frame() - - # Process imgui events. - self._imgui_renderer.mouse_wheel_multiplier = self._cur_font_size / 10 - if self.content_width > 0 and self.content_height > 0: - self._imgui_renderer.process_inputs() - - # Begin imgui frame. - imgui.new_frame() - imgui.push_font(self._imgui_fonts[self._cur_font_size]) - imgui_utils.set_default_style( - spacing=self.spacing, indent=self.font_size, scrollbar=self.font_size+4) - - def end_frame(self): - imgui.pop_font() - imgui.render() - imgui.end_frame() - self._imgui_renderer.render(imgui.get_draw_data()) - super().end_frame() - -# ---------------------------------------------------------------------------- -# Wrapper class for GlfwRenderer to fix a mouse wheel bug on Linux. 
- - -class _GlfwRenderer(imgui.integrations.glfw.GlfwRenderer): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.mouse_wheel_multiplier = 1 - - def scroll_callback(self, window, x_offset, y_offset): - self.io.mouse_wheel += y_offset * self.mouse_wheel_multiplier - -# ---------------------------------------------------------------------------- diff --git a/spaces/Amrrs/textsummarizer/README.md b/spaces/Amrrs/textsummarizer/README.md deleted file mode 100644 index 72a34e57a1bfedba20af3c0063d07fb6bd41ab11..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/textsummarizer/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Textsummarizer -emoji: ⚡ -colorFrom: green -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/An-619/FastSAM/utils/tools_gradio.py b/spaces/An-619/FastSAM/utils/tools_gradio.py deleted file mode 100644 index be7df994bc22970d5e360a60ecb729d36926e931..0000000000000000000000000000000000000000 --- a/spaces/An-619/FastSAM/utils/tools_gradio.py +++ /dev/null @@ -1,175 +0,0 @@ -import numpy as np -from PIL import Image -import matplotlib.pyplot as plt -import cv2 -import torch - - -def fast_process( - annotations, - image, - device, - scale, - better_quality=False, - mask_random_color=True, - bbox=None, - use_retina=True, - withContours=True, -): - if isinstance(annotations[0], dict): - annotations = [annotation['segmentation'] for annotation in annotations] - - original_h = image.height - original_w = image.width - if better_quality: - if isinstance(annotations[0], torch.Tensor): - annotations = np.array(annotations.cpu()) - for i, mask in enumerate(annotations): - mask = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8)) - annotations[i] = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN, np.ones((8, 8), np.uint8)) - if device == 'cpu': - annotations = np.array(annotations) - inner_mask = fast_show_mask( - annotations, - plt.gca(), - random_color=mask_random_color, - bbox=bbox, - retinamask=use_retina, - target_height=original_h, - target_width=original_w, - ) - else: - if isinstance(annotations[0], np.ndarray): - annotations = torch.from_numpy(annotations) - inner_mask = fast_show_mask_gpu( - annotations, - plt.gca(), - random_color=mask_random_color, - bbox=bbox, - retinamask=use_retina, - target_height=original_h, - target_width=original_w, - ) - if isinstance(annotations, torch.Tensor): - annotations = annotations.cpu().numpy() - - if withContours: - contour_all = [] - temp = np.zeros((original_h, original_w, 1)) - for i, mask in enumerate(annotations): - if type(mask) == dict: - mask = mask['segmentation'] - annotation = mask.astype(np.uint8) - if use_retina == False: - 
annotation = cv2.resize( - annotation, - (original_w, original_h), - interpolation=cv2.INTER_NEAREST, - ) - contours, _ = cv2.findContours(annotation, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) - for contour in contours: - contour_all.append(contour) - cv2.drawContours(temp, contour_all, -1, (255, 255, 255), 2 // scale) - color = np.array([0 / 255, 0 / 255, 255 / 255, 0.9]) - contour_mask = temp / 255 * color.reshape(1, 1, -1) - - image = image.convert('RGBA') - overlay_inner = Image.fromarray((inner_mask * 255).astype(np.uint8), 'RGBA') - image.paste(overlay_inner, (0, 0), overlay_inner) - - if withContours: - overlay_contour = Image.fromarray((contour_mask * 255).astype(np.uint8), 'RGBA') - image.paste(overlay_contour, (0, 0), overlay_contour) - - return image - - -# CPU post process -def fast_show_mask( - annotation, - ax, - random_color=False, - bbox=None, - retinamask=True, - target_height=960, - target_width=960, -): - mask_sum = annotation.shape[0] - height = annotation.shape[1] - weight = annotation.shape[2] - # 将annotation 按照面积 排序 - areas = np.sum(annotation, axis=(1, 2)) - sorted_indices = np.argsort(areas)[::1] - annotation = annotation[sorted_indices] - - index = (annotation != 0).argmax(axis=0) - if random_color: - color = np.random.random((mask_sum, 1, 1, 3)) - else: - color = np.ones((mask_sum, 1, 1, 3)) * np.array([30 / 255, 144 / 255, 255 / 255]) - transparency = np.ones((mask_sum, 1, 1, 1)) * 0.6 - visual = np.concatenate([color, transparency], axis=-1) - mask_image = np.expand_dims(annotation, -1) * visual - - mask = np.zeros((height, weight, 4)) - - h_indices, w_indices = np.meshgrid(np.arange(height), np.arange(weight), indexing='ij') - indices = (index[h_indices, w_indices], h_indices, w_indices, slice(None)) - - mask[h_indices, w_indices, :] = mask_image[indices] - if bbox is not None: - x1, y1, x2, y2 = bbox - ax.add_patch(plt.Rectangle((x1, y1), x2 - x1, y2 - y1, fill=False, edgecolor='b', linewidth=1)) - - if not retinamask: - mask = cv2.resize(mask, (target_width, target_height), interpolation=cv2.INTER_NEAREST) - - return mask - - -def fast_show_mask_gpu( - annotation, - ax, - random_color=False, - bbox=None, - retinamask=True, - target_height=960, - target_width=960, -): - device = annotation.device - mask_sum = annotation.shape[0] - height = annotation.shape[1] - weight = annotation.shape[2] - areas = torch.sum(annotation, dim=(1, 2)) - sorted_indices = torch.argsort(areas, descending=False) - annotation = annotation[sorted_indices] - # 找每个位置第一个非零值下标 - index = (annotation != 0).to(torch.long).argmax(dim=0) - if random_color: - color = torch.rand((mask_sum, 1, 1, 3)).to(device) - else: - color = torch.ones((mask_sum, 1, 1, 3)).to(device) * torch.tensor( - [30 / 255, 144 / 255, 255 / 255] - ).to(device) - transparency = torch.ones((mask_sum, 1, 1, 1)).to(device) * 0.6 - visual = torch.cat([color, transparency], dim=-1) - mask_image = torch.unsqueeze(annotation, -1) * visual - # 按index取数,index指每个位置选哪个batch的数,把mask_image转成一个batch的形式 - mask = torch.zeros((height, weight, 4)).to(device) - h_indices, w_indices = torch.meshgrid(torch.arange(height), torch.arange(weight)) - indices = (index[h_indices, w_indices], h_indices, w_indices, slice(None)) - # 使用向量化索引更新show的值 - mask[h_indices, w_indices, :] = mask_image[indices] - mask_cpu = mask.cpu().numpy() - if bbox is not None: - x1, y1, x2, y2 = bbox - ax.add_patch( - plt.Rectangle( - (x1, y1), x2 - x1, y2 - y1, fill=False, edgecolor="b", linewidth=1 - ) - ) - if not retinamask: - mask_cpu = cv2.resize( - mask_cpu, 
(target_width, target_height), interpolation=cv2.INTER_NEAREST - ) - return mask_cpu diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/loading.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/loading.md deleted file mode 100644 index 79c8b278468d019700a1a3337e2c037c7eefdb2c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/loading.md +++ /dev/null @@ -1,463 +0,0 @@ - - -# Load pipelines, models, and schedulers - -[[open-in-colab]] - -Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the [`DiffusionPipeline`] to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system. - -Everything you need for inference or training is accessible with the `from_pretrained()` method. - -This guide will show you how to load: - -- pipelines from the Hub and locally -- different components into a pipeline -- checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights -- models and schedulers - -## Diffusion Pipeline - - - -💡 Skip to the [DiffusionPipeline explained](#diffusionpipeline-explained) section if you are interested in learning in more detail about how the [`DiffusionPipeline`] class works. - - - -The [`DiffusionPipeline`] class is the simplest and most generic way to load any diffusion model from the [Hub](https://huggingface.co/models?library=diffusers). The [`DiffusionPipeline.from_pretrained`] method automatically detects the correct pipeline class from the checkpoint, downloads and caches all the required configuration and weight files, and returns a pipeline instance ready for inference. - -```python -from diffusers import DiffusionPipeline - -repo_id = "runwayml/stable-diffusion-v1-5" -pipe = DiffusionPipeline.from_pretrained(repo_id) -``` - -You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the [`StableDiffusionPipeline`] class: - -```python -from diffusers import StableDiffusionPipeline - -repo_id = "runwayml/stable-diffusion-v1-5" -pipe = StableDiffusionPipeline.from_pretrained(repo_id) -``` - -A checkpoint (such as [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) or [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5)) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class: - -```python -from diffusers import StableDiffusionImg2ImgPipeline - -repo_id = "runwayml/stable-diffusion-v1-5" -pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) -``` - -### Local pipeline - -To load a diffusion pipeline locally, use [`git-lfs`](https://git-lfs.github.com/) to manually download the checkpoint (in this case, [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5)) to your local disk. 
This creates a local folder, `./stable-diffusion-v1-5`, on your disk: - -```bash -git lfs install -git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 -``` - -Then pass the local path to [`~DiffusionPipeline.from_pretrained`]: - -```python -from diffusers import DiffusionPipeline - -repo_id = "./stable-diffusion-v1-5" -stable_diffusion = DiffusionPipeline.from_pretrained(repo_id) -``` - -The [`~DiffusionPipeline.from_pretrained`] method won't download any files from the Hub when it detects a local path, but this also means it won't download and cache the latest changes to a checkpoint. - -### Swap components in a pipeline - -You can customize the default components of any pipeline with another compatible component. Customization is important because: - -- Changing the scheduler is important for exploring the trade-off between generation speed and quality. -- Different components of a model are typically trained independently and you can swap out a component with a better-performing one. -- During finetuning, usually only some components - like the UNet or text encoder - are trained. - -To find out which schedulers are compatible for customization, you can use the `compatibles` method: - -```py -from diffusers import DiffusionPipeline - -repo_id = "runwayml/stable-diffusion-v1-5" -stable_diffusion = DiffusionPipeline.from_pretrained(repo_id) -stable_diffusion.scheduler.compatibles -``` - -Let's use the [`SchedulerMixin.from_pretrained`] method to replace the default [`PNDMScheduler`] with a more performant scheduler, [`EulerDiscreteScheduler`]. The `subfolder="scheduler"` argument is required to load the scheduler configuration from the correct [subfolder](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main/scheduler) of the pipeline repository. - -Then you can pass the new [`EulerDiscreteScheduler`] instance to the `scheduler` argument in [`DiffusionPipeline`]: - -```python -from diffusers import DiffusionPipeline, EulerDiscreteScheduler, DPMSolverMultistepScheduler - -repo_id = "runwayml/stable-diffusion-v1-5" - -scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") - -stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler) -``` - -### Safety checker - -Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a [safety checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) to check generated outputs against known hardcoded NSFW content. If you'd like to disable the safety checker for whatever reason, pass `None` to the `safety_checker` argument: - -```python -from diffusers import DiffusionPipeline - -repo_id = "runwayml/stable-diffusion-v1-5" -stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None) -``` - -### Reuse components across pipelines - -You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. 
Use the [`~DiffusionPipeline.components`] method to save the components: - -```python -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline - -model_id = "runwayml/stable-diffusion-v1-5" -stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id) - -components = stable_diffusion_txt2img.components -``` - -Then you can pass the `components` to another pipeline without reloading the weights into RAM: - -```py -stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) -``` - -You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. For example, to reuse the same components in the text-to-image pipeline, except for the safety checker and feature extractor, in the image-to-image pipeline: - -```py -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline - -model_id = "runwayml/stable-diffusion-v1-5" -stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id) -stable_diffusion_img2img = StableDiffusionImg2ImgPipeline( - vae=stable_diffusion_txt2img.vae, - text_encoder=stable_diffusion_txt2img.text_encoder, - tokenizer=stable_diffusion_txt2img.tokenizer, - unet=stable_diffusion_txt2img.unet, - scheduler=stable_diffusion_txt2img.scheduler, - safety_checker=None, - feature_extractor=None, - requires_safety_checker=False, -) -``` - -## Checkpoint variants - -A checkpoint variant is usually a checkpoint where it's weights are: - -- Stored in a different floating point type for lower precision and lower storage, such as [`torch.float16`](https://pytorch.org/docs/stable/tensors.html#data-types), because it only requires half the bandwidth and storage to download. You can't use this variant if you're continuing training or using a CPU. -- Non-exponential mean averaged (EMA) weights which shouldn't be used for inference. You should use these to continue finetuning a model. - - - -💡 When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of variations (for example, [`stable-diffusion-v1-4`] and [`stable-diffusion-v1-5`]). - - - -Otherwise, a variant is **identical** to the original checkpoint. They have exactly the same serialization format (like [Safetensors](./using_safetensors)), model structure, and weights have identical tensor shapes. - -| **checkpoint type** | **weight name** | **argument for loading weights** | -|---------------------|-------------------------------------|----------------------------------| -| original | diffusion_pytorch_model.bin | | -| floating point | diffusion_pytorch_model.fp16.bin | `variant`, `torch_dtype` | -| non-EMA | diffusion_pytorch_model.non_ema.bin | `variant` | - -There are two important arguments to know for loading variants: - -- `torch_dtype` defines the floating point precision of the loaded checkpoints. For example, if you want to save bandwidth by loading a `fp16` variant, you should specify `torch_dtype=torch.float16` to *convert the weights* to `fp16`. Otherwise, the `fp16` weights are converted to the default `fp32` precision. You can also load the original checkpoint without defining the `variant` argument, and convert it to `fp16` with `torch_dtype=torch.float16`. In this case, the default `fp32` weights are downloaded first, and then they're converted to `fp16` after loading. - -- `variant` defines which files should be loaded from the repository. 
For example, if you want to load a `non_ema` variant from the [`diffusers/stable-diffusion-variants`](https://huggingface.co/diffusers/stable-diffusion-variants/tree/main/unet) repository, you should specify `variant="non_ema"` to download the `non_ema` files. - -```python -from diffusers import DiffusionPipeline -import torch - -# load fp16 variant -stable_diffusion = DiffusionPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16 -) -# load non_ema variant -stable_diffusion = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") -``` - -To save a checkpoint stored in a different floating point type or as a non-EMA variant, use the [`DiffusionPipeline.save_pretrained`] method and specify the `variant` argument. You should try and save a variant to the same folder as the original checkpoint, so you can load both from the same folder: - -```python -from diffusers import DiffusionPipeline - -# save as fp16 variant -stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16") -# save as non-ema variant -stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") -``` - -If you don't save the variant to an existing folder, you must specify the `variant` argument otherwise it'll throw an `Exception` because it can't find the original checkpoint: - -```python -# 👎 this won't work -stable_diffusion = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", torch_dtype=torch.float16) -# 👍 this works -stable_diffusion = DiffusionPipeline.from_pretrained( - "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16 -) -``` - - - -## Models - -Models are loaded from the [`ModelMixin.from_pretrained`] method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, [`~ModelMixin.from_pretrained`] reuses files in the cache instead of redownloading them. - -Models can be loaded from a subfolder with the `subfolder` argument. For example, the model weights for `runwayml/stable-diffusion-v1-5` are stored in the [`unet`](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main/unet) subfolder: - -```python -from diffusers import UNet2DConditionModel - -repo_id = "runwayml/stable-diffusion-v1-5" -model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet") -``` - -Or directly from a repository's [directory](https://huggingface.co/google/ddpm-cifar10-32/tree/main): - -```python -from diffusers import UNet2DModel - -repo_id = "google/ddpm-cifar10-32" -model = UNet2DModel.from_pretrained(repo_id) -``` - -You can also load and save model variants by specifying the `variant` argument in [`ModelMixin.from_pretrained`] and [`ModelMixin.save_pretrained`]: - -```python -from diffusers import UNet2DConditionModel - -model = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non-ema") -model.save_pretrained("./local-unet", variant="non-ema") -``` - -## Schedulers - -Schedulers are loaded from the [`SchedulerMixin.from_pretrained`] method, and unlike models, schedulers are **not parameterized** or **trained**; they are defined by a configuration file. - -Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers. 
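Because a scheduler is fully described by its configuration, you can also instantiate a different scheduler class directly from the configuration of one that is already loaded. A minimal sketch (assuming a pipeline already loaded as `pipe`; `EulerDiscreteScheduler` is only one of several compatible choices):

```python
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Build a new scheduler from the existing scheduler's config and swap it in
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
```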
-For example, the following schedulers are compatible with [`StableDiffusionPipeline`] which means you can load the same scheduler configuration file in any of these classes: - -```python -from diffusers import StableDiffusionPipeline -from diffusers import ( - DDPMScheduler, - DDIMScheduler, - PNDMScheduler, - LMSDiscreteScheduler, - EulerDiscreteScheduler, - EulerAncestralDiscreteScheduler, - DPMSolverMultistepScheduler, -) - -repo_id = "runwayml/stable-diffusion-v1-5" - -ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler") -ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler") -pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler") -lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") -euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") -euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") -dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") - -# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler` -pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm) -``` - -## DiffusionPipeline explained - -As a class method, [`DiffusionPipeline.from_pretrained`] is responsible for two things: - -- Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, [`DiffusionPipeline.from_pretrained`] reuses the cache and won't redownload the files. -- Load the cached weights into the correct pipeline [class](./api/pipelines/overview#diffusers-summary) - retrieved from the `model_index.json` file - and return an instance of it. - -The pipelines underlying folder structure corresponds directly with their class instances. For example, the [`StableDiffusionPipeline`] corresponds to the folder structure in [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5). - -```python -from diffusers import DiffusionPipeline - -repo_id = "runwayml/stable-diffusion-v1-5" -pipeline = DiffusionPipeline.from_pretrained(repo_id) -print(pipeline) -``` - -You'll see pipeline is an instance of [`StableDiffusionPipeline`], which consists of seven components: - -- `"feature_extractor"`: a [`~transformers.CLIPFeatureExtractor`] from 🤗 Transformers. -- `"safety_checker"`: a [component](https://github.com/huggingface/diffusers/blob/e55687e1e15407f60f32242027b7bb8170e58266/src/diffusers/pipelines/stable_diffusion/safety_checker.py#L32) for screening against harmful content. -- `"scheduler"`: an instance of [`PNDMScheduler`]. -- `"text_encoder"`: a [`~transformers.CLIPTextModel`] from 🤗 Transformers. -- `"tokenizer"`: a [`~transformers.CLIPTokenizer`] from 🤗 Transformers. -- `"unet"`: an instance of [`UNet2DConditionModel`]. -- `"vae"` an instance of [`AutoencoderKL`]. 
- -```json -StableDiffusionPipeline { - "feature_extractor": [ - "transformers", - "CLIPImageProcessor" - ], - "safety_checker": [ - "stable_diffusion", - "StableDiffusionSafetyChecker" - ], - "scheduler": [ - "diffusers", - "PNDMScheduler" - ], - "text_encoder": [ - "transformers", - "CLIPTextModel" - ], - "tokenizer": [ - "transformers", - "CLIPTokenizer" - ], - "unet": [ - "diffusers", - "UNet2DConditionModel" - ], - "vae": [ - "diffusers", - "AutoencoderKL" - ] -} -``` - -Compare the components of the pipeline instance to the [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) folder structure, and you'll see there is a separate folder for each of the components in the repository: - -``` -. -├── feature_extractor -│   └── preprocessor_config.json -├── model_index.json -├── safety_checker -│   ├── config.json -│   └── pytorch_model.bin -├── scheduler -│   └── scheduler_config.json -├── text_encoder -│   ├── config.json -│   └── pytorch_model.bin -├── tokenizer -│   ├── merges.txt -│   ├── special_tokens_map.json -│   ├── tokenizer_config.json -│   └── vocab.json -├── unet -│   ├── config.json -│   ├── diffusion_pytorch_model.bin -└── vae - ├── config.json - ├── diffusion_pytorch_model.bin -``` - -You can access each of the components of the pipeline as an attribute to view its configuration: - -```py -pipeline.tokenizer -CLIPTokenizer( - name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer", - vocab_size=49408, - model_max_length=77, - is_fast=False, - padding_side="right", - truncation_side="right", - special_tokens={ - "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), - "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), - "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), - "pad_token": "<|endoftext|>", - }, -) -``` - -Every pipeline expects a `model_index.json` file that tells the [`DiffusionPipeline`]: - -- which pipeline class to load from `_class_name` -- which version of 🧨 Diffusers was used to create the model in `_diffusers_version` -- what components from which library are stored in the subfolders (`name` corresponds to the component and subfolder name, `library` corresponds to the name of the library to load the class from, and `class` corresponds to the class name) - -```json -{ - "_class_name": "StableDiffusionPipeline", - "_diffusers_version": "0.6.0", - "feature_extractor": [ - "transformers", - "CLIPImageProcessor" - ], - "safety_checker": [ - "stable_diffusion", - "StableDiffusionSafetyChecker" - ], - "scheduler": [ - "diffusers", - "PNDMScheduler" - ], - "text_encoder": [ - "transformers", - "CLIPTextModel" - ], - "tokenizer": [ - "transformers", - "CLIPTokenizer" - ], - "unet": [ - "diffusers", - "UNet2DConditionModel" - ], - "vae": [ - "diffusers", - "AutoencoderKL" - ] -} -``` \ No newline at end of file diff --git a/spaces/Andy1621/uniformer_image_detection/configs/paa/paa_r50_fpn_1.5x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/paa/paa_r50_fpn_1.5x_coco.py deleted file mode 100644 index aabce4af987aa5504e1748e10b9955f760a013e1..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/paa/paa_r50_fpn_1.5x_coco.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = './paa_r50_fpn_1x_coco.py' -lr_config = dict(step=[12, 16]) -runner 
= dict(type='EpochBasedRunner', max_epochs=18) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/point_rend.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/point_rend.py deleted file mode 100644 index 808ef2258ae88301d349db3aaa2711f223e5c971..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/point_rend.py +++ /dev/null @@ -1,29 +0,0 @@ -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class PointRend(TwoStageDetector): - """PointRend: Image Segmentation as Rendering - - This detector is the implementation of - `PointRend `_. - - """ - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None): - super(PointRend, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d16-mg124_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d16-mg124_512x1024_80k_cityscapes.py deleted file mode 100644 index de4a8a5e9f030f1e8a8802596885186163f23eed..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d16-mg124_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,11 +0,0 @@ -_base_ = './deeplabv3_r50-d8_512x1024_80k_cityscapes.py' -model = dict( - pretrained='open-mmlab://resnet101_v1c', - backbone=dict( - depth=101, - dilations=(1, 1, 1, 2), - strides=(1, 2, 2, 1), - multi_grid=(1, 2, 4)), - decode_head=dict( - dilations=(1, 6, 12, 18), - sampler=dict(type='OHEMPixelSampler', min_kept=100000))) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_80k_cityscapes.py deleted file mode 100644 index c53ec41baf9043029549b4893b2380372ea5ecd9..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,11 +0,0 @@ -_base_ = './deeplabv3plus_r50-d8_512x1024_80k_cityscapes.py' -model = dict( - pretrained='open-mmlab://resnet101_v1c', - backbone=dict( - depth=101, - dilations=(1, 1, 1, 2), - strides=(1, 2, 2, 1), - multi_grid=(1, 2, 4)), - decode_head=dict( - dilations=(1, 6, 12, 18), - sampler=dict(type='OHEMPixelSampler', min_kept=100000))) diff --git a/spaces/Anon4review/HIPTDemo/vision_transformer.py b/spaces/Anon4review/HIPTDemo/vision_transformer.py deleted file mode 100644 index 59a1106789675c20561369065c4c30954951395e..0000000000000000000000000000000000000000 --- a/spaces/Anon4review/HIPTDemo/vision_transformer.py +++ /dev/null @@ -1,330 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Mostly copy-paste from timm library.
-https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py
-"""
-import math
-import warnings
-from functools import partial
-
-import torch
-import torch.nn as nn
-
-
-def _no_grad_trunc_normal_(tensor, mean, std, a, b):
-    # Cut & paste from PyTorch official master until it's in a few official releases - RW
-    # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
-    def norm_cdf(x):
-        # Computes standard normal cumulative distribution function
-        return (1. + math.erf(x / math.sqrt(2.))) / 2.
-
-    if (mean < a - 2 * std) or (mean > b + 2 * std):
-        warnings.warn("mean is more than 2 std from [a, b] in nn.init.trunc_normal_. "
-                      "The distribution of values may be incorrect.",
-                      stacklevel=2)
-
-    with torch.no_grad():
-        # Values are generated by using a truncated uniform distribution and
-        # then using the inverse CDF for the normal distribution.
-        # Get upper and lower cdf values
-        l = norm_cdf((a - mean) / std)
-        u = norm_cdf((b - mean) / std)
-
-        # Uniformly fill tensor with values from [l, u], then translate to
-        # [2l-1, 2u-1].
-        tensor.uniform_(2 * l - 1, 2 * u - 1)
-
-        # Use inverse cdf transform for normal distribution to get truncated
-        # standard normal
-        tensor.erfinv_()
-
-        # Transform to proper mean, std
-        tensor.mul_(std * math.sqrt(2.))
-        tensor.add_(mean)
-
-        # Clamp to ensure it's in the proper range
-        tensor.clamp_(min=a, max=b)
-        return tensor
-
-
-def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.):
-    # type: (Tensor, float, float, float, float) -> Tensor
-    return _no_grad_trunc_normal_(tensor, mean, std, a, b)
-
-
-def drop_path(x, drop_prob: float = 0., training: bool = False):
-    if drop_prob == 0. or not training:
-        return x
-    keep_prob = 1 - drop_prob
-    shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
-    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
-    random_tensor.floor_()  # binarize
-    output = x.div(keep_prob) * random_tensor
-    return output
-
-
-class DropPath(nn.Module):
-    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
- """ - def __init__(self, drop_prob=None): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class Attention(nn.Module): - def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x, attn - - -class Block(nn.Module): - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x, return_attention=False): - y, attn = self.attn(self.norm1(x)) - if return_attention: - return attn - x = x + self.drop_path(y) - x = x + self.drop_path(self.mlp(self.norm2(x))) - return x - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - """ - def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768): - super().__init__() - num_patches = (img_size // patch_size) * (img_size // patch_size) - self.img_size = img_size - self.patch_size = patch_size - self.num_patches = num_patches - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x): - B, C, H, W = x.shape - x = self.proj(x).flatten(2).transpose(1, 2) - return x - - -class VisionTransformer(nn.Module): - """ Vision Transformer """ - def __init__(self, img_size=[224], patch_size=16, in_chans=3, num_classes=0, embed_dim=768, depth=12, - num_heads=12, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop_rate=0., attn_drop_rate=0., - drop_path_rate=0., norm_layer=nn.LayerNorm, **kwargs): - super().__init__() - self.num_features = self.embed_dim = embed_dim - - self.patch_embed = PatchEmbed( - img_size=img_size[0], patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim) - num_patches = self.patch_embed.num_patches - - self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) - self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim)) - self.pos_drop = nn.Dropout(p=drop_rate) - - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule - self.blocks = nn.ModuleList([ - Block( - dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer) - for i in range(depth)]) - self.norm = norm_layer(embed_dim) - - # Classifier head - self.head = nn.Linear(embed_dim, num_classes) if num_classes > 0 else nn.Identity() - - trunc_normal_(self.pos_embed, std=.02) - trunc_normal_(self.cls_token, std=.02) - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - def interpolate_pos_encoding(self, x, w, h): - npatch = x.shape[1] - 1 - N = self.pos_embed.shape[1] - 1 - if npatch == N and w == h: - return self.pos_embed - class_pos_embed = self.pos_embed[:, 0] - patch_pos_embed = self.pos_embed[:, 1:] - dim = x.shape[-1] - w0 = w // self.patch_embed.patch_size - h0 = h // self.patch_embed.patch_size - # we add a small number to avoid floating point error in the interpolation - # see discussion at https://github.com/facebookresearch/dino/issues/8 - w0, h0 = w0 + 0.1, h0 + 0.1 - patch_pos_embed = nn.functional.interpolate( - patch_pos_embed.reshape(1, int(math.sqrt(N)), int(math.sqrt(N)), dim).permute(0, 3, 1, 2), - scale_factor=(w0 / math.sqrt(N), h0 / math.sqrt(N)), - mode='bicubic', - ) - assert int(w0) == patch_pos_embed.shape[-2] and int(h0) == patch_pos_embed.shape[-1] - patch_pos_embed = patch_pos_embed.permute(0, 2, 3, 1).view(1, -1, dim) - return torch.cat((class_pos_embed.unsqueeze(0), patch_pos_embed), dim=1) - - def 
prepare_tokens(self, x): - B, nc, w, h = x.shape - x = self.patch_embed(x) # patch linear embedding - - # add the [CLS] token to the embed patch tokens - cls_tokens = self.cls_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, x), dim=1) - - # add positional encoding to each token - x = x + self.interpolate_pos_encoding(x, w, h) - - return self.pos_drop(x) - - def forward(self, x): - x = self.prepare_tokens(x) - for blk in self.blocks: - x = blk(x) - x = self.norm(x) - return x[:, 0] - - def get_last_selfattention(self, x): - x = self.prepare_tokens(x) - for i, blk in enumerate(self.blocks): - if i < len(self.blocks) - 1: - x = blk(x) - else: - # return attention of the last block - return blk(x, return_attention=True) - - def get_intermediate_layers(self, x, n=1): - x = self.prepare_tokens(x) - # we return the output tokens from the `n` last blocks - output = [] - for i, blk in enumerate(self.blocks): - x = blk(x) - if len(self.blocks) - i <= n: - output.append(self.norm(x)) - return output - - -def vit_tiny(patch_size=16, **kwargs): - model = VisionTransformer( - patch_size=patch_size, embed_dim=192, depth=12, num_heads=3, mlp_ratio=4, - qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs) - return model - - -def vit_small(patch_size=16, **kwargs): - model = VisionTransformer( - patch_size=patch_size, embed_dim=384, depth=12, num_heads=6, mlp_ratio=4, - qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs) - return model - - -def vit_base(patch_size=16, **kwargs): - model = VisionTransformer( - patch_size=patch_size, embed_dim=768, depth=12, num_heads=12, mlp_ratio=4, - qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs) - return model - - -class DINOHead(nn.Module): - def __init__(self, in_dim, out_dim, use_bn=False, norm_last_layer=True, nlayers=3, hidden_dim=2048, bottleneck_dim=256): - super().__init__() - nlayers = max(nlayers, 1) - if nlayers == 1: - self.mlp = nn.Linear(in_dim, bottleneck_dim) - else: - layers = [nn.Linear(in_dim, hidden_dim)] - if use_bn: - layers.append(nn.BatchNorm1d(hidden_dim)) - layers.append(nn.GELU()) - for _ in range(nlayers - 2): - layers.append(nn.Linear(hidden_dim, hidden_dim)) - if use_bn: - layers.append(nn.BatchNorm1d(hidden_dim)) - layers.append(nn.GELU()) - layers.append(nn.Linear(hidden_dim, bottleneck_dim)) - self.mlp = nn.Sequential(*layers) - self.apply(self._init_weights) - self.last_layer = nn.utils.weight_norm(nn.Linear(bottleneck_dim, out_dim, bias=False)) - self.last_layer.weight_g.data.fill_(1) - if norm_last_layer: - self.last_layer.weight_g.requires_grad = False - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x = self.mlp(x) - x = nn.functional.normalize(x, dim=-1, p=2) - x = self.last_layer(x) - return x diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py deleted file mode 100644 index ba1d42d0c5781f56dc177d860d856bb34adce555..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py +++ /dev/null @@ -1,57 +0,0 @@ -# dataset settings -dataset_type = 'PascalVOCDataset' -data_root = 'data/VOCdevkit/VOC2012' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 
57.12, 57.375], to_rgb=True) -crop_size = (512, 512) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 512), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClass', - split='ImageSets/Segmentation/train.txt', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClass', - split='ImageSets/Segmentation/val.txt', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClass', - split='ImageSets/Segmentation/val.txt', - pipeline=test_pipeline)) diff --git a/spaces/Anonymous-sub/Rerender/gmflow_module/evaluate.py b/spaces/Anonymous-sub/Rerender/gmflow_module/evaluate.py deleted file mode 100644 index e2aac7735a86f7f6c8a3f32d62fdc2b55ee75f23..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/gmflow_module/evaluate.py +++ /dev/null @@ -1,689 +0,0 @@ -from PIL import Image -import os -import time -import numpy as np -import torch -import torch.nn.functional as F - -import data -from utils import frame_utils -from utils.flow_viz import save_vis_flow_tofile - -from utils.utils import InputPadder, compute_out_of_boundary_mask -from glob import glob -from gmflow.geometry import forward_backward_consistency_check - - -@torch.no_grad() -def create_sintel_submission(model, - output_path='sintel_submission', - padding_factor=8, - save_vis_flow=False, - no_save_flo=False, - attn_splits_list=None, - corr_radius_list=None, - prop_radius_list=None, - ): - """ Create submission for the Sintel leaderboard """ - model.eval() - for dstype in ['clean', 'final']: - test_dataset = data.MpiSintel(split='test', aug_params=None, dstype=dstype) - - flow_prev, sequence_prev = None, None - for test_id in range(len(test_dataset)): - image1, image2, (sequence, frame) = test_dataset[test_id] - if sequence != sequence_prev: - flow_prev = None - - padder = InputPadder(image1.shape, padding_factor=padding_factor) - image1, image2 = padder.pad(image1[None].cuda(), image2[None].cuda()) - - results_dict = model(image1, image2, - attn_splits_list=attn_splits_list, - corr_radius_list=corr_radius_list, - prop_radius_list=prop_radius_list, - ) - - flow_pr = results_dict['flow_preds'][-1] # [B, 2, H, W] - - flow = padder.unpad(flow_pr[0]).permute(1, 2, 0).cpu().numpy() - - output_dir = os.path.join(output_path, dstype, sequence) - output_file = os.path.join(output_dir, 'frame%04d.flo' % (frame + 1)) - - if not os.path.exists(output_dir): - os.makedirs(output_dir) - - if not no_save_flo: - frame_utils.writeFlow(output_file, 
flow) - sequence_prev = sequence - - # Save vis flow - if save_vis_flow: - vis_flow_file = output_file.replace('.flo', '.png') - save_vis_flow_tofile(flow, vis_flow_file) - - -@torch.no_grad() -def create_kitti_submission(model, - output_path='kitti_submission', - padding_factor=8, - save_vis_flow=False, - attn_splits_list=None, - corr_radius_list=None, - prop_radius_list=None, - ): - """ Create submission for the Sintel leaderboard """ - model.eval() - test_dataset = data.KITTI(split='testing', aug_params=None) - - if not os.path.exists(output_path): - os.makedirs(output_path) - - for test_id in range(len(test_dataset)): - image1, image2, (frame_id,) = test_dataset[test_id] - padder = InputPadder(image1.shape, mode='kitti', padding_factor=padding_factor) - image1, image2 = padder.pad(image1[None].cuda(), image2[None].cuda()) - - results_dict = model(image1, image2, - attn_splits_list=attn_splits_list, - corr_radius_list=corr_radius_list, - prop_radius_list=prop_radius_list, - ) - - flow_pr = results_dict['flow_preds'][-1] - - flow = padder.unpad(flow_pr[0]).permute(1, 2, 0).cpu().numpy() - - output_filename = os.path.join(output_path, frame_id) - - if save_vis_flow: - vis_flow_file = output_filename - save_vis_flow_tofile(flow, vis_flow_file) - else: - frame_utils.writeFlowKITTI(output_filename, flow) - - -@torch.no_grad() -def validate_chairs(model, - with_speed_metric=False, - attn_splits_list=False, - corr_radius_list=False, - prop_radius_list=False, - ): - """ Perform evaluation on the FlyingChairs (test) split """ - model.eval() - epe_list = [] - results = {} - - if with_speed_metric: - s0_10_list = [] - s10_40_list = [] - s40plus_list = [] - - val_dataset = data.FlyingChairs(split='validation') - - print('Number of validation image pairs: %d' % len(val_dataset)) - - for val_id in range(len(val_dataset)): - image1, image2, flow_gt, _ = val_dataset[val_id] - - image1 = image1[None].cuda() - image2 = image2[None].cuda() - - results_dict = model(image1, image2, - attn_splits_list=attn_splits_list, - corr_radius_list=corr_radius_list, - prop_radius_list=prop_radius_list, - ) - - flow_pr = results_dict['flow_preds'][-1] # [B, 2, H, W] - - assert flow_pr.size()[-2:] == flow_gt.size()[-2:] - - epe = torch.sum((flow_pr[0].cpu() - flow_gt) ** 2, dim=0).sqrt() - epe_list.append(epe.view(-1).numpy()) - - if with_speed_metric: - flow_gt_speed = torch.sum(flow_gt ** 2, dim=0).sqrt() - valid_mask = (flow_gt_speed < 10) - if valid_mask.max() > 0: - s0_10_list.append(epe[valid_mask].cpu().numpy()) - - valid_mask = (flow_gt_speed >= 10) * (flow_gt_speed <= 40) - if valid_mask.max() > 0: - s10_40_list.append(epe[valid_mask].cpu().numpy()) - - valid_mask = (flow_gt_speed > 40) - if valid_mask.max() > 0: - s40plus_list.append(epe[valid_mask].cpu().numpy()) - - epe_all = np.concatenate(epe_list) - epe = np.mean(epe_all) - px1 = np.mean(epe_all > 1) - px3 = np.mean(epe_all > 3) - px5 = np.mean(epe_all > 5) - print("Validation Chairs EPE: %.3f, 1px: %.3f, 3px: %.3f, 5px: %.3f" % (epe, px1, px3, px5)) - results['chairs_epe'] = epe - results['chairs_1px'] = px1 - results['chairs_3px'] = px3 - results['chairs_5px'] = px5 - - if with_speed_metric: - s0_10 = np.mean(np.concatenate(s0_10_list)) - s10_40 = np.mean(np.concatenate(s10_40_list)) - s40plus = np.mean(np.concatenate(s40plus_list)) - - print("Validation Chairs s0_10: %.3f, s10_40: %.3f, s40+: %.3f" % ( - s0_10, - s10_40, - s40plus)) - - results['chairs_s0_10'] = s0_10 - results['chairs_s10_40'] = s10_40 - results['chairs_s40+'] = s40plus - - return 
results - - -@torch.no_grad() -def validate_things(model, - padding_factor=8, - with_speed_metric=False, - max_val_flow=400, - val_things_clean_only=True, - attn_splits_list=False, - corr_radius_list=False, - prop_radius_list=False, - ): - """ Peform validation using the Things (test) split """ - model.eval() - results = {} - - for dstype in ['frames_cleanpass', 'frames_finalpass']: - if val_things_clean_only: - if dstype == 'frames_finalpass': - continue - - val_dataset = data.FlyingThings3D(dstype=dstype, test_set=True, validate_subset=True, - ) - print('Number of validation image pairs: %d' % len(val_dataset)) - epe_list = [] - - if with_speed_metric: - s0_10_list = [] - s10_40_list = [] - s40plus_list = [] - - for val_id in range(len(val_dataset)): - image1, image2, flow_gt, valid_gt = val_dataset[val_id] - image1 = image1[None].cuda() - image2 = image2[None].cuda() - - padder = InputPadder(image1.shape, padding_factor=padding_factor) - image1, image2 = padder.pad(image1, image2) - - results_dict = model(image1, image2, - attn_splits_list=attn_splits_list, - corr_radius_list=corr_radius_list, - prop_radius_list=prop_radius_list, - ) - flow_pr = results_dict['flow_preds'][-1] - - flow = padder.unpad(flow_pr[0]).cpu() - - # Evaluation on flow <= max_val_flow - flow_gt_speed = torch.sum(flow_gt ** 2, dim=0).sqrt() - valid_gt = valid_gt * (flow_gt_speed < max_val_flow) - valid_gt = valid_gt.contiguous() - - epe = torch.sum((flow - flow_gt) ** 2, dim=0).sqrt() - val = valid_gt >= 0.5 - epe_list.append(epe[val].cpu().numpy()) - - if with_speed_metric: - valid_mask = (flow_gt_speed < 10) * (valid_gt >= 0.5) - if valid_mask.max() > 0: - s0_10_list.append(epe[valid_mask].cpu().numpy()) - - valid_mask = (flow_gt_speed >= 10) * (flow_gt_speed <= 40) * (valid_gt >= 0.5) - if valid_mask.max() > 0: - s10_40_list.append(epe[valid_mask].cpu().numpy()) - - valid_mask = (flow_gt_speed > 40) * (valid_gt >= 0.5) - if valid_mask.max() > 0: - s40plus_list.append(epe[valid_mask].cpu().numpy()) - - epe_list = np.mean(np.concatenate(epe_list)) - - epe = np.mean(epe_list) - - if dstype == 'frames_cleanpass': - dstype = 'things_clean' - if dstype == 'frames_finalpass': - dstype = 'things_final' - - print("Validation Things test set (%s) EPE: %.3f" % (dstype, epe)) - results[dstype + '_epe'] = epe - - if with_speed_metric: - s0_10 = np.mean(np.concatenate(s0_10_list)) - s10_40 = np.mean(np.concatenate(s10_40_list)) - s40plus = np.mean(np.concatenate(s40plus_list)) - - print("Validation Things test (%s) s0_10: %.3f, s10_40: %.3f, s40+: %.3f" % ( - dstype, s0_10, - s10_40, - s40plus)) - - results[dstype + '_s0_10'] = s0_10 - results[dstype + '_s10_40'] = s10_40 - results[dstype + '_s40+'] = s40plus - - return results - - -@torch.no_grad() -def validate_sintel(model, - count_time=False, - padding_factor=8, - with_speed_metric=False, - evaluate_matched_unmatched=False, - attn_splits_list=False, - corr_radius_list=False, - prop_radius_list=False, - ): - """ Peform validation using the Sintel (train) split """ - model.eval() - results = {} - - if count_time: - total_time = 0 - num_runs = 100 - - for dstype in ['clean', 'final']: - val_dataset = data.MpiSintel(split='training', dstype=dstype, - load_occlusion=evaluate_matched_unmatched, - ) - - print('Number of validation image pairs: %d' % len(val_dataset)) - epe_list = [] - - if evaluate_matched_unmatched: - matched_epe_list = [] - unmatched_epe_list = [] - - if with_speed_metric: - s0_10_list = [] - s10_40_list = [] - s40plus_list = [] - - for val_id in 
range(len(val_dataset)): - if evaluate_matched_unmatched: - image1, image2, flow_gt, valid, noc_valid = val_dataset[val_id] - - # compuate in-image-plane valid mask - in_image_valid = compute_out_of_boundary_mask(flow_gt.unsqueeze(0)).squeeze(0) # [H, W] - - else: - image1, image2, flow_gt, _ = val_dataset[val_id] - - image1 = image1[None].cuda() - image2 = image2[None].cuda() - - padder = InputPadder(image1.shape, padding_factor=padding_factor) - image1, image2 = padder.pad(image1, image2) - - if count_time and val_id >= 5: # 5 warmup - torch.cuda.synchronize() - time_start = time.perf_counter() - - results_dict = model(image1, image2, - attn_splits_list=attn_splits_list, - corr_radius_list=corr_radius_list, - prop_radius_list=prop_radius_list, - ) - - # useful when using parallel branches - flow_pr = results_dict['flow_preds'][-1] - - if count_time and val_id >= 5: - torch.cuda.synchronize() - total_time += time.perf_counter() - time_start - - if val_id >= num_runs + 4: - break - - flow = padder.unpad(flow_pr[0]).cpu() - - epe = torch.sum((flow - flow_gt) ** 2, dim=0).sqrt() - epe_list.append(epe.view(-1).numpy()) - - if evaluate_matched_unmatched: - matched_valid_mask = (noc_valid > 0.5) & (in_image_valid > 0.5) - - if matched_valid_mask.max() > 0: - matched_epe_list.append(epe[matched_valid_mask].cpu().numpy()) - unmatched_epe_list.append(epe[~matched_valid_mask].cpu().numpy()) - - if with_speed_metric: - flow_gt_speed = torch.sum(flow_gt ** 2, dim=0).sqrt() - valid_mask = (flow_gt_speed < 10) - if valid_mask.max() > 0: - s0_10_list.append(epe[valid_mask].cpu().numpy()) - - valid_mask = (flow_gt_speed >= 10) * (flow_gt_speed <= 40) - if valid_mask.max() > 0: - s10_40_list.append(epe[valid_mask].cpu().numpy()) - - valid_mask = (flow_gt_speed > 40) - if valid_mask.max() > 0: - s40plus_list.append(epe[valid_mask].cpu().numpy()) - - epe_all = np.concatenate(epe_list) - epe = np.mean(epe_all) - px1 = np.mean(epe_all > 1) - px3 = np.mean(epe_all > 3) - px5 = np.mean(epe_all > 5) - - dstype_ori = dstype - - print("Validation Sintel (%s) EPE: %.3f, 1px: %.3f, 3px: %.3f, 5px: %.3f" % (dstype_ori, epe, px1, px3, px5)) - - dstype = 'sintel_' + dstype - - results[dstype + '_epe'] = np.mean(epe_list) - results[dstype + '_1px'] = px1 - results[dstype + '_3px'] = px3 - results[dstype + '_5px'] = px5 - - if with_speed_metric: - s0_10 = np.mean(np.concatenate(s0_10_list)) - s10_40 = np.mean(np.concatenate(s10_40_list)) - s40plus = np.mean(np.concatenate(s40plus_list)) - - print("Validation Sintel (%s) s0_10: %.3f, s10_40: %.3f, s40+: %.3f" % ( - dstype_ori, s0_10, - s10_40, - s40plus)) - - results[dstype + '_s0_10'] = s0_10 - results[dstype + '_s10_40'] = s10_40 - results[dstype + '_s40+'] = s40plus - - if count_time: - print('Time: %.6fs' % (total_time / num_runs)) - break # only the clean pass when counting time - - if evaluate_matched_unmatched: - matched_epe = np.mean(np.concatenate(matched_epe_list)) - unmatched_epe = np.mean(np.concatenate(unmatched_epe_list)) - - print('Validatation Sintel (%s) matched epe: %.3f, unmatched epe: %.3f' % ( - dstype_ori, matched_epe, unmatched_epe)) - - results[dstype + '_matched'] = matched_epe - results[dstype + '_unmatched'] = unmatched_epe - - return results - - -@torch.no_grad() -def validate_kitti(model, - padding_factor=8, - with_speed_metric=False, - average_over_pixels=True, - attn_splits_list=False, - corr_radius_list=False, - prop_radius_list=False, - ): - """ Peform validation using the KITTI-2015 (train) split """ - model.eval() - - val_dataset = 
data.KITTI(split='training') - print('Number of validation image pairs: %d' % len(val_dataset)) - - out_list, epe_list = [], [] - results = {} - - if with_speed_metric: - if average_over_pixels: - s0_10_list = [] - s10_40_list = [] - s40plus_list = [] - else: - s0_10_epe_sum = 0 - s0_10_valid_samples = 0 - s10_40_epe_sum = 0 - s10_40_valid_samples = 0 - s40plus_epe_sum = 0 - s40plus_valid_samples = 0 - - for val_id in range(len(val_dataset)): - image1, image2, flow_gt, valid_gt = val_dataset[val_id] - image1 = image1[None].cuda() - image2 = image2[None].cuda() - - padder = InputPadder(image1.shape, mode='kitti', padding_factor=padding_factor) - image1, image2 = padder.pad(image1, image2) - - results_dict = model(image1, image2, - attn_splits_list=attn_splits_list, - corr_radius_list=corr_radius_list, - prop_radius_list=prop_radius_list, - ) - - # useful when using parallel branches - flow_pr = results_dict['flow_preds'][-1] - - flow = padder.unpad(flow_pr[0]).cpu() - - epe = torch.sum((flow - flow_gt) ** 2, dim=0).sqrt() - mag = torch.sum(flow_gt ** 2, dim=0).sqrt() - - if with_speed_metric: - # flow_gt_speed = torch.sum(flow_gt ** 2, dim=0).sqrt() - flow_gt_speed = mag - - if average_over_pixels: - valid_mask = (flow_gt_speed < 10) * (valid_gt >= 0.5) # note KITTI GT is sparse - if valid_mask.max() > 0: - s0_10_list.append(epe[valid_mask].cpu().numpy()) - - valid_mask = (flow_gt_speed >= 10) * (flow_gt_speed <= 40) * (valid_gt >= 0.5) - if valid_mask.max() > 0: - s10_40_list.append(epe[valid_mask].cpu().numpy()) - - valid_mask = (flow_gt_speed > 40) * (valid_gt >= 0.5) - if valid_mask.max() > 0: - s40plus_list.append(epe[valid_mask].cpu().numpy()) - - else: - valid_mask = (flow_gt_speed < 10) * (valid_gt >= 0.5) # note KITTI GT is sparse - if valid_mask.max() > 0: - s0_10_epe_sum += (epe * valid_mask).sum() / valid_mask.sum() - s0_10_valid_samples += 1 - - valid_mask = (flow_gt_speed >= 10) * (flow_gt_speed <= 40) * (valid_gt >= 0.5) - if valid_mask.max() > 0: - s10_40_epe_sum += (epe * valid_mask).sum() / valid_mask.sum() - s10_40_valid_samples += 1 - - valid_mask = (flow_gt_speed > 40) * (valid_gt >= 0.5) - if valid_mask.max() > 0: - s40plus_epe_sum += (epe * valid_mask).sum() / valid_mask.sum() - s40plus_valid_samples += 1 - - epe = epe.view(-1) - mag = mag.view(-1) - val = valid_gt.view(-1) >= 0.5 - - out = ((epe > 3.0) & ((epe / mag) > 0.05)).float() - - if average_over_pixels: - epe_list.append(epe[val].cpu().numpy()) - else: - epe_list.append(epe[val].mean().item()) - - out_list.append(out[val].cpu().numpy()) - - if average_over_pixels: - epe_list = np.concatenate(epe_list) - else: - epe_list = np.array(epe_list) - out_list = np.concatenate(out_list) - - epe = np.mean(epe_list) - f1 = 100 * np.mean(out_list) - - print("Validation KITTI EPE: %.3f, F1-all: %.3f" % (epe, f1)) - results['kitti_epe'] = epe - results['kitti_f1'] = f1 - - if with_speed_metric: - if average_over_pixels: - s0_10 = np.mean(np.concatenate(s0_10_list)) - s10_40 = np.mean(np.concatenate(s10_40_list)) - s40plus = np.mean(np.concatenate(s40plus_list)) - else: - s0_10 = s0_10_epe_sum / s0_10_valid_samples - s10_40 = s10_40_epe_sum / s10_40_valid_samples - s40plus = s40plus_epe_sum / s40plus_valid_samples - - print("Validation KITTI s0_10: %.3f, s10_40: %.3f, s40+: %.3f" % ( - s0_10, - s10_40, - s40plus)) - - results['kitti_s0_10'] = s0_10 - results['kitti_s10_40'] = s10_40 - results['kitti_s40+'] = s40plus - - return results - - -@torch.no_grad() -def inference_on_dir(model, - inference_dir, - 
output_path='output', - padding_factor=8, - inference_size=None, - paired_data=False, # dir of paired testdata instead of a sequence - save_flo_flow=False, # save as .flo for quantative evaluation - attn_splits_list=None, - corr_radius_list=None, - prop_radius_list=None, - pred_bidir_flow=False, - fwd_bwd_consistency_check=False, - ): - """ Inference on a directory """ - model.eval() - - if fwd_bwd_consistency_check: - assert pred_bidir_flow - - if not os.path.exists(output_path): - os.makedirs(output_path) - - filenames = sorted(glob(inference_dir + '/*')) - print('%d images found' % len(filenames)) - - stride = 2 if paired_data else 1 - - if paired_data: - assert len(filenames) % 2 == 0 - - for test_id in range(0, len(filenames) - 1, stride): - - image1 = frame_utils.read_gen(filenames[test_id]) - image2 = frame_utils.read_gen(filenames[test_id + 1]) - - image1 = np.array(image1).astype(np.uint8) - image2 = np.array(image2).astype(np.uint8) - - if len(image1.shape) == 2: # gray image, for example, HD1K - image1 = np.tile(image1[..., None], (1, 1, 3)) - image2 = np.tile(image2[..., None], (1, 1, 3)) - else: - image1 = image1[..., :3] - image2 = image2[..., :3] - - image1 = torch.from_numpy(image1).permute(2, 0, 1).float() - image2 = torch.from_numpy(image2).permute(2, 0, 1).float() - - if inference_size is None: - padder = InputPadder(image1.shape, padding_factor=padding_factor) - image1, image2 = padder.pad(image1[None].cuda(), image2[None].cuda()) - else: - image1, image2 = image1[None].cuda(), image2[None].cuda() - - # resize before inference - if inference_size is not None: - assert isinstance(inference_size, list) or isinstance(inference_size, tuple) - ori_size = image1.shape[-2:] - image1 = F.interpolate(image1, size=inference_size, mode='bilinear', - align_corners=True) - image2 = F.interpolate(image2, size=inference_size, mode='bilinear', - align_corners=True) - - results_dict = model(image1, image2, - attn_splits_list=attn_splits_list, - corr_radius_list=corr_radius_list, - prop_radius_list=prop_radius_list, - pred_bidir_flow=pred_bidir_flow, - ) - - flow_pr = results_dict['flow_preds'][-1] # [B, 2, H, W] - - # resize back - if inference_size is not None: - flow_pr = F.interpolate(flow_pr, size=ori_size, mode='bilinear', - align_corners=True) - flow_pr[:, 0] = flow_pr[:, 0] * ori_size[-1] / inference_size[-1] - flow_pr[:, 1] = flow_pr[:, 1] * ori_size[-2] / inference_size[-2] - - if inference_size is None: - flow = padder.unpad(flow_pr[0]).permute(1, 2, 0).cpu().numpy() # [H, W, 2] - else: - flow = flow_pr[0].permute(1, 2, 0).cpu().numpy() # [H, W, 2] - - output_file = os.path.join(output_path, os.path.basename(filenames[test_id])[:-4] + '_flow.png') - - # save vis flow - save_vis_flow_tofile(flow, output_file) - - # also predict backward flow - if pred_bidir_flow: - assert flow_pr.size(0) == 2 # [2, H, W, 2] - - if inference_size is None: - flow_bwd = padder.unpad(flow_pr[1]).permute(1, 2, 0).cpu().numpy() # [H, W, 2] - else: - flow_bwd = flow_pr[1].permute(1, 2, 0).cpu().numpy() # [H, W, 2] - - output_file = os.path.join(output_path, os.path.basename(filenames[test_id])[:-4] + '_flow_bwd.png') - - # save vis flow - save_vis_flow_tofile(flow_bwd, output_file) - - # forward-backward consistency check - # occlusion is 1 - if fwd_bwd_consistency_check: - if inference_size is None: - fwd_flow = padder.unpad(flow_pr[0]).unsqueeze(0) # [1, 2, H, W] - bwd_flow = padder.unpad(flow_pr[1]).unsqueeze(0) # [1, 2, H, W] - else: - fwd_flow = flow_pr[0].unsqueeze(0) - bwd_flow = 
flow_pr[1].unsqueeze(0)
-
-            fwd_occ, bwd_occ = forward_backward_consistency_check(fwd_flow, bwd_flow)  # [1, H, W] float
-
-            fwd_occ_file = os.path.join(output_path, os.path.basename(filenames[test_id])[:-4] + '_occ.png')
-            bwd_occ_file = os.path.join(output_path, os.path.basename(filenames[test_id])[:-4] + '_occ_bwd.png')
-
-            Image.fromarray((fwd_occ[0].cpu().numpy() * 255.).astype(np.uint8)).save(fwd_occ_file)
-            Image.fromarray((bwd_occ[0].cpu().numpy() * 255.).astype(np.uint8)).save(bwd_occ_file)
-
-        if save_flo_flow:
-            output_file = os.path.join(output_path, os.path.basename(filenames[test_id])[:-4] + '_pred.flo')
-            frame_utils.writeFlow(output_file, flow)
diff --git a/spaces/ApathyINC/CustomGPT/utils.py b/spaces/ApathyINC/CustomGPT/utils.py
deleted file mode 100644
index b09b072410049e2aa6f82cdd775084d8c0f7064e..0000000000000000000000000000000000000000
--- a/spaces/ApathyINC/CustomGPT/utils.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import json, os
-from tencentcloud.common import credential
-from tencentcloud.common.profile.client_profile import ClientProfile
-from tencentcloud.common.profile.http_profile import HttpProfile
-from tencentcloud.common.exception.tencent_cloud_sdk_exception import TencentCloudSDKException
-from tencentcloud.tmt.v20180321 import tmt_client, models
-
-def get_tmt_client():
-    try:
-        # Instantiate a credential object; pass in the Tencent Cloud account SecretId and SecretKey, and keep the key pair confidential
-        # Leaking this code may expose the SecretId and SecretKey and endanger all resources under the account. The snippet below is for reference only; using the keys in a more secure way is recommended, see: https://cloud.tencent.com/document/product/1278/85305
-        # Keys can be obtained from the console at https://console.cloud.tencent.com/cam/capi
-        SecretId = os.environ.get("TENCENTCLOUD_SECRET_ID")
-        SecretKey = os.environ.get("TENCENTCLOUD_SECRET_KEY")
-        cred = credential.Credential(SecretId, SecretKey)
-        # Instantiate an HTTP profile (optional; can be skipped if there are no special requirements)
-        httpProfile = HttpProfile()
-        httpProfile.endpoint = "tmt.tencentcloudapi.com"
-
-        # Instantiate a client profile (optional; can be skipped if there are no special requirements)
-        clientProfile = ClientProfile()
-        clientProfile.httpProfile = httpProfile
-        # Instantiate the client object for the requested product; clientProfile is optional
-        client = tmt_client.TmtClient(cred, "ap-shanghai", clientProfile)
-        print(f'client_{client}')
-        return client
-    except TencentCloudSDKException as err:
-        print(f'client_err_{err}')
-        return None
-
-def getTextTrans_tmt(tmt_client, text, source='zh', target='en'):
-    def is_chinese(string):
-        for ch in string:
-            if u'\u4e00' <= ch <= u'\u9fff':
-                return True
-        return False
-
-    if tmt_client is None:
-        return text
-    if not is_chinese(text) and target == 'en':
-        return text
-    try:
-        req = models.TextTranslateRequest()
-        params = {
-            "SourceText": text,
-            "Source": source,
-            "Target": target,
-            "ProjectId": 0
-        }
-        req.from_json_string(json.dumps(params))
-        resp = tmt_client.TextTranslate(req)
-        return resp.TargetText
-    except Exception as e:
-        return text
\ No newline at end of file
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/msgpack/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/msgpack/__init__.py
deleted file mode 100644
index 1300b866043e22e3b318ba791d31333ca8fe8514..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/msgpack/__init__.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# coding: utf-8
-from .exceptions import *
-from .ext import ExtType, Timestamp
-
-import os
-import sys
-
-
-version = (1, 0, 5)
-__version__ = "1.0.5"
-
-
-if os.environ.get("MSGPACK_PUREPYTHON") or sys.version_info[0] == 2:
-    from .fallback import Packer, unpackb,
Unpacker -else: - try: - from ._cmsgpack import Packer, unpackb, Unpacker - except ImportError: - from .fallback import Packer, unpackb, Unpacker - - -def pack(o, stream, **kwargs): - """ - Pack object `o` and write it to `stream` - - See :class:`Packer` for options. - """ - packer = Packer(**kwargs) - stream.write(packer.pack(o)) - - -def packb(o, **kwargs): - """ - Pack object `o` and return packed bytes - - See :class:`Packer` for options. - """ - return Packer(**kwargs).pack(o) - - -def unpack(stream, **kwargs): - """ - Unpack an object from `stream`. - - Raises `ExtraData` when `stream` contains extra bytes. - See :class:`Unpacker` for options. - """ - data = stream.read() - return unpackb(data, **kwargs) - - -# alias for compatibility to simplejson/marshal/pickle. -load = unpack -loads = unpackb - -dump = pack -dumps = packb diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/register.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/register.py deleted file mode 100644 index c1402650d7f7defdde15741aabafa9f42843dcdf..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/register.py +++ /dev/null @@ -1,319 +0,0 @@ -"""distutils.command.register - -Implements the Distutils 'register' command (register with the repository). -""" - -# created 2002/10/21, Richard Jones - -import getpass -import io -import urllib.parse -import urllib.request -from warnings import warn - -from distutils.core import PyPIRCCommand -from distutils import log - - -class register(PyPIRCCommand): - - description = "register the distribution with the Python package index" - user_options = PyPIRCCommand.user_options + [ - ('list-classifiers', None, 'list the valid Trove classifiers'), - ( - 'strict', - None, - 'Will stop the registering if the meta-data are not fully compliant', - ), - ] - boolean_options = PyPIRCCommand.boolean_options + [ - 'verify', - 'list-classifiers', - 'strict', - ] - - sub_commands = [('check', lambda self: True)] - - def initialize_options(self): - PyPIRCCommand.initialize_options(self) - self.list_classifiers = 0 - self.strict = 0 - - def finalize_options(self): - PyPIRCCommand.finalize_options(self) - # setting options for the `check` subcommand - check_options = { - 'strict': ('register', self.strict), - 'restructuredtext': ('register', 1), - } - self.distribution.command_options['check'] = check_options - - def run(self): - self.finalize_options() - self._set_config() - - # Run sub commands - for cmd_name in self.get_sub_commands(): - self.run_command(cmd_name) - - if self.dry_run: - self.verify_metadata() - elif self.list_classifiers: - self.classifiers() - else: - self.send_metadata() - - def check_metadata(self): - """Deprecated API.""" - warn( - "distutils.command.register.check_metadata is deprecated; " - "use the check command instead", - DeprecationWarning, - ) - check = self.distribution.get_command_obj('check') - check.ensure_finalized() - check.strict = self.strict - check.restructuredtext = 1 - check.run() - - def _set_config(self): - '''Reads the configuration file and set attributes.''' - config = self._read_pypirc() - if config != {}: - self.username = config['username'] - self.password = config['password'] - self.repository = config['repository'] - self.realm = config['realm'] - self.has_config = True - else: - if self.repository not in ('pypi', 
self.DEFAULT_REPOSITORY): - raise ValueError('%s not found in .pypirc' % self.repository) - if self.repository == 'pypi': - self.repository = self.DEFAULT_REPOSITORY - self.has_config = False - - def classifiers(self): - '''Fetch the list of classifiers from the server.''' - url = self.repository + '?:action=list_classifiers' - response = urllib.request.urlopen(url) - log.info(self._read_pypi_response(response)) - - def verify_metadata(self): - '''Send the metadata to the package index server to be checked.''' - # send the info to the server and report the result - (code, result) = self.post_to_server(self.build_post_data('verify')) - log.info('Server response (%s): %s', code, result) - - def send_metadata(self): # noqa: C901 - '''Send the metadata to the package index server. - - Well, do the following: - 1. figure who the user is, and then - 2. send the data as a Basic auth'ed POST. - - First we try to read the username/password from $HOME/.pypirc, - which is a ConfigParser-formatted file with a section - [distutils] containing username and password entries (both - in clear text). Eg: - - [distutils] - index-servers = - pypi - - [pypi] - username: fred - password: sekrit - - Otherwise, to figure who the user is, we offer the user three - choices: - - 1. use existing login, - 2. register as a new user, or - 3. set the password to a random string and email the user. - - ''' - # see if we can short-cut and get the username/password from the - # config - if self.has_config: - choice = '1' - username = self.username - password = self.password - else: - choice = 'x' - username = password = '' - - # get the user's login info - choices = '1 2 3 4'.split() - while choice not in choices: - self.announce( - '''\ -We need to know who you are, so please choose either: - 1. use your existing login, - 2. register as a new user, - 3. have the server generate a new password for you (and email it to you), or - 4. quit -Your selection [default 1]: ''', - log.INFO, - ) - choice = input() - if not choice: - choice = '1' - elif choice not in choices: - print('Please choose one of the four options!') - - if choice == '1': - # get the username and password - while not username: - username = input('Username: ') - while not password: - password = getpass.getpass('Password: ') - - # set up the authentication - auth = urllib.request.HTTPPasswordMgr() - host = urllib.parse.urlparse(self.repository)[1] - auth.add_password(self.realm, host, username, password) - # send the info to the server and report the result - code, result = self.post_to_server(self.build_post_data('submit'), auth) - self.announce('Server response ({}): {}'.format(code, result), log.INFO) - - # possibly save the login - if code == 200: - if self.has_config: - # sharing the password in the distribution instance - # so the upload command can reuse it - self.distribution.password = password - else: - self.announce( - ( - 'I can store your PyPI login so future ' - 'submissions will be faster.' 
- ), - log.INFO, - ) - self.announce( - '(the login will be stored in %s)' % self._get_rc_file(), - log.INFO, - ) - choice = 'X' - while choice.lower() not in 'yn': - choice = input('Save your login (y/N)?') - if not choice: - choice = 'n' - if choice.lower() == 'y': - self._store_pypirc(username, password) - - elif choice == '2': - data = {':action': 'user'} - data['name'] = data['password'] = data['email'] = '' - data['confirm'] = None - while not data['name']: - data['name'] = input('Username: ') - while data['password'] != data['confirm']: - while not data['password']: - data['password'] = getpass.getpass('Password: ') - while not data['confirm']: - data['confirm'] = getpass.getpass(' Confirm: ') - if data['password'] != data['confirm']: - data['password'] = '' - data['confirm'] = None - print("Password and confirm don't match!") - while not data['email']: - data['email'] = input(' EMail: ') - code, result = self.post_to_server(data) - if code != 200: - log.info('Server response (%s): %s', code, result) - else: - log.info('You will receive an email shortly.') - log.info('Follow the instructions in it to ' 'complete registration.') - elif choice == '3': - data = {':action': 'password_reset'} - data['email'] = '' - while not data['email']: - data['email'] = input('Your email address: ') - code, result = self.post_to_server(data) - log.info('Server response (%s): %s', code, result) - - def build_post_data(self, action): - # figure the data to send - the metadata plus some additional - # information used by the package server - meta = self.distribution.metadata - data = { - ':action': action, - 'metadata_version': '1.0', - 'name': meta.get_name(), - 'version': meta.get_version(), - 'summary': meta.get_description(), - 'home_page': meta.get_url(), - 'author': meta.get_contact(), - 'author_email': meta.get_contact_email(), - 'license': meta.get_licence(), - 'description': meta.get_long_description(), - 'keywords': meta.get_keywords(), - 'platform': meta.get_platforms(), - 'classifiers': meta.get_classifiers(), - 'download_url': meta.get_download_url(), - # PEP 314 - 'provides': meta.get_provides(), - 'requires': meta.get_requires(), - 'obsoletes': meta.get_obsoletes(), - } - if data['provides'] or data['requires'] or data['obsoletes']: - data['metadata_version'] = '1.1' - return data - - def post_to_server(self, data, auth=None): # noqa: C901 - '''Post a query to the server, and return a string response.''' - if 'name' in data: - self.announce( - 'Registering {} to {}'.format(data['name'], self.repository), log.INFO - ) - # Build up the MIME payload for the urllib2 POST data - boundary = '--------------GHSKFJDLGDS7543FJKLFHRE75642756743254' - sep_boundary = '\n--' + boundary - end_boundary = sep_boundary + '--' - body = io.StringIO() - for key, value in data.items(): - # handle multiple entries for the same name - if type(value) not in (type([]), type(())): - value = [value] - for value in value: - value = str(value) - body.write(sep_boundary) - body.write('\nContent-Disposition: form-data; name="%s"' % key) - body.write("\n\n") - body.write(value) - if value and value[-1] == '\r': - body.write('\n') # write an extra newline (lurve Macs) - body.write(end_boundary) - body.write("\n") - body = body.getvalue().encode("utf-8") - - # build the Request - headers = { - 'Content-type': 'multipart/form-data; boundary=%s; charset=utf-8' - % boundary, - 'Content-length': str(len(body)), - } - req = urllib.request.Request(self.repository, body, headers) - - # handle HTTP and include the Basic Auth 
handler - opener = urllib.request.build_opener( - urllib.request.HTTPBasicAuthHandler(password_mgr=auth) - ) - data = '' - try: - result = opener.open(req) - except urllib.error.HTTPError as e: - if self.show_response: - data = e.fp.read() - result = e.code, e.msg - except urllib.error.URLError as e: - result = 500, str(e) - else: - if self.show_response: - data = self._read_pypi_response(result) - result = 200, 'OK' - if self.show_response: - msg = '\n'.join(('-' * 75, data, '-' * 75)) - self.announce(msg, log.INFO) - return result diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/engine/train_loop.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/engine/train_loop.py deleted file mode 100644 index c4a86b52a5604f2b5799abac299ca4726345b7a6..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/engine/train_loop.py +++ /dev/null @@ -1,417 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -import numpy as np -import time -import weakref -from typing import List, Mapping, Optional -import torch -from torch.nn.parallel import DataParallel, DistributedDataParallel - -import detectron2.utils.comm as comm -from detectron2.utils.events import EventStorage, get_event_storage -from detectron2.utils.logger import _log_api_usage - -__all__ = ["HookBase", "TrainerBase", "SimpleTrainer", "AMPTrainer"] - - -class HookBase: - """ - Base class for hooks that can be registered with :class:`TrainerBase`. - - Each hook can implement 4 methods. The way they are called is demonstrated - in the following snippet: - :: - hook.before_train() - for iter in range(start_iter, max_iter): - hook.before_step() - trainer.run_step() - hook.after_step() - iter += 1 - hook.after_train() - - Notes: - 1. In the hook method, users can access ``self.trainer`` to access more - properties about the context (e.g., model, current iteration, or config - if using :class:`DefaultTrainer`). - - 2. A hook that does something in :meth:`before_step` can often be - implemented equivalently in :meth:`after_step`. - If the hook takes non-trivial time, it is strongly recommended to - implement the hook in :meth:`after_step` instead of :meth:`before_step`. - The convention is that :meth:`before_step` should only take negligible time. - - Following this convention will allow hooks that do care about the difference - between :meth:`before_step` and :meth:`after_step` (e.g., timer) to - function properly. - - """ - - trainer: "TrainerBase" = None - """ - A weak reference to the trainer object. Set by the trainer when the hook is registered. - """ - - def before_train(self): - """ - Called before the first iteration. - """ - pass - - def after_train(self): - """ - Called after the last iteration. - """ - pass - - def before_step(self): - """ - Called before each iteration. - """ - pass - - def after_step(self): - """ - Called after each iteration. - """ - pass - - def state_dict(self): - """ - Hooks are stateless by default, but can be made checkpointable by - implementing `state_dict` and `load_state_dict`. - """ - return {} - - -class TrainerBase: - """ - Base class for iterative trainer with hooks. - - The only assumption we made here is: the training runs in a loop. - A subclass can implement what the loop is. - We made no assumptions about the existence of dataloader, optimizer, model, etc. - - Attributes: - iter(int): the current iteration. 
- - start_iter(int): The iteration to start with. - By convention the minimum possible value is 0. - - max_iter(int): The iteration to end training. - - storage(EventStorage): An EventStorage that's opened during the course of training. - """ - - def __init__(self) -> None: - self._hooks: List[HookBase] = [] - self.iter: int = 0 - self.start_iter: int = 0 - self.max_iter: int - self.storage: EventStorage - _log_api_usage("trainer." + self.__class__.__name__) - - def register_hooks(self, hooks: List[Optional[HookBase]]) -> None: - """ - Register hooks to the trainer. The hooks are executed in the order - they are registered. - - Args: - hooks (list[Optional[HookBase]]): list of hooks - """ - hooks = [h for h in hooks if h is not None] - for h in hooks: - assert isinstance(h, HookBase) - # To avoid circular reference, hooks and trainer cannot own each other. - # This normally does not matter, but will cause memory leak if the - # involved objects contain __del__: - # See http://engineering.hearsaysocial.com/2013/06/16/circular-references-in-python/ - h.trainer = weakref.proxy(self) - self._hooks.extend(hooks) - - def train(self, start_iter: int, max_iter: int): - """ - Args: - start_iter, max_iter (int): See docs above - """ - logger = logging.getLogger(__name__) - logger.info("Starting training from iteration {}".format(start_iter)) - - self.iter = self.start_iter = start_iter - self.max_iter = max_iter - - with EventStorage(start_iter) as self.storage: - try: - self.before_train() - for self.iter in range(start_iter, max_iter): - self.before_step() - self.run_step() - self.after_step() - # self.iter == max_iter can be used by `after_train` to - # tell whether the training successfully finished or failed - # due to exceptions. - self.iter += 1 - except Exception: - logger.exception("Exception during training:") - raise - finally: - self.after_train() - - def before_train(self): - for h in self._hooks: - h.before_train() - - def after_train(self): - self.storage.iter = self.iter - for h in self._hooks: - h.after_train() - - def before_step(self): - # Maintain the invariant that storage.iter == trainer.iter - # for the entire execution of each step - self.storage.iter = self.iter - - for h in self._hooks: - h.before_step() - - def after_step(self): - for h in self._hooks: - h.after_step() - - def run_step(self): - raise NotImplementedError - - def state_dict(self): - ret = {"iteration": self.iter} - hooks_state = {} - for h in self._hooks: - sd = h.state_dict() - if sd: - name = type(h).__qualname__ - if name in hooks_state: - # TODO handle repetitive stateful hooks - continue - hooks_state[name] = sd - if hooks_state: - ret["hooks"] = hooks_state - return ret - - def load_state_dict(self, state_dict): - logger = logging.getLogger(__name__) - self.iter = state_dict["iteration"] - for key, value in state_dict.get("hooks", {}).items(): - for h in self._hooks: - try: - name = type(h).__qualname__ - except AttributeError: - continue - if name == key: - h.load_state_dict(value) - break - else: - logger.warning(f"Cannot find the hook '{key}', its state_dict is ignored.") - - -class SimpleTrainer(TrainerBase): - """ - A simple trainer for the most common type of task: - single-cost single-optimizer single-data-source iterative optimization, - optionally using data-parallelism. - It assumes that every step, you: - - 1. Compute the loss with a data from the data_loader. - 2. Compute the gradients with the above loss. - 3. Update the model with the optimizer. 
- - All other tasks during training (checkpointing, logging, evaluation, LR schedule) - are maintained by hooks, which can be registered by :meth:`TrainerBase.register_hooks`. - - If you want to do anything fancier than this, - either subclass TrainerBase and implement your own `run_step`, - or write your own training loop. - """ - - def __init__(self, model, data_loader, optimizer): - """ - Args: - model: a torch Module. Takes a data from data_loader and returns a - dict of losses. - data_loader: an iterable. Contains data to be used to call model. - optimizer: a torch optimizer. - """ - super().__init__() - - """ - We set the model to training mode in the trainer. - However it's valid to train a model that's in eval mode. - If you want your model (or a submodule of it) to behave - like evaluation during training, you can overwrite its train() method. - """ - model.train() - - self.model = model - self.data_loader = data_loader - self._data_loader_iter = iter(data_loader) - self.optimizer = optimizer - - def run_step(self): - """ - Implement the standard training logic described above. - """ - assert self.model.training, "[SimpleTrainer] model was changed to eval mode!" - start = time.perf_counter() - """ - If you want to do something with the data, you can wrap the dataloader. - """ - data = next(self._data_loader_iter) - data_time = time.perf_counter() - start - - """ - If you want to do something with the losses, you can wrap the model. - """ - loss_dict = self.model(data) - if isinstance(loss_dict, torch.Tensor): - losses = loss_dict - loss_dict = {"total_loss": loss_dict} - else: - losses = sum(loss_dict.values()) - - """ - If you need to accumulate gradients or do something similar, you can - wrap the optimizer with your custom `zero_grad()` method. - """ - self.optimizer.zero_grad() - losses.backward() - - self._write_metrics(loss_dict, data_time) - - """ - If you need gradient clipping/scaling or other processing, you can - wrap the optimizer with your custom `step()` method. But it is - suboptimal as explained in https://arxiv.org/abs/2006.15704 Sec 3.2.4 - """ - self.optimizer.step() - - def _write_metrics( - self, - loss_dict: Mapping[str, torch.Tensor], - data_time: float, - prefix: str = "", - ) -> None: - SimpleTrainer.write_metrics(loss_dict, data_time, prefix) - - @staticmethod - def write_metrics( - loss_dict: Mapping[str, torch.Tensor], - data_time: float, - prefix: str = "", - ) -> None: - """ - Args: - loss_dict (dict): dict of scalar losses - data_time (float): time taken by the dataloader iteration - prefix (str): prefix for logging keys - """ - metrics_dict = {k: v.detach().cpu().item() for k, v in loss_dict.items()} - metrics_dict["data_time"] = data_time - - # Gather metrics among all workers for logging - # This assumes we do DDP-style training, which is currently the only - # supported method in detectron2. - all_metrics_dict = comm.gather(metrics_dict) - - if comm.is_main_process(): - storage = get_event_storage() - - # data_time among workers can have high variance. The actual latency - # caused by data_time is the maximum among workers. 
- data_time = np.max([x.pop("data_time") for x in all_metrics_dict]) - storage.put_scalar("data_time", data_time) - - # average the rest metrics - metrics_dict = { - k: np.mean([x[k] for x in all_metrics_dict]) for k in all_metrics_dict[0].keys() - } - total_losses_reduced = sum(metrics_dict.values()) - if not np.isfinite(total_losses_reduced): - raise FloatingPointError( - f"Loss became infinite or NaN at iteration={storage.iter}!\n" - f"loss_dict = {metrics_dict}" - ) - - storage.put_scalar("{}total_loss".format(prefix), total_losses_reduced) - if len(metrics_dict) > 1: - storage.put_scalars(**metrics_dict) - - def state_dict(self): - ret = super().state_dict() - ret["optimizer"] = self.optimizer.state_dict() - return ret - - def load_state_dict(self, state_dict): - super().load_state_dict(state_dict) - self.optimizer.load_state_dict(state_dict["optimizer"]) - - -class AMPTrainer(SimpleTrainer): - """ - Like :class:`SimpleTrainer`, but uses PyTorch's native automatic mixed precision - in the training loop. - """ - - def __init__(self, model, data_loader, optimizer, grad_scaler=None): - """ - Args: - model, data_loader, optimizer: same as in :class:`SimpleTrainer`. - grad_scaler: torch GradScaler to automatically scale gradients. - """ - unsupported = "AMPTrainer does not support single-process multi-device training!" - if isinstance(model, DistributedDataParallel): - assert not (model.device_ids and len(model.device_ids) > 1), unsupported - assert not isinstance(model, DataParallel), unsupported - - super().__init__(model, data_loader, optimizer) - - if grad_scaler is None: - from torch.cuda.amp import GradScaler - - grad_scaler = GradScaler() - self.grad_scaler = grad_scaler - - def run_step(self): - """ - Implement the AMP training logic. - """ - assert self.model.training, "[AMPTrainer] model was changed to eval mode!" - assert torch.cuda.is_available(), "[AMPTrainer] CUDA is required for AMP training!" 
- from torch.cuda.amp import autocast - - start = time.perf_counter() - data = next(self._data_loader_iter) - data_time = time.perf_counter() - start - - with autocast(): - loss_dict = self.model(data) - if isinstance(loss_dict, torch.Tensor): - losses = loss_dict - loss_dict = {"total_loss": loss_dict} - else: - losses = sum(loss_dict.values()) - - self.optimizer.zero_grad() - self.grad_scaler.scale(losses).backward() - - self._write_metrics(loss_dict, data_time) - - self.grad_scaler.step(self.optimizer) - self.grad_scaler.update() - - def state_dict(self): - ret = super().state_dict() - ret["grad_scaler"] = self.grad_scaler.state_dict() - return ret - - def load_state_dict(self, state_dict): - super().load_state_dict(state_dict) - self.grad_scaler.load_state_dict(state_dict["grad_scaler"]) diff --git a/spaces/Awiny/Image2Paragraph/models/region_semantic.py b/spaces/Awiny/Image2Paragraph/models/region_semantic.py deleted file mode 100644 index d4214c5a3ea334a447d2a75be814d16450b2ac84..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/region_semantic.py +++ /dev/null @@ -1,61 +0,0 @@ -from models.segment_models.semgent_anything_model import SegmentAnything -from models.segment_models.semantic_segment_anything_model import SemanticSegment -from models.segment_models.edit_anything_model import EditAnything - - -class RegionSemantic(): - def __init__(self, device, image_caption_model, region_classify_model='edit_anything', sam_arch='vit_b'): - self.device = device - self.sam_arch = sam_arch - self.image_caption_model = image_caption_model - self.region_classify_model = region_classify_model - self.init_models() - - def init_models(self): - self.segment_model = SegmentAnything(self.device, arch=self.sam_arch) - if self.region_classify_model == 'ssa': - self.semantic_segment_model = SemanticSegment(self.device) - elif self.region_classify_model == 'edit_anything': - self.edit_anything_model = EditAnything(self.image_caption_model) - print('initalize edit anything model') - else: - raise ValueError("semantic_class_model must be 'ssa' or 'edit_anything'") - - def semantic_prompt_gen(self, anns, topk=5): - """ - fliter too small objects and objects with low stability score - anns: [{'class_name': 'person', 'bbox': [0.0, 0.0, 0.0, 0.0], 'size': [0, 0], 'stability_score': 0.0}, ...] - semantic_prompt: "person: [0.0, 0.0, 0.0, 0.0]; ..." 
- """ - # Sort annotations by area in descending order - sorted_annotations = sorted(anns, key=lambda x: x['area'], reverse=True) - anns_len = len(sorted_annotations) - # Select the top 10 largest regions - top_10_largest_regions = sorted_annotations[:min(anns_len, topk)] - semantic_prompt = "" - for region in top_10_largest_regions: - semantic_prompt += region['class_name'] + ': ' + str(region['bbox']) + "; " - print(semantic_prompt) - print('\033[1;35m' + '*' * 100 + '\033[0m') - return semantic_prompt - - def region_semantic(self, img_src, region_classify_model='edit_anything'): - print('\033[1;35m' + '*' * 100 + '\033[0m') - print("\nStep3, Semantic Prompt:") - print('extract region segmentation with SAM model....\n') - anns = self.segment_model.generate_mask(img_src) - print('finished...\n') - if region_classify_model == 'ssa': - print('generate region supervision with blip2 model....\n') - anns_w_class = self.semantic_segment_model.semantic_class_w_mask(img_src, anns) - print('finished...\n') - elif region_classify_model == 'edit_anything': - print('generate region supervision with edit anything model....\n') - anns_w_class = self.edit_anything_model.semantic_class_w_mask(img_src, anns) - print('finished...\n') - else: - raise ValueError("semantic_class_model must be 'ssa' or 'edit_anything'") - return self.semantic_prompt_gen(anns_w_class) - - def region_semantic_debug(self, img_src): - return "region_semantic_debug" \ No newline at end of file diff --git a/spaces/AzinZ/vitscn/monotonic_align/__init__.py b/spaces/AzinZ/vitscn/monotonic_align/__init__.py deleted file mode 100644 index a323673bb16070d6d0fffddb939b657d0915ff1b..0000000000000000000000000000000000000000 --- a/spaces/AzinZ/vitscn/monotonic_align/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. - neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/layers_537227KB.py b/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/layers_537227KB.py deleted file mode 100644 index a38b7bb3ae3136b07eadfc2db445fef4c2de186b..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/layers_537227KB.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . 
import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv6 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv7 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - feat6 = self.conv6(x) - feat7 = self.conv7(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/Belligerent/word-sense-disambiguation/app.py b/spaces/Belligerent/word-sense-disambiguation/app.py deleted file mode 100644 index 2e0032621b58e7fe07614e9c2c0505f5acb62736..0000000000000000000000000000000000000000 --- a/spaces/Belligerent/word-sense-disambiguation/app.py +++ /dev/null @@ -1,30 +0,0 @@ -import 
gradio as gr -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Text2TextGenerationPipeline - -pipe = Text2TextGenerationPipeline(model = AutoModelForSeq2SeqLM.from_pretrained("jpelhaw/t5-word-sense-disambiguation"), -tokenizer = AutoTokenizer.from_pretrained("jpelhaw/t5-word-sense-disambiguation")) - -def wsd_gen(word, context, d1, d2, d3): - question = 'question: question: which description describes the word' + ' " ' + word + ' " ' - descriptions_context = 'best in the following context? \descriptions:[ " ' + d1 + '" , " ' + d2 + ' " , or " '+ d3 + ' " ] context: ' + context + "'" - raw_input = question + descriptions_context - output = pipe(raw_input)[0]['generated_text'] - return output - -examples = [["beat", 'The underdog team "beat" the reigning champion.', " A main accent or rhythmic unit in music or poetry. " , " To strike repeatedly and violently so as to hurt or injure.", " To defeat (someone) in a game or other competitive situation. "], ["shell", 'The first "shell" exploded in mid air taking out an enemy plane.', "The hard protective outer case of a mollusk or crustacean.", "An explosive artillery projectile or bomb.", "Something resembling or likened to a shell because of its shape or its function as an outer case."]] - -word_mask = gr.inputs.Textbox(lines=1, placeholder= "Enter word to disambiguate", default="", label = "Based on the context, which description best matches this word: ") -input_context = gr.inputs.Textbox(lines=1, placeholder="Enter context", default="", label = "context: ") -input_desc1 = gr.inputs.Textbox(lines=1, placeholder="Enter description", default="", label = "description 1: ") -input_desc2 = gr.inputs.Textbox(lines=1, placeholder="Enter description", default="", label = "description 2: ") -input_desc3 = gr.inputs.Textbox(lines=1, placeholder="Enter description", default="", label = "description 3: ") - -gr.Interface(wsd_gen, - inputs = [word_mask , input_context, input_desc1, input_desc2, input_desc3], - outputs= "textbox", - examples = examples, - title = "T5-Word Sense Disambiguation", - description = "Determines which 'sense' (meaning) of a word is activated by the use of the word in a particular context given three different descriptions.", - theme = "seafoam", - article = "This is an implementation of Google's T5-large model applied to Word Sense Disambiguation (WSD) and trained on the SemCor dataset. the SemCor dataset is a corpus made up of 352 documents for a total of 226,040 manually sense-annotated annotations used specifically used to train supervised WSD systems. The model used in this spaces was uploaded by Jan Philip Wahle (jpelhaw) in huggingface.", - allow_flagging="never").launch(inbrowser=True) \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Blxckie Ronda Mp4 Download.md b/spaces/Benson/text-generation/Examples/Blxckie Ronda Mp4 Download.md deleted file mode 100644 index 622c5f11cd3048eebe7a76e730cbf47593c71227..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Blxckie Ronda Mp4 Download.md +++ /dev/null @@ -1,132 +0,0 @@ - -

Blxckie Ronda MP4 Download: Everything You Need to Know

If you are a fan of South African hip hop, you may have heard of the blxckie ronda mp4 download. It is a popular video download option for the song Ronda by Blxckie, one of the most promising new-era SA Hip Hop rappers from Durban. In this article, we will tell you everything you need to know about the blxckie ronda mp4 download, including who Blxckie is, what Ronda means, why the MP4 format is ideal for videos, and how to download MP4 videos from any website for free.

Who is Blxckie and what is his background?

Blxckie, whose real name is Sihle Sithole, was born on 24 November 1999 in Sydenham Heights, Durban. He started making music at the age of 8 with his friends and enrolled at the University of KwaZulu-Natal for a degree in Psychology. However, he dropped out because of the COVID-19 pandemic and focused on his music career.

blxckie ronda mp4 download

DOWNLOAD https://bltlly.com/2v6MMo

Blxckie rose to fame in 2020 when he released several songs on SoundCloud and collaborated with other artists such as Nasty C, LucasRaps, FLVME, Rowlene and LeoDaleo. He also became the first South African artist to be named Up Next by Apple Music, in March 2021.

His debut album B4Now was released on 21 May 2021 and was certified gold in South Africa. It features his hit singles David and Ye 4, which were certified gold and double platinum respectively.

What is Ronda and what does it mean?

Ronda is one of the songs on Blxckie's album B4Now. It was released as a single on 30 April 2021 along with an official music video.

The song is about Blxckie's confidence and ambition as a rapper. He uses the word Ronda, which means round or circle in Spanish, to refer to his success and dominance in the music industry. He also compares himself to Ronda Rousey, the famous American mixed martial artist and former UFC champion.

The chorus of the song goes like this:

I'm going round like Ronda
I'm going round like Ronda
I'm going round like Ronda
I'm going round like Ronda
I'm going round like Ronda
I'm going round like Ronda
I'm going round like Ronda

What are the advantages of the MP4 format for videos?

MP4 is one of the most common media formats for streaming and downloading video from the Internet. It has many advantages over other formats such as AVI or MKV. Some of them are:

- It can be used across multiple platforms, which makes it easy to play and distribute.
- It has a high degree of compression, which results in smaller file sizes and faster loading times.
- It can store data types other than video and audio, such as subtitles, images, metadata and interactive features.
- It has high-quality output and can support resolutions of up to 4K.

How can you download MP4 videos from any website for free?

If you want to download blxckie ronda mp4 or any other MP4 video from any website for free, you can use one of the following methods:

1. Use an online video download tool. There are many websites that offer this service, such as Y2Mate, SaveFrom and OnlineVideoConverter. All you need to do is copy and paste the URL of the video you want to download, choose the MP4 format, and click the download button.
2. Use a browser extension or add-on. Some browsers, such as Chrome and Firefox, have extensions or add-ons that can help you download MP4 videos from any website, for example Video DownloadHelper, Video Downloader Professional and Flash Video Downloader. You can install them from the browser's web store and use them to download videos with a single click.

Conclusion

The blxckie ronda mp4 download is a great way to enjoy the song Ronda by Blxckie, one of the hottest hip hop artists in South Africa right now. In this article you can learn more about Blxckie's background, the meaning of Ronda, the benefits of the MP4 format, and how to download MP4 videos from any website for free. We hope you find it useful and informative.

If you liked this article, please share it with your friends and family who are also fans of Blxckie and South African hip hop. You can also leave a comment below and let us know what you think of the blxckie ronda mp4 download. Thanks for reading!

Frequently Asked Questions

What is the best website to download blxckie ronda mp4?

There is no definitive answer to this question, as different websites have different features and qualities. However, some of the factors you can consider when choosing a website to download blxckie ronda mp4 are:

- The speed and reliability of the download process.
- The quality and resolution of the video.
- The safety and privacy of the website.
- The availability and compatibility of the website.

You can try different websites and see which one works best for you.

How do I convert blxckie ronda mp4 to mp3?

If you want to convert blxckie ronda mp4 to mp3, which is an audio format, you can use the following method (see the short conversion sketch at the end of this article for a local, scripted alternative):

1. Use an online video conversion tool. There are many websites that offer this service, such as OnlineVideoConverter, Convert2MP3 and CloudConvert. All you need to do is upload the blxckie ronda mp4 file or paste its URL, choose the mp3 format, and click the convert button.

How can I watch blxckie ronda mp4 on my TV?

If you want to watch blxckie ronda mp4 on your TV, you can use one of the following methods:

1. Use an HDMI cable. Connect the computer or mobile device that has the blxckie ronda mp4 file, or that can access its URL, to your TV with an HDMI cable. Then select the HDMI input on your TV and play the video on your device.
2. Use a streaming device. You can use a device that streams online video from your computer or mobile device to your TV over Wi-Fi or Bluetooth, for example Chromecast, Roku, Apple TV or Fire TV Stick. Set up the device according to its instructions and use it to cast or mirror the video to your TV.

Is blxckie ronda mp4 legal?

The legality of blxckie ronda mp4 depends on several factors, such as:

- The source and ownership of the video. If the video is uploaded by Blxckie or his official channel, or if he has given other channels or websites permission to share it, then downloading and watching it is legal. However, if the video is uploaded by someone who does not hold the rights to it, or if it violates Blxckie's intellectual property rights, then downloading and watching it is illegal.
- The purpose and use of the video. If you download and watch the video for personal, non-commercial use, such as entertainment or education, it is usually legal to do so. However, if you download and use the video for commercial or malicious purposes, such as making money from it or damaging Blxckie's reputation, then it is illegal.

Therefore, blxckie ronda mp4 can be legal or illegal depending on these factors. Use care and discretion when downloading and watching blxckie ronda mp4.

What are some other Blxckie songs I can download?

If you like blxckie ronda mp4, you may also like other Blxckie songs that you can download. Here are some of his most popular songs that you can find on various websites and platforms:

Song | Album | Release date
David | B4Now | 21 May 2021
Ye 4 | B4Now | 21 May 2021
Gran Sh'lappa | B4Now | 21 May 2021
Rayas | B4Now | 21 May 2021
Mantener | B4Now | 21 May 2021
Ladrido Hond | Ladrido Hond - Single | 11 June 2021
Salsa | Salsa - Single | 18 June 2021
Gas | Gas - Single | 25 June 2021
Steppin | Steppin - Single | 2 July 2021
Uppity | Uppity - Single | 9 July 2021

You can also check Blxckie's official website, YouTube channel, Instagram, Twitter and Facebook for more updates and information about his music and career.
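For readers who would rather do the MP4-to-MP3 conversion locally instead of through the online converters mentioned in the FAQ above, here is a minimal Python sketch that shells out to ffmpeg. It assumes ffmpeg is installed and on your PATH; the file names are placeholders, not files referenced anywhere in this article.

import subprocess

def mp4_to_mp3(src: str = "ronda.mp4", dst: str = "ronda.mp3") -> None:
    # -vn drops the video stream; -q:a 2 selects a high VBR quality for the MP3 encoder.
    subprocess.run(
        ["ffmpeg", "-i", src, "-vn", "-codec:a", "libmp3lame", "-q:a", "2", dst],
        check=True,
    )

if __name__ == "__main__":
    mp4_to_mp3()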
-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Camin Simulador ltima Piel Del Camin.md b/spaces/Benson/text-generation/Examples/Camin Simulador ltima Piel Del Camin.md deleted file mode 100644 index b9cabbad1471086d2c0d3a33f0c16249afcb97a6..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Camin Simulador ltima Piel Del Camin.md +++ /dev/null @@ -1,50 +0,0 @@ - -

Truck Simulator Ultimate Truck Skin Download: How to Change the Look of Your Truck and Have More Fun

If you are a fan of simulation games, you may have heard of Truck Simulator Ultimate, a game that lets you drive various trucks across different countries and cities. The game is developed by Zuuks Games, the same company that produced Bus Simulator Ultimate, which has more than 300 million players worldwide. Truck Simulator Ultimate combines simulation and tycoon elements, letting you not only drive your truck but also manage your own business, hire employees, expand your fleet, and take part in auctions and races.

truck simulator ultimate truck skin

Download File --->>> https://bltlly.com/2v6Jim

One of the most enjoyable features of Truck Simulator Ultimate is that you can customise your trucks with different skins, which are essentially different designs and colours for the exterior of your truck. Skins can make your truck look more realistic, stylish or unique, depending on your preference. You can choose from officially licensed Mercedes-Benz trucks or other brands such as BMW, Ford, DAF, MAN, Volvo and more. You can also find skins inspired by famous companies, countries, petrol stations, or even films and cartoons.

In this article, we will show you how to install truck skins in Truck Simulator Ultimate, and which are some of the best truck skins for this game. By following these simple steps, you can change the look of your truck and have more fun driving it.

How to install truck skins in Truck Simulator Ultimate

There are two ways to install truck skins in Truck Simulator Ultimate: download them from the app store or the web, or copy their URL and paste it into the game settings. Here is how to do both:

Download skins from the app store or the web

If you are using an iOS device, or if you want to find more skins online, you can visit websites that offer mods for Truck Simulator Ultimate. Mods are modifications that add new features or content to the game. One of the most popular websites for mods is TSU Mods, which has more than 30 mods for different trucks, cars, police vehicles, ambulances, trailers and more. You can also find other websites by searching for "truck simulator ultimate mod" on the web.

Copy the skin URL and paste it into the game settings

Once you have downloaded a skin app or a mod file, you need to copy its URL (the web address that starts with http:// or https://) and paste it into the game settings. To do this, follow these steps:

1. Open Truck Simulator Ultimate and tap the menu icon in the top left corner.
2. Tap Settings and then DLC Mods.
3. Tap Add Mod URL and paste the URL of the skin you want to use.
4. Tap Save Mod URL and then Apply Mods.
5. Go back to the main menu and tap Garage.
6. Select your truck and tap Customize.
7. Tap Skins and choose the skin you have installed.
8. Tap Apply and enjoy your new truck skin.

That's it! You have successfully installed a truck skin in Truck Simulator Ultimate. You can repeat these steps for any other skin you want to use.

The best truck skins for Truck Simulator Ultimate

Now that you know how to install truck skins in Truck Simulator Ultimate, you may be wondering which are some of the best truck skins for this game. Of course, this depends on your personal taste and preference, but here are some of our recommendations:

Licensed Mercedes-Benz trucks with realistic details

BMW F90 M5 2020 with sporty design and performance

If you are looking for speed and style, you may want to try the BMW F90 M5 2020 skin, which is a mod that replaces the original BMW car in the game with a more powerful and stylish version. The BMW F90 M5 2020 is a high-performance sedan with a sporty design and a twin-turbocharged V8 engine that can reach up to 305 km/h. The skin also has realistic features such as headlights, tail lights, exhausts, spoilers and wheels. You can find this skin on TSU Mods.

TOFAŞ Şahin with classic Turkish style and nostalgia

If you are after nostalgia and fun, you may want to try the TOFAŞ Şahin skin, which is a mod that replaces the original Fiat car in the game with a classic Turkish car that was popular in the 1980s and 1990s. The TOFAŞ Şahin is a compact sedan with a simple but charming design and a loyal fan base in Turkey. The skin also has realistic features such as licence plates, stickers, bumpers and horns. You can find this skin on TSU Mods.

Conclusion

Truck Simulator Ultimate is a game that offers plenty of fun and excitement for simulation lovers. One way to enhance your gaming experience is to use truck skins, which are different designs and colours for the exterior of your truck. Truck skins can make your truck look more realistic, stylish or unique, depending on your preference.

In this article, we showed you how to install truck skins in Truck Simulator Ultimate by downloading them from the app store or the web, or by copying their URL and pasting it into the game settings. We also gave you some examples of the best truck skins for Truck Simulator Ultimate, such as the licensed Mercedes-Benz trucks, the BMW F90 M5 2020 and the TOFAŞ Şahin.

Frequently Asked Questions

Here are some of the most frequently asked questions about Truck Simulator Ultimate:

What are the system requirements for Truck Simulator Ultimate?

The minimum system requirements for Truck Simulator Ultimate are:
- Android: Android 7.0 or higher; 3 GB of RAM; 1 GB of free space
- iOS: iOS 11 or higher; iPhone 6S or higher; iPad Air 2 or higher; iPad Mini 4 or higher; iPod Touch (7th generation) or higher; 1 GB of free space

The recommended system requirements for Truck Simulator Ultimate are:
- Android: Android 9.0 or higher; 4 GB of RAM; 2 GB of free space
- iOS: iOS 13 or higher; iPhone X or higher; iPad Pro (2017) or higher; iPad Air (2019) or higher; iPad Mini (2019) or higher; iPod Touch (7th generation) or higher; 2 GB of free space

How can I take part in multiplayer mode and races?

To take part in multiplayer mode and races in Truck Simulator Ultimate, you need an Internet connection and a Zuuks account. You can create a Zuuks account by tapping the menu icon in the top left corner, then tapping Profile and then Register. You can also sign in with your Facebook or Google account. Once you have a Zuuks account, you can join or create multiplayer rooms and races by tapping the menu icon, then tapping Multiplayer, and then choosing the option you want. You can also invite your friends to play with you by tapping the Invite Friends button.

How can I manage my own business and fleet in the game?

How can I customise my trucks with other accessories and modifications?

To customise your trucks with other accessories and modifications in Truck Simulator Ultimate, you need enough money and reputation. You can earn money and reputation by completing deliveries, taking part in auctions and races, and fulfilling contracts. You can also spend real money to buy coins or diamonds, which are the premium currencies in the game. Once you have enough money and reputation, you can customise your trucks by tapping the menu icon, then tapping Garage, then selecting your truck and then tapping Customize. You can change various aspects of your truck, such as the engine, transmission, suspension, brakes, tyres, rims, lights, horns, mirrors, spoilers, exhausts, paint, stickers and more.

How can I contact the developers with suggestions and complaints?

To contact the developers of Truck Simulator Ultimate with suggestions and complaints, you can use one of the following methods:
- Email: info@zuuks.com
- Facebook: https://www.facebook.com/zuuks.games
- Instagram: https://www.instagram.com/zuuksgames
- Twitter: https://twitter.com/ZuuksGames
- YouTube: https://www.youtube.com/channel/UCSZ5daJft7LuWzSyjdp_8HA

The developers are always open to feedback and suggestions from their players. They also update the game regularly with new features and improvements.
-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Carretes Descargar Instagram Mp3.md b/spaces/Benson/text-generation/Examples/Carretes Descargar Instagram Mp3.md deleted file mode 100644 index 208a112207902ff44555e546346cc5ed6271d2c7..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Carretes Descargar Instagram Mp3.md +++ /dev/null @@ -1,140 +0,0 @@ - -

Cómo descargar Instagram Reel Audio como MP3

-

Instagram Reels son videos cortos, divertidos y atractivos que puedes crear y compartir en la aplicación. Son una gran manera de mostrar su creatividad, personalidad y talento. Pero a veces, es posible que se encuentre con un carrete que tiene un clip de audio increíble que desea descargar y utilizar para sus propios videos u otros fines. ¿Cómo se hace eso?

-

En este artículo, te mostraremos cómo descargar audio de Instagram Reel como MP3 usando diferentes métodos y herramientas. También explicaremos cómo guardar los clips de audio de Reel para usarlos más tarde en la aplicación. Si desea descargar una canción pegadiza, un efecto de sonido divertido, o una voz en off de tendencia, tenemos todo cubierto.

-

carretes descargar instagram mp3


Download ✶✶✶ https://bltlly.com/2v6Lrr



-

¿Puedes descargar audio de Instagram Reels?

-

La respuesta corta es sí, pero no directamente desde la aplicación. Instagram no tiene una función integrada que te permita descargar o guardar el audio de un Reel. Sin embargo, hay algunas formas no oficiales de hacerlo usando herramientas o aplicaciones de terceros.

-

Estos métodos implican copiar el enlace del Carrete y pegarlo en un sitio web o una aplicación que puede extraer el archivo de audio del video. Alternativamente, también puedes guardar el audio de un Carrete en tu cuenta de Instagram y usarlo más tarde para tus propios videos.

-

Sin embargo, antes de descargar o guardar cualquier audio de carrete, asegúrese de respetar los derechos y permisos del creador original. No utilice su audio sin darles crédito o pedir su consentimiento. Además, no viole ninguna ley de derechos de autor o términos de servicio de Instagram.

-

Cómo descargar Instagram Reel audio usando herramientas de terceros

-

Una forma de descargar Instagram Reel audio como MP3 es utilizar un sitio web de terceros que puede convertir el enlace de vídeo en un archivo de audio. Hay muchos de estos sitios web disponibles en línea, pero le mostraremos cuatro de ellos que son gratuitos y fáciles de usar.

-

ReelSave.App

- -

Pasos a seguir:

-
    -
  1. Elija el audio del carrete que desea descargar y toque el icono de compartir en el lado derecho. Parece un avión de papel.
  2. -
  3. Toca la opción Copiar enlace en la parte inferior de la pantalla emergente.
  4. -
  5. Vaya a ReelSave.App en su navegador y pegue el enlace en el cuadro.
  6. -
  7. Pulse Descargar y espere a que el sitio web procese su solicitud.
  8. -
  9. Pulse Descargar MP3 y guardar el archivo en su dispositivo.
  10. -
-

ReelsDownloader.io

-

Este es otro sitio web que puede ayudarle a descargar Instagram Reel audio como MP3 con facilidad. También funciona de manera similar a ReelSave.App, pero tiene algunas características adicionales que puede encontrar útiles.

-

Pasos a seguir:

-
    -
  1. Elija el audio del carrete que desea descargar y toque el icono de compartir en el lado derecho. Parece un avión de papel.
  2. -
  3. Toca la opción Copiar enlace en la parte inferior de la pantalla emergente.
  4. -
  5. Vaya a Reel Saver en la Chrome Web Store y haga clic en Añadir a Chrome.
  6. -
  7. Confirme su instalación haciendo clic en Agregar extensión.
  8. -
  9. Vaya a Instagram.com en su navegador e inicie sesión en su cuenta.
  10. -
  11. Elija el audio del carrete que desea descargar y haga clic en él para abrirlo en pantalla completa.
  12. -
  13. Haga clic en el icono Reel Saver en la esquina superior derecha de su navegador. Parece un círculo azul con una flecha blanca dentro.
  14. -
  15. Seleccione Descargar MP3 y guarde el archivo en su dispositivo.
  16. -
-

Cómo descargar Instagram Reel audio usando aplicaciones

-

Si prefiere usar aplicaciones en lugar de sitios web o extensiones, también hay algunas opciones para usted. Aquí hay dos aplicaciones que pueden ayudarle a descargar Instagram Reel audio como MP3 en su dispositivo móvil. Ambos son gratuitos y están disponibles para usuarios de Android e iOS.

-

Editor de vídeo InShot

-

Esta es una aplicación de edición de video popular que también puede ayudarlo a descargar Instagram Reel audio como MP3. Tiene muchas características y herramientas que puedes usar para crear videos increíbles, pero nos centraremos en cómo usarlo para descargar clips de audio de carrete.

-

Pasos a seguir:

-
    -
  1. Elija el audio del carrete que desea descargar y toque el icono de compartir en el lado derecho. Parece un avión de papel.
  2. -
  3. Pulse Copiar enlace en la parte inferior de la pantalla emergente.
  4. -
  5. Abra InShot Video Editor en su dispositivo y toque Video en la esquina inferior izquierda.
  6. -
  7. Pulse Nuevo en la esquina superior derecha y seleccione Instagram de la lista de fuentes.
  8. -
  9. Pegar el enlace del carrete en el cuadro y pulse OK.
  10. - -
  11. La aplicación guardará el archivo de vídeo en su dispositivo. Pulse Hecho en la esquina inferior derecha y vuelva a la pantalla principal de la aplicación.
  12. -
  13. Toca Música en la esquina inferior izquierda y selecciona Mi música de la lista de opciones.
  14. -
  15. Encuentra y selecciona el archivo de video que acabas de guardar y toca Usar.
  16. -
  17. La aplicación extraerá el audio del vídeo y lo añadirá a su editor. Pulse Guardar en la esquina superior derecha y seleccione Exportar MP3.
  18. -
  19. La aplicación guardará el archivo de audio en su dispositivo. Pulse Hecho en la esquina inferior derecha y salga de la aplicación.
  20. -
-

Convertidor de vídeo a MP3

-

Esta es una aplicación sencilla y directa que puede ayudarle a descargar Instagram Reel audio como MP3. No tiene características ni herramientas adicionales, pero hace su trabajo bien y rápido.

-

Pasos a seguir:

-
    -
  1. Elija el audio del carrete que desea descargar y toque el icono de compartir en el lado derecho. Parece un avión de papel.
  2. -
  3. Pulse Copiar enlace en la parte inferior de la pantalla emergente.
  4. -
  5. Abra Video to MP3 Converter en su dispositivo y toque Pegar URL en la parte superior de la pantalla.
  6. -
  7. Pega el enlace del Carrete en la caja y toca Convertir.
  8. -
  9. La aplicación descargará y convertirá el vídeo Reel en un archivo de audio. Pulse Descargar en la parte inferior de la pantalla y guarde el archivo en su dispositivo.
  10. -
-

Cómo guardar el audio de Instagram Reels para usarlo más tarde en la aplicación

-

Si no quieres descargar Instagram Reel audio como MP3, pero quieres usarlo más tarde para tus propios videos en la aplicación, hay una manera de hacerlo. Instagram tiene una función que le permite guardar clips de audio de carrete a su cuenta y acceder a ellos en cualquier momento que desee.

-

Pasos a seguir:

-
    -
  1. Elija el audio del carrete que desea guardar y toque en él para abrirlo en pantalla completa.
  2. -
  3. Toque en el nombre de audio en la parte inferior de la pantalla. Parece una nota de música con algún texto al lado.
  4. - -
  5. La aplicación guardará el clip de audio en su cuenta. Puede encontrarlo en la sección Guardado en Audio.
  6. -
  7. Para utilizarlo para sus propios vídeos, toque en Crear carrete en la parte inferior de la pantalla. Parece un icono de la cámara con un signo más.
  8. -
  9. Toque en Audio en la esquina superior izquierda de la pantalla y seleccione Guardado de la lista de opciones.
  10. -
  11. Encuentra y selecciona el clip de audio que guardaste y comienza a grabar tu video con él.
  12. -
-

Conclusión

-

En este artículo, te hemos mostrado cómo descargar audio de Instagram Reel como MP3 usando diferentes métodos y herramientas. También hemos explicado cómo guardar los clips de audio de Reel para usarlos más adelante en la aplicación. Esperamos que este artículo le haya resultado útil e informativo. Si tiene alguna pregunta o comentario, háganoslo saber en los comentarios a continuación.

-

Preguntas frecuentes

-

¿Cómo encuentro el audio original de un Instagram Reel?

-

Si quieres saber de dónde viene un audio de Instagram Reel, puedes tocar el nombre del audio en la parte inferior de la pantalla. Te llevará a una página donde podrás ver todos los vídeos que utilizan ese clip de audio. También puede ver quién creó o subió el audio original pulsando en su imagen de perfil o nombre.

-

¿Cómo puedo crear mi propio audio para Instagram Reels?

-

Si quieres crear tu propio audio para Instagram Reels, puedes usar cualquier aplicación de grabación de sonido o dispositivo que pueda producir un archivo MP3. También puedes usar cualquier música o efectos de sonido que tengas en tu dispositivo o en línea. Una vez que tengas tu archivo de audio listo, puedes subirlo a Instagram siguiendo estos pasos:

-
    -
  1. Toque en Crear carrete en la parte inferior de la pantalla. Parece un icono de la cámara con un signo más.
  2. -
  3. Toque en Audio en la esquina superior izquierda de la pantalla y seleccione Examinar de la lista de opciones.
  4. -
  5. Pulse sobre el icono Subir en la esquina superior derecha de la pantalla. Parece un cuadrado con una flecha apuntando hacia arriba.
  6. - -
  7. Espere a que la aplicación procese y cargue su archivo de audio.
  8. -
  9. Comienza a grabar tu video con tu propio audio.
  10. -
-

¿Cómo edito el audio de un Instagram Reel?

-

Si desea editar el audio de un Instagram Reel, puede usar las herramientas integradas en la aplicación o cualquier aplicación externa que pueda editar archivos de audio. Estas son algunas de las cosas que puedes hacer con las herramientas integradas:

-
    -
  • Puede recortar o cortar el clip de audio para adaptarse a su longitud de vídeo arrastrando el control deslizante en la parte inferior de la pantalla.
  • -
  • Puede ajustar el volumen del clip de audio pulsando en Volumen en la esquina superior derecha de la pantalla y moviendo el control deslizante hacia arriba o hacia abajo.
  • -
  • Puede mezclar el clip de audio con su sonido original tocando Mix Audio en la esquina superior derecha de la pantalla y moviendo el control deslizante hacia la izquierda o hacia la derecha.
  • -
-

¿Cómo comparto un Instagram Reel con un audio específico?

-

Si quieres compartir un Instagram Reel con un audio específico, puedes usar la opción Compartir audio en la aplicación. Esto le permitirá enviar un mensaje directo a cualquier persona en Instagram con un enlace a su carrete y su audio. Estos son los pasos a seguir:

-
    -
  1. Choose the Reel you want to share and tap it to open it in full screen.
  2. Tap the share icon on the right side. It looks like a paper plane.
  3. Tap Share Audio at the bottom of the pop-up screen.
  4. Select who you want to send it to from your contacts, or search for someone on Instagram.
  5. Add a message if you like and tap Send.
-

How do I mute the audio of an Instagram Reel?

-

If you want to mute the audio of an Instagram Reel, you can use the mute button in the app. This lets you watch the video without any sound. Here are the steps to follow:

-
    -
  1. Choose the Reel you want to mute and tap it to open it in full screen.
  2. Tap the mute button on the video.
  3. The app will mute the audio of that Reel and any other Reels you watch after that.
  4. To unmute, tap the mute button again. It will look like a speaker with sound waves coming out of it.

-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Carx Street Android Hack Apk.md b/spaces/Benson/text-generation/Examples/Carx Street Android Hack Apk.md deleted file mode 100644 index 9c607313f4932cab001f3d5611e385772bbaf436..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Carx Street Android Hack Apk.md +++ /dev/null @@ -1,60 +0,0 @@ -
-

CarX Street Android Hack APK: How to Get Unlimited Money and Unlock All Cars

-

If you are a fan of realistic racing games, you may have heard of CarX Street, a simulation game that offers stunning graphics, physics, and customization. CarX Street lets you explore a large open world with different types of maps, from busy city streets to winding mountain roads and coastal highways. You can also choose from a variety of cars, from classic muscle cars to modern supercars, and tune them to your liking. You can race against other players in real network races, or join clubs and challenge bosses.

-

However, as fun as CarX Street is, it can also be frustrating if you don't have enough money to buy new cars or parts, or if you want to unlock all the cars and modes in the game. That is why some players look for a hack APK for CarX Street, a modified version of the game that gives you unlimited money, unlocks all cars and modes, and lets you customize the game settings. With a hack APK, you can enjoy CarX Street without limitations or restrictions.

-

carx street android hack apk


DOWNLOAD ►►►►► https://bltlly.com/2v6Myp



-

But before you download and install a hack APK for CarX Street, you should be aware of the benefits and risks of using one. In this article, we will show you how to find, install, and use a hack APK for CarX Street, as well as some features and tips for the game. Read on to learn more.

-

How to Download and Install the CarX Street Hack APK

-

The first step to using a hack APK for CarX Street is to find a reliable source for it. There are many websites that claim to offer hack APKs for various games, but not all of them are trustworthy. Some may contain viruses, malware, or spyware that can damage your device or steal your personal information. Some may also provide fake or outdated versions of the hack APK that don't work or cause problems with the game.

- -

Once you have found a reliable source for the CarX Street hack APK, you need to enable unknown sources on your Android device. This is because Android devices normally do not allow you to install apps from sources other than the Google Play Store. To enable unknown sources, go to Settings > Security > Unknown sources and turn it on. You may also need to grant some permissions to the hack APK when installing it.

-

After enabling unknown sources, you can download and install the hack APK for CarX Street by following these steps:

-
    -
  1. Download the hack APK file from the source you have chosen.
  2. Find the file in your device's storage and tap on it.
  3. Follow the on-screen instructions to install the hack APK.
  4. Launch the game and enjoy.
-

You can verify that the hack APK is working by checking whether you have unlimited money and all cars and modes unlocked in the game. You can also access the mod menu by tapping the icon in the top-left corner of the screen. The mod menu lets you customize game settings such as speed, acceleration, handling, gravity, damage, and more. You can also enable or disable certain features, such as nitro, drift, traffic, and police.

-

How to Use the CarX Street Hack APK

-

Now that you have installed the hack APK for CarX Street, you can use it to enjoy the game without limitations or restrictions. Here are some of the things you can do with the hack APK:

-
    -
  • Get unlimited money and buy any car or part you want. You can access the shop from the main menu and browse the different categories of cars and parts. You can buy any car or part you like without worrying about the price. You can also upgrade your cars and parts to improve their performance and appearance.
  • Unlock all the cars and modes in the game and try them without having to progress first.
  • Customize the game settings to your preference. You can access the mod menu from the top-left corner of the screen and adjust the game settings to your liking. You can change the speed, acceleration, handling, gravity, damage, and more of your car. You can also enable or disable features such as nitro, drift, traffic, and police. You can experiment with different settings and see how they affect your gameplay.
-

CarX Street Game Features and Tips

-

CarX Street is a simulation game that offers realistic graphics, physics, and customization. It is one of the most popular racing games on Android devices. Here are some of the main features of CarX Street:

-
    -
  • Stunning graphics and sound effects. CarX Street uses advanced graphics technology to create realistic visuals and sound. You can see the details of your car, the environment, and the weather. You can also hear the engine sound, tire squeal, and collision impacts.
  • Realistic physics and driving mechanics. CarX Street uses a realistic physics engine to simulate how your car behaves on different surfaces and in different conditions. You can feel the weight, inertia, traction, and suspension of your car. You can also control your car with different driving techniques, such as steering, braking, accelerating, drifting, and nitro.
  • A large open world with different types of maps. CarX Street lets you explore a large open world with different types of maps, such as city streets, winding mountain roads, coastal highways, and more. Each map type has its own characteristics and challenges. You can discover new places and secrets on every map.
  • A wide variety of cars and tuning options, from classic muscle cars to modern supercars that you can customize to your liking.
  • Online multiplayer and club features. CarX Street lets you compete with other players in real network races. You can join or create a club and challenge other clubs or bosses. You can also chat with other players and share your achievements and tips.
-

CarX Street is a game that takes skill and strategy to master. Here are some tips and tricks to improve your racing skills and performance:

-

-
    -
  • Choose the right car and parts for each map and mode. Different cars and parts have different advantages and disadvantages on different maps and in different modes. For example, a car with high speed and acceleration may be good for highway races but not for city races. A car with strong handling and braking may be good for mountain roads but not for coastal highways. Experiment with different combinations and find the best one for each situation.
  • Use driving techniques wisely. CarX Street offers different driving techniques, such as steering, braking, accelerating, drifting, and nitro. Use them wisely to control your car and gain an edge over your opponents. For example, you can steer to avoid obstacles and corners, brake to slow down and set up for turns, accelerate to speed up and overtake, drift to keep your momentum and earn points, and use nitro to boost your speed and performance.
  • Watch out for traffic and police. CarX Street features traffic and police on some maps and in some modes. Be careful around them and avoid crashing into them. Traffic can slow you down and damage your car, and the police can chase you and fine or arrest you. You can use the map in the top-right corner of the screen to see traffic and police locations.
-

Conclusion

- -

In this article, we showed you how to find, install, and use a hack APK for CarX Street, as well as some features and tips for the game. We hope you found this article useful and informative. However, we also want to remind you that using a hack APK for CarX Street is neither legal nor ethical, and it may cause problems with the game or your device. Use it at your own risk and discretion.

-

If you have any feedback or questions about this article or the CarX Street game, feel free to leave a comment below. We would love to hear from you.

-

Frequently Asked Questions

-
    -
  • Q: Is CarX Street free to play?
  • A: Yes, CarX Street is free to download and play on Android devices. However, it also contains in-app purchases that require real money.
  • Q: Is CarX Street compatible with my device?
  • A: CarX Street requires Android 6.0 or higher and at least 2 GB of RAM to run smoothly. You can check your device's compatibility on the Google Play Store.
  • Q: How do I update CarX Street?
  • A: You can update CarX Street from the Google Play Store or from the game's official website. However, if you are using a hack APK for CarX Street, you may not be able to update it or access the latest features of the game.
  • Q: How can I contact the CarX Street developers?
  • A: You can contact the CarX Street developers by emailing support@carx-tech.com or visiting their Facebook page.
  • Q: How do I report a bug or an issue with CarX Street?
  • A: You can report a bug or an issue with CarX Street by emailing support@carx-tech.com or using the feedback option in the game settings.

-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Error Genshin Impacto.md b/spaces/Benson/text-generation/Examples/Descargar Error Genshin Impacto.md deleted file mode 100644 index 76afa92cc091646fa29e4520be293f5161655155..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Error Genshin Impacto.md +++ /dev/null @@ -1,82 +0,0 @@ - -

How to Fix the Genshin Impact Download Error on a Windows PC

-

Genshin Impact is a popular action role-playing game that is free to play but offers in-game purchases for additional items and characters. The game was released in 2020 by miHoYo, a video game development company based in Shanghai, China. Genshin Impact has received positive reviews from critics and players alike for its stunning graphics, engaging gameplay, and rich story.

-

download error genshin impact


Download File ☆☆☆ https://bltlly.com/2v6JR4



-

However, some Windows PC users have reported running into a download error when trying to install or update the game. The error message reads "Game file download error. Please check your network settings and try again." This error can keep you from enjoying the game and can be frustrating to deal with.

-

In this article, we will explain what causes this error and how you can fix it using five simple methods. We will also answer some frequently asked questions about the game and its download issues.

-

What Is the Genshin Impact Download Error?

-

The Genshin Impact download error is an error that occurs when you try to download or update the Genshin Impact game files on your Windows PC. The error can stop the download process and corrupt the game files, making them unusable.

-

-

Causes of the Download Error

There are several possible causes for this error, such as:

-
    -
  • An unstable or slow internet connection
  • Antivirus or firewall software blocking the download
  • Corrupted or incomplete game files
  • Incorrect DNS settings
  • Server issues or maintenance
-

Symptoms of the Genshin Impact Download Error

-

Some of the common symptoms of this error are:

-
    -
  • The download stops at a certain percentage or file size
  • The download speed is very slow or fluctuates
  • The error message appears repeatedly
  • The game launcher crashes or freezes
  • The game does not run properly
- -

Fortunately, there are some easy and effective ways to fix this error and resume your download. Here are five methods you can try:

-

Method 1: Restart Your Router and Check Your Internet Speed

-

The first thing you should do is check your internet connection and make sure it is stable and fast enough to download the game files. You can use an online speed test tool to measure your internet speed and compare it with the recommended speed for downloading Genshin Impact.

-

The recommended speed for downloading Genshin Impact is at least 5 Mbps for both upload and download. If your speed is lower than that, you may experience slow or interrupted downloads.
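If you want to run this check from a script instead of a website, one option is the third-party speedtest-cli package for Python. This is only a rough sketch, and the 5 Mbps threshold is simply the figure quoted above.

    import speedtest  # pip install speedtest-cli

    st = speedtest.Speedtest()
    st.get_best_server()

    # Results are reported in bits per second; convert to Mbps
    down_mbps = st.download() / 1_000_000
    up_mbps = st.upload() / 1_000_000

    print(f"Download: {down_mbps:.1f} Mbps, Upload: {up_mbps:.1f} Mbps")
    if min(down_mbps, up_mbps) < 5:
        print("Below the recommended 5 Mbps - expect slow or interrupted downloads.")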

-

To improve your internet speed, you can try the following steps:

-
    -
  1. Restart your router by unplugging it from the power source for a few seconds and plugging it back in.
  2. Move your router closer to your PC, or use a wired connection instead of Wi-Fi.
  3. Avoid using other devices or applications that consume bandwidth while you are downloading the game.
  4. Contact your internet service provider if none of the above helps.

    Method 4: Use a VPN to Connect to the Download Server

    You can choose any VPN service that suits your needs and preferences, but make sure to read reviews and ratings before choosing one. Some popular VPN services are ExpressVPN, NordVPN, Surfshark, and CyberGhost.

    -

Here are the steps to use a VPN service to fix the download error:

    -
      -
    1. Download and install a VPN service of your choice on your PC.
    2. Launch the VPN service and sign in with your account.
    3. Select a server location that is close to the download server. For example, if you are downloading from the Asia server, you can choose a server in Japan, Korea, or Singapore.
    4. Connect to the server and wait for the connection to be established.
    5. Try downloading the game again and see if the error is resolved.
    - -

    Method 5: Manually Download the Game Files

    -

    The last method you can try is to manually download the game files from a third-party source and copy them into your game folder. This can bypass the download error and save you time and bandwidth. However, this method is not recommended by the official game developers and may carry some risks, such as malware infection, data loss, or an account ban. You should therefore only use it at your own risk and discretion.

    -

    Here are the steps to manually download the game files (a short extract-and-copy sketch follows these steps):

    -
      -
    1. Go to a trusted website that offers the latest version of the Genshin Impact game files. You can search online for these websites or ask other players for recommendations. Some of the websites that offer this service are https://genshinimpact.fandom.com/wiki/Downloads, https://www.gensh.in/download-links, and https://www.reddit.com/r/Genshin_Impact/comments/j1s3ng/genshin_impact_installationfiles/
    2. Select the server that matches your region and download the game files as a zip or rar archive.
    3. Extract the game files using a file extractor program such as WinRAR or 7-Zip.
    4. Copy and paste the game files into your game folder. The default location of the game folder is C:\Program Files\Genshin Impact\Genshin Impact Game.
    5. Run the launcher and verify the game files. The launcher will check for any missing or outdated files and download them if needed.
    6. Launch the game and enjoy playing without errors.
    -
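If you are comfortable with a short script, the extract-and-copy step can be sketched in Python. The archive name and staging folder below are assumptions, and this does not replace the launcher's own file verification.

    import zipfile
    import shutil
    from pathlib import Path

    archive = Path("GenshinImpact_game_files.zip")   # assumed name of the downloaded archive
    staging = Path("extracted_files")                # temporary folder for the extracted files
    game_dir = Path(r"C:\Program Files\Genshin Impact\Genshin Impact Game")

    # Extract the archive into a staging folder first
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(staging)

    # Copy everything into the game folder, overwriting any older files (Python 3.8+)
    shutil.copytree(staging, game_dir, dirs_exist_ok=True)
    print("Files copied - run the launcher and let it verify the installation.")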

    Conclusion

    -

    Genshin Impact is a fun and immersive game that you can play for free on your Windows PC. However, you may run into download errors that keep you from installing or updating the game. These errors can be caused by various factors, such as your internet connection, antivirus software, corrupted game files, DNS settings, or server issues.

    Here is a summary of the five methods we covered in this article:
      -
    • Restart your router and check your internet speed
    • Disable or whitelist your antivirus software
    • Uninstall and reinstall the game and the launcher
    • Use a VPN to connect to the download server
    • Manually download the game files
    -

    We hope these methods help you fix the Genshin Impact download error on your Windows PC and let you enjoy the game without any issues. If you have any questions or feedback, feel free to leave a comment below.

    -

    Frequently Asked Questions

    -

    Why does Genshin keep failing to download?

    -

    Genshin may keep failing to download for various reasons, such as an unstable or slow internet connection, antivirus or firewall software blocking the download, corrupted or incomplete game files, incorrect DNS settings, or server issues or maintenance. You can try one of the methods discussed in this article to fix the download error and resume your download.

    -

    How long does it take to download Genshin Impact?

    -

    The download time for Genshin Impact depends on your internet speed and the size of the game files. The game files are about 20 GB in total, but this can vary depending on the server and updates. The average download time for Genshin Impact is 1 to 2 hours, but it can take longer if your internet speed is slow or if you run into download errors.
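As a quick sanity check on that estimate, you can work out the expected time from the approximate 20 GB size and your own measured speed; the 40 Mbps value below is only an example.

    game_size_gb = 20    # approximate total size of the game files
    speed_mbps = 40      # example measured download speed in megabits per second

    # 1 GB is roughly 8,000 megabits, so time = size / speed
    time_minutes = game_size_gb * 8000 / speed_mbps / 60
    print(f"Estimated download time: {time_minutes:.0f} minutes")
    # At 40 Mbps this prints about 67 minutes, consistent with the 1-2 hour estimate above.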

    -

    How do I update Genshin Impact on PC?

    -

    To update Genshin Impact on PC, run the launcher and click the Update button. The launcher will automatically download and install the latest version of the game. You can also check the official Genshin Impact website or social media accounts for any news or announcements about updates.

    -

    How do I verify the game files in Genshin Impact?

    - -

    How do I change the download server in Genshin Impact?

    -

    To change the download server in Genshin Impact, run the launcher and click the settings icon in the top-right corner. Then click the Game Server tab and select the server that matches your region. You can choose between Asia, Europe, America, or TW, HK, MO. After selecting the server, click Save and restart the launcher.

    -
    -
    \ No newline at end of file diff --git a/spaces/CForGETaass/vits-uma-genshin-honkai/attentions.py b/spaces/CForGETaass/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/CForGETaass/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = 
self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/CVPR/LIVE/thrust/dependencies/cub/test/half.h b/spaces/CVPR/LIVE/thrust/dependencies/cub/test/half.h deleted file mode 100644 index 842f9f730d2c6532ae909d5d07d3a254e8bb6ffd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/dependencies/cub/test/half.h +++ /dev/null @@ -1,317 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2011, Duane Merrill. All rights reserved. - * Copyright (c) 2011-2019, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED - * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE - * DISCLAIMED. 
IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ - -#pragma once - -/** - * \file - * Utilities for interacting with the opaque CUDA __half type - */ - -#include -#include -#include - -#include - -#ifdef __GNUC__ -// There's a ton of type-punning going on in this file. -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wstrict-aliasing" -#endif - - -/****************************************************************************** - * half_t - ******************************************************************************/ - -/** - * Host-based fp16 data type compatible and convertible with __half - */ -struct half_t -{ - uint16_t __x; - - /// Constructor from __half - __host__ __device__ __forceinline__ - half_t(const __half &other) - { - __x = reinterpret_cast(other); - } - - /// Constructor from integer - __host__ __device__ __forceinline__ - half_t(int a) - { - *this = half_t(float(a)); - } - - /// Default constructor - __host__ __device__ __forceinline__ - half_t() : __x(0) - {} - - /// Constructor from float - __host__ __device__ __forceinline__ - half_t(float a) - { - // Stolen from Norbert Juffa - uint32_t ia = *reinterpret_cast(&a); - uint16_t ir; - - ir = (ia >> 16) & 0x8000; - - if ((ia & 0x7f800000) == 0x7f800000) - { - if ((ia & 0x7fffffff) == 0x7f800000) - { - ir |= 0x7c00; /* infinity */ - } - else - { - ir = 0x7fff; /* canonical NaN */ - } - } - else if ((ia & 0x7f800000) >= 0x33000000) - { - int32_t shift = (int32_t) ((ia >> 23) & 0xff) - 127; - if (shift > 15) - { - ir |= 0x7c00; /* infinity */ - } - else - { - ia = (ia & 0x007fffff) | 0x00800000; /* extract mantissa */ - if (shift < -14) - { /* denormal */ - ir |= ia >> (-1 - shift); - ia = ia << (32 - (-1 - shift)); - } - else - { /* normal */ - ir |= ia >> (24 - 11); - ia = ia << (32 - (24 - 11)); - ir = ir + ((14 + shift) << 10); - } - /* IEEE-754 round to nearest of even */ - if ((ia > 0x80000000) || ((ia == 0x80000000) && (ir & 1))) - { - ir++; - } - } - } - - this->__x = ir; - } - - /// Cast to __half - __host__ __device__ __forceinline__ - operator __half() const - { - return reinterpret_cast(__x); - } - - /// Cast to float - __host__ __device__ __forceinline__ - operator float() const - { - // Stolen from Andrew Kerr - - int sign = ((this->__x >> 15) & 1); - int exp = ((this->__x >> 10) & 0x1f); - int mantissa = (this->__x & 0x3ff); - uint32_t f = 0; - - if (exp > 0 && exp < 31) - { - // normal - exp += 112; - f = (sign << 31) | (exp << 23) | (mantissa << 13); - } - else if (exp == 0) - { - if (mantissa) - { - // subnormal - exp += 113; - while ((mantissa & (1 << 10)) == 0) - { - mantissa <<= 1; - exp--; - } - mantissa &= 0x3ff; - f = (sign << 31) | (exp << 23) | (mantissa << 13); - } - else if (sign) - { - f = 0x80000000; // negative zero - } - else - { - f = 0x0; // zero - } - } - else if (exp == 31) - { - if (mantissa) - { - f = 0x7fffffff; // not a number - } - else - { - f = (0xff << 23) | (sign << 31); // inf - } - } - return *reinterpret_cast(&f); - } 
- - - /// Get raw storage - __host__ __device__ __forceinline__ - uint16_t raw() - { - return this->__x; - } - - /// Equality - __host__ __device__ __forceinline__ - bool operator ==(const half_t &other) - { - return (this->__x == other.__x); - } - - /// Inequality - __host__ __device__ __forceinline__ - bool operator !=(const half_t &other) - { - return (this->__x != other.__x); - } - - /// Assignment by sum - __host__ __device__ __forceinline__ - half_t& operator +=(const half_t &rhs) - { - *this = half_t(float(*this) + float(rhs)); - return *this; - } - - /// Multiply - __host__ __device__ __forceinline__ - half_t operator*(const half_t &other) - { - return half_t(float(*this) * float(other)); - } - - /// Add - __host__ __device__ __forceinline__ - half_t operator+(const half_t &other) - { - return half_t(float(*this) + float(other)); - } - - /// Less-than - __host__ __device__ __forceinline__ - bool operator<(const half_t &other) const - { - return float(*this) < float(other); - } - - /// Less-than-equal - __host__ __device__ __forceinline__ - bool operator<=(const half_t &other) const - { - return float(*this) <= float(other); - } - - /// Greater-than - __host__ __device__ __forceinline__ - bool operator>(const half_t &other) const - { - return float(*this) > float(other); - } - - /// Greater-than-equal - __host__ __device__ __forceinline__ - bool operator>=(const half_t &other) const - { - return float(*this) >= float(other); - } - - /// numeric_traits::max - __host__ __device__ __forceinline__ - static half_t max() { - uint16_t max_word = 0x7BFF; - return reinterpret_cast(max_word); - } - - /// numeric_traits::lowest - __host__ __device__ __forceinline__ - static half_t lowest() { - uint16_t lowest_word = 0xFBFF; - return reinterpret_cast(lowest_word); - } -}; - - -/****************************************************************************** - * I/O stream overloads - ******************************************************************************/ - -/// Insert formatted \p half_t into the output stream -std::ostream& operator<<(std::ostream &out, const half_t &x) -{ - out << (float)x; - return out; -} - - -/// Insert formatted \p __half into the output stream -std::ostream& operator<<(std::ostream &out, const __half &x) -{ - return out << half_t(x); -} - - -/****************************************************************************** - * Traits overloads - ******************************************************************************/ - -template <> -struct cub::FpLimits -{ - static __host__ __device__ __forceinline__ half_t Max() { return half_t::max(); } - - static __host__ __device__ __forceinline__ half_t Lowest() { return half_t::lowest(); } -}; - -template <> struct cub::NumericTraits : cub::BaseTraits {}; - - -#ifdef __GNUC__ -#pragma GCC diagnostic pop -#endif diff --git a/spaces/CVPR/monoscene_lite/monoscene/unet2d.py b/spaces/CVPR/monoscene_lite/monoscene/unet2d.py deleted file mode 100644 index 68fc659cee62b88212d99bb98c1a2e93a5c3e1e2..0000000000000000000000000000000000000000 --- a/spaces/CVPR/monoscene_lite/monoscene/unet2d.py +++ /dev/null @@ -1,198 +0,0 @@ -""" -Code adapted from https://github.com/shariqfarooq123/AdaBins/blob/main/models/unet_adaptive_bins.py -""" -import torch -import torch.nn as nn -import torch.nn.functional as F -import os - - -class UpSampleBN(nn.Module): - def __init__(self, skip_input, output_features): - super(UpSampleBN, self).__init__() - self._net = nn.Sequential( - nn.Conv2d(skip_input, output_features, kernel_size=3, stride=1, 
padding=1), - nn.BatchNorm2d(output_features), - nn.LeakyReLU(), - nn.Conv2d( - output_features, output_features, kernel_size=3, stride=1, padding=1 - ), - nn.BatchNorm2d(output_features), - nn.LeakyReLU(), - ) - - def forward(self, x, concat_with): - up_x = F.interpolate( - x, - size=(concat_with.shape[2], concat_with.shape[3]), - mode="bilinear", - align_corners=True, - ) - f = torch.cat([up_x, concat_with], dim=1) - return self._net(f) - - -class DecoderBN(nn.Module): - def __init__( - self, num_features, bottleneck_features, out_feature, use_decoder=True - ): - super(DecoderBN, self).__init__() - features = int(num_features) - self.use_decoder = use_decoder - - self.conv2 = nn.Conv2d( - bottleneck_features, features, kernel_size=1, stride=1, padding=1 - ) - - self.out_feature_1_1 = out_feature - self.out_feature_1_2 = out_feature - self.out_feature_1_4 = out_feature - self.out_feature_1_8 = out_feature - self.out_feature_1_16 = out_feature - self.feature_1_16 = features // 2 - self.feature_1_8 = features // 4 - self.feature_1_4 = features // 8 - self.feature_1_2 = features // 16 - self.feature_1_1 = features // 32 - - if self.use_decoder: - self.resize_output_1_1 = nn.Conv2d( - self.feature_1_1, self.out_feature_1_1, kernel_size=1 - ) - self.resize_output_1_2 = nn.Conv2d( - self.feature_1_2, self.out_feature_1_2, kernel_size=1 - ) - self.resize_output_1_4 = nn.Conv2d( - self.feature_1_4, self.out_feature_1_4, kernel_size=1 - ) - self.resize_output_1_8 = nn.Conv2d( - self.feature_1_8, self.out_feature_1_8, kernel_size=1 - ) - self.resize_output_1_16 = nn.Conv2d( - self.feature_1_16, self.out_feature_1_16, kernel_size=1 - ) - - self.up16 = UpSampleBN( - skip_input=features + 224, output_features=self.feature_1_16 - ) - self.up8 = UpSampleBN( - skip_input=self.feature_1_16 + 80, output_features=self.feature_1_8 - ) - self.up4 = UpSampleBN( - skip_input=self.feature_1_8 + 48, output_features=self.feature_1_4 - ) - self.up2 = UpSampleBN( - skip_input=self.feature_1_4 + 32, output_features=self.feature_1_2 - ) - self.up1 = UpSampleBN( - skip_input=self.feature_1_2 + 3, output_features=self.feature_1_1 - ) - else: - self.resize_output_1_1 = nn.Conv2d(3, out_feature, kernel_size=1) - self.resize_output_1_2 = nn.Conv2d(32, out_feature * 2, kernel_size=1) - self.resize_output_1_4 = nn.Conv2d(48, out_feature * 4, kernel_size=1) - - def forward(self, features): - x_block0, x_block1, x_block2, x_block3, x_block4 = ( - features[4], - features[5], - features[6], - features[8], - features[11], - ) - bs = x_block0.shape[0] - x_d0 = self.conv2(x_block4) - - if self.use_decoder: - x_1_16 = self.up16(x_d0, x_block3) - x_1_8 = self.up8(x_1_16, x_block2) - x_1_4 = self.up4(x_1_8, x_block1) - x_1_2 = self.up2(x_1_4, x_block0) - x_1_1 = self.up1(x_1_2, features[0]) - return { - "1_1": self.resize_output_1_1(x_1_1), - "1_2": self.resize_output_1_2(x_1_2), - "1_4": self.resize_output_1_4(x_1_4), - "1_8": self.resize_output_1_8(x_1_8), - "1_16": self.resize_output_1_16(x_1_16), - } - else: - x_1_1 = features[0] - x_1_2, x_1_4, x_1_8, x_1_16 = ( - features[4], - features[5], - features[6], - features[8], - ) - x_global = features[-1].reshape(bs, 2560, -1).mean(2) - return { - "1_1": self.resize_output_1_1(x_1_1), - "1_2": self.resize_output_1_2(x_1_2), - "1_4": self.resize_output_1_4(x_1_4), - "global": x_global, - } - - -class Encoder(nn.Module): - def __init__(self, backend): - super(Encoder, self).__init__() - self.original_model = backend - - def forward(self, x): - features = [x] - for k, v in 
self.original_model._modules.items(): - if k == "blocks": - for ki, vi in v._modules.items(): - features.append(vi(features[-1])) - else: - features.append(v(features[-1])) - return features - - -class UNet2D(nn.Module): - def __init__(self, backend, num_features, out_feature, use_decoder=True): - super(UNet2D, self).__init__() - self.use_decoder = use_decoder - self.encoder = Encoder(backend) - self.decoder = DecoderBN( - out_feature=out_feature, - use_decoder=use_decoder, - bottleneck_features=num_features, - num_features=num_features, - ) - - def forward(self, x, **kwargs): - encoded_feats = self.encoder(x) - unet_out = self.decoder(encoded_feats, **kwargs) - return unet_out - - def get_encoder_params(self): # lr/10 learning rate - return self.encoder.parameters() - - def get_decoder_params(self): # lr learning rate - return self.decoder.parameters() - - @classmethod - def build(cls, **kwargs): - basemodel_name = "tf_efficientnet_b7_ns" - num_features = 2560 - - print("Loading base model ()...".format(basemodel_name), end="") - basemodel = torch.hub.load( - "rwightman/gen-efficientnet-pytorch", basemodel_name, pretrained=True - ) - print("Done.") - - # Remove last layer - print("Removing last two layers (global_pool & classifier).") - basemodel.global_pool = nn.Identity() - basemodel.classifier = nn.Identity() - - # Building Encoder-Decoder model - print("Building Encoder-Decoder model..", end="") - m = cls(basemodel, num_features=num_features, **kwargs) - print("Done.") - return m - -if __name__ == '__main__': - model = UNet2D.build(out_feature=256, use_decoder=True) diff --git a/spaces/CVPR/regionclip-demo/detectron2/export/README.md b/spaces/CVPR/regionclip-demo/detectron2/export/README.md deleted file mode 100644 index 9fcd33513fb81ef3aeb4d3c8d9732324dffa2646..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/export/README.md +++ /dev/null @@ -1,13 +0,0 @@ - -This directory contains code to prepare a detectron2 model for deployment. -Currently it supports exporting a detectron2 model to Caffe2 format through ONNX. - -Please see [documentation](https://detectron2.readthedocs.io/tutorials/deployment.html) for its usage. - - -### Acknowledgements - -Thanks to Mobile Vision team at Facebook for developing the Caffe2 conversion tools. - -Thanks to Computing Platform Department - PAI team at Alibaba Group (@bddpqq, @chenbohua3) who -help export Detectron2 models to TorchScript. 
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/apps/notice/notice.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/apps/notice/notice.js deleted file mode 100644 index 443ad25e5a6bfe89abb9d3f01e52511e549f9f9f..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/apps/notice/notice.js +++ /dev/null @@ -1,184 +0,0 @@ -import { sendSocketList, Config, Version } from '../../components/index.js' -import { setMsgMap } from '../../model/index.js' - -Bot.on('notice', async e => { - if (e.self_id == '88888'){ - if (e.group?.bot?.uin) { - e.self_id = e.group.bot.uin - } else if (e.friend?.bot?.uin) { - e.self_id = e.friend.bot.uin - } - e.bot = Bot[e.self_id] - } - if (Config.muteStop && (e.group?.mute_left > 0 || e.group?.all_muted)) return false - if (sendSocketList.length == 0) return false - if (e.group_id) { - // 判断云崽白名单 - const whiteGroup = Config.whiteGroup - if (Array.isArray(whiteGroup) && whiteGroup.length > 0) { - if (!whiteGroup.some(i => i == e.group_id)) return false - } - // 判断插件白名单 - const yesGroup = Config.yesGroup - if (Array.isArray(yesGroup) && yesGroup.length > 0) { - if (!yesGroup.some(i => i == e.group_id)) return false - } - // 判断云崽黑名单 - const blackGroup = Config.blackGroup - if (Array.isArray(blackGroup) && blackGroup.length > 0) { - if (blackGroup.some(i => i == e.group_id)) return false - } - // 判断插件黑名单 - const noGroup = Config.noGroup - if (Array.isArray(noGroup) && noGroup.length > 0) { - if (noGroup.some(i => i == e.group_id)) return false - } - } - e.reply = reply(e) - let other = {} - if (e.notice_type == 'group') { - other.group_id = e.group_id - other.user_id = e.user_id - other.operator_id = e.operator_id - switch (e.sub_type) { - //群员增加 - case 'increase': - if (!Config.groupIncrease) return false - other.notice_type = 'group_increase' - other.sub_type = 'approve' - other.operator_id = e.user_id - break; - //群员减少 - case 'decrease': - if (!Config.groupDecrease) return false - other.notice_type = 'group_decrease' - other.sub_type = e.operator_id == e.user_id ? 'leave' : 'kick' - if (e.user_id == Bot.uin) other.sub_type = 'kick_me' - break - //戳一戳 - case 'poke': - if (!Config.groupPoke) return false - other.notice_type = 'notify' - other.sub_type = 'poke' - other.user_id = e.operator_id - other.target_id = e.target_id - break - //群管理变动 - case 'admin': - if (!Config.groupAdmin) return false - other.notice_type = 'group_admin' - other.sub_type = e.set ? 'set' : 'unset' - break - //禁言 - case 'ban': - if (!Config.groupBan) return false - other.notice_type = 'group_ban' - other.sub_type = e.duration == 0 ? 
'lift_ban' : 'ban' - other.duration = e.duration - break - //群消息撤回 - case 'recall': - if (!Config.groupRecall) return false - other.notice_type = 'group_recall' - other.message_id = e.rand - break - default: - return false - } - } else if (e.notice_type == 'friend') { - other.user_id = e.user_id - switch (e.sub_type) { - //好友添加 - case 'increase': - if (!Config.friendIncrease) return false - other.notice_type = 'friend_add' - break - //好友消息撤回 - case 'recall': - if (!Config.friendRecall) return false - other.notice_type = 'friend_recall' - other.message_id = e.rand - break - default: - return false - } - } else { - return false - } - let msg = { - time: Date.parse(new Date()) / 1000, - self_id: e.self_id, - post_type: 'notice', - ...other - } - msg = JSON.stringify(msg) - for (const i of sendSocketList) { - if (i.status == 1) { - switch (Number(i.type)) { - case 1: - case 2: - if (Version.isTrss) { - if (i.uin != e.self_id) continue - if (!Version.protocol.some(i => i == e.bot?.version?.name)) continue - } - i.ws.send(msg) - break; - default: - break; - } - } - } -}) - -function reply(e) { - if (!Version.isTrss) { - const replyNew = e.reply - return async function (massage, quote = false, data = {}) { - const ret = await replyNew(massage, quote, data) - if (ret) { - setMsgMap({ - message_id: ret.message_id, - time: ret.time, - seq: ret.seq, - rand: ret.rand, - user_id: e.user_id, - group_id: e.group_id, - onebot_id: Math.floor(Math.random() * Math.pow(2, 32)) | 0, - }) - } - return ret - } - } else { - if (e.bot?.version?.name == 'ICQQ') { - return async function (massage, quote = false) { - let ret - if (e.isGroup) { - if (e.group?.sendMsg) { - ret = await e.group.sendMsg(massage, quote) - } else { - ret = await e.bot.pickGroup(e.group_id).sendMsg(massage, quote) - } - } else { - if (e.friend?.sendMsg) { - ret = await e.friend.sendMsg(massage, quote) - } else { - ret = await e.bot.pickFriend(e.user_id).sendMsg(massage, quote) - } - } - if (ret) { - setMsgMap({ - message_id: ret.message_id, - time: ret.time, - seq: ret.seq, - rand: ret.rand, - user_id: e.user_id, - group_id: e.group_id, - onebot_id: Math.floor(Math.random() * Math.pow(2, 32)) | 0, - }) - } - return ret - } - } - return e.reply - } -} \ No newline at end of file diff --git a/spaces/Cosmo-Hug/Cosmo-Hug-FeverDream/app.py b/spaces/Cosmo-Hug/Cosmo-Hug-FeverDream/app.py deleted file mode 100644 index 69c1b6b0c6269d5b3a1020f8b5b2ed8430fb8a26..0000000000000000000000000000000000000000 --- a/spaces/Cosmo-Hug/Cosmo-Hug-FeverDream/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Cosmo-Hug/FeverDream").launch() \ No newline at end of file diff --git a/spaces/Cpp4App/Cpp4App/run_sem_test.py b/spaces/Cpp4App/Cpp4App/run_sem_test.py deleted file mode 100644 index bd8ce8dd4fba06cdc617615debcf695646aa9b30..0000000000000000000000000000000000000000 --- a/spaces/Cpp4App/Cpp4App/run_sem_test.py +++ /dev/null @@ -1,23 +0,0 @@ -from SEM.run_single_sem import run_single_pp -from bs4 import BeautifulSoup -import shutil - -# file = open('examples/6.html', encoding='utf-8') - -# file_content = file.read().decode('utf-8') -# file_content = file - -# file_content = 'examples/6.html' -# pp_root = 'demo_pp.html' -# with open(pp_root, 'wb') as file: -# with open(file_content, 'rb') as html: -# shutil.copyfileobj(html, file) - -with open("examples/6.html", "r") as file: - example_file_content = file.read() - -run_single_pp(example_file_content) - -# soup = BeautifulSoup(file, features="html.parser") -# 
print("soup.contents: ", soup.contents) - diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/build.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/build.py deleted file mode 100644 index 24fbc5c1f4897b40cb13c204767315e549c18d28..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/build.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import bisect -import copy -import logging - -import torch.utils.data -from maskrcnn_benchmark.utils.comm import get_world_size -from maskrcnn_benchmark.utils.imports import import_file - -from . import datasets as D -from . import samplers - -from .collate_batch import BatchCollator -from .transforms import build_transforms - - -def build_dataset(dataset_list, transforms, dataset_catalog, is_train=True): - """ - Arguments: - dataset_list (list[str]): Contains the names of the datasets, i.e., - coco_2014_trian, coco_2014_val, etc - transforms (callable): transforms to apply to each (image, target) sample - dataset_catalog (DatasetCatalog): contains the information on how to - construct a dataset. - is_train (bool): whether to setup the dataset for training or testing - """ - if not isinstance(dataset_list, (list, tuple)): - raise RuntimeError( - "dataset_list should be a list of strings, got {}".format(dataset_list) - ) - datasets = [] - for dataset_name in dataset_list: - data = dataset_catalog.get(dataset_name) - factory = getattr(D, data["factory"]) - args = data["args"] - # for COCODataset, we want to remove images without annotations - # during training - if data["factory"] in ["COCODataset", - "WordDataset"]: - args["remove_images_without_annotations"] = is_train - if data["factory"] == "PascalVOCDataset": - args["use_difficult"] = not is_train - args["transforms"] = transforms - # make dataset from factory - dataset = factory(**args) - datasets.append(dataset) - - # for testing, return a list of datasets - if not is_train: - return datasets - - # for training, concatenate all datasets into a single one - dataset = datasets[0] - if len(datasets) > 1: - dataset = D.ConcatDataset(datasets) - - return [dataset] - - -def make_data_sampler(dataset, shuffle, distributed): - if distributed: - return samplers.DistributedSampler(dataset, shuffle=shuffle) - if shuffle: - sampler = torch.utils.data.sampler.RandomSampler(dataset) - else: - sampler = torch.utils.data.sampler.SequentialSampler(dataset) - return sampler - - -def _quantize(x, bins): - bins = copy.copy(bins) - bins = sorted(bins) - quantized = list(map(lambda y: bisect.bisect_right(bins, y), x)) - return quantized - - -def _compute_aspect_ratios(dataset): - aspect_ratios = [] - for i in range(len(dataset)): - img_info = dataset.get_img_info(i) - aspect_ratio = float(img_info["height"]) / float(img_info["width"]) - aspect_ratios.append(aspect_ratio) - return aspect_ratios - - -def make_batch_data_sampler( - dataset, sampler, aspect_grouping, images_per_batch, num_iters=None, start_iter=0 -): - if aspect_grouping: - if not isinstance(aspect_grouping, (list, tuple)): - aspect_grouping = [aspect_grouping] - aspect_ratios = _compute_aspect_ratios(dataset) - group_ids = _quantize(aspect_ratios, aspect_grouping) - batch_sampler = samplers.GroupedBatchSampler( - sampler, group_ids, images_per_batch, drop_uneven=False - ) - else: - batch_sampler = torch.utils.data.sampler.BatchSampler( - sampler, images_per_batch, drop_last=False - ) - if num_iters is not None: - batch_sampler 
= samplers.IterationBasedBatchSampler( - batch_sampler, num_iters, start_iter - ) - return batch_sampler - - -def make_data_loader(cfg, is_train=True, is_distributed=False, start_iter=0): - num_gpus = get_world_size() - if is_train: - images_per_batch = cfg.SOLVER.IMS_PER_BATCH - assert ( - images_per_batch % num_gpus == 0 - ), "SOLVER.IMS_PER_BATCH ({}) must be divisible by the number " - "of GPUs ({}) used.".format(images_per_batch, num_gpus) - images_per_gpu = images_per_batch // num_gpus - shuffle = True - num_iters = cfg.SOLVER.MAX_ITER - else: - images_per_batch = cfg.TEST.IMS_PER_BATCH - assert ( - images_per_batch % num_gpus == 0 - ), "TEST.IMS_PER_BATCH ({}) must be divisible by the number " - "of GPUs ({}) used.".format(images_per_batch, num_gpus) - images_per_gpu = images_per_batch // num_gpus - shuffle = False if not is_distributed else True - num_iters = None - start_iter = 0 - - if images_per_gpu > 1: - logger = logging.getLogger(__name__) - logger.warning( - "When using more than one image per GPU you may encounter " - "an out-of-memory (OOM) error if your GPU does not have " - "sufficient memory. If this happens, you can reduce " - "SOLVER.IMS_PER_BATCH (for training) or " - "TEST.IMS_PER_BATCH (for inference). For training, you must " - "also adjust the learning rate and schedule length according " - "to the linear scaling rule. See for example: " - "https://github.com/facebookresearch/Detectron/blob/master/configs/getting_started/tutorial_1gpu_e2e_faster_rcnn_R-50-FPN.yaml#L14" - ) - - # group images which have similar aspect ratio. In this case, we only - # group in two cases: those with width / height > 1, and the other way around, - # but the code supports more general grouping strategy - aspect_grouping = [1] if cfg.DATALOADER.ASPECT_RATIO_GROUPING else [] - - paths_catalog = import_file( - "maskrcnn_benchmark.config.paths_catalog", cfg.PATHS_CATALOG, True - ) - DatasetCatalog = paths_catalog.DatasetCatalog - dataset_list = cfg.DATASETS.TRAIN if is_train else cfg.DATASETS.TEST - - transforms = build_transforms(cfg, is_train) - datasets = build_dataset(dataset_list, transforms, DatasetCatalog, is_train) - - data_loaders = [] - for dataset in datasets: - sampler = make_data_sampler(dataset, shuffle, is_distributed) - batch_sampler = make_batch_data_sampler( - dataset, sampler, aspect_grouping, images_per_gpu, num_iters, start_iter - ) - collator = BatchCollator(cfg.DATALOADER.SIZE_DIVISIBILITY) - num_workers = cfg.DATALOADER.NUM_WORKERS - data_loader = torch.utils.data.DataLoader( - dataset, - num_workers=num_workers, - batch_sampler=batch_sampler, - collate_fn=collator, - ) - data_loaders.append(data_loader) - if is_train: - # during training, a single (possibly concatenated) data_loader is returned - assert len(data_loaders) == 1 - return data_loaders[0] - return data_loaders diff --git a/spaces/DaleChen/AutoGPT/tests/unit/test_browse_scrape_links.py b/spaces/DaleChen/AutoGPT/tests/unit/test_browse_scrape_links.py deleted file mode 100644 index 0a3340e7397a997da96b8ab9828954230e1a3c20..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/tests/unit/test_browse_scrape_links.py +++ /dev/null @@ -1,118 +0,0 @@ -# Generated by CodiumAI - -# Dependencies: -# pip install pytest-mock -import pytest - -from autogpt.commands.web_requests import scrape_links - -""" -Code Analysis - -Objective: -The objective of the 'scrape_links' function is to scrape hyperlinks from a -given URL and return them in a formatted way. 
- -Inputs: -- url: a string representing the URL to be scraped. - -Flow: -1. Send a GET request to the given URL using the requests library and the user agent header from the config file. -2. Check if the response contains an HTTP error. If it does, return "error". -3. Parse the HTML content of the response using the BeautifulSoup library. -4. Remove any script and style tags from the parsed HTML. -5. Extract all hyperlinks from the parsed HTML using the 'extract_hyperlinks' function. -6. Format the extracted hyperlinks using the 'format_hyperlinks' function. -7. Return the formatted hyperlinks. - -Outputs: -- A list of formatted hyperlinks. - -Additional aspects: -- The function uses the 'requests' and 'BeautifulSoup' libraries to send HTTP -requests and parse HTML content, respectively. -- The 'extract_hyperlinks' function is called to extract hyperlinks from the parsed HTML. -- The 'format_hyperlinks' function is called to format the extracted hyperlinks. -- The function checks for HTTP errors and returns "error" if any are found. -""" - - -class TestScrapeLinks: - # Tests that the function returns a list of formatted hyperlinks when - # provided with a valid url that returns a webpage with hyperlinks. - def test_valid_url_with_hyperlinks(self): - url = "https://www.google.com" - result = scrape_links(url) - assert len(result) > 0 - assert isinstance(result, list) - assert isinstance(result[0], str) - - # Tests that the function returns correctly formatted hyperlinks when given a valid url. - def test_valid_url(self, mocker): - # Mock the requests.get() function to return a response with sample HTML containing hyperlinks - mock_response = mocker.Mock() - mock_response.status_code = 200 - mock_response.text = ( - "Google" - ) - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function with a valid URL - result = scrape_links("https://www.example.com") - - # Assert that the function returns correctly formatted hyperlinks - assert result == ["Google (https://www.google.com)"] - - # Tests that the function returns "error" when given an invalid url. - def test_invalid_url(self, mocker): - # Mock the requests.get() function to return an HTTP error response - mock_response = mocker.Mock() - mock_response.status_code = 404 - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function with an invalid URL - result = scrape_links("https://www.invalidurl.com") - - # Assert that the function returns "error" - assert "Error:" in result - - # Tests that the function returns an empty list when the html contains no hyperlinks. - def test_no_hyperlinks(self, mocker): - # Mock the requests.get() function to return a response with sample HTML containing no hyperlinks - mock_response = mocker.Mock() - mock_response.status_code = 200 - mock_response.text = "

<html><body><p>No hyperlinks here</p></body></html>
    " - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function with a URL containing no hyperlinks - result = scrape_links("https://www.example.com") - - # Assert that the function returns an empty list - assert result == [] - - # Tests that scrape_links() correctly extracts and formats hyperlinks from - # a sample HTML containing a few hyperlinks. - def test_scrape_links_with_few_hyperlinks(self, mocker): - # Mock the requests.get() function to return a response with a sample HTML containing hyperlinks - mock_response = mocker.Mock() - mock_response.status_code = 200 - mock_response.text = """ - - - - - - - - """ - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function being tested - result = scrape_links("https://www.example.com") - - # Assert that the function returns a list of formatted hyperlinks - assert isinstance(result, list) - assert len(result) == 3 - assert result[0] == "Google (https://www.google.com)" - assert result[1] == "GitHub (https://github.com)" - assert result[2] == "CodiumAI (https://www.codium.ai)" diff --git a/spaces/DhanushPrabhuS/pothole_yolov8_nano/app.py b/spaces/DhanushPrabhuS/pothole_yolov8_nano/app.py deleted file mode 100644 index 2a8b66c577549226f509d49142377bbe6d5fdbd9..0000000000000000000000000000000000000000 --- a/spaces/DhanushPrabhuS/pothole_yolov8_nano/app.py +++ /dev/null @@ -1,104 +0,0 @@ -import gradio as gr -import cv2 -import requests -import os - -from ultralytics import YOLO - -file_urls = [ - 'https://www.dropbox.com/s/b5g97xo901zb3ds/pothole_example.jpg?dl=1', - 'https://www.dropbox.com/s/86uxlxxlm1iaexa/pothole_screenshot.png?dl=1', - 'https://www.dropbox.com/s/7sjfwncffg8xej2/video_7.mp4?dl=1' -] - -def download_file(url, save_name): - url = url - if not os.path.exists(save_name): - file = requests.get(url) - open(save_name, 'wb').write(file.content) - -for i, url in enumerate(file_urls): - if 'mp4' in file_urls[i]: - download_file( - file_urls[i], - f"video.mp4" - ) - else: - download_file( - file_urls[i], - f"image_{i}.jpg" - ) - -model = YOLO('best.pt') -path = [['image_0.jpg'], ['image_1.jpg']] -video_path = [['video.mp4']] - -def show_preds_image(image_path): - image = cv2.imread(image_path) - outputs = model.predict(source=image_path) - results = outputs[0].cpu().numpy() - for i, det in enumerate(results.boxes.xyxy): - cv2.rectangle( - image, - (int(det[0]), int(det[1])), - (int(det[2]), int(det[3])), - color=(0, 0, 255), - thickness=2, - lineType=cv2.LINE_AA - ) - return cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - -inputs_image = [ - gr.components.Image(type="filepath", label="Input Image"), -] -outputs_image = [ - gr.components.Image(type="numpy", label="Output Image"), -] -interface_image = gr.Interface( - fn=show_preds_image, - inputs=inputs_image, - outputs=outputs_image, - title="Pothole detector app", - examples=path, - cache_examples=False, -) - -def show_preds_video(video_path): - cap = cv2.VideoCapture(video_path) - while(cap.isOpened()): - ret, frame = cap.read() - if ret: - frame_copy = frame.copy() - outputs = model.predict(source=frame) - results = outputs[0].cpu().numpy() - for i, det in enumerate(results.boxes.xyxy): - cv2.rectangle( - frame_copy, - (int(det[0]), int(det[1])), - (int(det[2]), int(det[3])), - color=(0, 0, 255), - thickness=2, - lineType=cv2.LINE_AA - ) - yield cv2.cvtColor(frame_copy, cv2.COLOR_BGR2RGB) - -inputs_video = [ - gr.components.Video(type="filepath", label="Input Video"), - -] -outputs_video = [ - 
gr.components.Image(type="numpy", label="Output Image"), -] -interface_video = gr.Interface( - fn=show_preds_video, - inputs=inputs_video, - outputs=outputs_video, - title="Pothole detector", - examples=video_path, - cache_examples=False, -) - -gr.TabbedInterface( - [interface_image, interface_video], - tab_names=['Image inference', 'Video inference'] -).queue().launch() \ No newline at end of file diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/pidfile.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/pidfile.py deleted file mode 100644 index 96a66814326bad444606ad829307fe225f4135e1..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/pidfile.py +++ /dev/null @@ -1,81 +0,0 @@ -''' -Utility for simple distribution of work on multiple processes, by -making sure only one process is working on a job at once. -''' - -import os, errno, socket, atexit, time, sys - -def exit_if_job_done(directory): - if pidfile_taken(os.path.join(directory, 'lockfile.pid'), verbose=True): - sys.exit(0) - if os.path.isfile(os.path.join(directory, 'done.txt')): - with open(os.path.join(directory, 'done.txt')) as f: - msg = f.read() - print(msg) - sys.exit(0) - -def mark_job_done(directory): - with open(os.path.join(directory, 'done.txt'), 'w') as f: - f.write('Done by %d@%s %s at %s' % - (os.getpid(), socket.gethostname(), - os.getenv('STY', ''), - time.strftime('%c'))) - -def pidfile_taken(path, verbose=False): - ''' - Usage. To grab an exclusive lock for the remaining duration of the - current process (and exit if another process already has the lock), - do this: - - if pidfile_taken('job_423/lockfile.pid', verbose=True): - sys.exit(0) - - To do a batch of jobs, just run a script that does them all on - each available machine, sharing a network filesystem. When each - job grabs a lock, then this will automatically distribute the - jobs so that each one is done just once on one machine. - ''' - - # Try to create the file exclusively and write my pid into it. - try: - os.makedirs(os.path.dirname(path), exist_ok=True) - fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_RDWR) - except OSError as e: - if e.errno == errno.EEXIST: - # If we cannot because there was a race, yield the conflicter. - conflicter = 'race' - try: - with open(path, 'r') as lockfile: - conflicter = lockfile.read().strip() or 'empty' - except: - pass - if verbose: - print('%s held by %s' % (path, conflicter)) - return conflicter - else: - # Other problems get an exception. - raise - # Register to delete this file on exit. - lockfile = os.fdopen(fd, 'r+') - atexit.register(delete_pidfile, lockfile, path) - # Write my pid into the open file. - lockfile.write('%d@%s %s\n' % (os.getpid(), socket.gethostname(), - os.getenv('STY', ''))) - lockfile.flush() - os.fsync(lockfile) - # Return 'None' to say there was not a conflict. - return None - -def delete_pidfile(lockfile, path): - ''' - Runs at exit after pidfile_taken succeeds. 
- ''' - if lockfile is not None: - try: - lockfile.close() - except: - pass - try: - os.unlink(path) - except: - pass diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/__init__.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/__init__.py deleted file mode 100644 index 76b40a0a36bc2976f185dbdc344c5a7c09b65920..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .models import ModelBuilder, SegmentationModule diff --git a/spaces/DragGan/DragGan/stylegan_human/dnnlib/tflib/autosummary.py b/spaces/DragGan/DragGan/stylegan_human/dnnlib/tflib/autosummary.py deleted file mode 100644 index ede0f23dc3106112d241c70a8d4c17b2fa2af50d..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/dnnlib/tflib/autosummary.py +++ /dev/null @@ -1,193 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2019, NVIDIA Corporation. All rights reserved. -# -# This work is made available under the Nvidia Source Code License-NC. -# To view a copy of this license, visit -# https://nvlabs.github.io/stylegan2/license.html - -"""Helper for adding automatically tracked values to Tensorboard. - -Autosummary creates an identity op that internally keeps track of the input -values and automatically shows up in TensorBoard. The reported value -represents an average over input components. The average is accumulated -constantly over time and flushed when save_summaries() is called. - -Notes: -- The output tensor must be used as an input for something else in the - graph. Otherwise, the autosummary op will not get executed, and the average - value will not get accumulated. -- It is perfectly fine to include autosummaries with the same name in - several places throughout the graph, even if they are executed concurrently. -- It is ok to also pass in a python scalar or numpy array. In this case, it - is added to the average immediately. -""" - -from collections import OrderedDict -import numpy as np -import tensorflow as tf -from tensorboard import summary as summary_lib -from tensorboard.plugins.custom_scalar import layout_pb2 - -from . import tfutil -from .tfutil import TfExpression -from .tfutil import TfExpressionEx - -# Enable "Custom scalars" tab in TensorBoard for advanced formatting. -# Disabled by default to reduce tfevents file size. -enable_custom_scalars = False - -_dtype = tf.float64 -_vars = OrderedDict() # name => [var, ...] 
-_immediate = OrderedDict() # name => update_op, update_value -_finalized = False -_merge_op = None - - -def _create_var(name: str, value_expr: TfExpression) -> TfExpression: - """Internal helper for creating autosummary accumulators.""" - assert not _finalized - name_id = name.replace("/", "_") - v = tf.cast(value_expr, _dtype) - - if v.shape.is_fully_defined(): - size = np.prod(v.shape.as_list()) - size_expr = tf.constant(size, dtype=_dtype) - else: - size = None - size_expr = tf.reduce_prod(tf.cast(tf.shape(v), _dtype)) - - if size == 1: - if v.shape.ndims != 0: - v = tf.reshape(v, []) - v = [size_expr, v, tf.square(v)] - else: - v = [size_expr, tf.reduce_sum(v), tf.reduce_sum(tf.square(v))] - v = tf.cond(tf.is_finite(v[1]), lambda: tf.stack(v), lambda: tf.zeros(3, dtype=_dtype)) - - with tfutil.absolute_name_scope("Autosummary/" + name_id), tf.control_dependencies(None): - var = tf.Variable(tf.zeros(3, dtype=_dtype), trainable=False) # [sum(1), sum(x), sum(x**2)] - update_op = tf.cond(tf.is_variable_initialized(var), lambda: tf.assign_add(var, v), lambda: tf.assign(var, v)) - - if name in _vars: - _vars[name].append(var) - else: - _vars[name] = [var] - return update_op - - -def autosummary(name: str, value: TfExpressionEx, passthru: TfExpressionEx = None, condition: TfExpressionEx = True) -> TfExpressionEx: - """Create a new autosummary. - - Args: - name: Name to use in TensorBoard - value: TensorFlow expression or python value to track - passthru: Optionally return this TF node without modifications but tack an autosummary update side-effect to this node. - - Example use of the passthru mechanism: - - n = autosummary('l2loss', loss, passthru=n) - - This is a shorthand for the following code: - - with tf.control_dependencies([autosummary('l2loss', loss)]): - n = tf.identity(n) - """ - tfutil.assert_tf_initialized() - name_id = name.replace("/", "_") - - if tfutil.is_tf_expression(value): - with tf.name_scope("summary_" + name_id), tf.device(value.device): - condition = tf.convert_to_tensor(condition, name='condition') - update_op = tf.cond(condition, lambda: tf.group(_create_var(name, value)), tf.no_op) - with tf.control_dependencies([update_op]): - return tf.identity(value if passthru is None else passthru) - - else: # python scalar or numpy array - assert not tfutil.is_tf_expression(passthru) - assert not tfutil.is_tf_expression(condition) - if condition: - if name not in _immediate: - with tfutil.absolute_name_scope("Autosummary/" + name_id), tf.device(None), tf.control_dependencies(None): - update_value = tf.placeholder(_dtype) - update_op = _create_var(name, update_value) - _immediate[name] = update_op, update_value - update_op, update_value = _immediate[name] - tfutil.run(update_op, {update_value: value}) - return value if passthru is None else passthru - - -def finalize_autosummaries() -> None: - """Create the necessary ops to include autosummaries in TensorBoard report. - Note: This should be done only once per graph. - """ - global _finalized - tfutil.assert_tf_initialized() - - if _finalized: - return None - - _finalized = True - tfutil.init_uninitialized_vars([var for vars_list in _vars.values() for var in vars_list]) - - # Create summary ops. 
- with tf.device(None), tf.control_dependencies(None): - for name, vars_list in _vars.items(): - name_id = name.replace("/", "_") - with tfutil.absolute_name_scope("Autosummary/" + name_id): - moments = tf.add_n(vars_list) - moments /= moments[0] - with tf.control_dependencies([moments]): # read before resetting - reset_ops = [tf.assign(var, tf.zeros(3, dtype=_dtype)) for var in vars_list] - with tf.name_scope(None), tf.control_dependencies(reset_ops): # reset before reporting - mean = moments[1] - std = tf.sqrt(moments[2] - tf.square(moments[1])) - tf.summary.scalar(name, mean) - if enable_custom_scalars: - tf.summary.scalar("xCustomScalars/" + name + "/margin_lo", mean - std) - tf.summary.scalar("xCustomScalars/" + name + "/margin_hi", mean + std) - - # Setup layout for custom scalars. - layout = None - if enable_custom_scalars: - cat_dict = OrderedDict() - for series_name in sorted(_vars.keys()): - p = series_name.split("/") - cat = p[0] if len(p) >= 2 else "" - chart = "/".join(p[1:-1]) if len(p) >= 3 else p[-1] - if cat not in cat_dict: - cat_dict[cat] = OrderedDict() - if chart not in cat_dict[cat]: - cat_dict[cat][chart] = [] - cat_dict[cat][chart].append(series_name) - categories = [] - for cat_name, chart_dict in cat_dict.items(): - charts = [] - for chart_name, series_names in chart_dict.items(): - series = [] - for series_name in series_names: - series.append(layout_pb2.MarginChartContent.Series( - value=series_name, - lower="xCustomScalars/" + series_name + "/margin_lo", - upper="xCustomScalars/" + series_name + "/margin_hi")) - margin = layout_pb2.MarginChartContent(series=series) - charts.append(layout_pb2.Chart(title=chart_name, margin=margin)) - categories.append(layout_pb2.Category(title=cat_name, chart=charts)) - layout = summary_lib.custom_scalar_pb(layout_pb2.Layout(category=categories)) - return layout - -def save_summaries(file_writer, global_step=None): - """Call FileWriter.add_summary() with all summaries in the default graph, - automatically finalizing and merging them on the first call. 
- """ - global _merge_op - tfutil.assert_tf_initialized() - - if _merge_op is None: - layout = finalize_autosummaries() - if layout is not None: - file_writer.add_summary(layout) - with tf.device(None), tf.control_dependencies(None): - _merge_op = tf.summary.merge_all() - - file_writer.add_summary(_merge_op.eval(), global_step) diff --git a/spaces/Eddycrack864/Applio-Inference/i18n/locale_diff.py b/spaces/Eddycrack864/Applio-Inference/i18n/locale_diff.py deleted file mode 100644 index 387ddfe1b16c2f9f32b6b9682b61353837b06bd8..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/i18n/locale_diff.py +++ /dev/null @@ -1,45 +0,0 @@ -import json -import os -from collections import OrderedDict - -# Define the standard file name -standard_file = "en_US.json" - -# Find all JSON files in the directory -dir_path = "./" -languages = [ - f for f in os.listdir(dir_path) if f.endswith(".json") and f != standard_file -] - -# Load the standard file -with open(standard_file, "r", encoding="utf-8") as f: - standard_data = json.load(f, object_pairs_hook=OrderedDict) - -# Loop through each language file -for lang_file in languages: - # Load the language file - with open(lang_file, "r", encoding="utf-8") as f: - lang_data = json.load(f, object_pairs_hook=OrderedDict) - - # Find the difference between the language file and the standard file - diff = set(standard_data.keys()) - set(lang_data.keys()) - - miss = set(lang_data.keys()) - set(standard_data.keys()) - - # Add any missing keys to the language file - for key in diff: - lang_data[key] = key - - # Del any extra keys to the language file - for key in miss: - del lang_data[key] - - # Sort the keys of the language file to match the order of the standard file - lang_data = OrderedDict( - sorted(lang_data.items(), key=lambda x: list(standard_data.keys()).index(x[0])) - ) - - # Save the updated language file - with open(lang_file, "w", encoding="utf-8") as f: - json.dump(lang_data, f, ensure_ascii=False, indent=4) - f.write("\n") diff --git a/spaces/EuroPython2022/latr-vqa/README.md b/spaces/EuroPython2022/latr-vqa/README.md deleted file mode 100644 index 76299be1d93c147aff4d5163dac561e056c008b4..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/latr-vqa/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Latr Vqa -emoji: 🌖 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.0.24 -app_file: app.py -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FoxMeo/fire-detector/detect.py b/spaces/FoxMeo/fire-detector/detect.py deleted file mode 100644 index 5e0c4416a4672584c43e4967d27b13e045a76843..0000000000000000000000000000000000000000 --- a/spaces/FoxMeo/fire-detector/detect.py +++ /dev/null @@ -1,196 +0,0 @@ -import argparse -import time -from pathlib import Path - -import cv2 -import torch -import torch.backends.cudnn as cudnn -from numpy import random - -from models.experimental import attempt_load -from utils.datasets import LoadStreams, LoadImages -from utils.general import check_img_size, check_requirements, check_imshow, non_max_suppression, apply_classifier, \ - scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path -from utils.plots import plot_one_box -from utils.torch_utils import select_device, load_classifier, time_synchronized, TracedModel - - -def detect(save_img=False): - source, weights, view_img, save_txt, imgsz, trace = opt.source, opt.weights, opt.view_img, 
opt.save_txt, opt.img_size, not opt.no_trace - save_img = not opt.nosave and not source.endswith('.txt') # save inference images - webcam = source.isnumeric() or source.endswith('.txt') or source.lower().startswith( - ('rtsp://', 'rtmp://', 'http://', 'https://')) - - # Directories - save_dir = Path(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) # increment run - (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir - - # Initialize - set_logging() - device = select_device(opt.device) - half = device.type != 'cpu' # half precision only supported on CUDA - - # Load model - model = attempt_load(weights, map_location=device) # load FP32 model - stride = int(model.stride.max()) # model stride - imgsz = check_img_size(imgsz, s=stride) # check img_size - - if trace: - model = TracedModel(model, device, opt.img_size) - - if half: - model.half() # to FP16 - - # Second-stage classifier - classify = False - if classify: - modelc = load_classifier(name='resnet101', n=2) # initialize - modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']).to(device).eval() - - # Set Dataloader - vid_path, vid_writer = None, None - if webcam: - view_img = check_imshow() - cudnn.benchmark = True # set True to speed up constant image size inference - dataset = LoadStreams(source, img_size=imgsz, stride=stride) - else: - dataset = LoadImages(source, img_size=imgsz, stride=stride) - - # Get names and colors - names = model.module.names if hasattr(model, 'module') else model.names - colors = [[random.randint(0, 255) for _ in range(3)] for _ in names] - - # Run inference - if device.type != 'cpu': - model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once - old_img_w = old_img_h = imgsz - old_img_b = 1 - - t0 = time.time() - for path, img, im0s, vid_cap in dataset: - img = torch.from_numpy(img).to(device) - img = img.half() if half else img.float() # uint8 to fp16/32 - img /= 255.0 # 0 - 255 to 0.0 - 1.0 - if img.ndimension() == 3: - img = img.unsqueeze(0) - - # Warmup - if device.type != 'cpu' and (old_img_b != img.shape[0] or old_img_h != img.shape[2] or old_img_w != img.shape[3]): - old_img_b = img.shape[0] - old_img_h = img.shape[2] - old_img_w = img.shape[3] - for i in range(3): - model(img, augment=opt.augment)[0] - - # Inference - t1 = time_synchronized() - with torch.no_grad(): # Calculating gradients would cause a GPU memory leak - pred = model(img, augment=opt.augment)[0] - t2 = time_synchronized() - - # Apply NMS - pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms) - t3 = time_synchronized() - - # Apply Classifier - if classify: - pred = apply_classifier(pred, modelc, img, im0s) - - # Process detections - for i, det in enumerate(pred): # detections per image - if webcam: # batch_size >= 1 - p, s, im0, frame = path[i], '%g: ' % i, im0s[i].copy(), dataset.count - else: - p, s, im0, frame = path, '', im0s, getattr(dataset, 'frame', 0) - - p = Path(p) # to Path - save_path = str(save_dir / p.name) # img.jpg - txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # img.txt - gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh - if len(det): - # Rescale boxes from img_size to im0 size - det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round() - - # Print results - for c in det[:, -1].unique(): - n = (det[:, -1] == c).sum() # detections per class - s += 
f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string - - # Write results - for *xyxy, conf, cls in reversed(det): - if save_txt: # Write to file - xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh - line = (cls, *xywh, conf) if opt.save_conf else (cls, *xywh) # label format - with open(txt_path + '.txt', 'a') as f: - f.write(('%g ' * len(line)).rstrip() % line + '\n') - - if save_img or view_img: # Add bbox to image - label = f'{names[int(cls)]} {conf:.2f}' - plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=1) - - # Print time (inference + NMS) - print(f'{s}Done. ({(1E3 * (t2 - t1)):.1f}ms) Inference, ({(1E3 * (t3 - t2)):.1f}ms) NMS') - - # Stream results - if view_img: - cv2.imshow(str(p), im0) - cv2.waitKey(1) # 1 millisecond - - # Save results (image with detections) - if save_img: - if dataset.mode == 'image': - cv2.imwrite(save_path, im0) - print(f" The image with the result is saved in: {save_path}") - else: # 'video' or 'stream' - if vid_path != save_path: # new video - vid_path = save_path - if isinstance(vid_writer, cv2.VideoWriter): - vid_writer.release() # release previous video writer - if vid_cap: # video - fps = vid_cap.get(cv2.CAP_PROP_FPS) - w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - else: # stream - fps, w, h = 30, im0.shape[1], im0.shape[0] - save_path += '.mp4' - vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) - vid_writer.write(im0) - - if save_txt or save_img: - s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' - #print(f"Results saved to {save_dir}{s}") - - print(f'Done. ({time.time() - t0:.3f}s)') - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--weights', nargs='+', type=str, default='yolov7.pt', help='model.pt path(s)') - parser.add_argument('--source', type=str, default='inference/images', help='source') # file/folder, 0 for webcam - parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)') - parser.add_argument('--conf-thres', type=float, default=0.25, help='object confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.45, help='IOU threshold for NMS') - parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--view-img', action='store_true', help='display results') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') - parser.add_argument('--nosave', action='store_true', help='do not save images/videos') - parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3') - parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--update', action='store_true', help='update all models') - parser.add_argument('--project', default='runs/detect', help='save results to project/name') - parser.add_argument('--name', default='exp', help='save results to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--no-trace', action='store_true', help='don`t trace model') - opt = parser.parse_args() - print(opt) - #check_requirements(exclude=('pycocotools', 'thop')) - - with torch.no_grad(): - if opt.update: # update all models (to fix SourceChangeWarning) - for opt.weights in ['yolov7.pt']: - detect() - strip_optimizer(opt.weights) - else: - detect() diff --git a/spaces/Fr33d0m21/google-flan-t5-xxl/app.py b/spaces/Fr33d0m21/google-flan-t5-xxl/app.py deleted file mode 100644 index fced8846b9b730030ff3059c124c2857ec7fc104..0000000000000000000000000000000000000000 --- a/spaces/Fr33d0m21/google-flan-t5-xxl/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/google/flan-t5-xxl").launch() \ No newline at end of file diff --git a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/fp16_util.py b/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/fp16_util.py deleted file mode 100644 index b69341c706f17ccf9ac9b08e966d10c630c72129..0000000000000000000000000000000000000000 --- a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/fp16_util.py +++ /dev/null @@ -1,25 +0,0 @@ -""" -Helpers to inference with 16-bit precision. -""" - -import torch.nn as nn - - -def convert_module_to_f16(l): - """ - Convert primitive modules to float16. - """ - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - -def convert_module_to_f32(l): - """ - Convert primitive modules to float32, undoing convert_module_to_f16(). 
- """ - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)): - l.weight.data = l.weight.data.float() - if l.bias is not None: - l.bias.data = l.bias.data.float() diff --git a/spaces/FridaZuley/RVC_HFKawaii/Applio-RVC-Fork/utils/i18n.py b/spaces/FridaZuley/RVC_HFKawaii/Applio-RVC-Fork/utils/i18n.py deleted file mode 100644 index 8e75d2bc26ff86ab1716b8d7f239ad9f5cc1e32d..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/Applio-RVC-Fork/utils/i18n.py +++ /dev/null @@ -1,28 +0,0 @@ -import locale -import json -import os - - -def load_language_list(language): - with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f: - language_list = json.load(f) - return language_list - - -class I18nAuto: - def __init__(self, language=None): - if language in ["Auto", None]: - language = "es_ES" - if not os.path.exists(f"./i18n/{language}.json"): - language = "es_ES" - language = "es_ES" - self.language = language - # print("Use Language:", language) - self.language_map = load_language_list(language) - - def __call__(self, key): - return self.language_map.get(key, key) - - def print(self): - # print("Use Language:", self.language) - print("") diff --git a/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/cpp/cppipc/queue.h b/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/cpp/cppipc/queue.h deleted file mode 100644 index a21f3446e06b5826af7b554c8a7d9c5d80848b62..0000000000000000000000000000000000000000 --- a/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/cpp/cppipc/queue.h +++ /dev/null @@ -1,216 +0,0 @@ -#pragma once - -#include -#include -#include // [[since C++14]]: std::exchange -#include -#include -#include -#include -#include -#include -#include // assert - -#include "libipc/def.h" -#include "libipc/shm.h" -#include "libipc/rw_lock.h" - -#include "libipc/utility/log.h" -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" - -namespace ipc { -namespace detail { - -class queue_conn { -protected: - circ::cc_t connected_ = 0; - shm::handle elems_h_; - - template - Elems* open(char const * name) { - if (name == nullptr || name[0] == '\0') { - ipc::error("fail open waiter: name is empty!\n"); - return nullptr; - } - if (!elems_h_.acquire(name, sizeof(Elems))) { - return nullptr; - } - auto elems = static_cast(elems_h_.get()); - if (elems == nullptr) { - ipc::error("fail acquire elems: %s\n", name); - return nullptr; - } - elems->init(); - return elems; - } - - void close() { - elems_h_.release(); - } - -public: - queue_conn() = default; - queue_conn(const queue_conn&) = delete; - queue_conn& operator=(const queue_conn&) = delete; - - bool connected() const noexcept { - return connected_ != 0; - } - - circ::cc_t connected_id() const noexcept { - return connected_; - } - - template - auto connect(Elems* elems) noexcept - /*needs 'optional' here*/ - -> std::tuple().cursor())> { - if (elems == nullptr) return {}; - // if it's already connected, just return - if (connected()) return {connected(), false, 0}; - connected_ = elems->connect_receiver(); - return {connected(), true, elems->cursor()}; - } - - template - bool disconnect(Elems* elems) noexcept { - if (elems == nullptr) return false; - // if it's already disconnected, just return false - if (!connected()) return false; - elems->disconnect_receiver(std::exchange(connected_, 0)); - return true; - } -}; - -template -class queue_base : public queue_conn { - using base_t = queue_conn; - -public: - using elems_t = Elems; - using policy_t = typename elems_t::policy_t; - -protected: - elems_t 
* elems_ = nullptr; - decltype(std::declval().cursor()) cursor_ = 0; - bool sender_flag_ = false; - -public: - using base_t::base_t; - - queue_base() = default; - - explicit queue_base(char const * name) - : queue_base{} { - elems_ = open(name); - } - - explicit queue_base(elems_t * elems) noexcept - : queue_base{} { - assert(elems != nullptr); - elems_ = elems; - } - - /* not virtual */ ~queue_base() { - base_t::close(); - } - - elems_t * elems() noexcept { return elems_; } - elems_t const * elems() const noexcept { return elems_; } - - bool ready_sending() noexcept { - if (elems_ == nullptr) return false; - return sender_flag_ || (sender_flag_ = elems_->connect_sender()); - } - - void shut_sending() noexcept { - if (elems_ == nullptr) return; - if (!sender_flag_) return; - elems_->disconnect_sender(); - } - - bool connect() noexcept { - auto tp = base_t::connect(elems_); - if (std::get<0>(tp) && std::get<1>(tp)) { - cursor_ = std::get<2>(tp); - return true; - } - return std::get<0>(tp); - } - - bool disconnect() noexcept { - return base_t::disconnect(elems_); - } - - std::size_t conn_count() const noexcept { - return (elems_ == nullptr) ? static_cast(invalid_value) : elems_->conn_count(); - } - - bool valid() const noexcept { - return elems_ != nullptr; - } - - bool empty() const noexcept { - return !valid() || (cursor_ == elems_->cursor()); - } - - template - bool push(F&& prep, P&&... params) { - if (elems_ == nullptr) return false; - return elems_->push(this, [&](void* p) { - if (prep(p)) ::new (p) T(std::forward

<P>(params)...); - }); - } - - template <typename T, typename F, typename... P> - bool force_push(F&& prep, P&&... params) { - if (elems_ == nullptr) return false; - return elems_->force_push(this, [&](void* p) { - if (prep(p)) ::new (p) T(std::forward<P>(params)...); - }); - } - - template <typename T, typename F> - bool pop(T& item, F&& out) { - if (elems_ == nullptr) { - return false; - } - return elems_->pop(this, &(this->cursor_), [&item](void* p) { - ::new (&item) T(std::move(*static_cast<T*>(p))); - }, std::forward<F>(out)); - } -}; - -} // namespace detail - -template -class queue final : public detail::queue_base> { - using base_t = detail::queue_base>; - -public: - using value_t = T; - - using base_t::base_t; - - template <typename... P> - bool push(P&&... params) { - return base_t::template push<T>(std::forward<P>

    (params)...); - } - - bool pop(T& item) { - return base_t::pop(item, [](bool) {}); - } - - template - bool pop(T& item, F&& out) { - return base_t::pop(item, std::forward(out)); - } -}; - -} // namespace ipc diff --git a/spaces/GodParticle69/minor_demo/mrcnn/parallel_model.py b/spaces/GodParticle69/minor_demo/mrcnn/parallel_model.py deleted file mode 100644 index 26e997d0c29794541e53fa766c941899b5cf3d4b..0000000000000000000000000000000000000000 --- a/spaces/GodParticle69/minor_demo/mrcnn/parallel_model.py +++ /dev/null @@ -1,173 +0,0 @@ -""" -Mask R-CNN -Multi-GPU Support for Keras. - -Copyright (c) 2017 Matterport, Inc. -Licensed under the MIT License (see LICENSE for details) -Written by Waleed Abdulla - -Ideas and a small code snippets from these sources: -https://github.com/fchollet/keras/issues/2436 -https://medium.com/@kuza55/transparent-multi-gpu-training-on-tensorflow-with-keras-8b0016fd9012 -https://github.com/avolkov1/keras_experiments/blob/master/keras_exp/multigpu/ -https://github.com/fchollet/keras/blob/master/keras/utils/training_utils.py -""" - -import tensorflow as tf -import keras.backend as K -import keras.layers as KL -import keras.models as KM - - -class ParallelModel(KM.Model): - """Subclasses the standard Keras Model and adds multi-GPU support. - It works by creating a copy of the model on each GPU. Then it slices - the inputs and sends a slice to each copy of the model, and then - merges the outputs together and applies the loss on the combined - outputs. - """ - - def __init__(self, keras_model, gpu_count): - """Class constructor. - keras_model: The Keras model to parallelize - gpu_count: Number of GPUs. Must be > 1 - """ - self.inner_model = keras_model - self.gpu_count = gpu_count - merged_outputs = self.make_parallel() - super(ParallelModel, self).__init__(inputs=self.inner_model.inputs, - outputs=merged_outputs) - - def __getattribute__(self, attrname): - """Redirect loading and saving methods to the inner model. That's where - the weights are stored.""" - if 'load' in attrname or 'save' in attrname: - return getattr(self.inner_model, attrname) - return super(ParallelModel, self).__getattribute__(attrname) - - def summary(self, *args, **kwargs): - """Override summary() to display summaries of both, the wrapper - and inner models.""" - super(ParallelModel, self).summary(*args, **kwargs) - self.inner_model.summary(*args, **kwargs) - - def make_parallel(self): - """Creates a new wrapper model that consists of multiple replicas of - the original model placed on different GPUs. - """ - # Slice inputs. Slice inputs on the CPU to avoid sending a copy - # of the full inputs to all GPUs. Saves on bandwidth and memory. 
- input_slices = {name: tf.split(x, self.gpu_count) - for name, x in zip(self.inner_model.input_names, - self.inner_model.inputs)} - - output_names = self.inner_model.output_names - outputs_all = [] - for i in range(len(self.inner_model.outputs)): - outputs_all.append([]) - - # Run the model call() on each GPU to place the ops there - for i in range(self.gpu_count): - with tf.device('/gpu:%d' % i): - with tf.name_scope('tower_%d' % i): - # Run a slice of inputs through this replica - zipped_inputs = zip(self.inner_model.input_names, - self.inner_model.inputs) - inputs = [ - KL.Lambda(lambda s: input_slices[name][i], - output_shape=lambda s: (None,) + s[1:])(tensor) - for name, tensor in zipped_inputs] - # Create the model replica and get the outputs - outputs = self.inner_model(inputs) - if not isinstance(outputs, list): - outputs = [outputs] - # Save the outputs for merging back together later - for l, o in enumerate(outputs): - outputs_all[l].append(o) - - # Merge outputs on CPU - with tf.device('/cpu:0'): - merged = [] - for outputs, name in zip(outputs_all, output_names): - # If outputs are numbers without dimensions, add a batch dim. - def add_dim(tensor): - """Add a dimension to tensors that don't have any.""" - if K.int_shape(tensor) == (): - return KL.Lambda(lambda t: K.reshape(t, [1, 1]))(tensor) - return tensor - outputs = list(map(add_dim, outputs)) - - # Concatenate - merged.append(KL.Concatenate(axis=0, name=name)(outputs)) - return merged - - -if __name__ == "__main__": - # Testing code below. It creates a simple model to train on MNIST and - # tries to run it on 2 GPUs. It saves the graph so it can be viewed - # in TensorBoard. Run it as: - # - # python3 parallel_model.py - - import os - import numpy as np - import keras.optimizers - from keras.datasets import mnist - from keras.preprocessing.image import ImageDataGenerator - - GPU_COUNT = 2 - - # Root directory of the project - ROOT_DIR = os.path.abspath("../") - - # Directory to save logs and trained model - MODEL_DIR = os.path.join(ROOT_DIR, "logs") - - def build_model(x_train, num_classes): - # Reset default graph. Keras leaves old ops in the graph, - # which are ignored for execution but clutter graph - # visualization in TensorBoard. - tf.reset_default_graph() - - inputs = KL.Input(shape=x_train.shape[1:], name="input_image") - x = KL.Conv2D(32, (3, 3), activation='relu', padding="same", - name="conv1")(inputs) - x = KL.Conv2D(64, (3, 3), activation='relu', padding="same", - name="conv2")(x) - x = KL.MaxPooling2D(pool_size=(2, 2), name="pool1")(x) - x = KL.Flatten(name="flat1")(x) - x = KL.Dense(128, activation='relu', name="dense1")(x) - x = KL.Dense(num_classes, activation='softmax', name="dense2")(x) - - return KM.Model(inputs, x, "digit_classifier_model") - - # Load MNIST Data - (x_train, y_train), (x_test, y_test) = mnist.load_data() - x_train = np.expand_dims(x_train, -1).astype('float32') / 255 - x_test = np.expand_dims(x_test, -1).astype('float32') / 255 - - print('x_train shape:', x_train.shape) - print('x_test shape:', x_test.shape) - - # Build data generator and model - datagen = ImageDataGenerator() - model = build_model(x_train, 10) - - # Add multi-GPU support. 
- model = ParallelModel(model, GPU_COUNT) - - optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, clipnorm=5.0) - - model.compile(loss='sparse_categorical_crossentropy', - optimizer=optimizer, metrics=['accuracy']) - - model.summary() - - # Train - model.fit_generator( - datagen.flow(x_train, y_train, batch_size=64), - steps_per_epoch=50, epochs=10, verbose=1, - validation_data=(x_test, y_test), - callbacks=[keras.callbacks.TensorBoard(log_dir=MODEL_DIR, - write_graph=True)] - ) diff --git a/spaces/Godrose0728/sound-link/modules.py b/spaces/Godrose0728/sound-link/modules.py deleted file mode 100644 index 3484f6a1f4c1c06855c37a1ff4e66c58864acb38..0000000000000000000000000000000000000000 --- a/spaces/Godrose0728/sound-link/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dilated and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/cascade_rcnn.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/cascade_rcnn.py deleted file mode 100644 index d873dceb7e4efdf8d1e7d282badfe9b7118426b9..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/cascade_rcnn.py +++ /dev/null @@ -1,46 +0,0 @@ -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class CascadeRCNN(TwoStageDetector): - r"""Implementation of `Cascade R-CNN: Delving into High Quality Object - Detection `_""" - - def __init__(self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(CascadeRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) - - def show_result(self, data, result, **kwargs): - """Show prediction results of the detector. - - Args: - data (str or np.ndarray): Image filename or loaded image. - result (Tensor or tuple): The results to draw over `img` - bbox_result or (bbox_result, segm_result). - - Returns: - np.ndarray: The image with bboxes drawn on it. 
- """ - if self.with_mask: - ms_bbox_result, ms_segm_result = result - if isinstance(ms_bbox_result, dict): - result = (ms_bbox_result['ensemble'], - ms_segm_result['ensemble']) - else: - if isinstance(result, dict): - result = result['ensemble'] - return super(CascadeRCNN, self).show_result(data, result, **kwargs) diff --git a/spaces/HaloMaster/chinesesummary/fengshen/pipelines/base.py b/spaces/HaloMaster/chinesesummary/fengshen/pipelines/base.py deleted file mode 100644 index f8e4a109c3d8a232201a255ba1a5bb77f008a78c..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/pipelines/base.py +++ /dev/null @@ -1,2 +0,0 @@ -_CONFIG_MODEL_TYPE = 'fengshen_model_type' -_CONFIG_TOKENIZER_TYPE = 'fengshen_tokenizer_type' diff --git a/spaces/HappyElephant/TextToSpeech/app.py b/spaces/HappyElephant/TextToSpeech/app.py deleted file mode 100644 index fef8d32b9303aea1a04778419b516fae4b3d4631..0000000000000000000000000000000000000000 --- a/spaces/HappyElephant/TextToSpeech/app.py +++ /dev/null @@ -1,90 +0,0 @@ -import gradio as gr -from TTS.api import TTS - -# Init TTS -tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=False) - - -def text_to_speech(text: str, speaker_wav, speaker_wav_file, language: str): - if speaker_wav_file and not speaker_wav: - speaker_wav = speaker_wav_file - file_path = "output.wav" - if language == "zh-CN": - # if speaker_wav is not None: - # zh_tts.tts_to_file(text, speaker_wav=speaker_wav, file_path=file_path) - # else: - zh_tts.tts_to_file(text, file_path=file_path) - elif language == "de": - # if speaker_wav is not None: - # de_tts.tts_to_file(text, speaker_wav=speaker_wav, file_path=file_path) - # else: - de_tts.tts_to_file(text, file_path=file_path) - elif language == "es": - # if speaker_wav is not None: - # es_tts.tts_to_file(text, speaker_wav=speaker_wav, file_path=file_path) - # else: - es_tts.tts_to_file(text, file_path=file_path) - else: - if speaker_wav is not None: - tts.tts_to_file(text, speaker_wav=speaker_wav, language=language, file_path=file_path) - else: - tts.tts_to_file(text, speaker=tts.speakers[0], language=language, file_path=file_path) - return file_path - - - -# inputs = [gr.Textbox(label="Input the text", value="", max_lines=3), -# gr.Audio(label="Voice to clone", source="microphone", type="filepath"), -# gr.Audio(label="Voice to clone", type="filepath"), -# gr.Radio(label="Language", choices=["en", "zh-CN", "fr-fr", "pt-br", "de", "es"], value="en"), -# gr.Text(intro_text, font_size=14)] -# outputs = gr.Audio(label="Output") - -# demo = gr.Interface(fn=text_to_speech, inputs=inputs, outputs=outputs) - -# demo.launch() - - -title = "Voice-Cloning-Demo" - -def toggle(choice): - if choice == "mic": - return gr.update(visible=True, value=None), gr.update(visible=False, value=None) - else: - return gr.update(visible=False, value=None), gr.update(visible=True, value=None) - -def handle_language_change(choice): - if choice == "zh-CN" or choice == "de" or choice == "es": - return gr.update(visible=False), gr.update(visible=False), gr.update(visible=False) - else: - return gr.update(visible=True), gr.update(visible=True), gr.update(visible=True) - -warming_text = """Please note that Chinese, German, and Spanish are currently not supported for voice cloning.""" - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - text_input = gr.Textbox(label="Input the text", value="", max_lines=3) - lan_input = gr.Radio(label="Language", choices=["en", "fr-fr", "pt-br"], 
value="en") - gr.Markdown(warming_text) - radio = gr.Radio(["mic", "file"], value="mic", - label="How would you like to upload your audio?") - audio_input_mic = gr.Audio(label="Voice to clone", source="microphone", type="filepath", visible=True) - audio_input_file = gr.Audio(label="Voice to clone", type="filepath", visible=False) - - with gr.Row(): - with gr.Column(): - btn_clear = gr.Button("Clear") - with gr.Column(): - btn = gr.Button("Submit", variant="primary") - with gr.Column(): - audio_output = gr.Audio(label="Output") - - # gr.Examples(examples, fn=inference, inputs=[audio_file, text_input], - # outputs=audio_output, cache_examples=True) - btn.click(text_to_speech, inputs=[text_input, audio_input_mic, - audio_input_file, lan_input], outputs=audio_output) - radio.change(toggle, radio, [audio_input_mic, audio_input_file]) - lan_input.change(handle_language_change, lan_input, [radio, audio_input_mic, audio_input_file]) - -demo.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_lda_mllt.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_lda_mllt.sh deleted file mode 100644 index 9d8c319ce848e431ec47a3548156347ae3b50ced..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_lda_mllt.sh +++ /dev/null @@ -1,239 +0,0 @@ -#!/usr/bin/env bash - -# Copyright 2012 Johns Hopkins University (Author: Daniel Povey) -# -# LDA+MLLT refers to the way we transform the features after computing -# the MFCCs: we splice across several frames, reduce the dimension (to 40 -# by default) using Linear Discriminant Analysis), and then later estimate, -# over multiple iterations, a diagonalizing transform known as MLLT or STC. -# See http://kaldi-asr.org/doc/transform.html for more explanation. -# -# Apache 2.0. - -# Begin configuration. -cmd=run.pl -config= -stage=-5 -scale_opts="--transition-scale=1.0 --acoustic-scale=0.1 --self-loop-scale=0.1" -realign_iters="10 20 30"; -mllt_iters="2 4 6 12"; -num_iters=35 # Number of iterations of training -max_iter_inc=25 # Last iter to increase #Gauss on. -dim=40 -beam=10 -retry_beam=40 -careful=false -boost_silence=1.0 # Factor by which to boost silence likelihoods in alignment -power=0.25 # Exponent for number of gaussians according to occurrence counts -randprune=4.0 # This is approximately the ratio by which we will speed up the - # LDA and MLLT calculations via randomized pruning. -splice_opts= -cluster_thresh=-1 # for build-tree control final bottom-up clustering of leaves -norm_vars=false # deprecated. Prefer --cmvn-opts "--norm-vars=false" -cmvn_opts= -context_opts= # use "--context-width=5 --central-position=2" for quinphone. -# End configuration. -train_tree=true # if false, don't actually train the tree. -use_lda_mat= # If supplied, use this LDA[+MLLT] matrix. -num_nonsil_states=3 - -echo "$0 $@" # Print the command line for logging - -[ -f path.sh ] && . ./path.sh -. parse_options.sh || exit 1; - -if [ $# != 6 ]; then - echo "Usage: steps/train_lda_mllt.sh [options] <#leaves> <#gauss>

    " - echo " e.g.: steps/train_lda_mllt.sh 2500 15000 data/train_si84 data/lang exp/tri1_ali_si84 exp/tri2b" - echo "Main options (for others, see top of script file)" - echo " --cmd (utils/run.pl|utils/queue.pl ) # how to run jobs." - echo " --config # config containing options" - echo " --stage # stage to do partial re-run from." - exit 1; -fi - -numleaves=$1 -totgauss=$2 -data=$3 -lang=$4 -alidir=$5 -dir=$6 - -for f in $alidir/final.mdl $alidir/ali.1.gz $data/feats.scp $lang/phones.txt; do - [ ! -f $f ] && echo "train_lda_mllt.sh: no such file $f" && exit 1; -done - -numgauss=$numleaves -incgauss=$[($totgauss-$numgauss)/$max_iter_inc] # per-iter #gauss increment -oov=`cat $lang/oov.int` || exit 1; -nj=`cat $alidir/num_jobs` || exit 1; -silphonelist=`cat $lang/phones/silence.csl` || exit 1; -ciphonelist=`cat $lang/phones/context_indep.csl` || exit 1; - -mkdir -p $dir/log - -utils/lang/check_phones_compatible.sh $lang/phones.txt $alidir/phones.txt || exit 1; -cp $lang/phones.txt $dir || exit 1; - -echo $nj >$dir/num_jobs -echo "$splice_opts" >$dir/splice_opts # keep track of frame-splicing options - # so that later stages of system building can know what they were. - - -[ $(cat $alidir/cmvn_opts 2>/dev/null | wc -c) -gt 1 ] && [ -z "$cmvn_opts" ] && \ - echo "$0: warning: ignoring CMVN options from source directory $alidir" -$norm_vars && cmvn_opts="--norm-vars=true $cmvn_opts" -echo $cmvn_opts > $dir/cmvn_opts # keep track of options to CMVN. - -sdata=$data/split$nj; -split_data.sh $data $nj || exit 1; - -splicedfeats="ark,s,cs:apply-cmvn $cmvn_opts --utt2spk=ark:$sdata/JOB/utt2spk scp:$sdata/JOB/cmvn.scp scp:$sdata/JOB/feats.scp ark:- | splice-feats $splice_opts ark:- ark:- |" -# Note: $feats gets overwritten later in the script. -feats="$splicedfeats transform-feats $dir/0.mat ark:- ark:- |" - - - -if [ $stage -le -5 ]; then - if [ -z "$use_lda_mat" ]; then - echo "$0: Accumulating LDA statistics." - rm $dir/lda.*.acc 2>/dev/null - $cmd JOB=1:$nj $dir/log/lda_acc.JOB.log \ - ali-to-post "ark:gunzip -c $alidir/ali.JOB.gz|" ark:- \| \ - weight-silence-post 0.0 $silphonelist $alidir/final.mdl ark:- ark:- \| \ - acc-lda --rand-prune=$randprune $alidir/final.mdl "$splicedfeats" ark,s,cs:- \ - $dir/lda.JOB.acc || exit 1; - est-lda --write-full-matrix=$dir/full.mat --dim=$dim $dir/0.mat $dir/lda.*.acc \ - 2>$dir/log/lda_est.log || exit 1; - rm $dir/lda.*.acc - else - echo "$0: Using supplied LDA matrix $use_lda_mat" - cp $use_lda_mat $dir/0.mat || exit 1; - [ ! -z "$mllt_iters" ] && \ - echo "$0: Warning: using supplied LDA matrix $use_lda_mat but we will do MLLT," && \ - echo " which you might not want; to disable MLLT, specify --mllt-iters ''" && \ - sleep 5 - fi -fi - -cur_lda_iter=0 - -if [ $stage -le -4 ] && $train_tree; then - echo "$0: Accumulating tree stats" - $cmd JOB=1:$nj $dir/log/acc_tree.JOB.log \ - acc-tree-stats $context_opts \ - --ci-phones=$ciphonelist $alidir/final.mdl "$feats" \ - "ark:gunzip -c $alidir/ali.JOB.gz|" $dir/JOB.treeacc || exit 1; - [ `ls $dir/*.treeacc | wc -w` -ne "$nj" ] && echo "$0: Wrong #tree-accs" && exit 1; - $cmd $dir/log/sum_tree_acc.log \ - sum-tree-stats $dir/treeacc $dir/*.treeacc || exit 1; - rm $dir/*.treeacc -fi - - -if [ $stage -le -3 ] && $train_tree; then - echo "$0: Getting questions for tree clustering." - # preparing questions, roots file... 
- cluster-phones --pdf-class-list=$(($num_nonsil_states / 2)) $context_opts $dir/treeacc $lang/phones/sets.int \ - $dir/questions.int 2> $dir/log/questions.log || exit 1; - cat $lang/phones/extra_questions.int >> $dir/questions.int - compile-questions $context_opts $lang/topo $dir/questions.int \ - $dir/questions.qst 2>$dir/log/compile_questions.log || exit 1; - - echo "$0: Building the tree" - $cmd $dir/log/build_tree.log \ - build-tree $context_opts --verbose=1 --max-leaves=$numleaves \ - --cluster-thresh=$cluster_thresh $dir/treeacc $lang/phones/roots.int \ - $dir/questions.qst $lang/topo $dir/tree || exit 1; -fi - -if [ $stage -le -2 ]; then - echo "$0: Initializing the model" - if $train_tree; then - gmm-init-model --write-occs=$dir/1.occs \ - $dir/tree $dir/treeacc $lang/topo $dir/1.mdl 2> $dir/log/init_model.log || exit 1; - grep 'no stats' $dir/log/init_model.log && echo "This is a bad warning."; - rm $dir/treeacc - else - cp $alidir/tree $dir/ || exit 1; - $cmd JOB=1 $dir/log/init_model.log \ - gmm-init-model-flat $dir/tree $lang/topo $dir/1.mdl \ - "$feats subset-feats ark:- ark:-|" || exit 1; - fi -fi - - -if [ $stage -le -1 ]; then - # Convert the alignments. - echo "$0: Converting alignments from $alidir to use current tree" - $cmd JOB=1:$nj $dir/log/convert.JOB.log \ - convert-ali $alidir/final.mdl $dir/1.mdl $dir/tree \ - "ark:gunzip -c $alidir/ali.JOB.gz|" "ark:|gzip -c >$dir/ali.JOB.gz" || exit 1; -fi - -if [ $stage -le 0 ] && [ "$realign_iters" != "" ]; then - echo "$0: Compiling graphs of transcripts" - $cmd JOB=1:$nj $dir/log/compile_graphs.JOB.log \ - compile-train-graphs --read-disambig-syms=$lang/phones/disambig.int $dir/tree $dir/1.mdl $lang/L.fst \ - "ark:utils/sym2int.pl --map-oov $oov -f 2- $lang/words.txt < $data/split$nj/JOB/text |" \ - "ark:|gzip -c >$dir/fsts.JOB.gz" || exit 1; -fi - - -x=1 -while [ $x -lt $num_iters ]; do - echo Training pass $x - if echo $realign_iters | grep -w $x >/dev/null && [ $stage -le $x ]; then - echo Aligning data - mdl="gmm-boost-silence --boost=$boost_silence `cat $lang/phones/optional_silence.csl` $dir/$x.mdl - |" - $cmd JOB=1:$nj $dir/log/align.$x.JOB.log \ - gmm-align-compiled $scale_opts --beam=$beam --retry-beam=$retry_beam --careful=$careful "$mdl" \ - "ark:gunzip -c $dir/fsts.JOB.gz|" "$feats" \ - "ark:|gzip -c >$dir/ali.JOB.gz" || exit 1; - fi - if echo $mllt_iters | grep -w $x >/dev/null; then - if [ $stage -le $x ]; then - echo "$0: Estimating MLLT" - $cmd JOB=1:$nj $dir/log/macc.$x.JOB.log \ - ali-to-post "ark:gunzip -c $dir/ali.JOB.gz|" ark:- \| \ - weight-silence-post 0.0 $silphonelist $dir/$x.mdl ark:- ark:- \| \ - gmm-acc-mllt --rand-prune=$randprune $dir/$x.mdl "$feats" ark:- $dir/$x.JOB.macc \ - || exit 1; - est-mllt $dir/$x.mat.new $dir/$x.*.macc 2> $dir/log/mupdate.$x.log || exit 1; - gmm-transform-means $dir/$x.mat.new $dir/$x.mdl $dir/$x.mdl \ - 2> $dir/log/transform_means.$x.log || exit 1; - compose-transforms --print-args=false $dir/$x.mat.new $dir/$cur_lda_iter.mat $dir/$x.mat || exit 1; - rm $dir/$x.*.macc - fi - feats="$splicedfeats transform-feats $dir/$x.mat ark:- ark:- |" - cur_lda_iter=$x - fi - - if [ $stage -le $x ]; then - $cmd JOB=1:$nj $dir/log/acc.$x.JOB.log \ - gmm-acc-stats-ali $dir/$x.mdl "$feats" \ - "ark,s,cs:gunzip -c $dir/ali.JOB.gz|" $dir/$x.JOB.acc || exit 1; - $cmd $dir/log/update.$x.log \ - gmm-est --write-occs=$dir/$[$x+1].occs --mix-up=$numgauss --power=$power \ - $dir/$x.mdl "gmm-sum-accs - $dir/$x.*.acc |" $dir/$[$x+1].mdl || exit 1; - rm $dir/$x.mdl $dir/$x.*.acc $dir/$x.occs - 
fi - [ $x -le $max_iter_inc ] && numgauss=$[$numgauss+$incgauss]; - x=$[$x+1]; -done - -rm $dir/final.{mdl,mat,occs} 2>/dev/null -ln -s $x.mdl $dir/final.mdl -ln -s $x.occs $dir/final.occs -ln -s $cur_lda_iter.mat $dir/final.mat - -steps/diagnostic/analyze_alignments.sh --cmd "$cmd" $lang $dir - -# Summarize warning messages... -utils/summarize_warnings.pl $dir/log - -steps/info/gmm_dir_info.pl $dir - -echo "$0: Done training system with LDA+MLLT features in $dir" - -exit 0 diff --git a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/models/vqgan.py b/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/models/vqgan.py deleted file mode 100644 index faa659451e01aea3a08dbdb590e6d71cd7b1afc2..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/models/vqgan.py +++ /dev/null @@ -1,649 +0,0 @@ -import torch -import torch.nn.functional as F -import pytorch_lightning as pl - -from celle_taming_main import instantiate_from_config - -from taming.modules.diffusionmodules.model import Encoder, Decoder -from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer -from taming.modules.vqvae.quantize import GumbelQuantize -from taming.modules.vqvae.quantize import EMAVectorQuantizer - - -class VQModel(pl.LightningModule): - def __init__( - self, - ddconfig, - lossconfig, - n_embed, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - remap=None, - sane_index_shape=False, # tell vector quantizer to return indices as bhw - ): - super().__init__() - self.image_key = image_key - self.encoder = Encoder(**ddconfig) - self.decoder = Decoder(**ddconfig) - self.loss = instantiate_from_config(lossconfig) - self.quantize = VectorQuantizer( - n_embed, - embed_dim, - beta=0.25, - remap=remap, - sane_index_shape=sane_index_shape, - ) - self.quant_conv = torch.nn.Conv2d(ddconfig["z_channels"], embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - self.image_key = image_key - if colorize_nlabels is not None: - assert type(colorize_nlabels) == int - self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1)) - if monitor is not None: - self.monitor = monitor - - def init_from_ckpt(self, path, ignore_keys=list()): - sd = torch.load(path, map_location="cpu")["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - self.load_state_dict(sd, strict=False) - print(f"Restored from {path}") - - def encode(self, x): - h = self.encoder(x) - h = self.quant_conv(h) - quant, emb_loss, info = self.quantize(h) - return quant, emb_loss, info - - def decode(self, quant): - quant = self.post_quant_conv(quant) - dec = self.decoder(quant) - return dec - - def decode_code(self, code_b): - quant_b = self.quantize.embed_code(code_b) - dec = self.decode(quant_b) - return dec - - def forward(self, input): - quant, diff, _ = self.encode(input) - dec = self.decode(quant) - return dec, diff - - def get_input(self, batch, k): - - if k == "mixed": - keys = ["nucleus", "target"] - index = torch.randint(low=0, high=2, size=(1,), dtype=int).item() - k = keys[index] - - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - - # x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format) - return x - - def training_step(self, batch, batch_idx=None, optimizer_idx=0): - 
- if type(batch) == dict: - - x = self.get_input(batch, self.image_key) - - else: - x = batch - - xrec, qloss = self( - x, - ) - - if optimizer_idx == 0: - # autoencode - aeloss, log_dict_ae = self.loss( - qloss, - x, - xrec, - optimizer_idx, - self.global_step, - last_layer=self.get_last_layer(), - split="train", - ) - - self.log( - "train/aeloss", - aeloss, - prog_bar=True, - logger=True, - on_step=True, - on_epoch=True, - sync_dist=True, - ) - self.log_dict( - log_dict_ae, - prog_bar=False, - logger=True, - on_step=True, - on_epoch=True, - sync_dist=True, - ) - return aeloss - - if optimizer_idx == 1: - # discriminator - discloss, log_dict_disc = self.loss( - qloss, - x, - xrec, - optimizer_idx, - self.global_step, - last_layer=self.get_last_layer(), - split="train", - ) - self.log( - "train/discloss", - discloss, - prog_bar=True, - logger=True, - on_step=True, - on_epoch=True, - sync_dist=True, - ) - self.log_dict( - log_dict_disc, - prog_bar=False, - logger=True, - on_step=True, - on_epoch=True, - sync_dist=True, - ) - return discloss - - def validation_step(self, batch, batch_idx): - - if type(batch) == dict: - - x = self.get_input(batch, self.image_key) - - else: - x = batch - - xrec, qloss = self(x) - aeloss, log_dict_ae = self.loss( - qloss, - x, - xrec, - 0, - self.global_step, - last_layer=self.get_last_layer(), - split="val", - ) - - discloss, log_dict_disc = self.loss( - qloss, - x, - xrec, - 1, - self.global_step, - last_layer=self.get_last_layer(), - split="val", - ) - # rec_loss = log_dict_ae["val/rec_loss"] - # self.log( - # "val/rec_loss", - # rec_loss, - # prog_bar=True, - # logger=True, - # on_step=True, - # on_epoch=True, - # sync_dist=True, - # ) - # self.log( - # "val/aeloss", - # aeloss, - # prog_bar=True, - # logger=True, - # on_step=True, - # on_epoch=True, - # sync_dist=True, - # ) - - for key, value in log_dict_disc.items(): - if key in log_dict_ae: - log_dict_ae[key].extend(value) - else: - log_dict_ae[key] = value - - self.log_dict(log_dict_ae, sync_dist=True) - return self.log_dict - - def configure_optimizers(self): - lr = self.learning_rate - opt_ae = torch.optim.Adam( - list(self.encoder.parameters()) - + list(self.decoder.parameters()) - + list(self.quantize.parameters()) - + list(self.quant_conv.parameters()) - + list(self.post_quant_conv.parameters()), - lr=lr, - betas=(0.5, 0.9), - ) - opt_disc = torch.optim.Adam( - self.loss.discriminator.parameters(), lr=lr, betas=(0.5, 0.9) - ) - return [opt_ae, opt_disc], [] - - def get_last_layer(self): - return self.decoder.conv_out.weight - - def log_images(self, batch, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - xrec, _ = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - x = self.to_rgb(x) - xrec = self.to_rgb(xrec) - log["inputs"] = x - log["reconstructions"] = xrec - return log - - def to_rgb(self, x): - assert self.image_key == "segmentation" - if not hasattr(self, "colorize"): - self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) - x = F.conv2d(x, weight=self.colorize) - x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0 - return x - - -class VQSegmentationModel(VQModel): - def __init__(self, n_labels, *args, **kwargs): - super().__init__(*args, **kwargs) - self.register_buffer("colorize", torch.randn(3, n_labels, 1, 1)) - - def configure_optimizers(self): - lr = self.learning_rate - opt_ae = torch.optim.Adam( - list(self.encoder.parameters()) - + list(self.decoder.parameters()) - + 
list(self.quantize.parameters()) - + list(self.quant_conv.parameters()) - + list(self.post_quant_conv.parameters()), - lr=lr, - betas=(0.5, 0.9), - ) - return opt_ae - - def training_step(self, batch, batch_idx): - x = self.get_input(batch, self.image_key) - xrec, qloss = self(x) - aeloss, log_dict_ae = self.loss(qloss, x, xrec, split="train") - self.log_dict( - log_dict_ae, - prog_bar=False, - logger=True, - on_step=True, - on_epoch=True, - sync_dist=True, - ) - return aeloss - - def validation_step(self, batch, batch_idx): - x = self.get_input(batch, self.image_key) - xrec, qloss = self(x) - aeloss, log_dict_ae = self.loss(qloss, x, xrec, split="val") - self.log_dict( - log_dict_ae, - prog_bar=False, - logger=True, - on_step=True, - on_epoch=True, - sync_dist=True, - ) - total_loss = log_dict_ae["val/total_loss"] - self.log( - "val/total_loss", - total_loss, - prog_bar=True, - logger=True, - on_step=True, - on_epoch=True, - sync_dist=True, - ) - return aeloss - - @torch.no_grad() - def log_images(self, batch, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - xrec, _ = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - # convert logits to indices - xrec = torch.argmax(xrec, dim=1, keepdim=True) - xrec = F.one_hot(xrec, num_classes=x.shape[1]) - xrec = xrec.squeeze(1).permute(0, 3, 1, 2).float() - x = self.to_rgb(x) - xrec = self.to_rgb(xrec) - log["inputs"] = x - log["reconstructions"] = xrec - return log - - -class VQNoDiscModel(VQModel): - def __init__( - self, - ddconfig, - lossconfig, - n_embed, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - ): - super().__init__( - ddconfig=ddconfig, - lossconfig=lossconfig, - n_embed=n_embed, - embed_dim=embed_dim, - ckpt_path=ckpt_path, - ignore_keys=ignore_keys, - image_key=image_key, - colorize_nlabels=colorize_nlabels, - ) - - def training_step(self, batch, batch_idx): - x = self.get_input(batch, self.image_key) - xrec, qloss = self(x) - # autoencode - aeloss, log_dict_ae = self.loss(qloss, x, xrec, self.global_step, split="train") - output = pl.TrainResult(minimize=aeloss) - output.log( - "train/aeloss", - aeloss, - prog_bar=True, - logger=True, - on_step=True, - on_epoch=True, - ) - output.log_dict( - log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=True - ) - return output - - def validation_step(self, batch, batch_idx): - x = self.get_input(batch, self.image_key) - xrec, qloss = self(x) - aeloss, log_dict_ae = self.loss(qloss, x, xrec, self.global_step, split="val") - rec_loss = log_dict_ae["val/rec_loss"] - output = pl.EvalResult(checkpoint_on=rec_loss) - output.log( - "val/rec_loss", - rec_loss, - prog_bar=True, - logger=True, - on_step=True, - on_epoch=True, - ) - output.log( - "val/aeloss", - aeloss, - prog_bar=True, - logger=True, - on_step=True, - on_epoch=True, - ) - output.log_dict(log_dict_ae) - - return output - - def configure_optimizers(self): - optimizer = torch.optim.Adam( - list(self.encoder.parameters()) - + list(self.decoder.parameters()) - + list(self.quantize.parameters()) - + list(self.quant_conv.parameters()) - + list(self.post_quant_conv.parameters()), - lr=self.learning_rate, - betas=(0.5, 0.9), - ) - return optimizer - - -class GumbelVQ(VQModel): - def __init__( - self, - ddconfig, - lossconfig, - n_embed, - embed_dim, - temperature_scheduler_config, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - kl_weight=1e-8, - 
remap=None, - ): - - z_channels = ddconfig["z_channels"] - super().__init__( - ddconfig, - lossconfig, - n_embed, - embed_dim, - ckpt_path=None, - ignore_keys=ignore_keys, - image_key=image_key, - colorize_nlabels=colorize_nlabels, - monitor=monitor, - ) - - self.loss.n_classes = n_embed - self.vocab_size = n_embed - - self.quantize = GumbelQuantize( - z_channels, - embed_dim, - n_embed=n_embed, - kl_weight=kl_weight, - temp_init=1.0, - remap=remap, - ) - - self.temperature_scheduler = instantiate_from_config( - temperature_scheduler_config - ) # annealing of temp - - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - - def temperature_scheduling(self): - self.quantize.temperature = self.temperature_scheduler(self.global_step) - - def encode_to_prequant(self, x): - h = self.encoder(x) - h = self.quant_conv(h) - return h - - def decode_code(self, code_b): - raise NotImplementedError - - def training_step(self, batch, batch_idx=None, optimizer_idx=0): - self.temperature_scheduling() - x = self.get_input(batch, self.image_key) - xrec, qloss = self(x) - - if optimizer_idx == 0: - # autoencode - aeloss, log_dict_ae = self.loss( - qloss, - x, - xrec, - optimizer_idx, - self.global_step, - last_layer=self.get_last_layer(), - split="train", - ) - - self.log_dict( - log_dict_ae, - prog_bar=False, - logger=True, - on_step=True, - on_epoch=True, - sync_dist=True, - ) - self.log( - "temperature", - self.quantize.temperature, - prog_bar=False, - logger=True, - on_step=True, - on_epoch=True, - sync_dist=True, - ) - return aeloss - - if optimizer_idx == 1: - # discriminator - discloss, log_dict_disc = self.loss( - qloss, - x, - xrec, - optimizer_idx, - self.global_step, - last_layer=self.get_last_layer(), - split="train", - ) - self.log_dict( - log_dict_disc, - prog_bar=False, - logger=True, - on_step=True, - on_epoch=True, - sync_dist=True, - ) - return discloss - - def validation_step(self, batch, batch_idx): - x = self.get_input(batch, self.image_key) - xrec, qloss = self(x) - aeloss, log_dict_ae = self.loss( - qloss, - x, - xrec, - 0, - self.global_step, - last_layer=self.get_last_layer(), - split="val", - ) - - discloss, log_dict_disc = self.loss( - qloss, - x, - xrec, - 1, - self.global_step, - last_layer=self.get_last_layer(), - split="val", - ) - rec_loss = log_dict_ae["val/rec_loss"] - self.log( - "val/rec_loss", - rec_loss, - prog_bar=True, - logger=True, - on_step=False, - on_epoch=True, - sync_dist=True, - ) - self.log( - "val/aeloss", - aeloss, - prog_bar=True, - logger=True, - on_step=False, - on_epoch=True, - sync_dist=True, - ) - self.log_dict(log_dict_ae, sync_dist=True) - self.log_dict(log_dict_disc, sync_dist=True) - return self.log_dict - - def log_images(self, batch, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - # encode - h = self.encoder(x) - h = self.quant_conv(h) - quant, _, _ = self.quantize(h) - # decode - x_rec = self.decode(quant) - log["inputs"] = x - log["reconstructions"] = x_rec - return log - - -class EMAVQ(VQModel): - def __init__( - self, - ddconfig, - lossconfig, - n_embed, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - remap=None, - sane_index_shape=False, # tell vector quantizer to return indices as bhw - ): - super().__init__( - ddconfig, - lossconfig, - n_embed, - embed_dim, - ckpt_path=None, - ignore_keys=ignore_keys, - image_key=image_key, - colorize_nlabels=colorize_nlabels, - monitor=monitor, - ) - self.quantize 
= EMAVectorQuantizer( - n_embed=n_embed, embedding_dim=embed_dim, beta=0.25, remap=remap - ) - - def configure_optimizers(self): - lr = self.learning_rate - # Remove self.quantize from parameter list since it is updated via EMA - opt_ae = torch.optim.Adam( - list(self.encoder.parameters()) - + list(self.decoder.parameters()) - + list(self.quant_conv.parameters()) - + list(self.post_quant_conv.parameters()), - lr=lr, - betas=(0.5, 0.9), - ) - opt_disc = torch.optim.Adam( - self.loss.discriminator.parameters(), lr=lr, betas=(0.5, 0.9) - ) - return [opt_ae, opt_disc], [] diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/multilingual/multilingual_utils.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/multilingual/multilingual_utils.py deleted file mode 100644 index b4e0f9828cabfdbe375d05d9152b58bdbd6de7dc..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/multilingual/multilingual_utils.py +++ /dev/null @@ -1,63 +0,0 @@ -from enum import Enum -from typing import Dict, List, Optional, Sequence - -import torch -from fairseq.data import Dictionary - - -class EncoderLangtok(Enum): - """ - Prepend to the beginning of source sentence either the - source or target language token. (src/tgt). - """ - - src = "src" - tgt = "tgt" - - -class LangTokSpec(Enum): - main = "main" - mono_dae = "mono_dae" - - -class LangTokStyle(Enum): - multilingual = "multilingual" - mbart = "mbart" - - -@torch.jit.export -def get_lang_tok( - lang: str, lang_tok_style: str, spec: str = LangTokSpec.main.value -) -> str: - # TOKEN_STYLES can't be defined outside this fn since it needs to be - # TorchScriptable. - TOKEN_STYLES: Dict[str, str] = { - LangTokStyle.mbart.value: "[{}]", - LangTokStyle.multilingual.value: "__{}__", - } - - if spec.endswith("dae"): - lang = f"{lang}_dae" - elif spec.endswith("mined"): - lang = f"{lang}_mined" - style = TOKEN_STYLES[lang_tok_style] - return style.format(lang) - - -def augment_dictionary( - dictionary: Dictionary, - language_list: List[str], - lang_tok_style: str, - langtoks_specs: Sequence[str] = (LangTokSpec.main.value,), - extra_data: Optional[Dict[str, str]] = None, -) -> None: - for spec in langtoks_specs: - for language in language_list: - dictionary.add_symbol( - get_lang_tok(lang=language, lang_tok_style=lang_tok_style, spec=spec) - ) - - if lang_tok_style == LangTokStyle.mbart.value or ( - extra_data is not None and LangTokSpec.mono_dae.value in extra_data - ): - dictionary.add_symbol("") diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/models/transformer_lm.py b/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/models/transformer_lm.py deleted file mode 100644 index dc52f6e8dd3899b6bf9bebae7415cee20baf9884..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/models/transformer_lm.py +++ /dev/null @@ -1,174 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch.nn as nn -from fairseq.model_parallel.models.transformer import ModelParallelTransformerDecoder -from fairseq.models import register_model, register_model_architecture -from fairseq.models.transformer_lm import TransformerLanguageModel - - -try: - from fairseq.model_parallel.megatron.mpu import VocabParallelEmbedding - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - - -DEFAULT_MAX_TARGET_POSITIONS = 1024 - - -@register_model("model_parallel_transformer_lm") -class ModelParallelTransformerLanguageModel(TransformerLanguageModel): - - @staticmethod - def add_args(parser): - TransformerLanguageModel.add_args(parser) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - if not has_megatron_submodule: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - - # make sure all arguments are present in older models - base_lm_architecture(args) - - task.source_dictionary.pad_to_multiple_(args.model_parallel_size * 8) - task.target_dictionary.pad_to_multiple_(args.model_parallel_size * 8) - - if args.decoder_layers_to_keep: - args.decoder_layers = len(args.decoder_layers_to_keep.split(",")) - - if getattr(args, "max_target_positions", None) is None: - args.max_target_positions = getattr( - args, "tokens_per_sample", DEFAULT_MAX_TARGET_POSITIONS - ) - - if args.character_embeddings: - raise NotImplementedError( - "Character embeddings is not supported for model parallel" - ) - elif args.adaptive_input: - raise NotImplementedError( - "Adaptive input is not supported for model parallel" - ) - else: - embed_tokens = cls.build_embedding( - args, task.source_dictionary, args.decoder_input_dim - ) - - decoder = ModelParallelTransformerDecoder( - args, - task.target_dictionary, - embed_tokens, - no_encoder_attn=True, - ) - return cls(decoder) - - @staticmethod - def add_args(parser): - TransformerLanguageModel.add_args(parser) - - @classmethod - def build_embedding(cls, args, dictionary, embed_dim, path=None): - def _vocab_init(tensor, **kwargs): - nn.init.normal_(tensor, mean=0, std=embed_dim ** -0.5) - nn.init.constant_(tensor[1], 0) - - embed_tokens = VocabParallelEmbedding( - len(dictionary), embed_dim, dictionary.pad(), init_method=_vocab_init - ) - return embed_tokens - - -def base_lm_architecture(args): - # backward compatibility for older model checkpoints - if hasattr(args, "no_tie_adaptive_proj"): - # previous models defined --no-tie-adaptive-proj, so use the existence of - # that option to determine if this is an "old" model checkpoint - args.no_decoder_final_norm = True # old models always set this to True - if args.no_tie_adaptive_proj is False: - args.tie_adaptive_proj = True - if hasattr(args, "decoder_final_norm"): - args.no_decoder_final_norm = not args.decoder_final_norm - - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.relu_dropout = getattr(args, "relu_dropout", 0.0) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 2048) - 
args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - # Model training is not stable without this - args.decoder_normalize_before = True - args.no_decoder_final_norm = getattr(args, "no_decoder_final_norm", False) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.adaptive_softmax_factor = getattr(args, "adaptive_softmax_factor", 4) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.character_embeddings = getattr(args, "character_embeddings", False) - args.character_filters = getattr( - args, - "character_filters", - "[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]", - ) - args.character_embedding_dim = getattr(args, "character_embedding_dim", 4) - args.char_embedder_highway_layers = getattr(args, "char_embedder_highway_layers", 2) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.adaptive_input_factor = getattr(args, "adaptive_input_factor", 4) - args.adaptive_input_cutoff = getattr(args, "adaptive_input_cutoff", None) - args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False) - args.tie_adaptive_proj = getattr(args, "tie_adaptive_proj", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0) - args.decoder_layers_to_keep = getattr(args, "decoder_layers_to_keep", None) - args.layernorm_embedding = getattr(args, "layernorm_embedding", False) - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0.0) - args.quant_noise_pq_block_size = getattr(args, "quant_noise_pq_block_size", 8) - args.quant_noise_scalar = getattr(args, "quant_noise_scalar", 0.0) - args.add_bos_token = getattr(args, "add_bos_token", False) - - -@register_model_architecture("model_parallel_transformer_lm", "transformer_lm_megatron") -def transformer_lm_megatron(args): - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 3072) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 3072 * 4) - args.decoder_layers = getattr(args, "decoder_layers", 72) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 32) - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.activation_fn = getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) - - -@register_model_architecture( - "model_parallel_transformer_lm", "transformer_lm_megatron_11b" -) -def transformer_lm_megatron_11b(args): - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 3072) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 3072 * 6) - args.decoder_layers = getattr(args, "decoder_layers", 72) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 32) - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.activation_fn = getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) diff --git a/spaces/Iceclear/StableSR/StableSR/ldm/modules/encoders/__init__.py b/spaces/Iceclear/StableSR/StableSR/ldm/modules/encoders/__init__.py deleted file mode 100644 index 
e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/nn/modules/tests/test_numeric_batchnorm.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/nn/modules/tests/test_numeric_batchnorm.py deleted file mode 100644 index 8bd45a930d3dc84912e58659ee575be08e9038f0..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/nn/modules/tests/test_numeric_batchnorm.py +++ /dev/null @@ -1,56 +0,0 @@ -# -*- coding: utf-8 -*- -# File : test_numeric_batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. - -import unittest - -import torch -import torch.nn as nn -from torch.autograd import Variable - -from sync_batchnorm.unittest import TorchTestCase - - -def handy_var(a, unbias=True): - n = a.size(0) - asum = a.sum(dim=0) - as_sum = (a ** 2).sum(dim=0) # a square sum - sumvar = as_sum - asum * asum / n - if unbias: - return sumvar / (n - 1) - else: - return sumvar / n - - -class NumericTestCase(TorchTestCase): - def testNumericBatchNorm(self): - a = torch.rand(16, 10) - bn = nn.BatchNorm2d(10, momentum=1, eps=1e-5, affine=False) - bn.train() - - a_var1 = Variable(a, requires_grad=True) - b_var1 = bn(a_var1) - loss1 = b_var1.sum() - loss1.backward() - - a_var2 = Variable(a, requires_grad=True) - a_mean2 = a_var2.mean(dim=0, keepdim=True) - a_std2 = torch.sqrt(handy_var(a_var2, unbias=False).clamp(min=1e-5)) - # a_std2 = torch.sqrt(a_var2.var(dim=0, keepdim=True, unbiased=False) + 1e-5) - b_var2 = (a_var2 - a_mean2) / a_std2 - loss2 = b_var2.sum() - loss2.backward() - - self.assertTensorClose(bn.running_mean, a.mean(dim=0)) - self.assertTensorClose(bn.running_var, handy_var(a)) - self.assertTensorClose(a_var1.data, a_var2.data) - self.assertTensorClose(b_var1.data, b_var2.data) - self.assertTensorClose(a_var1.grad, a_var2.grad) - - -if __name__ == '__main__': - unittest.main() diff --git a/spaces/Izal887/rvc-hutao/README.md b/spaces/Izal887/rvc-hutao/README.md deleted file mode 100644 index f077cd85340c26ebfcb0857816d0f1f511408242..0000000000000000000000000000000000000000 --- a/spaces/Izal887/rvc-hutao/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Rvc Models -emoji: 🎤 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ardha27/rvc-models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/JUNGU/SuperGlue-Image-Matching/models/superpoint.py b/spaces/JUNGU/SuperGlue-Image-Matching/models/superpoint.py deleted file mode 100644 index b837d938f755850180ddc168e957742e874adacd..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/SuperGlue-Image-Matching/models/superpoint.py +++ /dev/null @@ -1,202 +0,0 @@ -# %BANNER_BEGIN% -# --------------------------------------------------------------------- -# %COPYRIGHT_BEGIN% -# -# Magic Leap, Inc. ("COMPANY") CONFIDENTIAL -# -# Unpublished Copyright (c) 2020 -# Magic Leap, Inc., All Rights Reserved. -# -# NOTICE: All information contained herein is, and remains the property -# of COMPANY. The intellectual and technical concepts contained herein -# are proprietary to COMPANY and may be covered by U.S. and Foreign -# Patents, patents in process, and are protected by trade secret or -# copyright law. 
Dissemination of this information or reproduction of -# this material is strictly forbidden unless prior written permission is -# obtained from COMPANY. Access to the source code contained herein is -# hereby forbidden to anyone except current COMPANY employees, managers -# or contractors who have executed Confidentiality and Non-disclosure -# agreements explicitly covering such access. -# -# The copyright notice above does not evidence any actual or intended -# publication or disclosure of this source code, which includes -# information that is confidential and/or proprietary, and is a trade -# secret, of COMPANY. ANY REPRODUCTION, MODIFICATION, DISTRIBUTION, -# PUBLIC PERFORMANCE, OR PUBLIC DISPLAY OF OR THROUGH USE OF THIS -# SOURCE CODE WITHOUT THE EXPRESS WRITTEN CONSENT OF COMPANY IS -# STRICTLY PROHIBITED, AND IN VIOLATION OF APPLICABLE LAWS AND -# INTERNATIONAL TREATIES. THE RECEIPT OR POSSESSION OF THIS SOURCE -# CODE AND/OR RELATED INFORMATION DOES NOT CONVEY OR IMPLY ANY RIGHTS -# TO REPRODUCE, DISCLOSE OR DISTRIBUTE ITS CONTENTS, OR TO MANUFACTURE, -# USE, OR SELL ANYTHING THAT IT MAY DESCRIBE, IN WHOLE OR IN PART. -# -# %COPYRIGHT_END% -# ---------------------------------------------------------------------- -# %AUTHORS_BEGIN% -# -# Originating Authors: Paul-Edouard Sarlin -# -# %AUTHORS_END% -# --------------------------------------------------------------------*/ -# %BANNER_END% - -from pathlib import Path -import torch -from torch import nn - -def simple_nms(scores, nms_radius: int): - """ Fast Non-maximum suppression to remove nearby points """ - assert(nms_radius >= 0) - - def max_pool(x): - return torch.nn.functional.max_pool2d( - x, kernel_size=nms_radius*2+1, stride=1, padding=nms_radius) - - zeros = torch.zeros_like(scores) - max_mask = scores == max_pool(scores) - for _ in range(2): - supp_mask = max_pool(max_mask.float()) > 0 - supp_scores = torch.where(supp_mask, zeros, scores) - new_max_mask = supp_scores == max_pool(supp_scores) - max_mask = max_mask | (new_max_mask & (~supp_mask)) - return torch.where(max_mask, scores, zeros) - - -def remove_borders(keypoints, scores, border: int, height: int, width: int): - """ Removes keypoints too close to the border """ - mask_h = (keypoints[:, 0] >= border) & (keypoints[:, 0] < (height - border)) - mask_w = (keypoints[:, 1] >= border) & (keypoints[:, 1] < (width - border)) - mask = mask_h & mask_w - return keypoints[mask], scores[mask] - - -def top_k_keypoints(keypoints, scores, k: int): - if k >= len(keypoints): - return keypoints, scores - scores, indices = torch.topk(scores, k, dim=0) - return keypoints[indices], scores - - -def sample_descriptors(keypoints, descriptors, s: int = 8): - """ Interpolate descriptors at keypoint locations """ - b, c, h, w = descriptors.shape - keypoints = keypoints - s / 2 + 0.5 - keypoints /= torch.tensor([(w*s - s/2 - 0.5), (h*s - s/2 - 0.5)], - ).to(keypoints)[None] - keypoints = keypoints*2 - 1 # normalize to (-1, 1) - args = {'align_corners': True} if torch.__version__ >= '1.3' else {} - descriptors = torch.nn.functional.grid_sample( - descriptors, keypoints.view(b, 1, -1, 2), mode='bilinear', **args) - descriptors = torch.nn.functional.normalize( - descriptors.reshape(b, c, -1), p=2, dim=1) - return descriptors - - -class SuperPoint(nn.Module): - """SuperPoint Convolutional Detector and Descriptor - - SuperPoint: Self-Supervised Interest Point Detection and - Description. Daniel DeTone, Tomasz Malisiewicz, and Andrew - Rabinovich. In CVPRW, 2019. 
https://arxiv.org/abs/1712.07629 - - """ - default_config = { - 'descriptor_dim': 256, - 'nms_radius': 4, - 'keypoint_threshold': 0.005, - 'max_keypoints': -1, - 'remove_borders': 4, - } - - def __init__(self, config): - super().__init__() - self.config = {**self.default_config, **config} - - self.relu = nn.ReLU(inplace=True) - self.pool = nn.MaxPool2d(kernel_size=2, stride=2) - c1, c2, c3, c4, c5 = 64, 64, 128, 128, 256 - - self.conv1a = nn.Conv2d(1, c1, kernel_size=3, stride=1, padding=1) - self.conv1b = nn.Conv2d(c1, c1, kernel_size=3, stride=1, padding=1) - self.conv2a = nn.Conv2d(c1, c2, kernel_size=3, stride=1, padding=1) - self.conv2b = nn.Conv2d(c2, c2, kernel_size=3, stride=1, padding=1) - self.conv3a = nn.Conv2d(c2, c3, kernel_size=3, stride=1, padding=1) - self.conv3b = nn.Conv2d(c3, c3, kernel_size=3, stride=1, padding=1) - self.conv4a = nn.Conv2d(c3, c4, kernel_size=3, stride=1, padding=1) - self.conv4b = nn.Conv2d(c4, c4, kernel_size=3, stride=1, padding=1) - - self.convPa = nn.Conv2d(c4, c5, kernel_size=3, stride=1, padding=1) - self.convPb = nn.Conv2d(c5, 65, kernel_size=1, stride=1, padding=0) - - self.convDa = nn.Conv2d(c4, c5, kernel_size=3, stride=1, padding=1) - self.convDb = nn.Conv2d( - c5, self.config['descriptor_dim'], - kernel_size=1, stride=1, padding=0) - - path = Path(__file__).parent / 'weights/superpoint_v1.pth' - self.load_state_dict(torch.load(str(path))) - - mk = self.config['max_keypoints'] - if mk == 0 or mk < -1: - raise ValueError('\"max_keypoints\" must be positive or \"-1\"') - - print('Loaded SuperPoint model') - - def forward(self, data): - """ Compute keypoints, scores, descriptors for image """ - # Shared Encoder - x = self.relu(self.conv1a(data['image'])) - x = self.relu(self.conv1b(x)) - x = self.pool(x) - x = self.relu(self.conv2a(x)) - x = self.relu(self.conv2b(x)) - x = self.pool(x) - x = self.relu(self.conv3a(x)) - x = self.relu(self.conv3b(x)) - x = self.pool(x) - x = self.relu(self.conv4a(x)) - x = self.relu(self.conv4b(x)) - - # Compute the dense keypoint scores - cPa = self.relu(self.convPa(x)) - scores = self.convPb(cPa) - scores = torch.nn.functional.softmax(scores, 1)[:, :-1] - b, _, h, w = scores.shape - scores = scores.permute(0, 2, 3, 1).reshape(b, h, w, 8, 8) - scores = scores.permute(0, 1, 3, 2, 4).reshape(b, h*8, w*8) - scores = simple_nms(scores, self.config['nms_radius']) - - # Extract keypoints - keypoints = [ - torch.nonzero(s > self.config['keypoint_threshold']) - for s in scores] - scores = [s[tuple(k.t())] for s, k in zip(scores, keypoints)] - - # Discard keypoints near the image borders - keypoints, scores = list(zip(*[ - remove_borders(k, s, self.config['remove_borders'], h*8, w*8) - for k, s in zip(keypoints, scores)])) - - # Keep the k keypoints with highest score - if self.config['max_keypoints'] >= 0: - keypoints, scores = list(zip(*[ - top_k_keypoints(k, s, self.config['max_keypoints']) - for k, s in zip(keypoints, scores)])) - - # Convert (h, w) to (x, y) - keypoints = [torch.flip(k, [1]).float() for k in keypoints] - - # Compute the dense descriptors - cDa = self.relu(self.convDa(x)) - descriptors = self.convDb(cDa) - descriptors = torch.nn.functional.normalize(descriptors, p=2, dim=1) - - # Extract descriptors - descriptors = [sample_descriptors(k[None], d[None], 8)[0] - for k, d in zip(keypoints, descriptors)] - - return { - 'keypoints': keypoints, - 'scores': scores, - 'descriptors': descriptors, - } diff --git a/spaces/JUNGU/latex-ocr-wthGPT/app.py b/spaces/JUNGU/latex-ocr-wthGPT/app.py deleted file mode 
100644 index 1a6443ca8e514d07fb4b1be59b90e691e6818fb5..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/latex-ocr-wthGPT/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import gradio as gr -from transformers import TrOCRProcessor, VisionEncoderDecoderModel -import requests -from PIL import Image - -url = 'https://huggingface.co/yhshin/latex-ocr/raw/main/tokenizer-wordlevel.json' -r = requests.get(url) -open('tokenizer-wordlevel.json' , 'wb').write(r.content) - - -processor = TrOCRProcessor.from_pretrained("microsoft/trocr-small-printed") -model = VisionEncoderDecoderModel.from_pretrained("yhshin/latex-ocr") - -from tokenizers import Tokenizer -tokenizer = Tokenizer.from_file("tokenizer-wordlevel.json") - -# load image examples - -def process_image(image): - # prepare image - pixel_values = processor(image, return_tensors="pt").pixel_values - - # generate (no beam search) - generated_ids = model.generate(pixel_values) - - # decode - generated_text = tokenizer.decode_batch(generated_ids.tolist(), skip_special_tokens=True)[0] - - # Strip spaces - generated_text = generated_text.replace(" ", "") - - return generated_text - -# !ls examples | grep png - -# + -title = "Convert image to LaTeX source code" - -with open('article.md',mode='r') as file: - article = file.read() - -description = """ -This is a demo of machine learning model trained to reconstruct the LaTeX source code of an equation from an image. -To use it, simply upload an image or use one of the example images below and click 'submit'. -Results will show up in a few seconds. - -Try rendering the generated LaTeX [here](https://quicklatex.com/) to compare with the original. -(The model is not perfect yet, so you may need to edit the resulting LaTeX a bit to get it to render a good match.) - -""" - -examples = [ - [ "examples/1d32874f02.png" ], - [ "examples/1e466b180d.png" ], - [ "examples/2d3503f427.png" ], - [ "examples/2f9d3c4e43.png" ], - [ "examples/51c5cc2ff5.png" ], - [ "examples/545a492388.png" ], - [ "examples/6a51a30502.png" ], - [ "examples/6bf6832adb.png" ], - [ "examples/7afdeff0e6.png" ], - [ "examples/b8f1e64b1f.png" ], -] -# - - -iface = gr.Interface(fn=process_image, - inputs=[gr.inputs.Image(type="pil")], - outputs=gr.outputs.Textbox(), - title=title, - description=description, - article=article, - examples=examples) -iface.launch() - diff --git a/spaces/JUNGU/pixera_gen/app.py b/spaces/JUNGU/pixera_gen/app.py deleted file mode 100644 index e5c8b62db9368dc72573086a286ecc553e454f7a..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/pixera_gen/app.py +++ /dev/null @@ -1,61 +0,0 @@ -import os -import cv2 -import torch -import warnings -import numpy as np -import gradio as gr -import paddlehub as hub -from PIL import Image -from methods.img2pixl import pixL -from examples.pixelArt.combine import combine -from methods.media import Media - -warnings.filterwarnings("ignore") - -U2Net = hub.Module(name='U2Net') -device = "cuda" if torch.cuda.is_available() else "cpu" -face2paint = torch.hub.load("bryandlee/animegan2-pytorch:main", "face2paint", device=device, size=512) -model = torch.hub.load("bryandlee/animegan2-pytorch", "generator", device=device).eval() - - -def initilize(media,pixel_size,checkbox1): - #Author: Alican Akca - if media.name.endswith('.gif'): - return Media().split(media.name,pixel_size, 'gif') - elif media.name.endswith('.mp4'): - return None #Media().split(media.name,pixel_size, "video") - else: - media = Image.open(media.name).convert("RGB") - media = cv2.cvtColor(np.asarray(face2paint(model, 
media)), cv2.COLOR_BGR2RGB) - if checkbox1: - result = U2Net.Segmentation(images=[media], - paths=None, - batch_size=1, - input_size=320, - output_dir='output', - visualization=True) - result = combine().combiner(images = pixL().toThePixL([result[0]['front'][:,:,::-1], result[0]['mask']], - pixel_size), - background_image = media) - else: - result = pixL().toThePixL([media], pixel_size) - result = Image.fromarray(result) - result.save('cache.png') - return [None, result, 'cache.png'] - -inputs = [gr.File(label="Media"), - gr.Slider(4, 100, value=12, step = 2, label="Pixel Size"), - gr.Checkbox(label="Object-Oriented Inference", value=False)] - -outputs = [gr.Video(label="Pixed Media"), - gr.Image(label="Pixed Media"), - gr.File(label="Download")] - -title = "픽세라: 사진,그림을 픽셀아트로 만들어보세요" -description = """현재는 사진,그림만 가능합니다. 동영상은 추후 지원""" - -gr.Interface(fn = initilize, - inputs = inputs, - outputs = outputs, - title=title, - description=description).launch() \ No newline at end of file diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/experimental/README.md b/spaces/Jackflack09/diffuse-custom/diffusers/experimental/README.md deleted file mode 100644 index 81a9de81c73728ea41eb6e8617a5429c3c9645ff..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/experimental/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# 🧨 Diffusers Experimental - -We are adding experimental code to support novel applications and usages of the Diffusers library. -Currently, the following experiments are supported: -* Reinforcement learning via an implementation of the [Diffuser](https://arxiv.org/abs/2205.09991) model. \ No newline at end of file diff --git a/spaces/JoPmt/Short_Bedtime_Stories/index.html b/spaces/JoPmt/Short_Bedtime_Stories/index.html deleted file mode 100644 index 9d9f5d991c45457204cdfd70d9162203489ccbed..0000000000000000000000000000000000000000 --- a/spaces/JoPmt/Short_Bedtime_Stories/index.html +++ /dev/null @@ -1,183 +0,0 @@ - - - - HuggingFace.API.Model.Chain.Childrens.Bedtime.Stories.Generator - - - - - - -

-    HuggingFace Children's Bedtime Story Generator?
-    StabilityAI OpenJourney Runwayml Stable-Diffusion AI Models API Chaining text-to-image,text-to-text Demo
    - - - \ No newline at end of file diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/llama_func.py b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/llama_func.py deleted file mode 100644 index e1c513af1bf6d1569b071eb5fc0ce441d0692f83..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/llama_func.py +++ /dev/null @@ -1,166 +0,0 @@ -import os -import logging - -from llama_index import download_loader -from llama_index import ( - Document, - LLMPredictor, - PromptHelper, - QuestionAnswerPrompt, - RefinePrompt, -) -import colorama -import PyPDF2 -from tqdm import tqdm - -from modules.presets import * -from modules.utils import * -from modules.config import local_embedding - - -def get_index_name(file_src): - file_paths = [x.name for x in file_src] - file_paths.sort(key=lambda x: os.path.basename(x)) - - md5_hash = hashlib.md5() - for file_path in file_paths: - with open(file_path, "rb") as f: - while chunk := f.read(8192): - md5_hash.update(chunk) - - return md5_hash.hexdigest() - - -def block_split(text): - blocks = [] - while len(text) > 0: - blocks.append(Document(text[:1000])) - text = text[1000:] - return blocks - - -def get_documents(file_src): - documents = [] - logging.debug("Loading documents...") - logging.debug(f"file_src: {file_src}") - for file in file_src: - filepath = file.name - filename = os.path.basename(filepath) - file_type = os.path.splitext(filepath)[1] - logging.info(f"loading file: {filename}") - try: - if file_type == ".pdf": - logging.debug("Loading PDF...") - try: - from modules.pdf_func import parse_pdf - from modules.config import advance_docs - - two_column = advance_docs["pdf"].get("two_column", False) - pdftext = parse_pdf(filepath, two_column).text - except: - pdftext = "" - with open(filepath, "rb") as pdfFileObj: - pdfReader = PyPDF2.PdfReader(pdfFileObj) - for page in tqdm(pdfReader.pages): - pdftext += page.extract_text() - text_raw = pdftext - elif file_type == ".docx": - logging.debug("Loading Word...") - DocxReader = download_loader("DocxReader") - loader = DocxReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".epub": - logging.debug("Loading EPUB...") - EpubReader = download_loader("EpubReader") - loader = EpubReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".xlsx": - logging.debug("Loading Excel...") - text_list = excel_to_string(filepath) - for elem in text_list: - documents.append(Document(elem)) - continue - else: - logging.debug("Loading text file...") - with open(filepath, "r", encoding="utf-8") as f: - text_raw = f.read() - except Exception as e: - logging.error(f"Error loading file: {filename}") - pass - text = add_space(text_raw) - # text = block_split(text) - # documents += text - documents += [Document(text)] - logging.debug("Documents loaded.") - return documents - - -def construct_index( - api_key, - file_src, - max_input_size=4096, - num_outputs=5, - max_chunk_overlap=20, - chunk_size_limit=600, - embedding_limit=None, - separator=" ", -): - from langchain.chat_models import ChatOpenAI - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from llama_index import GPTSimpleVectorIndex, ServiceContext, LangchainEmbedding, OpenAIEmbedding - - if api_key: - os.environ["OPENAI_API_KEY"] = api_key - else: - # 由于一个依赖的愚蠢的设计,这里必须要有一个API KEY - os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx" - chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit - embedding_limit = None if embedding_limit == 0 
else embedding_limit - separator = " " if separator == "" else separator - - prompt_helper = PromptHelper( - max_input_size=max_input_size, - num_output=num_outputs, - max_chunk_overlap=max_chunk_overlap, - embedding_limit=embedding_limit, - chunk_size_limit=600, - separator=separator, - ) - index_name = get_index_name(file_src) - if os.path.exists(f"./index/{index_name}.json"): - logging.info("找到了缓存的索引文件,加载中……") - return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json") - else: - try: - documents = get_documents(file_src) - if local_embedding: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - logging.info("构建索引中……") - with retrieve_proxy(): - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, - chunk_size_limit=chunk_size_limit, - embed_model=embed_model, - ) - index = GPTSimpleVectorIndex.from_documents( - documents, service_context=service_context - ) - logging.debug("索引构建完成!") - os.makedirs("./index", exist_ok=True) - index.save_to_disk(f"./index/{index_name}.json") - logging.debug("索引已保存至本地!") - return index - - except Exception as e: - logging.error("索引构建失败!", e) - print(e) - return None - - -def add_space(text): - punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "} - for cn_punc, en_punc in punctuations.items(): - text = text.replace(cn_punc, en_punc) - return text diff --git a/spaces/JustinLin610/ImageBind_zeroshot_demo/model_card.md b/spaces/JustinLin610/ImageBind_zeroshot_demo/model_card.md deleted file mode 100644 index c7bb26500b6590b64ffa6350f37be80dc88612d8..0000000000000000000000000000000000000000 --- a/spaces/JustinLin610/ImageBind_zeroshot_demo/model_card.md +++ /dev/null @@ -1,94 +0,0 @@ -# Model Card for ImageBind - -Multimodal joint embedding model for image/video, text, audio, depth, IMU, and thermal images. -Input any of the six modalities and get the same sized embedding that can be used for cross-modal and multimodal tasks. - -# Model Details - -## Model Description - - -Multimodal joint embedding model for image/video, text, audio, depth, IMU, and thermal images - -- **Developed by:** Meta AI -- **Model type:** Multimodal model -- **Language(s) (NLP):** en -- **License:** CC BY-NC-SA 4.0 -- **Resources for more information:** - - [GitHub Repo](https://github.com/facebookresearch/ImageBind) - - -# Uses - - -This model is intended only for research purposes. It provides a joint embedding space for different modalities -- image/video, text, audio, depth, IMU and thermal images. -We hope that these joint embeddings can be used for a variety of different cross-modal research, e.g., cross-modal retrieval and combining embeddings from different modalities. - -## Out-of-Scope Use - - - - -This model is *NOT* intended to be used in any real world application -- commercial or otherwise. -It may produce harmful associations with different inputs. -The model needs to be investigated and likely re-trained on specific data for any such application. -The model is expected to work better on web-based visual data since it was trained on such data. -The text encoder is likely to work only on English language text because of the underlying training datasets. - -# Bias, Risks, and Limitations - - -Open-domain joint embedding models are prone to producing specific biases, e.g., study from [CLIP](https://github.com/openai/CLIP/blob/main/model-card.md#bias-and-fairness). 
-Since our model uses such models as initialization, it will exhibit such biases too. -Moreover, for learning joint embeddings for other modalities such as audio, thermal, depth, and IMU we leverage datasets that are relatively small. These joint embeddings are thus limited to the concepts present in the datasets. For example, the thermal datasets we used are limited to outdoor street scenes, while the depth datasets are limited to indoor scenes. - - - -# Training Details - -## Training Data - - - -ImageBind uses image-paired data for training -- (image, X) where X is one of text, audio, depth, IMU or thermal data. -In particular, we initialize and freeze the image and text encoders using an OpenCLIP ViT-H encoder. -We train audio embeddings using Audioset, depth embeddings using the SUN RGB-D dataset, IMU using the Ego4D dataset and thermal embeddings using the LLVIP dataset. -We provide the exact training data details in the paper. - - -## Training Procedure - - -Please refer to the research paper and github repo for exact details on this. - -# Evaluation - -## Testing Data, Factors & Metrics - -We evaluate the model on a variety of different classification benchmarks for each modality. -The evaluation details are presented in the paper. -The models performance is measured using standard classification metrics such as accuracy and mAP. - -# Citation - - - -**BibTeX:** -``` -@inproceedings{girdhar2023imagebind, - title={ImageBind: One Embedding Space To Bind Them All}, - author={Girdhar, Rohit and El-Nouby, Alaaeldin and Liu, Zhuang -and Singh, Mannat and Alwala, Kalyan Vasudev and Joulin, Armand and Misra, Ishan}, - booktitle={CVPR}, - year={2023} -} -``` - - -# Model Card Contact - -Please reach out to the authors at: rgirdhar@meta.com imisra@meta.com alaaelnouby@gmail.com - -# How to Get Started with the Model - -Our github repo provides a simple example to extract embeddings from images, audio etc. 
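The card's quick-start section only points back to the repository, so for concreteness here is a minimal sketch of the embedding-extraction pattern it refers to. This is an illustration, not the Space's own code: the package layout (`imagebind.models`, `imagebind.data`), the `imagebind_huge(pretrained=True)` loader, and the input file names are assumptions taken from the public ImageBind README.

```python
# Minimal sketch: embed text, image, and audio into the shared space and
# compare them across modalities (zero-shot style retrieval).
import torch

from imagebind import data                      # assumed module layout
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the pretrained ImageBind (huge) checkpoint.
model = imagebind_model.imagebind_huge(pretrained=True)
model.eval()
model.to(device)

# One entry per modality; the file names here are placeholders.
inputs = {
    ModalityType.TEXT: data.load_and_transform_text(["a dog", "a car"], device),
    ModalityType.VISION: data.load_and_transform_vision_data(["dog.jpg", "car.jpg"], device),
    ModalityType.AUDIO: data.load_and_transform_audio_data(["dog.wav", "car.wav"], device),
}

with torch.no_grad():
    embeddings = model(inputs)  # dict: ModalityType -> (batch, dim) tensor

# Because all embeddings share one space and size, cross-modal similarity is a
# plain dot product; softmax over it gives vision-to-text retrieval scores.
scores = torch.softmax(
    embeddings[ModalityType.VISION] @ embeddings[ModalityType.TEXT].T, dim=-1
)
print(scores)
```

The same pattern, with an image or audio clip as the query against a list of candidate text labels, is what a zero-shot classification demo builds on.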
diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/ddim.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/ddim.py deleted file mode 100644 index fb31215db5c3f3f703f15987d7eee6a179c9f7ec..0000000000000000000000000000000000000000 --- a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/ddim.py +++ /dev/null @@ -1,241 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm -from functools import partial - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, \ - extract_into_tensor - - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
- **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None,): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. 
- mask) * img - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - e_t = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - c_in = torch.cat([unconditional_conditioning, c]) - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. 
- a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - return x_dec \ No newline at end of file diff --git a/spaces/Korakoe/convert-sd-ckpt-cpu/app.py b/spaces/Korakoe/convert-sd-ckpt-cpu/app.py deleted file mode 100644 index 246db2b74de4c1a16b02c81398fb189a9954a08e..0000000000000000000000000000000000000000 --- a/spaces/Korakoe/convert-sd-ckpt-cpu/app.py +++ /dev/null @@ -1,279 +0,0 @@ -import io -import os -import shutil -import zipfile - -import gradio as gr -import requests -from huggingface_hub import create_repo, upload_folder, whoami - -from convert import convert_full_checkpoint - -MODELS_DIR = "models/" -CKPT_FILE = MODELS_DIR + "model.ckpt" -HF_MODEL_DIR = MODELS_DIR + "diffusers_model" -ZIP_FILE = MODELS_DIR + "model.zip" - - -def download_ckpt(url, out_path): - with open(out_path, "wb") as out_file: - with requests.get(url, stream=True) as r: - r.raise_for_status() - for chunk in r.iter_content(chunk_size=8192): - out_file.write(chunk) - - -def zip_model(model_path, zip_path): - with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_STORED) as zip_file: - for root, dirs, files in os.walk(model_path): - for file in files: - zip_file.write( - os.path.join(root, file), - os.path.relpath( - os.path.join(root, file), os.path.join(model_path, "..") - ), - ) - - -def download_checkpoint_and_config(ckpt_url, config_url): - ckpt_url = ckpt_url.strip() - config_url = config_url.strip() - - if not ckpt_url.startswith("http://") and not ckpt_url.startswith("https://"): - raise ValueError("Invalid checkpoint URL") - - if config_url.startswith("http://") or config_url.startswith("https://"): - response = requests.get(config_url) - response.raise_for_status() - config_file = io.BytesIO(response.content) - elif config_url != "": - raise 
ValueError("Invalid config URL") - else: - config_file = open("original_config.yaml", "r") - - download_ckpt(ckpt_url, CKPT_FILE) - - return CKPT_FILE, config_file - - -def convert_and_download(ckpt_url, config_url, scheduler_type, extract_ema): - shutil.rmtree(MODELS_DIR, ignore_errors=True) - os.makedirs(HF_MODEL_DIR) - - ckpt_path, config_file = download_checkpoint_and_config(ckpt_url, config_url) - - convert_full_checkpoint( - ckpt_path, - config_file, - scheduler_type=scheduler_type, - extract_ema=(extract_ema == "EMA"), - output_path=HF_MODEL_DIR, - ) - zip_model(HF_MODEL_DIR, ZIP_FILE) - - return ZIP_FILE - - -def convert_and_upload( - ckpt_url, config_url, scheduler_type, extract_ema, token, model_name -): - shutil.rmtree(MODELS_DIR, ignore_errors=True) - os.makedirs(HF_MODEL_DIR) - - try: - ckpt_path, config_file = download_checkpoint_and_config(ckpt_url, config_url) - - username = whoami(token)["name"] - repo_name = f"{username}/{model_name}" - repo_url = create_repo(repo_name, token=token, exist_ok=True) - convert_full_checkpoint( - ckpt_path, - config_file, - scheduler_type=scheduler_type, - extract_ema=(extract_ema == "EMA"), - output_path=HF_MODEL_DIR, - ) - upload_folder(repo_id=repo_name, folder_path=HF_MODEL_DIR, token=token, commit_message=f"Upload diffusers weights") - except Exception as e: - return f"#### Error: {e}" - return f"#### Success! Model uploaded to [{repo_url}]({repo_url})" - - -TTILE_IMAGE = """ -
- -
-"""
-
-TITLE = """
- Convert Stable Diffusion `.ckpt` files to Hugging Face Diffusers 🔥
    -""" - -with gr.Blocks() as interface: - gr.HTML(TTILE_IMAGE) - gr.HTML(TITLE) - gr.Markdown("We will perform all of the checkpoint surgery for you, and create a clean diffusers model!") - gr.Markdown("This converter will also remove any pickled code from third-party checkpoints.") - - with gr.Row(): - with gr.Column(scale=50): - gr.Markdown("### 1. Paste a URL to your .ckpt file") - ckpt_url = gr.Textbox( - max_lines=1, - label="URL to .ckpt", - placeholder="https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt", - ) - - with gr.Column(scale=50): - gr.Markdown("### (Optional) paste a URL to your .yaml file") - config_url = gr.Textbox( - max_lines=1, - label="URL to .yaml", - placeholder="https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-inference.yaml", - ) - gr.Markdown( - "**If you don't provide a config file, we'll try to use" - " [v1-inference.yaml](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-inference.yaml).*" - ) - with gr.Accordion("Advanced Settings"): - scheduler_type = gr.Dropdown( - label="Choose a scheduler type (if not sure, keep the PNDM default)", - choices=["PNDM", "K-LMS", "Euler", "EulerAncestral", "DDIM"], - value="PNDM", - ) - extract_ema = gr.Radio( - label=( - "EMA weights usually yield higher quality images for inference." - " Non-EMA weights are usually better to continue fine-tuning." - ), - choices=["EMA", "Non-EMA"], - value="EMA", - interactive=True, - ) - - gr.Markdown("### 2. Choose what to do with the converted model") - model_choice = gr.Radio( - show_label=False, - choices=[ - "Download the model as an archive", - "Host the model on the Hugging Face Hub", - # "Submit a PR with the model for an existing Hub repository", - ], - type="index", - value="Download the model as an archive", - interactive=True, - ) - - download_panel = gr.Column(visible=True) - upload_panel = gr.Column(visible=False) - # pr_panel = gr.Column(visible=False) - - model_choice.change( - fn=lambda i: gr.update(visible=(i == 0)), - inputs=model_choice, - outputs=download_panel, - ) - model_choice.change( - fn=lambda i: gr.update(visible=(i == 1)), - inputs=model_choice, - outputs=upload_panel, - ) - # model_choice.change( - # fn=lambda i: gr.update(visible=(i == 2)), - # inputs=model_choice, - # outputs=pr_panel, - # ) - - with download_panel: - gr.Markdown("### 3. Convert and download") - - down_btn = gr.Button("Convert") - output_file = gr.File( - label="Download the converted model", - type="binary", - interactive=False, - visible=True, - ) - - down_btn.click( - fn=convert_and_download, - inputs=[ckpt_url, config_url, scheduler_type, extract_ema], - outputs=output_file, - ) - - with upload_panel: - gr.Markdown("### 3. Convert and host on the Hub") - gr.Markdown( - "This will create a new repository if it doesn't exist yet, and upload the model to the Hugging Face Hub.\n\n" - "Paste a WRITE token from [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens)" - " and make up a model name." - ) - up_token = gr.Textbox( - max_lines=1, - label="Hugging Face token", - ) - up_model_name = gr.Textbox( - max_lines=1, - label="Hub model name (e.g. 
`artistic-diffusion-v1`)", - placeholder="my-awesome-model", - ) - - upload_btn = gr.Button("Convert and upload") - with gr.Box(): - output_text = gr.Markdown() - upload_btn.click( - fn=convert_and_upload, - inputs=[ - ckpt_url, - config_url, - scheduler_type, - extract_ema, - up_token, - up_model_name, - ], - outputs=output_text, - ) - - # with pr_panel: - # gr.Markdown("### 3. Convert and submit as a PR") - # gr.Markdown( - # "This will open a Pull Request on the original model repository, if it already exists on the Hub.\n\n" - # "Paste a write-access token from [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens)" - # " and paste an existing model id from the Hub in the `username/model-name` form." - # ) - # pr_token = gr.Textbox( - # max_lines=1, - # label="Hugging Face token", - # ) - # pr_model_name = gr.Textbox( - # max_lines=1, - # label="Hub model name (e.g. `diffuser/artistic-diffusion-v1`)", - # placeholder="diffuser/my-awesome-model", - # ) - # - # btn = gr.Button("Convert and open a PR") - # output = gr.Markdown(label="Output") - - -interface.queue(concurrency_count=1) -interface.launch() diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/grid_assigner.py b/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/grid_assigner.py deleted file mode 100644 index d8935d2df2937f90c71599e5b45ed9a3dff8cd7e..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/grid_assigner.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Optional, Tuple, Union - -import torch -from mmengine.structures import InstanceData - -from mmdet.registry import TASK_UTILS -from mmdet.utils import ConfigType -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@TASK_UTILS.register_module() -class GridAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, `0`, or a positive integer - indicating the ground truth index. - - - -1: don't care - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - pos_iou_thr (float): IoU threshold for positive bboxes. - neg_iou_thr (float or tuple[float, float]): IoU threshold for negative - bboxes. - min_pos_iou (float): Minimum iou for a bbox to be considered as a - positive bbox. Positive samples can have smaller IoU than - pos_iou_thr due to the 4th step (assign max IoU sample to each gt). - Defaults to 0. - gt_max_assign_all (bool): Whether to assign all bboxes with the same - highest overlap with some gt to that gt. - iou_calculator (:obj:`ConfigDict` or dict): Config of overlaps - Calculator. - """ - - def __init__( - self, - pos_iou_thr: float, - neg_iou_thr: Union[float, Tuple[float, float]], - min_pos_iou: float = .0, - gt_max_assign_all: bool = True, - iou_calculator: ConfigType = dict(type='BboxOverlaps2D') - ) -> None: - self.pos_iou_thr = pos_iou_thr - self.neg_iou_thr = neg_iou_thr - self.min_pos_iou = min_pos_iou - self.gt_max_assign_all = gt_max_assign_all - self.iou_calculator = TASK_UTILS.build(iou_calculator) - - def assign(self, - pred_instances: InstanceData, - gt_instances: InstanceData, - gt_instances_ignore: Optional[InstanceData] = None, - **kwargs) -> AssignResult: - """Assign gt to bboxes. 
The process is very much like the max iou - assigner, except that positive samples are constrained within the cell - that the gt boxes fell in. - - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, 0, or a positive number. -1 means don't care, - 0 means negative sample, positive number is the index (1-based) of - assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every bbox to -1 - 2. assign proposals whose iou with all gts <= neg_iou_thr to 0 - 3. for each bbox within a cell, if the iou with its nearest gt > - pos_iou_thr and the center of that gt falls inside the cell, - assign it to that bbox - 4. for each gt bbox, assign its nearest proposals within the cell the - gt bbox falls in to itself. - - Args: - pred_instances (:obj:`InstanceData`): Instances of model - predictions. It includes ``priors``, and the priors can - be anchors or points, or the bboxes predicted by the - previous stage, has shape (n, 4). The bboxes predicted by - the current model or stage will be named ``bboxes``, - ``labels``, and ``scores``, the same as the ``InstanceData`` - in other places. - gt_instances (:obj:`InstanceData`): Ground truth of instance - annotations. It usually includes ``bboxes``, with shape (k, 4), - and ``labels``, with shape (k, ). - gt_instances_ignore (:obj:`InstanceData`, optional): Instances - to be ignored during training. It includes ``bboxes`` - attribute data that is ignored during training and testing. - Defaults to None. - - Returns: - :obj:`AssignResult`: The assign result. - """ - gt_bboxes = gt_instances.bboxes - gt_labels = gt_instances.labels - - priors = pred_instances.priors - responsible_flags = pred_instances.responsible_flags - - num_gts, num_priors = gt_bboxes.size(0), priors.size(0) - - # compute iou between all gt and priors - overlaps = self.iou_calculator(gt_bboxes, priors) - - # 1. assign -1 by default - assigned_gt_inds = overlaps.new_full((num_priors, ), - -1, - dtype=torch.long) - - if num_gts == 0 or num_priors == 0: - # No ground truth or priors, return empty assignment - max_overlaps = overlaps.new_zeros((num_priors, )) - if num_gts == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - assigned_labels = overlaps.new_full((num_priors, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - # 2. assign negative: below - # for each anchor, which gt best overlaps with it - # for each anchor, the max iou of all gts - # shape of max_overlaps == argmax_overlaps == num_priors - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - - if isinstance(self.neg_iou_thr, float): - assigned_gt_inds[(max_overlaps >= 0) - & (max_overlaps <= self.neg_iou_thr)] = 0 - elif isinstance(self.neg_iou_thr, (tuple, list)): - assert len(self.neg_iou_thr) == 2 - assigned_gt_inds[(max_overlaps > self.neg_iou_thr[0]) - & (max_overlaps <= self.neg_iou_thr[1])] = 0 - - # 3. assign positive: falls into responsible cell and above - # positive IOU threshold, the order matters. - # the prior condition of comparison is to filter out all - # unrelated anchors, i.e. not responsible_flags - overlaps[:, ~responsible_flags.type(torch.bool)] = -1. 
- - # calculate max_overlaps again, but this time we only consider IOUs - # for anchors responsible for prediction - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - - # for each gt, which anchor best overlaps with it - # for each gt, the max iou of all proposals - # shape of gt_max_overlaps == gt_argmax_overlaps == num_gts - gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1) - - pos_inds = (max_overlaps > self.pos_iou_thr) & responsible_flags.type( - torch.bool) - assigned_gt_inds[pos_inds] = argmax_overlaps[pos_inds] + 1 - - # 4. assign positive to max overlapped anchors within responsible cell - for i in range(num_gts): - if gt_max_overlaps[i] > self.min_pos_iou: - if self.gt_max_assign_all: - max_iou_inds = (overlaps[i, :] == gt_max_overlaps[i]) & \ - responsible_flags.type(torch.bool) - assigned_gt_inds[max_iou_inds] = i + 1 - elif responsible_flags[gt_argmax_overlaps[i]]: - assigned_gt_inds[gt_argmax_overlaps[i]] = i + 1 - - # assign labels of positive anchors - assigned_labels = assigned_gt_inds.new_full((num_priors, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[assigned_gt_inds[pos_inds] - - 1] - - return AssignResult( - num_gts, assigned_gt_inds, max_overlaps, labels=assigned_labels) diff --git a/spaces/Lamai/LAMAIGPT/tests/browse_tests.py b/spaces/Lamai/LAMAIGPT/tests/browse_tests.py deleted file mode 100644 index f896e7dd751b1b661d5e989909448b7e182eab69..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/tests/browse_tests.py +++ /dev/null @@ -1,26 +0,0 @@ -import os -import sys -import unittest - -from bs4 import BeautifulSoup - -sys.path.append(os.path.abspath("../scripts")) - -from browse import extract_hyperlinks - - -class TestBrowseLinks(unittest.TestCase): - def test_extract_hyperlinks(self): - body = """ - - Google - Foo -
    Some other crap
    - - """ - soup = BeautifulSoup(body, "html.parser") - links = extract_hyperlinks(soup, "http://example.com") - self.assertEqual( - links, - [("Google", "https://google.com"), ("Foo", "http://example.com/foo.html")], - ) diff --git a/spaces/Lbin123/Lbingo/src/components/chat-list.tsx b/spaces/Lbin123/Lbingo/src/components/chat-list.tsx deleted file mode 100644 index 624a78ef0d7be0f1192cf02a81e2e9cf214cb193..0000000000000000000000000000000000000000 --- a/spaces/Lbin123/Lbingo/src/components/chat-list.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import React from 'react' - -import { Separator } from '@/components/ui/separator' -import { ChatMessage } from '@/components/chat-message' -import { ChatMessageModel } from '@/lib/bots/bing/types' - -export interface ChatList { - messages: ChatMessageModel[] -} - -export function ChatList({ messages }: ChatList) { - if (!messages.length) { - return null - } - - return ( -
    - {messages.map((message, index) => ( - - - {index < messages.length - 1 && ( - - )} - - ))} -
    - ) -} diff --git "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" "b/spaces/Liu-LAB/GPT-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" deleted file mode 100644 index ffbb05599ef09c9de25334ebeca2eef8022b9aaf..0000000000000000000000000000000000000000 --- "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" +++ /dev/null @@ -1,160 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive - -fast_debug = False - -def readPdf(pdfPath): - """ - 读取pdf文件,返回文本内容 - """ - import pdfminer - from pdfminer.pdfparser import PDFParser - from pdfminer.pdfdocument import PDFDocument - from pdfminer.pdfpage import PDFPage, PDFTextExtractionNotAllowed - from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter - from pdfminer.pdfdevice import PDFDevice - from pdfminer.layout import LAParams - from pdfminer.converter import PDFPageAggregator - - fp = open(pdfPath, 'rb') - - # Create a PDF parser object associated with the file object - parser = PDFParser(fp) - - # Create a PDF document object that stores the document structure. - # Password for initialization as 2nd parameter - document = PDFDocument(parser) - # Check if the document allows text extraction. If not, abort. - if not document.is_extractable: - raise PDFTextExtractionNotAllowed - - # Create a PDF resource manager object that stores shared resources. - rsrcmgr = PDFResourceManager() - - # Create a PDF device object. - # device = PDFDevice(rsrcmgr) - - # BEGIN LAYOUT ANALYSIS. - # Set parameters for analysis. - laparams = LAParams( - char_margin=10.0, - line_margin=0.2, - boxes_flow=0.2, - all_texts=False, - ) - # Create a PDF page aggregator object. - device = PDFPageAggregator(rsrcmgr, laparams=laparams) - # Create a PDF interpreter object. 
- interpreter = PDFPageInterpreter(rsrcmgr, device) - - # loop over all pages in the document - outTextList = [] - for page in PDFPage.create_pages(document): - # read the page into a layout object - interpreter.process_page(page) - layout = device.get_result() - for obj in layout._objs: - if isinstance(obj, pdfminer.layout.LTTextBoxHorizontal): - # print(obj.get_text()) - outTextList.append(obj.get_text()) - - return outTextList - - -def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, glob, os - from bs4 import BeautifulSoup - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - if ".tex" in fp: - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - if ".pdf" in fp.lower(): - file_content = readPdf(fp) - file_content = BeautifulSoup(''.join(file_content), features="lxml").body.text.encode('gbk', 'ignore').decode('gbk') - - prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else "" - i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=history, - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - - -@CatchException -def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结PDF文档,此版本使用pdfminer插件,带token约简功能。函数插件贡献者: Euclid-Jie。"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import pdfminer, bs4 - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - 
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或pdf文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - diff --git a/spaces/Lookimi/TuberTranscript/app.py b/spaces/Lookimi/TuberTranscript/app.py deleted file mode 100644 index 0b05ba2736842fd723ca02eda326289cb6a663e9..0000000000000000000000000000000000000000 --- a/spaces/Lookimi/TuberTranscript/app.py +++ /dev/null @@ -1,61 +0,0 @@ - - - -#importing the necessary modules -import os -import urllib.request -import re -import time -import gradio as gr - -#Creating a Gradio App Menu -#def transcript_extract(): - - #specifying the YouTube channel URL -channel_url = gr.inputs.Textbox(label="Channel URL") - -#accessing the webpage -page = urllib.request.urlopen(channel_url) - -#reading the source code -data = page.read().decode("utf-8") - -#creating a directory to save the transcripts -# os.makedirs('Transcripts',exist_ok=True) - -#finding the transcripts -transcript_links = re.findall(r'(\/watch\?v=[A-Za-z0-9_.-]*)', str(data)) - -#looping through each transcript to download -for link in transcript_links: - video_url = 'http://www.youtube.com'+link - #access the video page - video_page = urllib.request.urlopen(video_url) - #read the source code - video_data = video_page.read().decode("utf-8") - #find the transcript - transcript_link = re.findall(r'(\/timedtext_editor\?[A-Za-z0-9_.-]*)', str(video_data)) - #check if there is a transcript available - if(len(transcript_link) > 0): - #access the transcript page - transcript_url ='http://www.youtube.com'+ transcript_link[0] - transcript_page = urllib.request.urlopen(transcript_url) - transcript_data = transcript_page.read().decode("utf-8") - #find the link to the transcript - transcript_download_link = re.findall(r'(\/api\/timedtext\?[A-Za-z0-9_.-]*)', str(transcript_data)) - #check if the transcript is available for download - if(len(transcript_download_link) > 0): - #download the transcript - # file_name = "Transcripts/" + link[9:] + ".xml" - file_name = link[9:] + ".xml" - download_url = 'http://www.youtube.com'+transcript_download_link[0] - urllib.request.urlretrieve(download_url, file_name) - print("Downloading transcript for video " + link[9:] + "...") - time.sleep(3) - else: - print("Transcript not available for video " + link[9:]) - else: - print("Transcript not available for video " + link[9:]) - -#launch the gradio -gr.Interface(fn=transcript_extract, inputs="textbox", outputs="textbox", share=True).launch() \ No newline at end of file diff --git a/spaces/Luelll/ChuanhuChatGPT/modules/models/base_model.py b/spaces/Luelll/ChuanhuChatGPT/modules/models/base_model.py deleted file mode 100644 index 995bac5f72a0a1d8cc2eed8ccdfde87928ba2f41..0000000000000000000000000000000000000000 --- a/spaces/Luelll/ChuanhuChatGPT/modules/models/base_model.py +++ /dev/null @@ -1,593 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os 
-import sys -import requests -import urllib3 -import traceback -import pathlib - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum - -from ..presets import * -from ..llama_func import * -from ..utils import * -from .. import shared -from ..config import retrieve_proxy - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMChat = 3 - StableLM = 4 - MOSS = 5 - YuanAI = 6 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmchat" in model_name_lower: - model_type = ModelType.XMChat - elif "stablelm" in model_name_lower: - model_type = ModelType.StableLM - elif "moss" in model_name_lower: - model_type = ModelType.MOSS - elif "yuanai" in model_name_lower: - model_type = ModelType.YuanAI - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - # logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = 
i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - construct_index(self.api_key, file_src=files) - status = "索引构建完成" - return gr.Files.update(), chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from llama_index.indices.vector_store.base_query import GPTVectorStoreIndexQuery - from llama_index.indices.query.schema import QueryBundle - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.chat_models import ChatOpenAI - from llama_index import ( - GPTSimpleVectorIndex, - ServiceContext, - LangchainEmbedding, - OpenAIEmbedding, - ) - limited_context = True - msg = "加载索引中……" - logging.info(msg) - # yield chatbot + [(inputs, "")], msg - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - if local_embedding or self.model_type != ModelType.OpenAI: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - # yield chatbot + [(inputs, "")], msg - with retrieve_proxy(): - prompt_helper = PromptHelper( - max_input_size=4096, - num_output=5, - max_chunk_overlap=20, - chunk_size_limit=600, - ) - from llama_index import ServiceContext - - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, embed_model=embed_model - ) - query_object = GPTVectorStoreIndexQuery( - index.index_struct, - service_context=service_context, - similarity_top_k=5, - vector_store=index._vector_store, - docstore=index._docstore, - response_synthesizer=None - ) - query_bundle = QueryBundle(real_inputs) - nodes = query_object.retrieve(query_bundle) - reference_results = [n.node.text for n in nodes] - reference_results = 
add_source_numbers(reference_results, use_source=False) - display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - limited_context = True - search_results = ddg(real_inputs, max_results=5) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - reference_results.append([result["body"], result["href"]]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"
<a href=\"{result['href']}\" target=\"_blank\">{idx+1}.&nbsp;{domain_name}</a>\n" - ) - reference_results = add_source_numbers(reference_results) - display_append = "
      \n\n" + "".join(display_append) + "
    " - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - self.auto_save(chatbot) - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield 
chatbot, f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return self.api_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - pathlib.Path(os.path.join(HISTORY_DIR, self.user_identifier, new_auto_history_filename(os.path.join(HISTORY_DIR, self.user_identifier)))).touch() - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - 
self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def auto_save(self, chatbot): - history_file_path = get_history_filepath(self.user_identifier) - save_file(history_file_path, self.system_prompt, self.history, chatbot, self.user_identifier) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - logging.info(f"filename: {filename}") - if type(filename) != str and filename is not None: - filename = filename.name - try: - if "/" not in filename: - history_file_path = os.path.join(HISTORY_DIR, user_name, filename) - else: - history_file_path = filename - with open(history_file_path, "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return os.path.basename(filename), json_s["system"], json_s["chatbot"] - except: - # 没有对话历史或者对话历史解析失败 - logging.info(f"没有找到对话历史记录 {filename}") - return gr.update(), self.system_prompt, gr.update() - - def auto_load(self): - if self.user_identifier == "": - self.reset() - return self.system_prompt, gr.update() - history_file_path = get_history_filepath(self.user_identifier) - filename, system_prompt, chatbot = self.load_chat_history(history_file_path, self.user_identifier) - return system_prompt, chatbot - - - def like(self): - """like the last response, implement if needed - """ - return gr.update() - - def dislike(self): - """dislike the last response, implement if needed - """ - return gr.update() diff --git a/spaces/Ma5onic/MVSEP-MDX23-music-separation-model/demucs3/spec.py b/spaces/Ma5onic/MVSEP-MDX23-music-separation-model/demucs3/spec.py deleted file mode 100644 index 3fa983523d7b404aed99529a94a087e921a70a86..0000000000000000000000000000000000000000 --- a/spaces/Ma5onic/MVSEP-MDX23-music-separation-model/demucs3/spec.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) Meta, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
-"""Conveniance wrapper to perform STFT and iSTFT""" - -import torch as th - - -def spectro(x, n_fft=512, hop_length=None, pad=0): - *other, length = x.shape - x = x.reshape(-1, length) - z = th.stft(x, - n_fft * (1 + pad), - hop_length or n_fft // 4, - window=th.hann_window(n_fft).to(x), - win_length=n_fft, - normalized=True, - center=True, - return_complex=True, - pad_mode='reflect') - _, freqs, frame = z.shape - return z.view(*other, freqs, frame) - - -def ispectro(z, hop_length=None, length=None, pad=0): - *other, freqs, frames = z.shape - n_fft = 2 * freqs - 2 - z = z.view(-1, freqs, frames) - win_length = n_fft // (1 + pad) - x = th.istft(z, - n_fft, - hop_length, - window=th.hann_window(win_length).to(z.real), - win_length=win_length, - normalized=True, - length=length, - center=True) - _, length = x.shape - return x.view(*other, length) diff --git a/spaces/Ma5onic/MVSEP-MDX23-music-separation-model/demucs3/states.py b/spaces/Ma5onic/MVSEP-MDX23-music-separation-model/demucs3/states.py deleted file mode 100644 index 71f229a886527291139d46e53d3cb7f947047060..0000000000000000000000000000000000000000 --- a/spaces/Ma5onic/MVSEP-MDX23-music-separation-model/demucs3/states.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) Meta, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -""" -Utilities to save and load models. -""" -from contextlib import contextmanager - -import functools -import hashlib -import inspect -import io -from pathlib import Path -import warnings - -from omegaconf import OmegaConf -from diffq import DiffQuantizer, UniformQuantizer, restore_quantized_state -import torch - - -def get_quantizer(model, args, optimizer=None): - """Return the quantizer given the XP quantization args.""" - quantizer = None - if args.diffq: - quantizer = DiffQuantizer( - model, min_size=args.min_size, group_size=args.group_size) - if optimizer is not None: - quantizer.setup_optimizer(optimizer) - elif args.qat: - quantizer = UniformQuantizer( - model, bits=args.qat, min_size=args.min_size) - return quantizer - - -def load_model(path_or_package, strict=False): - """Load a model from the given serialized model, either given as a dict (already loaded) - or a path to a file on disk.""" - if isinstance(path_or_package, dict): - package = path_or_package - elif isinstance(path_or_package, (str, Path)): - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - path = path_or_package - package = torch.load(path, 'cpu') - else: - raise ValueError(f"Invalid type for {path_or_package}.") - - klass = package["klass"] - args = package["args"] - kwargs = package["kwargs"] - - if strict: - model = klass(*args, **kwargs) - else: - sig = inspect.signature(klass) - for key in list(kwargs): - if key not in sig.parameters: - warnings.warn("Dropping inexistant parameter " + key) - del kwargs[key] - model = klass(*args, **kwargs) - - state = package["state"] - - set_state(model, state) - return model - - -def get_state(model, quantizer, half=False): - """Get the state from a model, potentially with quantization applied. 
- If `half` is True, model are stored as half precision, which shouldn't impact performance - but half the state size.""" - if quantizer is None: - dtype = torch.half if half else None - state = {k: p.data.to(device='cpu', dtype=dtype) for k, p in model.state_dict().items()} - else: - state = quantizer.get_quantized_state() - state['__quantized'] = True - return state - - -def set_state(model, state, quantizer=None): - """Set the state on a given model.""" - if state.get('__quantized'): - if quantizer is not None: - quantizer.restore_quantized_state(model, state['quantized']) - else: - restore_quantized_state(model, state) - else: - model.load_state_dict(state) - return state - - -def save_with_checksum(content, path): - """Save the given value on disk, along with a sha256 hash. - Should be used with the output of either `serialize_model` or `get_state`.""" - buf = io.BytesIO() - torch.save(content, buf) - sig = hashlib.sha256(buf.getvalue()).hexdigest()[:8] - - path = path.parent / (path.stem + "-" + sig + path.suffix) - path.write_bytes(buf.getvalue()) - - -def serialize_model(model, training_args, quantizer=None, half=True): - args, kwargs = model._init_args_kwargs - klass = model.__class__ - - state = get_state(model, quantizer, half) - return { - 'klass': klass, - 'args': args, - 'kwargs': kwargs, - 'state': state, - 'training_args': OmegaConf.to_container(training_args, resolve=True), - } - - -def copy_state(state): - return {k: v.cpu().clone() for k, v in state.items()} - - -@contextmanager -def swap_state(model, state): - """ - Context manager that swaps the state of a model, e.g: - - # model is in old state - with swap_state(model, new_state): - # model in new state - # model back to old state - """ - old_state = copy_state(model.state_dict()) - model.load_state_dict(state, strict=False) - try: - yield - finally: - model.load_state_dict(old_state) - - -def capture_init(init): - @functools.wraps(init) - def __init__(self, *args, **kwargs): - self._init_args_kwargs = (args, kwargs) - init(self, *args, **kwargs) - - return __init__ diff --git a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/README.md b/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/README.md deleted file mode 100644 index 1d827ae05da6978cce32f992c737b31c8c4c62a6..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: LoveLive-ShojoKageki VITS -emoji: ⚡ -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/transformer_vanilla.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/transformer_vanilla.py deleted file mode 100644 index 10c0920c1a217af5bb3e1b13077568035ab3b7b5..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/transformer_vanilla.py +++ /dev/null @@ -1,123 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. 
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copyright (c) Aishwarya Kamath & Nicolas Carion. Licensed under the Apache License 2.0. All Rights Reserved -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -DETR Transformer class. - -Copy-paste from torch.nn.Transformer with modifications: - * positional encodings are passed in MHattention - * extra LN at the end of encoder is removed - * decoder returns a stack of activations from all decoding layers -""" -from typing import Optional - -import torch -import torch.nn.functional as F -from torch import Tensor, nn - -from .utils import ( - MLP, - _get_activation_fn, - _get_clones, - gen_encoder_output_proposals, - gen_sineembed_for_position, - sigmoid_focal_loss, -) - - -class TextTransformer(nn.Module): - def __init__(self, num_layers, d_model=256, nheads=8, dim_feedforward=2048, dropout=0.1): - super().__init__() - self.num_layers = num_layers - self.d_model = d_model - self.nheads = nheads - self.dim_feedforward = dim_feedforward - self.norm = None - - single_encoder_layer = TransformerEncoderLayer( - d_model=d_model, nhead=nheads, dim_feedforward=dim_feedforward, dropout=dropout - ) - self.layers = _get_clones(single_encoder_layer, num_layers) - - def forward(self, memory_text: torch.Tensor, text_attention_mask: torch.Tensor): - """ - - Args: - text_attention_mask: bs, num_token - memory_text: bs, num_token, d_model - - Raises: - RuntimeError: _description_ - - Returns: - output: bs, num_token, d_model - """ - - output = memory_text.transpose(0, 1) - - for layer in self.layers: - output = layer(output, src_key_padding_mask=text_attention_mask) - - if self.norm is not None: - output = self.norm(output) - - return output.transpose(0, 1) - - -class TransformerEncoderLayer(nn.Module): - def __init__( - self, - d_model, - nhead, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - self.nhead = nhead - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - # repeat attn mask - if src_mask.dim() == 3 and src_mask.shape[0] == src.shape[1]: - # bs, num_q, num_k - src_mask = src_mask.repeat(self.nhead, 1, 1) - - q = k = self.with_pos_embed(src, pos) - - src2 = self.self_attn(q, k, value=src, attn_mask=src_mask)[0] - - # src2 = self.self_attn(q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask)[0] - src = src + self.dropout1(src2) - src = self.norm1(src) - src2 = self.linear2(self.dropout(self.activation(self.linear1(src)))) - src = src + self.dropout2(src2) - src = self.norm2(src) - return src diff --git a/spaces/Manjushri/MusicGen/audiocraft/quantization/vq.py b/spaces/Manjushri/MusicGen/audiocraft/quantization/vq.py deleted file 
mode 100644 index f67c3a0cd30d4b8993a36c587f00dc8a451d926f..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/MusicGen/audiocraft/quantization/vq.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp - -import torch - -from .base import BaseQuantizer, QuantizedResult -from .core_vq import ResidualVectorQuantization - - -class ResidualVectorQuantizer(BaseQuantizer): - """Residual Vector Quantizer. - - Args: - dimension (int): Dimension of the codebooks. - n_q (int): Number of residual vector quantizers used. - q_dropout (bool): Random quantizer drop out at train time. - bins (int): Codebook size. - decay (float): Decay for exponential moving average over the codebooks. - kmeans_init (bool): Whether to use kmeans to initialize the codebooks. - kmeans_iters (int): Number of iterations used for kmeans initialization. - threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - a randomly selected vector from the current batch. - orthogonal_reg_weight (float): Orthogonal regularization weight. - orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes. - orthogonal_reg_max_codes (optional int): Maximum number of codes to consider - for orthogonal regularization. - """ - def __init__( - self, - dimension: int = 256, - n_q: int = 8, - q_dropout: bool = False, - bins: int = 1024, - decay: float = 0.99, - kmeans_init: bool = True, - kmeans_iters: int = 10, - threshold_ema_dead_code: int = 2, - orthogonal_reg_weight: float = 0.0, - orthogonal_reg_active_codes_only: bool = False, - orthogonal_reg_max_codes: tp.Optional[int] = None, - ): - super().__init__() - self.max_n_q = n_q - self.n_q = n_q - self.q_dropout = q_dropout - self.dimension = dimension - self.bins = bins - self.decay = decay - self.kmeans_init = kmeans_init - self.kmeans_iters = kmeans_iters - self.threshold_ema_dead_code = threshold_ema_dead_code - self.orthogonal_reg_weight = orthogonal_reg_weight - self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only - self.orthogonal_reg_max_codes = orthogonal_reg_max_codes - self.vq = ResidualVectorQuantization( - dim=self.dimension, - codebook_size=self.bins, - num_quantizers=self.n_q, - decay=self.decay, - kmeans_init=self.kmeans_init, - kmeans_iters=self.kmeans_iters, - threshold_ema_dead_code=self.threshold_ema_dead_code, - orthogonal_reg_weight=self.orthogonal_reg_weight, - orthogonal_reg_active_codes_only=self.orthogonal_reg_active_codes_only, - orthogonal_reg_max_codes=self.orthogonal_reg_max_codes, - channels_last=False - ) - - def forward(self, x: torch.Tensor, frame_rate: int): - n_q = self.n_q - if self.training and self.q_dropout: - n_q = int(torch.randint(1, self.n_q + 1, (1,)).item()) - bw_per_q = math.log2(self.bins) * frame_rate / 1000 - quantized, codes, commit_loss = self.vq(x, n_q=n_q) - codes = codes.transpose(0, 1) - # codes is [B, K, T], with T frames, K nb of codebooks. - bw = torch.tensor(n_q * bw_per_q).to(x) - return QuantizedResult(quantized, codes, bw, penalty=torch.mean(commit_loss)) - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified frame rate at the given bandwidth. 
- The RVQ encode method sets the appropriate number of quantizers to use - and returns indices for each quantizer. - """ - n_q = self.n_q - codes = self.vq.encode(x, n_q=n_q) - codes = codes.transpose(0, 1) - # codes is [B, K, T], with T frames, K nb of codebooks. - return codes - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. - """ - # codes is [B, K, T], with T frames, K nb of codebooks, vq.decode expects [K, B, T]. - codes = codes.transpose(0, 1) - quantized = self.vq.decode(codes) - return quantized - - @property - def total_codebooks(self): - return self.max_n_q - - @property - def num_codebooks(self): - return self.n_q - - def set_num_codebooks(self, n: int): - assert n > 0 and n <= self.max_n_q - self.n_q = n diff --git a/spaces/Marshalls/testmtd/feature_extraction/extract_transform.py b/spaces/Marshalls/testmtd/feature_extraction/extract_transform.py deleted file mode 100644 index 395d78b1f502dd3881e99b68fc229563c7a0e307..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/feature_extraction/extract_transform.py +++ /dev/null @@ -1,79 +0,0 @@ -import librosa -import numpy as np -from pathlib import Path -import json -import os.path -import sys -import argparse - -''' -Compute transforms which require to be fitted on all the data at once rather than sequentially (so they don't implement the `partial_fit` function) -''' - -THIS_DIR = os.path.dirname(os.path.abspath(__file__)) -ROOT_DIR = os.path.abspath(os.path.join(THIS_DIR, os.pardir)) -sys.path.append(ROOT_DIR) -from audio_feature_utils import extract_features_hybrid, extract_features_mel, extract_features_multi_mel -from utils import distribute_tasks -#from scripts.feature_extraction.utils import distribute_tasks - -parser = argparse.ArgumentParser(description="Preprocess songs data") - -parser.add_argument("data_path", type=str, help="Directory containing Beat Saber level folders") -parser.add_argument("--feature_name", metavar='', type=str, default="mel", help="mel, chroma, multi_mel") -parser.add_argument("--transforms", metavar='', type=str, default="scaler", help="comma-separated list of transforms to extract (scaler,pca_transform)") -args = parser.parse_args() - -# makes arguments into global variables of the same name, used later in the code -globals().update(vars(args)) -data_path = Path(data_path) - -## distributing tasks across nodes ## -from mpi4py import MPI -comm = MPI.COMM_WORLD -rank = comm.Get_rank() -size = comm.Get_size() -print(rank) -assert size == 1 -candidate_files = sorted(data_path.glob('**/*'+feature_name+'.npy'), key=lambda path: path.parent.__str__()) -tasks = range(len(candidate_files)) - -from sklearn import decomposition, preprocessing -features = None -for i in tasks: - path = candidate_files[i] - feature_file = path.__str__() - if i == 0: - features = np.load(feature_file) - else: - feature = np.load(feature_file) - features = np.concatenate([features,feature],0) - -import pickle -transforms = transforms.split(",") -for transform in transforms: - if transform == "2moments": - if len(features.shape) == 3: - features = features[:,0,:] - C = np.dot(features.T,features)/features.shape[0] - m = np.mean(features,0) - pickle.dump((m,C), open(data_path.joinpath(feature_name+'_2moments.pkl'), 'wb')) - elif transform == "2moments_ext": - if len(features.shape) == 3: - features = features[:,0,:] - if features.shape[0] % 3 != 0: - features = features[:-(features.shape[0]%3)] - features = 
np.reshape(features,(-1,3*features.shape[1])) - C = np.dot(features.T,features)/features.shape[0] - m = np.mean(features,0) - pickle.dump((m,C), open(data_path.joinpath(feature_name+'_2moments_ext.pkl'), 'wb')) - elif transform == "scaler": - scaler = preprocessing.StandardScaler().fit(features) - pickle.dump(scaler, open(data_path.joinpath(feature_name+'_scaler.pkl'), 'wb')) - elif transform == "pca_transform": - feature_size = features.shape[1] - pca = decomposition.PCA(n_components=feature_size) - pca_transform = pca.fit(features) - pickle.dump(pca_transform, open(data_path.joinpath(feature_name+'_pca_transform.pkl'), 'wb')) - else: - raise NotImplementedError("Transform type "+transform+" not implemented") diff --git a/spaces/MatzeFix/openai-whisper-large-v2/README.md b/spaces/MatzeFix/openai-whisper-large-v2/README.md deleted file mode 100644 index 7da6cc67247289e9769597a605996a91552d5d2a..0000000000000000000000000000000000000000 --- a/spaces/MatzeFix/openai-whisper-large-v2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Openai Whisper Large V2 -emoji: 📈 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/assign_score_withk.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/assign_score_withk.py deleted file mode 100644 index 4906adaa2cffd1b46912fbe7d4f87ef2f9fa0012..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/assign_score_withk.py +++ /dev/null @@ -1,123 +0,0 @@ -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['assign_score_withk_forward', 'assign_score_withk_backward']) - - -class AssignScoreWithK(Function): - r"""Perform weighted sum to generate output features according to scores. - Modified from `PAConv `_. - - This is a memory-efficient CUDA implementation of assign_scores operation, - which first transform all point features with weight bank, then assemble - neighbor features with ``knn_idx`` and perform weighted sum of ``scores``. - - See the `paper `_ appendix Sec. D for - more detailed descriptions. - - Note: - This implementation assumes using ``neighbor`` kernel input, which is - (point_features - center_features, point_features). - See https://github.com/CVMI-Lab/PAConv/blob/main/scene_seg/model/ - pointnet2/paconv.py#L128 for more details. - """ - - @staticmethod - def forward(ctx, - scores, - point_features, - center_features, - knn_idx, - aggregate='sum'): - """ - Args: - scores (torch.Tensor): (B, npoint, K, M), predicted scores to - aggregate weight matrices in the weight bank. - ``npoint`` is the number of sampled centers. - ``K`` is the number of queried neighbors. - ``M`` is the number of weight matrices in the weight bank. - point_features (torch.Tensor): (B, N, M, out_dim) - Pre-computed point features to be aggregated. - center_features (torch.Tensor): (B, N, M, out_dim) - Pre-computed center features to be aggregated. - knn_idx (torch.Tensor): (B, npoint, K), index of sampled kNN. - We assume the first idx in each row is the idx of the center. - aggregate (str, optional): Aggregation method. - Can be 'sum', 'avg' or 'max'. Defaults: 'sum'. - - Returns: - torch.Tensor: (B, out_dim, npoint, K), the aggregated features. 
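To make the documented shapes concrete, here is a hedged, shape-only usage sketch of the functional alias `assign_score_withk` defined at the bottom of this file. It assumes a CUDA build of the `_ext` extension; the public import path is assumed (in this Space the copy is vendored under `annotator.uniformer.mmcv.ops`), and every size below is illustrative:

    import torch
    from mmcv.ops import assign_score_withk  # assumed import path for the op defined here

    B, N, npoint, K, M, out_dim = 2, 1024, 64, 16, 8, 32
    scores = torch.rand(B, npoint, K, M, device='cuda')
    point_features = torch.rand(B, N, M, out_dim, device='cuda')
    center_features = torch.rand(B, N, M, out_dim, device='cuda')
    knn_idx = torch.randint(0, N, (B, npoint, K), device='cuda')
    out = assign_score_withk(scores, point_features, center_features, knn_idx)
    # out has shape (B, out_dim, npoint, K) == (2, 32, 64, 16)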
- """ - agg = {'sum': 0, 'avg': 1, 'max': 2} - - B, N, M, out_dim = point_features.size() - _, npoint, K, _ = scores.size() - - output = point_features.new_zeros((B, out_dim, npoint, K)) - ext_module.assign_score_withk_forward( - point_features.contiguous(), - center_features.contiguous(), - scores.contiguous(), - knn_idx.contiguous(), - output, - B=B, - N0=N, - N1=npoint, - M=M, - K=K, - O=out_dim, - aggregate=agg[aggregate]) - - ctx.save_for_backward(output, point_features, center_features, scores, - knn_idx) - ctx.agg = agg[aggregate] - - return output - - @staticmethod - def backward(ctx, grad_out): - """ - Args: - grad_out (torch.Tensor): (B, out_dim, npoint, K) - - Returns: - grad_scores (torch.Tensor): (B, npoint, K, M) - grad_point_features (torch.Tensor): (B, N, M, out_dim) - grad_center_features (torch.Tensor): (B, N, M, out_dim) - """ - _, point_features, center_features, scores, knn_idx = ctx.saved_tensors - - agg = ctx.agg - - B, N, M, out_dim = point_features.size() - _, npoint, K, _ = scores.size() - - grad_point_features = point_features.new_zeros(point_features.shape) - grad_center_features = center_features.new_zeros(center_features.shape) - grad_scores = scores.new_zeros(scores.shape) - - ext_module.assign_score_withk_backward( - grad_out.contiguous(), - point_features.contiguous(), - center_features.contiguous(), - scores.contiguous(), - knn_idx.contiguous(), - grad_point_features, - grad_center_features, - grad_scores, - B=B, - N0=N, - N1=npoint, - M=M, - K=K, - O=out_dim, - aggregate=agg) - - return grad_scores, grad_point_features, \ - grad_center_features, None, None - - -assign_score_withk = AssignScoreWithK.apply diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py deleted file mode 100644 index 98392ac04c4c44a7f4e7b1c0808266875877dd1f..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py +++ /dev/null @@ -1,298 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from annotator.uniformer.mmseg.core import add_prefix -from annotator.uniformer.mmseg.ops import resize -from .. import builder -from ..builder import SEGMENTORS -from .base import BaseSegmentor - - -@SEGMENTORS.register_module() -class EncoderDecoder(BaseSegmentor): - """Encoder Decoder segmentors. - - EncoderDecoder typically consists of backbone, decode_head, auxiliary_head. - Note that auxiliary_head is only used for deep supervision during training, - which could be dumped during inference. 
- """ - - def __init__(self, - backbone, - decode_head, - neck=None, - auxiliary_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(EncoderDecoder, self).__init__() - self.backbone = builder.build_backbone(backbone) - if neck is not None: - self.neck = builder.build_neck(neck) - self._init_decode_head(decode_head) - self._init_auxiliary_head(auxiliary_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - self.init_weights(pretrained=pretrained) - - assert self.with_decode_head - - def _init_decode_head(self, decode_head): - """Initialize ``decode_head``""" - self.decode_head = builder.build_head(decode_head) - self.align_corners = self.decode_head.align_corners - self.num_classes = self.decode_head.num_classes - - def _init_auxiliary_head(self, auxiliary_head): - """Initialize ``auxiliary_head``""" - if auxiliary_head is not None: - if isinstance(auxiliary_head, list): - self.auxiliary_head = nn.ModuleList() - for head_cfg in auxiliary_head: - self.auxiliary_head.append(builder.build_head(head_cfg)) - else: - self.auxiliary_head = builder.build_head(auxiliary_head) - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone and heads. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - - super(EncoderDecoder, self).init_weights(pretrained) - self.backbone.init_weights(pretrained=pretrained) - self.decode_head.init_weights() - if self.with_auxiliary_head: - if isinstance(self.auxiliary_head, nn.ModuleList): - for aux_head in self.auxiliary_head: - aux_head.init_weights() - else: - self.auxiliary_head.init_weights() - - def extract_feat(self, img): - """Extract features from images.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def encode_decode(self, img, img_metas): - """Encode images with backbone and decode into a semantic segmentation - map of the same size as input.""" - x = self.extract_feat(img) - out = self._decode_head_forward_test(x, img_metas) - out = resize( - input=out, - size=img.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - return out - - def _decode_head_forward_train(self, x, img_metas, gt_semantic_seg): - """Run forward function and calculate loss for decode head in - training.""" - losses = dict() - loss_decode = self.decode_head.forward_train(x, img_metas, - gt_semantic_seg, - self.train_cfg) - - losses.update(add_prefix(loss_decode, 'decode')) - return losses - - def _decode_head_forward_test(self, x, img_metas): - """Run forward function and calculate loss for decode head in - inference.""" - seg_logits = self.decode_head.forward_test(x, img_metas, self.test_cfg) - return seg_logits - - def _auxiliary_head_forward_train(self, x, img_metas, gt_semantic_seg): - """Run forward function and calculate loss for auxiliary head in - training.""" - losses = dict() - if isinstance(self.auxiliary_head, nn.ModuleList): - for idx, aux_head in enumerate(self.auxiliary_head): - loss_aux = aux_head.forward_train(x, img_metas, - gt_semantic_seg, - self.train_cfg) - losses.update(add_prefix(loss_aux, f'aux_{idx}')) - else: - loss_aux = self.auxiliary_head.forward_train( - x, img_metas, gt_semantic_seg, self.train_cfg) - losses.update(add_prefix(loss_aux, 'aux')) - - return losses - - def forward_dummy(self, img): - """Dummy forward function.""" - seg_logit = self.encode_decode(img, None) - - return seg_logit - - def forward_train(self, img, img_metas, gt_semantic_seg): - """Forward function for training. 
- - Args: - img (Tensor): Input images. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - - x = self.extract_feat(img) - - losses = dict() - - loss_decode = self._decode_head_forward_train(x, img_metas, - gt_semantic_seg) - losses.update(loss_decode) - - if self.with_auxiliary_head: - loss_aux = self._auxiliary_head_forward_train( - x, img_metas, gt_semantic_seg) - losses.update(loss_aux) - - return losses - - # TODO refactor - def slide_inference(self, img, img_meta, rescale): - """Inference by sliding-window with overlap. - - If h_crop > h_img or w_crop > w_img, the small patch will be used to - decode without padding. - """ - - h_stride, w_stride = self.test_cfg.stride - h_crop, w_crop = self.test_cfg.crop_size - batch_size, _, h_img, w_img = img.size() - num_classes = self.num_classes - h_grids = max(h_img - h_crop + h_stride - 1, 0) // h_stride + 1 - w_grids = max(w_img - w_crop + w_stride - 1, 0) // w_stride + 1 - preds = img.new_zeros((batch_size, num_classes, h_img, w_img)) - count_mat = img.new_zeros((batch_size, 1, h_img, w_img)) - for h_idx in range(h_grids): - for w_idx in range(w_grids): - y1 = h_idx * h_stride - x1 = w_idx * w_stride - y2 = min(y1 + h_crop, h_img) - x2 = min(x1 + w_crop, w_img) - y1 = max(y2 - h_crop, 0) - x1 = max(x2 - w_crop, 0) - crop_img = img[:, :, y1:y2, x1:x2] - crop_seg_logit = self.encode_decode(crop_img, img_meta) - preds += F.pad(crop_seg_logit, - (int(x1), int(preds.shape[3] - x2), int(y1), - int(preds.shape[2] - y2))) - - count_mat[:, :, y1:y2, x1:x2] += 1 - assert (count_mat == 0).sum() == 0 - if torch.onnx.is_in_onnx_export(): - # cast count_mat to constant while exporting to ONNX - count_mat = torch.from_numpy( - count_mat.cpu().detach().numpy()).to(device=img.device) - preds = preds / count_mat - if rescale: - preds = resize( - preds, - size=img_meta[0]['ori_shape'][:2], - mode='bilinear', - align_corners=self.align_corners, - warning=False) - return preds - - def whole_inference(self, img, img_meta, rescale): - """Inference with full image.""" - - seg_logit = self.encode_decode(img, img_meta) - if rescale: - # support dynamic shape for onnx - if torch.onnx.is_in_onnx_export(): - size = img.shape[2:] - else: - size = img_meta[0]['ori_shape'][:2] - seg_logit = resize( - seg_logit, - size=size, - mode='bilinear', - align_corners=self.align_corners, - warning=False) - - return seg_logit - - def inference(self, img, img_meta, rescale): - """Inference with slide/whole style. - - Args: - img (Tensor): The input image of shape (N, 3, H, W). - img_meta (dict): Image info dict where each dict has: 'img_shape', - 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - rescale (bool): Whether rescale back to original shape. - - Returns: - Tensor: The output segmentation map. 
- """ - - assert self.test_cfg.mode in ['slide', 'whole'] - ori_shape = img_meta[0]['ori_shape'] - assert all(_['ori_shape'] == ori_shape for _ in img_meta) - if self.test_cfg.mode == 'slide': - seg_logit = self.slide_inference(img, img_meta, rescale) - else: - seg_logit = self.whole_inference(img, img_meta, rescale) - output = F.softmax(seg_logit, dim=1) - flip = img_meta[0]['flip'] - if flip: - flip_direction = img_meta[0]['flip_direction'] - assert flip_direction in ['horizontal', 'vertical'] - if flip_direction == 'horizontal': - output = output.flip(dims=(3, )) - elif flip_direction == 'vertical': - output = output.flip(dims=(2, )) - - return output - - def simple_test(self, img, img_meta, rescale=True): - """Simple test with single image.""" - seg_logit = self.inference(img, img_meta, rescale) - seg_pred = seg_logit.argmax(dim=1) - if torch.onnx.is_in_onnx_export(): - # our inference backend only support 4D output - seg_pred = seg_pred.unsqueeze(0) - return seg_pred - seg_pred = seg_pred.cpu().numpy() - # unravel batch dim - seg_pred = list(seg_pred) - return seg_pred - - def aug_test(self, imgs, img_metas, rescale=True): - """Test with augmentations. - - Only rescale=True is supported. - """ - # aug_test rescale all imgs back to ori_shape for now - assert rescale - # to save memory, we get augmented seg logit inplace - seg_logit = self.inference(imgs[0], img_metas[0], rescale) - for i in range(1, len(imgs)): - cur_seg_logit = self.inference(imgs[i], img_metas[i], rescale) - seg_logit += cur_seg_logit - seg_logit /= len(imgs) - seg_pred = seg_logit.argmax(dim=1) - seg_pred = seg_pred.cpu().numpy() - # unravel batch dim - seg_pred = list(seg_pred) - return seg_pred diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/_base_/schedules/schedule_adamw_cos_6e.py b/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/_base_/schedules/schedule_adamw_cos_6e.py deleted file mode 100644 index cd9d29323583c5db51fa3fc8aba2e2aa3a0ed618..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/_base_/schedules/schedule_adamw_cos_6e.py +++ /dev/null @@ -1,21 +0,0 @@ -# optimizer -optim_wrapper = dict( - type='OptimWrapper', - optimizer=dict( - type='AdamW', - lr=4e-4, - betas=(0.9, 0.999), - eps=1e-08, - weight_decay=0.05)) -train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=6, val_interval=1) -val_cfg = dict(type='ValLoop') -test_cfg = dict(type='TestLoop') - -# learning policy -param_scheduler = [ - dict( - type='CosineAnnealingLR', - T_max=6, - eta_min=4e-6, - convert_to_iter_based=True) -] diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/engine/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/engine/__init__.py deleted file mode 100644 index 1944bc1e57726ec1922b1e97fb69a75df9c384fe..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/engine/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .hooks import * # NOQA diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/encoders/nrtr_encoder.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/encoders/nrtr_encoder.py deleted file mode 100644 index e7d80778990dce9bd8f22eff9a32b6fc5b64fb5d..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/encoders/nrtr_encoder.py +++ /dev/null @@ -1,108 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math -from typing import Dict, Optional, Sequence, Union - -import torch -import torch.nn as nn -from mmengine.model import ModuleList - -from mmocr.models.common import TFEncoderLayer -from mmocr.registry import MODELS -from mmocr.structures import TextRecogDataSample -from .base import BaseEncoder - - -@MODELS.register_module() -class NRTREncoder(BaseEncoder): - """Transformer Encoder block with self attention mechanism. - - Args: - n_layers (int): The number of sub-encoder-layers in the encoder. - Defaults to 6. - n_head (int): The number of heads in the multiheadattention models - Defaults to 8. - d_k (int): Total number of features in key. Defaults to 64. - d_v (int): Total number of features in value. Defaults to 64. - d_model (int): The number of expected features in the decoder inputs. - Defaults to 512. - d_inner (int): The dimension of the feedforward network model. - Defaults to 256. - dropout (float): Dropout rate for MHSA and FFN. Defaults to 0.1. - init_cfg (dict or list[dict], optional): Initialization configs. - """ - - def __init__(self, - n_layers: int = 6, - n_head: int = 8, - d_k: int = 64, - d_v: int = 64, - d_model: int = 512, - d_inner: int = 256, - dropout: float = 0.1, - init_cfg: Optional[Union[Dict, - Sequence[Dict]]] = None) -> None: - super().__init__(init_cfg=init_cfg) - self.d_model = d_model - self.layer_stack = ModuleList([ - TFEncoderLayer( - d_model, d_inner, n_head, d_k, d_v, dropout=dropout) - for _ in range(n_layers) - ]) - self.layer_norm = nn.LayerNorm(d_model) - - def _get_source_mask(self, src_seq: torch.Tensor, - valid_ratios: Sequence[float]) -> torch.Tensor: - """Generate mask for source sequence. - - Args: - src_seq (torch.Tensor): Image sequence. Shape :math:`(N, T, C)`. - valid_ratios (list[float]): The valid ratio of input image. For - example, if the width of the original image is w1 and the width - after pad is w2, then valid_ratio = w1/w2. source mask is used - to cover the area of the pad region. - - Returns: - Tensor or None: Source mask. Shape :math:`(N, T)`. The region of - pad area are False, and the rest are True. - """ - - N, T, _ = src_seq.size() - mask = None - if len(valid_ratios) > 0: - mask = src_seq.new_zeros((N, T), device=src_seq.device) - for i, valid_ratio in enumerate(valid_ratios): - valid_width = min(T, math.ceil(T * valid_ratio)) - mask[i, :valid_width] = 1 - - return mask - - def forward(self, - feat: torch.Tensor, - data_samples: Sequence[TextRecogDataSample] = None - ) -> torch.Tensor: - """ - Args: - feat (Tensor): Backbone output of shape :math:`(N, C, H, W)`. - data_samples (list[TextRecogDataSample]): Batch of - TextRecogDataSample, containing valid_ratio information. - Defaults to None. - - - Returns: - Tensor: The encoder output tensor. Shape :math:`(N, T, C)`. 
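A hedged shape sketch of this forward pass; the tiny stand-in class below only mimics the `valid_ratio` lookup of `TextRecogDataSample`, and all sizes are illustrative:

    import torch

    class _FakeSample:  # illustration only; real pipelines pass TextRecogDataSample
        def __init__(self, valid_ratio):
            self.valid_ratio = valid_ratio
        def get(self, key):
            return getattr(self, key)

    encoder = NRTREncoder(n_layers=2, d_model=512)
    feat = torch.randn(2, 512, 8, 25)                        # (N, C, H, W) from the backbone
    out = encoder(feat, [_FakeSample(1.0), _FakeSample(0.8)])
    # out has shape (N, H*W, C) == (2, 200, 512)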
- """ - n, c, h, w = feat.size() - - feat = feat.view(n, c, h * w).permute(0, 2, 1).contiguous() - - valid_ratios = [] - for data_sample in data_samples: - valid_ratios.append(data_sample.get('valid_ratio')) - mask = self._get_source_mask(feat, valid_ratios) - - output = feat - for enc_layer in self.layer_stack: - output = enc_layer(output, mask) - output = self.layer_norm(output) - - return output diff --git a/spaces/MrBodean/VoiceClone/vocoder_preprocess.py b/spaces/MrBodean/VoiceClone/vocoder_preprocess.py deleted file mode 100644 index 7ede3dfb95972e2de575de35b9d4a9c6d642885e..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/vocoder_preprocess.py +++ /dev/null @@ -1,59 +0,0 @@ -from synthesizer.synthesize import run_synthesis -from synthesizer.hparams import hparams -from utils.argutils import print_args -import argparse -import os - - -if __name__ == "__main__": - class MyFormatter(argparse.ArgumentDefaultsHelpFormatter, argparse.RawDescriptionHelpFormatter): - pass - - parser = argparse.ArgumentParser( - description="Creates ground-truth aligned (GTA) spectrograms from the vocoder.", - formatter_class=MyFormatter - ) - parser.add_argument("datasets_root", type=str, help=\ - "Path to the directory containing your SV2TTS directory. If you specify both --in_dir and " - "--out_dir, this argument won't be used.") - parser.add_argument("--model_dir", type=str, - default="synthesizer/saved_models/pretrained/", help=\ - "Path to the pretrained model directory.") - parser.add_argument("-i", "--in_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the synthesizer directory that contains the mel spectrograms, the wavs and the " - "embeds. Defaults to /SV2TTS/synthesizer/.") - parser.add_argument("-o", "--out_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the output vocoder directory that will contain the ground truth aligned mel " - "spectrograms. Defaults to /SV2TTS/vocoder/.") - parser.add_argument("--hparams", default="", - help="Hyperparameter overrides as a comma-separated list of name=value " - "pairs") - parser.add_argument("--no_trim", action="store_true", help=\ - "Preprocess audio without trimming silences (not recommended).") - parser.add_argument("--cpu", action="store_true", help=\ - "If True, processing is done on CPU, even when a GPU is available.") - args = parser.parse_args() - print_args(args, parser) - modified_hp = hparams.parse(args.hparams) - - if not hasattr(args, "in_dir"): - args.in_dir = os.path.join(args.datasets_root, "SV2TTS", "synthesizer") - if not hasattr(args, "out_dir"): - args.out_dir = os.path.join(args.datasets_root, "SV2TTS", "vocoder") - - if args.cpu: - # Hide GPUs from Pytorch to force CPU processing - os.environ["CUDA_VISIBLE_DEVICES"] = "-1" - - # Verify webrtcvad is available - if not args.no_trim: - try: - import webrtcvad - except: - raise ModuleNotFoundError("Package 'webrtcvad' not found. This package enables " - "noise removal and is recommended. Please install and try again. 
If installation fails, " - "use --no_trim to disable this error message.") - del args.no_trim - - run_synthesis(args.in_dir, args.out_dir, args.model_dir, modified_hp) - diff --git a/spaces/Mrchuw/text-to-image_6_by_6/app.py b/spaces/Mrchuw/text-to-image_6_by_6/app.py deleted file mode 100644 index 44924a18a63ebc549d264bc99e586a2ee2ea6629..0000000000000000000000000000000000000000 --- a/spaces/Mrchuw/text-to-image_6_by_6/app.py +++ /dev/null @@ -1,134 +0,0 @@ -import gradio as gr -import os -import random -import time -from zipfile import ZipFile -import tempfile -import string -import time - -directory = tempfile.mkdtemp(dir="./") - -imagens = [] - -model = gr.Interface.load( - "models/dreamlike-art/dreamlike-photoreal-2.0", -) - -#o = os.getenv("P") -o = "V" - -m_out = (""" -
    -
    -

    Please choose a Simpler Prompt, or Upgrade for faster loading.

    -
    -""") -loading=(""" -
    """) - -def add_random_noise(prompt, noise_level=1.00): - if noise_level == 0: - noise_level = 0.00 - if noise_level == None: - noise_level = 1.00 - percentage_noise = noise_level * 5 - num_noise_chars = int(len(prompt) * (percentage_noise/100)) - noise_indices = random.sample(range(len(prompt)), num_noise_chars) - prompt_list = list(prompt) - noise_chars = list(string.ascii_letters + string.punctuation + ' ' + string.digits) - noise_chars.extend(['😍', '💩', '😂', '🤔', '😊', '🤗', '😭', '🙄', '😷', '🤯', '🤫', '🥴', '😴', '🤩', '🥳', '😔', '😩', '🤪', '😇', '🤢', '😈', '👹', '👻', '🤖', '👽', '💀', '🎃', '🎅', '🎄', '🎁', '🎂', '🎉', '🎈', '🎊', '🎮', '❤️', '💔', '💕', '💖', '💗', '🐶', '🐱', '🐭', '🐹', '🦊', '🐻', '🐨', '🐯', '🦁', '🐘', '🔥', '🌧️', '🌞', '🌈', '💥', '🌴', '🌊', '🌺', '🌻', '🌸', '🎨', '🌅', '🌌', '☁️', '⛈️', '❄️', '☀️', '🌤️', '⛅️', '🌥️', '🌦️', '🌧️', '🌩️', '🌨️', '🌫️', '☔️', '🌬️', '💨', '🌪️', '🌈']) - for index in noise_indices: - prompt_list[index] = random.choice(noise_chars) - return "".join(prompt_list) - -def build(): - def zip_files(): - zip_name = f"{b.prompt.split(' ')[0]}_{random.randint(0, 10000)}.zip" - with ZipFile(zip_name, "w") as zipObj: - for file in b.imagens: - zipObj.write(file, os.path.basename(file)) - b.imagens = [] - return zip_name - def clear(): - return gr.update(value=0),gr.update(value=0) - def start(): - stamp = time.time() - return gr.update(value=stamp),gr.update(value=0) - def end(stamp): - ts = stamp + 360 - ti = time.time() - if ti > ts and stamp != 0: - return gr.update(value=1),gr.HTML.update(f"{m_out}",visible=True) - else: - return gr.update(value=0),None - def im_fn(prompt,noise_level,h=None): - try: - if h == o: - prompt_with_noise = add_random_noise(prompt, noise_level) - imagem = model(prompt_with_noise) - b.prompt = prompt - b.imagens.append(imagem) - return imagem - elif h != o: - return(None,None) - except Exception as E: - return None, None - def cl_fac(): - return "",gr.HTML.update(f"{loading}") - with gr.Blocks() as b: - b.imagens: list = [] - with gr.Row(): - with gr.Column(): - prompt = gr.Textbox(label="Prompt", placeholder="Enter a prompt") - noise_level = gr.Slider(minimum=0.0, maximum=10, step=0.1, label="Noise Level between images.") - with gr.Column(): - with gr.Row(): - btn1 = gr.Button("Generate") - btn2 = gr.Button("Clear") - message=gr.HTML("
    ") - message2=gr.HTML("",visible=False) - - with gr.Row(): - out1 = gr.Image() - out2 = gr.Image() - with gr.Row(): - out3 = gr.Image() - out4 = gr.Image() - with gr.Row(): - out5 = gr.Image() - out6 = gr.Image() - with gr.Row(): - # btn3 = gr.Button("Download") - caixa = gr.File(file_count="multiple", file_types=["text", ".json", ".csv", "image"]) - - with gr.Row(visible=False): - h_variavel=gr.Textbox(value="V") - t_state=gr.Number() - t_switch=gr.Textbox(value=0) - auto= gr.Image() - def clear_all(): - return "",None,None,None,None,None,None,None,None,1,gr.HTML.update("
    ") - fac_b = gr.Textbox(value="",visible=False) - - def noth(): - return gr.HTML.update("
    ") - #a1=btn1.click(noth,None,btn1,every=1) - btn1.click(cl_fac,None,[fac_b,message],show_progress=False) - b1=btn1.click(start,None,[t_state,t_switch],show_progress=True) - sta = t_state.change(end,t_state,[t_switch,message2],every=1,show_progress=True) - b2=btn1.click(im_fn,[prompt,noise_level,h_variavel],[out1,], show_progress=True) - b3=out1.change(im_fn,[prompt,noise_level,h_variavel],[out2,], show_progress=True) - b4=out2.change(im_fn,[prompt,noise_level,h_variavel],[out3,], show_progress=True) - b5=out3.change(im_fn,[prompt,noise_level,h_variavel],[out4,], show_progress=True) - b6=out4.change(im_fn,[prompt,noise_level,h_variavel],[out5,], show_progress=True) - b7=out5.change(im_fn,[prompt,noise_level,h_variavel],[out6], show_progress=True) - b8=out6.change(noth,None,[message], show_progress=False) - b8=out6.change(zip_files,None,[caixa], show_progress=False) - swi=t_switch.change(clear,None,[t_switch,fac_b], cancels=[sta,b2,b3,b4,b5,b6,b7],show_progress=False) - #btn2.click(noth,None,message,cancels=[b1,sta,b2,b3,b4,b5,swi],show_progress=False) - btn2.click(clear_all, None,[fac_b,prompt,out1,out2,out3,out4,out5,out6,t_state,t_switch,message],cancels=[b1,sta,b2,b3,b4,b5,b6,b7,b8,swi],show_progress=False) - # btn3.click(zip_files,None,[caixa],show_progress=False) - # caixa.change(noth,None,[message],show_progress=False) - b.queue(concurrency_count=100).launch(show_api=False) -build() diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/models/bert_pretrainer_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/models/bert_pretrainer_test.py deleted file mode 100644 index eb9ace5ccf132ec0423276b28fa1e1e473a97290..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/models/bert_pretrainer_test.py +++ /dev/null @@ -1,164 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Tests for BERT trainer network.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import tensorflow as tf - -from tensorflow.python.keras import keras_parameterized # pylint: disable=g-direct-tensorflow-import -from official.nlp.modeling import networks -from official.nlp.modeling.models import bert_pretrainer - - -# This decorator runs the test in V1, V2-Eager, and V2-Functional mode. It -# guarantees forward compatibility of this code for the V2 switchover. -@keras_parameterized.run_all_keras_modes -class BertPretrainerTest(keras_parameterized.TestCase): - - def test_bert_pretrainer(self): - """Validate that the Keras object can be created.""" - # Build a transformer network to use within the BERT trainer. - vocab_size = 100 - sequence_length = 512 - test_network = networks.TransformerEncoder( - vocab_size=vocab_size, num_layers=2, sequence_length=sequence_length) - - # Create a BERT trainer with the created network. 
- num_classes = 3 - num_token_predictions = 2 - bert_trainer_model = bert_pretrainer.BertPretrainer( - test_network, - num_classes=num_classes, - num_token_predictions=num_token_predictions) - - # Create a set of 2-dimensional inputs (the first dimension is implicit). - word_ids = tf.keras.Input(shape=(sequence_length,), dtype=tf.int32) - mask = tf.keras.Input(shape=(sequence_length,), dtype=tf.int32) - type_ids = tf.keras.Input(shape=(sequence_length,), dtype=tf.int32) - masked_lm_positions = tf.keras.Input( - shape=(num_token_predictions,), dtype=tf.int32) - - # Invoke the trainer model on the inputs. This causes the layer to be built. - outputs = bert_trainer_model( - [word_ids, mask, type_ids, masked_lm_positions]) - - # Validate that the outputs are of the expected shape. - expected_lm_shape = [None, num_token_predictions, vocab_size] - expected_classification_shape = [None, num_classes] - self.assertAllEqual(expected_lm_shape, outputs['masked_lm'].shape.as_list()) - self.assertAllEqual(expected_classification_shape, - outputs['classification'].shape.as_list()) - - def test_bert_trainer_tensor_call(self): - """Validate that the Keras object can be invoked.""" - # Build a transformer network to use within the BERT trainer. (Here, we use - # a short sequence_length for convenience.) - test_network = networks.TransformerEncoder( - vocab_size=100, num_layers=2, sequence_length=2) - - # Create a BERT trainer with the created network. - bert_trainer_model = bert_pretrainer.BertPretrainer( - test_network, num_classes=2, num_token_predictions=2) - - # Create a set of 2-dimensional data tensors to feed into the model. - word_ids = tf.constant([[1, 1], [2, 2]], dtype=tf.int32) - mask = tf.constant([[1, 1], [1, 0]], dtype=tf.int32) - type_ids = tf.constant([[1, 1], [2, 2]], dtype=tf.int32) - lm_mask = tf.constant([[1, 1], [1, 0]], dtype=tf.int32) - - # Invoke the trainer model on the tensors. In Eager mode, this does the - # actual calculation. (We can't validate the outputs, since the network is - # too complex: this simply ensures we're not hitting runtime errors.) - _ = bert_trainer_model([word_ids, mask, type_ids, lm_mask]) - - def test_serialize_deserialize(self): - """Validate that the BERT trainer can be serialized and deserialized.""" - # Build a transformer network to use within the BERT trainer. (Here, we use - # a short sequence_length for convenience.) - test_network = networks.TransformerEncoder( - vocab_size=100, num_layers=2, sequence_length=5) - - # Create a BERT trainer with the created network. (Note that all the args - # are different, so we can catch any serialization mismatches.) - bert_trainer_model = bert_pretrainer.BertPretrainer( - test_network, num_classes=4, num_token_predictions=3) - - # Create another BERT trainer via serialization and deserialization. - config = bert_trainer_model.get_config() - new_bert_trainer_model = bert_pretrainer.BertPretrainer.from_config(config) - - # Validate that the config can be forced to JSON. - _ = new_bert_trainer_model.to_json() - - # If the serialization was successful, the new config should match the old. - self.assertAllEqual(bert_trainer_model.get_config(), - new_bert_trainer_model.get_config()) - - def test_bert_pretrainerv2(self): - """Validate that the Keras object can be created.""" - # Build a transformer network to use within the BERT trainer. 
- vocab_size = 100 - sequence_length = 512 - test_network = networks.TransformerEncoder( - vocab_size=vocab_size, num_layers=2, sequence_length=sequence_length) - - # Create a BERT trainer with the created network. - num_token_predictions = 2 - bert_trainer_model = bert_pretrainer.BertPretrainerV2( - encoder_network=test_network, num_masked_tokens=num_token_predictions) - - # Create a set of 2-dimensional inputs (the first dimension is implicit). - word_ids = tf.keras.Input(shape=(sequence_length,), dtype=tf.int32) - mask = tf.keras.Input(shape=(sequence_length,), dtype=tf.int32) - type_ids = tf.keras.Input(shape=(sequence_length,), dtype=tf.int32) - lm_mask = tf.keras.Input(shape=(num_token_predictions,), dtype=tf.int32) - - # Invoke the trainer model on the inputs. This causes the layer to be built. - outputs = bert_trainer_model([word_ids, mask, type_ids, lm_mask]) - - # Validate that the outputs are of the expected shape. - expected_lm_shape = [None, num_token_predictions, vocab_size] - self.assertAllEqual(expected_lm_shape, outputs['lm_output'].shape.as_list()) - - def test_v2_serialize_deserialize(self): - """Validate that the BERT trainer can be serialized and deserialized.""" - # Build a transformer network to use within the BERT trainer. (Here, we use - # a short sequence_length for convenience.) - test_network = networks.TransformerEncoder( - vocab_size=100, num_layers=2, sequence_length=5) - - # Create a BERT trainer with the created network. (Note that all the args - # are different, so we can catch any serialization mismatches.) - bert_trainer_model = bert_pretrainer.BertPretrainerV2( - encoder_network=test_network, num_masked_tokens=2) - - # Create another BERT trainer via serialization and deserialization. - config = bert_trainer_model.get_config() - new_bert_trainer_model = bert_pretrainer.BertPretrainerV2.from_config( - config) - - # Validate that the config can be forced to JSON. - _ = new_bert_trainer_model.to_json() - - # If the serialization was successful, the new config should match the old. - self.assertAllEqual(bert_trainer_model.get_config(), - new_bert_trainer_model.get_config()) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/nhnet/models_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/nhnet/models_test.py deleted file mode 100644 index 39676a347d65e2dc19e99a7dec4d22dfb4c60df4..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/nhnet/models_test.py +++ /dev/null @@ -1,324 +0,0 @@ -# Copyright 2020 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Tests for nlp.nhnet.models.""" - -import os - -from absl import logging -from absl.testing import parameterized -import numpy as np -import tensorflow as tf - -# pylint: disable=g-direct-tensorflow-import -from tensorflow.python.distribute import combinations -from tensorflow.python.distribute import strategy_combinations -# pylint: enable=g-direct-tensorflow-import -from official.nlp.nhnet import configs -from official.nlp.nhnet import models -from official.nlp.nhnet import utils - - -def all_strategy_combinations(): - return combinations.combine( - distribution=[ - strategy_combinations.default_strategy, - strategy_combinations.tpu_strategy, - strategy_combinations.one_device_strategy_gpu, - strategy_combinations.mirrored_strategy_with_gpu_and_cpu, - strategy_combinations.mirrored_strategy_with_two_gpus, - ], - mode="eager", - ) - - -def distribution_forward_path(strategy, - model, - inputs, - batch_size, - mode="train"): - dataset = tf.data.Dataset.from_tensor_slices((inputs)) - dataset = dataset.batch(batch_size) - dataset = strategy.experimental_distribute_dataset(dataset) - - @tf.function - def test_step(inputs): - """Calculates evaluation metrics on distributed devices.""" - - def _test_step_fn(inputs): - """Replicated accuracy calculation.""" - return model(inputs, mode=mode, training=False) - - outputs = strategy.run(_test_step_fn, args=(inputs,)) - return tf.nest.map_structure(strategy.experimental_local_results, outputs) - - return [test_step(inputs) for inputs in dataset] - - -def process_decoded_ids(predictions, end_token_id): - """Transforms decoded tensors to lists ending with END_TOKEN_ID.""" - if isinstance(predictions, tf.Tensor): - predictions = predictions.numpy() - flatten_ids = predictions.reshape((-1, predictions.shape[-1])) - results = [] - for ids in flatten_ids: - ids = list(ids) - if end_token_id in ids: - ids = ids[:ids.index(end_token_id)] - results.append(ids) - return results - - -class Bert2BertTest(tf.test.TestCase, parameterized.TestCase): - - def setUp(self): - super(Bert2BertTest, self).setUp() - self._config = utils.get_test_params() - - def test_model_creation(self): - model = models.create_bert2bert_model(params=self._config) - fake_ids = np.zeros((2, 10), dtype=np.int32) - fake_inputs = { - "input_ids": fake_ids, - "input_mask": fake_ids, - "segment_ids": fake_ids, - "target_ids": fake_ids, - } - model(fake_inputs) - - @combinations.generate(all_strategy_combinations()) - def test_bert2bert_train_forward(self, distribution): - seq_length = 10 - # Defines the model inside distribution strategy scope. - with distribution.scope(): - # Forward path. 
- batch_size = 2 - batches = 4 - fake_ids = np.zeros((batch_size * batches, seq_length), dtype=np.int32) - fake_inputs = { - "input_ids": fake_ids, - "input_mask": fake_ids, - "segment_ids": fake_ids, - "target_ids": fake_ids, - } - model = models.create_bert2bert_model(params=self._config) - results = distribution_forward_path(distribution, model, fake_inputs, - batch_size) - logging.info("Forward path results: %s", str(results)) - self.assertLen(results, batches) - - def test_bert2bert_decoding(self): - seq_length = 10 - self._config.override( - { - "beam_size": 3, - "len_title": seq_length, - "alpha": 0.6, - }, - is_strict=False) - - batch_size = 2 - fake_ids = np.zeros((batch_size, seq_length), dtype=np.int32) - fake_inputs = { - "input_ids": fake_ids, - "input_mask": fake_ids, - "segment_ids": fake_ids, - } - self._config.override({ - "padded_decode": False, - "use_cache": False, - }, - is_strict=False) - model = models.create_bert2bert_model(params=self._config) - ckpt = tf.train.Checkpoint(model=model) - - # Initializes variables from checkpoint to keep outputs deterministic. - init_checkpoint = ckpt.save(os.path.join(self.get_temp_dir(), "ckpt")) - ckpt.restore(init_checkpoint).assert_existing_objects_matched() - top_ids, scores = model(fake_inputs, mode="predict") - - self._config.override({ - "padded_decode": False, - "use_cache": True, - }, - is_strict=False) - model = models.create_bert2bert_model(params=self._config) - ckpt = tf.train.Checkpoint(model=model) - ckpt.restore(init_checkpoint).assert_existing_objects_matched() - cached_top_ids, cached_scores = model(fake_inputs, mode="predict") - self.assertEqual( - process_decoded_ids(top_ids, self._config.end_token_id), - process_decoded_ids(cached_top_ids, self._config.end_token_id)) - self.assertAllClose(scores, cached_scores) - - self._config.override({ - "padded_decode": True, - "use_cache": True, - }, - is_strict=False) - model = models.create_bert2bert_model(params=self._config) - ckpt = tf.train.Checkpoint(model=model) - ckpt.restore(init_checkpoint).assert_existing_objects_matched() - padded_top_ids, padded_scores = model(fake_inputs, mode="predict") - self.assertEqual( - process_decoded_ids(top_ids, self._config.end_token_id), - process_decoded_ids(padded_top_ids, self._config.end_token_id)) - self.assertAllClose(scores, padded_scores) - - @combinations.generate(all_strategy_combinations()) - def test_bert2bert_eval(self, distribution): - seq_length = 10 - padded_decode = isinstance(distribution, - tf.distribute.experimental.TPUStrategy) - self._config.override( - { - "beam_size": 3, - "len_title": seq_length, - "alpha": 0.6, - "padded_decode": padded_decode, - }, - is_strict=False) - # Defines the model inside distribution strategy scope. - with distribution.scope(): - # Forward path. 
- batch_size = 2 - batches = 4 - fake_ids = np.zeros((batch_size * batches, seq_length), dtype=np.int32) - fake_inputs = { - "input_ids": fake_ids, - "input_mask": fake_ids, - "segment_ids": fake_ids, - } - model = models.create_bert2bert_model(params=self._config) - results = distribution_forward_path( - distribution, model, fake_inputs, batch_size, mode="predict") - self.assertLen(results, batches) - results = distribution_forward_path( - distribution, model, fake_inputs, batch_size, mode="eval") - self.assertLen(results, batches) - - -class NHNetTest(tf.test.TestCase, parameterized.TestCase): - - def setUp(self): - super(NHNetTest, self).setUp() - self._nhnet_config = configs.NHNetConfig() - self._nhnet_config.override(utils.get_test_params().as_dict()) - self._bert2bert_config = configs.BERT2BERTConfig() - self._bert2bert_config.override(utils.get_test_params().as_dict()) - - def _count_params(self, layer, trainable_only=True): - """Returns the count of all model parameters, or just trainable ones.""" - if not trainable_only: - return layer.count_params() - else: - return int( - np.sum([ - tf.keras.backend.count_params(p) for p in layer.trainable_weights - ])) - - def test_create_nhnet_layers(self): - single_doc_bert, single_doc_decoder = models.get_bert2bert_layers( - self._bert2bert_config) - multi_doc_bert, multi_doc_decoder = models.get_nhnet_layers( - self._nhnet_config) - # Expects multi-doc encoder/decoder have the same number of parameters as - # single-doc encoder/decoder. - self.assertEqual( - self._count_params(multi_doc_bert), self._count_params(single_doc_bert)) - self.assertEqual( - self._count_params(multi_doc_decoder), - self._count_params(single_doc_decoder)) - - def test_checkpoint_restore(self): - bert2bert_model = models.create_bert2bert_model(self._bert2bert_config) - ckpt = tf.train.Checkpoint(model=bert2bert_model) - init_checkpoint = ckpt.save(os.path.join(self.get_temp_dir(), "ckpt")) - nhnet_model = models.create_nhnet_model( - params=self._nhnet_config, init_checkpoint=init_checkpoint) - source_weights = ( - bert2bert_model.bert_layer.trainable_weights + - bert2bert_model.decoder_layer.trainable_weights) - dest_weights = ( - nhnet_model.bert_layer.trainable_weights + - nhnet_model.decoder_layer.trainable_weights) - for source_weight, dest_weight in zip(source_weights, dest_weights): - self.assertAllClose(source_weight.numpy(), dest_weight.numpy()) - - @combinations.generate(all_strategy_combinations()) - def test_nhnet_train_forward(self, distribution): - seq_length = 10 - # Defines the model inside distribution strategy scope. - with distribution.scope(): - # Forward path. 
- batch_size = 2 - num_docs = 2 - batches = 4 - fake_ids = np.zeros((batch_size * batches, num_docs, seq_length), - dtype=np.int32) - fake_inputs = { - "input_ids": - fake_ids, - "input_mask": - fake_ids, - "segment_ids": - fake_ids, - "target_ids": - np.zeros((batch_size * batches, seq_length * 2), dtype=np.int32), - } - model = models.create_nhnet_model(params=self._nhnet_config) - results = distribution_forward_path(distribution, model, fake_inputs, - batch_size) - logging.info("Forward path results: %s", str(results)) - self.assertLen(results, batches) - - @combinations.generate(all_strategy_combinations()) - def test_nhnet_eval(self, distribution): - seq_length = 10 - padded_decode = isinstance(distribution, - tf.distribute.experimental.TPUStrategy) - self._nhnet_config.override( - { - "beam_size": 4, - "len_title": seq_length, - "alpha": 0.6, - "multi_channel_cross_attention": True, - "padded_decode": padded_decode, - }, - is_strict=False) - # Defines the model inside distribution strategy scope. - with distribution.scope(): - # Forward path. - batch_size = 2 - num_docs = 2 - batches = 4 - fake_ids = np.zeros((batch_size * batches, num_docs, seq_length), - dtype=np.int32) - fake_inputs = { - "input_ids": fake_ids, - "input_mask": fake_ids, - "segment_ids": fake_ids, - "target_ids": np.zeros((batch_size * batches, 5), dtype=np.int32), - } - model = models.create_nhnet_model(params=self._nhnet_config) - results = distribution_forward_path( - distribution, model, fake_inputs, batch_size, mode="predict") - self.assertLen(results, batches) - results = distribution_forward_path( - distribution, model, fake_inputs, batch_size, mode="eval") - self.assertLen(results, batches) - - -if __name__ == "__main__": - tf.test.main() diff --git a/spaces/NahuelCosta/DTW-CNN/README.md b/spaces/NahuelCosta/DTW-CNN/README.md deleted file mode 100644 index c43b9728849264134c835d939a4474e5e2254fca..0000000000000000000000000000000000000000 --- a/spaces/NahuelCosta/DTW-CNN/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: DTW CNN -emoji: 🔋 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 2.8.13 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Navneet574/algerian-forest-fire-prediction/Explore_Page.py b/spaces/Navneet574/algerian-forest-fire-prediction/Explore_Page.py deleted file mode 100644 index d75d0e40b868ce0aed49ad1af3708153363372b3..0000000000000000000000000000000000000000 --- a/spaces/Navneet574/algerian-forest-fire-prediction/Explore_Page.py +++ /dev/null @@ -1,8 +0,0 @@ -import streamlit as st -import pandas as pd -import matplotlib.pyplot as plt - - - -def show_analysis(): - pass \ No newline at end of file diff --git a/spaces/Nikhil0987/hnjii/README.md b/spaces/Nikhil0987/hnjii/README.md deleted file mode 100644 index 16ec7c50be2d707d979cdc766bb9254294d3b85d..0000000000000000000000000000000000000000 --- a/spaces/Nikhil0987/hnjii/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Hnjii -emoji: 🐢 -colorFrom: gray -colorTo: yellow -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/NimaBoscarino/gradio-secrets/app.py b/spaces/NimaBoscarino/gradio-secrets/app.py deleted file mode 100644 index 6e4f40a7c1e0dd81679cd61695638ff27b2b712c..0000000000000000000000000000000000000000 --- a/spaces/NimaBoscarino/gradio-secrets/app.py 
+++ /dev/null @@ -1,9 +0,0 @@ -import gradio as gr -import os - -def greet_from_secret(ignored_param): - name = os.environ.get('NAME') - return "Hello " + name + "!!" - -iface = gr.Interface(fn=greet_from_secret, inputs="text", outputs="text") -iface.launch() diff --git a/spaces/Not-Grim-Refer/Reverse-Prompt-Engineering-Code/readme.md b/spaces/Not-Grim-Refer/Reverse-Prompt-Engineering-Code/readme.md deleted file mode 100644 index 521b630aed398ed481a25fdd3e99c3713d814bea..0000000000000000000000000000000000000000 --- a/spaces/Not-Grim-Refer/Reverse-Prompt-Engineering-Code/readme.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Reverse-Prompt-Engineering-Code -emoji: 🌍 -colorFrom: red -colorTo: grey -sdk: streamlet -sdk_version: 1.24.0 -app_file: main.py -pinned: true -license: mit - ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - diff --git a/spaces/Nyari/Super-Resolution-Anime-Diffusion/Waifu2x/Common.py b/spaces/Nyari/Super-Resolution-Anime-Diffusion/Waifu2x/Common.py deleted file mode 100644 index 6d7f1981aad000b9e3016e87f5c34878577447a5..0000000000000000000000000000000000000000 --- a/spaces/Nyari/Super-Resolution-Anime-Diffusion/Waifu2x/Common.py +++ /dev/null @@ -1,255 +0,0 @@ -from contextlib import contextmanager -from math import sqrt, log - -import torch -import torch.nn as nn - - -# import warnings -# warnings.simplefilter('ignore') - - -class BaseModule(nn.Module): - def __init__(self): - self.act_fn = None - super(BaseModule, self).__init__() - - def selu_init_params(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d) and m.weight.requires_grad: - m.weight.data.normal_(0.0, 1.0 / sqrt(m.weight.numel())) - if m.bias is not None: - m.bias.data.fill_(0) - elif isinstance(m, nn.BatchNorm2d) and m.weight.requires_grad: - m.weight.data.fill_(1) - m.bias.data.zero_() - - elif isinstance(m, nn.Linear) and m.weight.requires_grad: - m.weight.data.normal_(0, 1.0 / sqrt(m.weight.numel())) - m.bias.data.zero_() - - def initialize_weights_xavier_uniform(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d) and m.weight.requires_grad: - # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='leaky_relu') - nn.init.xavier_uniform_(m.weight) - if m.bias is not None: - m.bias.data.zero_() - elif isinstance(m, nn.BatchNorm2d) and m.weight.requires_grad: - m.weight.data.fill_(1) - m.bias.data.zero_() - - def load_state_dict(self, state_dict, strict=True, self_state=False): - own_state = self_state if self_state else self.state_dict() - for name, param in state_dict.items(): - if name in own_state: - try: - own_state[name].copy_(param.data) - except Exception as e: - print("Parameter {} fails to load.".format(name)) - print("-----------------------------------------") - print(e) - else: - print("Parameter {} is not in the model. ".format(name)) - - @contextmanager - def set_activation_inplace(self): - if hasattr(self, "act_fn") and hasattr(self.act_fn, "inplace"): - # save memory - self.act_fn.inplace = True - yield - self.act_fn.inplace = False - else: - yield - - def total_parameters(self): - total = sum([i.numel() for i in self.parameters()]) - trainable = sum([i.numel() for i in self.parameters() if i.requires_grad]) - print( - "Total parameters : {}. 
Trainable parameters : {}".format(total, trainable) - ) - return total - - def forward(self, *x): - raise NotImplementedError - - -class ResidualFixBlock(BaseModule): - def __init__( - self, - in_channels, - out_channels, - kernel_size=3, - padding=1, - dilation=1, - groups=1, - activation=nn.SELU(), - conv=nn.Conv2d, - ): - super(ResidualFixBlock, self).__init__() - self.act_fn = activation - self.m = nn.Sequential( - conv( - in_channels, - out_channels, - kernel_size, - padding=padding, - dilation=dilation, - groups=groups, - ), - activation, - # conv(out_channels, out_channels, kernel_size, padding=(kernel_size - 1) // 2, dilation=1, groups=groups), - conv( - in_channels, - out_channels, - kernel_size, - padding=padding, - dilation=dilation, - groups=groups, - ), - ) - - def forward(self, x): - out = self.m(x) - return self.act_fn(out + x) - - -class ConvBlock(BaseModule): - def __init__( - self, - in_channels, - out_channels, - kernel_size=3, - padding=1, - dilation=1, - groups=1, - activation=nn.SELU(), - conv=nn.Conv2d, - ): - super(ConvBlock, self).__init__() - self.m = nn.Sequential( - conv( - in_channels, - out_channels, - kernel_size, - padding=padding, - dilation=dilation, - groups=groups, - ), - activation, - ) - - def forward(self, x): - return self.m(x) - - -class UpSampleBlock(BaseModule): - def __init__(self, channels, scale, activation, atrous_rate=1, conv=nn.Conv2d): - assert scale in [2, 4, 8], "Currently UpSampleBlock supports 2, 4, 8 scaling" - super(UpSampleBlock, self).__init__() - m = nn.Sequential( - conv( - channels, - 4 * channels, - kernel_size=3, - padding=atrous_rate, - dilation=atrous_rate, - ), - activation, - nn.PixelShuffle(2), - ) - self.m = nn.Sequential(*[m for _ in range(int(log(scale, 2)))]) - - def forward(self, x): - return self.m(x) - - -class SpatialChannelSqueezeExcitation(BaseModule): - # https://arxiv.org/abs/1709.01507 - # https://arxiv.org/pdf/1803.02579v1.pdf - def __init__(self, in_channel, reduction=16, activation=nn.ReLU()): - super(SpatialChannelSqueezeExcitation, self).__init__() - linear_nodes = max(in_channel // reduction, 4) # avoid only 1 node case - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.channel_excite = nn.Sequential( - # check the paper for the number 16 in reduction. It is selected by experiment. 
- nn.Linear(in_channel, linear_nodes), - activation, - nn.Linear(linear_nodes, in_channel), - nn.Sigmoid(), - ) - self.spatial_excite = nn.Sequential( - nn.Conv2d(in_channel, 1, kernel_size=1, stride=1, padding=0, bias=False), - nn.Sigmoid(), - ) - - def forward(self, x): - b, c, h, w = x.size() - # - channel = self.avg_pool(x).view(b, c) - # channel = F.avg_pool2d(x, kernel_size=(h,w)).view(b,c) # used for porting to other frameworks - cSE = self.channel_excite(channel).view(b, c, 1, 1) - x_cSE = torch.mul(x, cSE) - - # spatial - sSE = self.spatial_excite(x) - x_sSE = torch.mul(x, sSE) - # return x_sSE - return torch.add(x_cSE, x_sSE) - - -class PartialConv(nn.Module): - # reference: - # Image Inpainting for Irregular Holes Using Partial Convolutions - # http://masc.cs.gmu.edu/wiki/partialconv/show?time=2018-05-24+21%3A41%3A10 - # https://github.com/naoto0804/pytorch-inpainting-with-partial-conv/blob/master/net.py - # https://github.com/SeitaroShinagawa/chainer-partial_convolution_image_inpainting/blob/master/common/net.py - # partial based padding - # https: // github.com / NVIDIA / partialconv / blob / master / models / pd_resnet.py - def __init__( - self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True, - ): - - super(PartialConv, self).__init__() - self.feature_conv = nn.Conv2d( - in_channels, - out_channels, - kernel_size, - stride, - padding, - dilation, - groups, - bias, - ) - - self.mask_conv = nn.Conv2d( - 1, 1, kernel_size, stride, padding, dilation, groups, bias=False - ) - self.window_size = self.mask_conv.kernel_size[0] * self.mask_conv.kernel_size[1] - torch.nn.init.constant_(self.mask_conv.weight, 1.0) - - for param in self.mask_conv.parameters(): - param.requires_grad = False - - def forward(self, x): - output = self.feature_conv(x) - if self.feature_conv.bias is not None: - output_bias = self.feature_conv.bias.view(1, -1, 1, 1).expand_as(output) - else: - output_bias = torch.zeros_like(output, device=x.device) - - with torch.no_grad(): - ones = torch.ones(1, 1, x.size(2), x.size(3), device=x.device) - output_mask = self.mask_conv(ones) - output_mask = self.window_size / output_mask - output = (output - output_bias) * output_mask + output_bias - - return output diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/process_data/clean_histogram.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/process_data/clean_histogram.py deleted file mode 100644 index e24e073dc0eb43c76e2ce717f52bb848c5b026b8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/process_data/clean_histogram.py +++ /dev/null @@ -1,52 +0,0 @@ -import argparse - -parser = argparse.ArgumentParser() -parser.add_argument('--src', type=str, help='Source language') -parser.add_argument('--tgt', type=str, help='Target language') -parser.add_argument('--src-file', type=str, help='Input source file') -parser.add_argument('--tgt-file', type=str, help='Input target file') -parser.add_argument('--src-output-file', type=str, help='Output source file') -parser.add_argument('--tgt-output-file', type=str, help='Output target file') -parser.add_argument('--threshold', type=float, default=0.5, help='Threshold') -parser.add_argument('--threshold-character', type=str, default=']', help='Threshold character') -parser.add_argument('--histograms', type=str, help='Path to histograms') - -args = parser.parse_args() - - -def read_hist(f): - ch = [] - for line in f: - c = line[0] - 
if c == args.threshold_character: - break - ch.append(c) - return ch - - -with(open("{}/{}".format(args.histograms, args.src), 'r', encoding='utf8')) as f: - ch1 = read_hist(f) - -with(open("{}/{}".format(args.histograms, args.tgt), 'r', encoding='utf8')) as f: - ch2 = read_hist(f) - -print("Accepted characters for {}: {}".format(args.src, ch1)) -print("Accepted characters for {}: {}".format(args.tgt, ch2)) - -with open(args.src_file, 'r', encoding='utf8') as fs1, open(args.tgt_file, 'r', encoding='utf8') as fs2, open(args.src_output_file, 'w', encoding='utf8') as fos1, open(args.tgt_output_file, 'w', encoding='utf8') as fos2: - ls1 = fs1.readline() - ls2 = fs2.readline() - - while ls1 or ls2: - cnt1 = len([c for c in ls1.strip() if c in ch1]) - cnt2 = len([c for c in ls2.strip() if c in ch2]) - - if cnt1 / len(ls1) > args.threshold and cnt2 / len(ls2) > args.threshold: - fos1.write(ls1) - fos2.write(ls2) - else: - print("{} {} {} \n{} {} {}".format(args.src, cnt1 / len(ls1), ls1.strip(), args.tgt, cnt2 / len(ls2), ls2.strip())) - - ls1 = fs1.readline() - ls2 = fs2.readline() - \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_text_joint_to_text/criterions/text_guide_cross_entropy_acc.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_text_joint_to_text/criterions/text_guide_cross_entropy_acc.py deleted file mode 100644 index 0d356e5a10241716b58a5bc04a9d204a72553ff8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_text_joint_to_text/criterions/text_guide_cross_entropy_acc.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-import math - -import torch -import torch.nn.functional as F -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.criterions.label_smoothed_cross_entropy import label_smoothed_nll_loss -from fairseq import metrics, utils - - -@register_criterion("guided_label_smoothed_cross_entropy_with_accuracy") -class GuidedCrossEntAccCriterion(FairseqCriterion): - def __init__( - self, - task, - sentence_avg, - guide_alpha, - text_input_cost_ratio, - label_smoothing, - disable_text_guide_update_num=0, - attentive_cost_regularization=0, - ): - """ - guide_alpha: alpha to inteplate nll and kd loss - text_input_cost_ratio: loss ratio for text only input data - label_smoothing: label smoothing ratio - disable_text_guide_update_num: only use nll loss for the first N updates - attentive_cost_regularization: ratio fo attentive cost - """ - super().__init__(task) - self.alpha = guide_alpha - self.attn_beta = attentive_cost_regularization - self.sentence_avg = sentence_avg - self.eps = label_smoothing - self.text_input_cost_ratio = text_input_cost_ratio - self.disable_update_num = disable_text_guide_update_num - assert self.alpha >= 0 and self.alpha <= 1.0 - - @staticmethod - def add_args(parser): - """Add criterion-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--label-smoothing', default=0., type=float, metavar='D', - help='epsilon for label smoothing, 0 means no label smoothing') - # fmt: off - parser.add_argument('--guide-alpha', default=0., type=float, metavar='D', - help='alpha to merge kd cost from text to speech input with ce loss') - # fmt: off - parser.add_argument('--disable-text-guide-update-num', default=0, type=int, metavar='D', - help='disable guided target from text for the first N updates.') - parser.add_argument("--attentive-cost-regularization", default=0.0, type=float, metavar='D', - help="use encoder attentive loss regularization with cost ratio D") - parser.add_argument("--attentive-cost-without-normalize", action='store_true', - help="Don't do normalization during attentive cost computation") - - def forward(self, model, sample, reduce=True): - reduction = 'sum' if reduce else 'none' - net_input = sample["net_input"] - net_output = model(**net_input) - attn_cost = None - lprobs = model.get_normalized_probs(net_output, log_probs=True) - is_dual_input = True if net_input['src_tokens'] is not None and net_input.get('src_txt_tokens') is not None else False - target = model.get_targets(sample, net_output) - src_token_num = 0 - if is_dual_input: - # lprobs_spch from speech encoder and lprobs_text from text encoder - lprobs_spch, lprobs_text = torch.chunk(lprobs, 2) - lprobs_spch.batch_first = lprobs.batch_first - lprobs_text.batch_first = lprobs.batch_first - - speech_loss, speech_nll_loss, speech_correct, speech_total = \ - self.guide_loss_and_acc(model, lprobs_spch, lprobs_text, target, reduce=(reduction == 'sum')) - text_loss, text_nll_loss, text_correct, text_total = self.compute_loss_and_acc(model, lprobs_text, target, reduction=reduction) - loss = (speech_loss + text_loss) - nll_loss = (speech_nll_loss + text_nll_loss) - correct = speech_correct + text_correct - total = speech_total + text_total - - attn_cost = net_output[1].get('attn_cost') - if attn_cost is not None: - # attn_cost is batch_first and padding tokens have been masked already - src_token_num = attn_cost.ne(0).sum() - attn_cost = attn_cost.sum() - loss = loss + attn_cost * self.attn_beta - else: - attn_cost = 0 - else: - loss, nll_loss, correct, total = 
self.compute_loss_and_acc(model, lprobs, target, reduction=reduction) - if sample["net_input"]['src_tokens'] is None: # text input only - loss = loss * self.text_input_cost_ratio - speech_loss = None - speech_nll_loss = None - - sample_size, logging_output = self.get_logging_output( - sample, loss, nll_loss, correct, total, src_token_num, speech_loss, speech_nll_loss, attn_cost, is_dual_input - ) - return loss, sample_size, logging_output - - def compute_loss_and_acc(self, model, lprobs, target, reduction='sum'): - if not lprobs.batch_first: - lprobs = lprobs.transpose(0, 1) - lprobs = lprobs.view(-1, lprobs.size(-1)) # -> (B x T) x C - target = target.view(-1) - loss, nll_loss = label_smoothed_nll_loss( - lprobs, target, self.eps, ignore_index=self.padding_idx, reduce=(reduction == 'sum'), - ) - - mask = target.ne(self.padding_idx) - correct = torch.sum(lprobs.argmax(1).masked_select(mask).eq(target.masked_select(mask))) - total = torch.sum(mask) - return loss, nll_loss, correct, total - - def guide_loss_and_acc(self, model, lprobs, lprobs_teacher, target, reduce=True): - """ lprobs_teacher is used as guide for lprobs """ - if self.alpha == 0.0 or model.num_updates < self.disable_update_num: - return self.compute_loss_and_acc(model, lprobs, target, reduction=('sum' if reduce else 'none')) - if not lprobs.batch_first: - lprobs = lprobs.transpose(0, 1) - lprobs_teacher = lprobs_teacher.transpose(0, 1) - - lprobs = lprobs.view(-1, lprobs.size(-1)).float() # -> (B x T) x C - lprobs_teacher = lprobs_teacher.view(-1, lprobs_teacher.size(-1)).float() # -> (B x T) x C - target = target.view(-1) - loss = F.nll_loss(lprobs, target, ignore_index=self.padding_idx, reduction='sum' if reduce else 'none') - nll_loss = loss - probs_teacher = lprobs_teacher.exp().masked_fill_(target.unsqueeze(-1).eq(self.padding_idx), 0) - probs_teacher = probs_teacher.detach() - guide_loss = -(probs_teacher*lprobs).sum() if reduce else -(probs_teacher*lprobs).sum(-1, keepdim=True) - loss = self.alpha*guide_loss + (1.0 - self.alpha)*loss - - mask = target.ne(self.padding_idx) - correct = torch.sum(lprobs.argmax(1).masked_select(mask).eq(target.masked_select(mask))) - total = torch.sum(mask) - return loss, nll_loss, correct, total - - def get_logging_output( - self, - sample, - loss, - nll_loss, - correct, - total, - src_token_num=0, - speech_loss=None, - speech_nll_loss=None, - attn_cost=None, - is_dual_input=False, - ): - - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - mul_size = 2 if is_dual_input else 1 - - logging_output = { - "loss": utils.item(loss.data), # * sample['ntokens'], - "nll_loss": utils.item(nll_loss.data), # * sample['ntokens'], - "ntokens": sample["ntokens"]*mul_size, - "nsentences": sample["target"].size(0)*mul_size, - "sample_size": sample_size*mul_size, - "correct": utils.item(correct.data), - "total": utils.item(total.data), - "src_token_num": utils.item(src_token_num.data) if src_token_num > 0 else 0, - "nframes": torch.sum(sample["net_input"]["src_lengths"]).item(), - } - - if speech_loss is not None: - logging_output["speech_loss"] = utils.item(speech_loss.data) - logging_output["speech_nll_loss"] = utils.item(speech_nll_loss.data) - logging_output["sample_size_speech_cost"] = sample_size - logging_output["speech_attn_loss"] = attn_cost - - return sample_size*mul_size, logging_output - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - """Aggregate logging outputs from data parallel training.""" - correct_sum = sum(log.get("correct", 
0) for log in logging_outputs) - total_sum = sum(log.get("total", 0) for log in logging_outputs) - src_token_sum = sum(log.get("src_token_num", 0) for log in logging_outputs) - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - nll_loss_sum = sum(log.get("nll_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - nframes = sum(log.get("nframes", 0) for log in logging_outputs) - speech_loss_sum = sum(log.get("speech_loss", 0) for log in logging_outputs) - speech_nll_loss_sum = sum(log.get("speech_nll_loss", 0) for log in logging_outputs) - speech_attn_loss_sum = sum(log.get("speech_attn_loss", 0) for log in logging_outputs) - sample_size_speech = sum(log.get("sample_size_speech_cost", 0) for log in logging_outputs) - - agg_output = { - "loss": loss_sum / sample_size / math.log(2) if sample_size > 0 else 0.0, - "nll_loss": nll_loss_sum / sample_size / math.log(2) if sample_size > 0 else 0.0, - # if args.sentence_avg, then sample_size is nsentences, and loss - # is per-sentence loss; else sample_size is ntokens, and the loss - # becomes per-output token loss - "speech_loss": speech_loss_sum / sample_size_speech / math.log(2) if sample_size_speech > 0 else 0.0, - "speech_nll_loss": speech_nll_loss_sum / sample_size_speech / math.log(2) if sample_size_speech > 0 else 0.0, - "speech_attn_loss": speech_attn_loss_sum / src_token_sum / math.log(2) if src_token_sum > 0 else 0.0, - "ntokens": ntokens, - "nsentences": nsentences, - "nframes": nframes, - "sample_size": sample_size, - "acc": correct_sum * 100.0 / total_sum if total_sum > 0 else 0.0, - "correct": correct_sum, - "total": total_sum, - "src_token_num": src_token_sum, - # total is the number of validate tokens - } - return agg_output - - @classmethod - def reduce_metrics(cls, logging_outputs): - """Aggregate logging outputs from data parallel training.""" - agg_logging_outputs = cls.aggregate_logging_outputs(logging_outputs) - for k, v in agg_logging_outputs.items(): - if k in {'nsentences', 'ntokens', 'sample_size'}: - continue - metrics.log_scalar(k, v, round=3) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/truncated_bptt/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/truncated_bptt/README.md deleted file mode 100644 index 86518c9d5ef09fbd4fed1512a52e9431b74f08fa..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/truncated_bptt/README.md +++ /dev/null @@ -1,70 +0,0 @@ -# Truncated Backpropagation Through Time (BPTT) - -Truncated BPTT is a useful technique for training language models on very long -sequences. Typically a long sequences is split into chunks and a language model -is trained over the chunks sequentially. The LM may condition on previous -chunks, but gradients only flow through the current chunk. This technique was -the basis for the paper: [Transformer-XL: Attentive Language Models Beyond a -Fixed-Length Context](https://arxiv.org/abs/1901.02860), which achieved -state-of-the-art language modeling results at the time of publication. - -It is slightly tricky to implement Truncated BPTT efficiently in fairseq, since -we need to iterate over the data sequentially and disable any batch shuffling -logic. 
The code provided in this example illustrates how to implement Truncated -BPTT in fairseq by overriding ``FairseqTask::get_batch_iterator`` to iterate -over the data sequentially. Crucially, this example supports batching and -multi-GPU (data parallel) training. - -##### 0. Setup - -First, see the general [language modeling README](README.md) for instructions on -preprocessing the WikiText-103 data. - -##### 1. Train a Transformer-XL model on WikiText-103 - -We will train a 16-layer Transformer-XL model following the [hyperparameters -used in the original -paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_wt103_base.sh). - -The following command assumes 4 GPUs, so that the total batch size is 60 -sequences (15 x 4). Training should take ~24 hours on 4 V100 GPUs: -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train \ - --user-dir examples/truncated_bptt \ - data-bin/wikitext-103/ \ - --task truncated_bptt_lm --tokens-per-sample 150 \ - --batch-size 15 --max-update 200000 \ - --arch transformer_xl --n-layer 16 --d-model 410 --n-head 10 \ - --d-head 41 --d-inner 2100 --dropout 0.1 --dropatt 0.0 --mem-len 150 \ - --optimizer adam --clip-norm 0.25 \ - --lr-scheduler cosine --warmup-updates 0 --min-lr 0.0 --lr 0.00025 \ - --log-format json --log-interval 25 \ - --fp16 -``` - -If training on a single GPU, set `--update-freq=4` to accumulate 4x gradients -and simulate training on 4 GPUs. - -##### 2. Evaluate - -```bash -fairseq-eval-lm data-bin/wikitext-103/ \ - --path checkpoints/checkpoint_best.pt \ - --user-dir examples/truncated_bptt/ \ - --task truncated_bptt_lm \ - --batch-size 1 --required-batch-size-multiple 1 \ - --model-overrides '{"mem_len":640,"clamp_len":400,"same_length":True}' \ - --tokens-per-sample 64 -# ... | INFO | fairseq_cli.eval_lm | num. model params: 151123537 -# ... | INFO | fairseq_cli.eval_lm | Evaluated 245569 tokens in 83.1s (2956.82 tokens/s) -# ... | INFO | fairseq_cli.eval_lm | Loss (base 2): 4.5668, Perplexity: 23.70 -# Compare to 24.0 test perplexity from the paper -``` - -*Note:* During training the model saw 150 tokens of context -(``--tokens-per-sample=150``) and 150 extra memory tokens (``--mem-len=150``). -During evaluation we measure perplexity on sequences of 64 tokens -(``--tokens-per-sample=64``) and increase the memory length -(``--model-overrides='{"mem_len":640}'``). These settings match the evaluation -settings from [the original -paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_wt103_base.sh). diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/label_smoothed_cross_entropy_latency_augmented.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/label_smoothed_cross_entropy_latency_augmented.py deleted file mode 100644 index 223a16f740c10b58ea45a0390814363e7b5f68b8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/label_smoothed_cross_entropy_latency_augmented.py +++ /dev/null @@ -1,233 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass, field -import torch -from fairseq import metrics, utils -from fairseq.criterions import register_criterion -from fairseq.criterions.label_smoothed_cross_entropy import ( - LabelSmoothedCrossEntropyCriterion, - LabelSmoothedCrossEntropyCriterionConfig -) - -try: - from simuleval.metrics.latency import ( - AverageLagging, - AverageProportion, - DifferentiableAverageLagging - ) - LATENCY_METRICS = { - "average_lagging": AverageLagging, - "average_proportion": AverageProportion, - "differentiable_average_lagging": DifferentiableAverageLagging, - } -except ImportError: - LATENCY_METRICS = None - - -@dataclass -class LabelSmoothedCrossEntropyCriterionLatencyAugmentConfig( - LabelSmoothedCrossEntropyCriterionConfig -): - latency_avg_weight: float = field( - default=0.0, - metadata={"help": "weight fot average latency loss."}, - ) - latency_var_weight: float = field( - default=0.0, - metadata={"help": "weight fot variance latency loss."}, - ) - latency_avg_type: str = field( - default="differentiable_average_lagging", - metadata={"help": "latency type for average loss"}, - ) - latency_var_type: str = field( - default="variance_delay", - metadata={"help": "latency typ for variance loss"}, - ) - latency_gather_method: str = field( - default="weighted_average", - metadata={"help": "method to gather latency loss for all heads"}, - ) - latency_update_after: int = field( - default=0, - metadata={"help": "Add latency loss after certain steps"}, - ) - -@register_criterion( - "latency_augmented_label_smoothed_cross_entropy", - dataclass=LabelSmoothedCrossEntropyCriterionLatencyAugmentConfig -) -class LatencyAugmentedLabelSmoothedCrossEntropyCriterion( - LabelSmoothedCrossEntropyCriterion -): - def __init__( - self, - task, - sentence_avg, - label_smoothing, - ignore_prefix_size, - report_accuracy, - latency_avg_weight, - latency_var_weight, - latency_avg_type, - latency_var_type, - latency_gather_method, - latency_update_after, - ): - super().__init__( - task, sentence_avg, label_smoothing, ignore_prefix_size, report_accuracy - ) - assert LATENCY_METRICS is not None, "Please make sure SimulEval is installed." - - self.latency_avg_weight = latency_avg_weight - self.latency_var_weight = latency_var_weight - self.latency_avg_type = latency_avg_type - self.latency_var_type = latency_var_type - self.latency_gather_method = latency_gather_method - self.latency_update_after = latency_update_after - - def forward(self, model, sample, reduce=True): - net_output = model(**sample["net_input"]) - # 1. Compute cross entropy loss - loss, nll_loss = self.compute_loss(model, net_output, sample, reduce=reduce) - - # 2. 
Compute cross latency loss - latency_loss, expected_latency, expected_delays_var = self.compute_latency_loss( - model, sample, net_output - ) - - if self.latency_update_after > 0: - num_updates = getattr(model.decoder, "num_updates", None) - assert num_updates is not None, ( - "model.decoder doesn't have attribute 'num_updates'" - ) - if num_updates <= self.latency_update_after: - latency_loss = 0 - - loss += latency_loss - - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - - logging_output = { - "loss": loss.data, - "nll_loss": nll_loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - "latency": expected_latency, - "delays_var": expected_delays_var, - "latency_loss": latency_loss, - } - - if self.report_accuracy: - n_correct, total = self.compute_accuracy(model, net_output, sample) - logging_output["n_correct"] = utils.item(n_correct.data) - logging_output["total"] = utils.item(total.data) - return loss, sample_size, logging_output - - def compute_latency_loss(self, model, sample, net_output): - assert ( - net_output[-1].encoder_padding_mask is None - or not net_output[-1].encoder_padding_mask[:, 0].any() - ), ( - "Only right padding on source is supported." - ) - # 1. Obtain the expected alignment - alpha_list = [item["alpha"] for item in net_output[1].attn_list] - num_layers = len(alpha_list) - bsz, num_heads, tgt_len, src_len = alpha_list[0].size() - - # bsz * num_layers * num_heads, tgt_len, src_len - alpha_all = torch.cat(alpha_list, dim=1).view(-1, tgt_len, src_len) - - # 2 compute expected delays - # bsz * num_heads * num_layers, tgt_len, src_len for MMA - steps = ( - torch.arange(1, 1 + src_len) - .unsqueeze(0) - .unsqueeze(1) - .expand_as(alpha_all) - .type_as(alpha_all) - ) - - expected_delays = torch.sum(steps * alpha_all, dim=-1) - - target_padding_mask = ( - model.get_targets(sample, net_output) - .eq(self.padding_idx) - .unsqueeze(1) - .expand(bsz, num_layers * num_heads, tgt_len) - .contiguous() - .view(-1, tgt_len) - ) - - src_lengths = ( - sample["net_input"]["src_lengths"] - .unsqueeze(1) - .expand(bsz, num_layers * num_heads) - .contiguous() - .view(-1) - ) - expected_latency = LATENCY_METRICS[self.latency_avg_type]( - expected_delays, src_lengths, None, - target_padding_mask=target_padding_mask - ) - - # 2.1 average expected latency of heads - # bsz, num_layers * num_heads - expected_latency = expected_latency.view(bsz, -1) - if self.latency_gather_method == "average": - # bsz * tgt_len - expected_latency = expected_delays.mean(dim=1) - elif self.latency_gather_method == "weighted_average": - weights = torch.nn.functional.softmax(expected_latency, dim=1) - expected_latency = torch.sum(expected_latency * weights, dim=1) - elif self.latency_gather_method == "max": - expected_latency = expected_latency.max(dim=1)[0] - else: - raise NotImplementedError - - expected_latency = expected_latency.sum() - avg_loss = self.latency_avg_weight * expected_latency - - # 2.2 variance of expected delays - expected_delays_var = ( - expected_delays.view(bsz, -1, tgt_len).var(dim=1).mean(dim=1) - ) - expected_delays_var = expected_delays_var.sum() - var_loss = self.latency_avg_weight * expected_delays_var - - # 3. 
Final loss - latency_loss = avg_loss + var_loss - - return latency_loss, expected_latency, expected_delays_var - - @classmethod - def reduce_metrics(cls, logging_outputs) -> None: - super().reduce_metrics(logging_outputs) - latency = sum( - log.get("latency", 0) for log in logging_outputs - ) - delays_var = sum( - log.get("delays_var", 0) for log in logging_outputs - ) - latency_loss = sum( - log.get("latency_loss", 0) for log in logging_outputs - ) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - metrics.log_scalar( - "latency", latency.float() / nsentences, nsentences, round=3 - ) - metrics.log_scalar( - "delays_var", delays_var / nsentences, - nsentences, round=3 - ) - metrics.log_scalar( - "latency_loss", latency_loss / nsentences, - nsentences, round=3 - ) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/cosine_lr_scheduler.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/cosine_lr_scheduler.py deleted file mode 100644 index 51f58359eda387d67748f48217906ac6d16ccd08..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/cosine_lr_scheduler.py +++ /dev/null @@ -1,147 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from collections.abc import Collection -from dataclasses import dataclass, field -from typing import List - -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class CosineLRScheduleConfig(FairseqDataclass): - warmup_updates: int = field( - default=0, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - warmup_init_lr: float = field( - default=-1, - metadata={ - "help": "initial learning rate during warmup phase; default is cfg.lr" - }, - ) - lr: List[float] = field( - default=II("optimization.lr"), - metadata={"help": "max learning rate, must be more than cfg.min_lr"}, - ) - min_lr: float = field(default=0.0, metadata={"help": "min learning rate"}) - t_mult: float = field( - default=1.0, metadata={"help": "factor to grow the length of each period"} - ) - lr_period_updates: float = field( - default=-1, metadata={"help": "initial number of updates per period"} - ) - lr_shrink: float = field( - default=0.1, metadata={"help": "shrink factor for annealing"} - ) - # This is not required, but is for convenience in inferring lr_period_updates - max_update: int = II("optimization.max_update") - - -@register_lr_scheduler("cosine", dataclass=CosineLRScheduleConfig) -class CosineLRSchedule(FairseqLRScheduler): - """Assign LR based on a cyclical schedule that follows the cosine function. - - See https://arxiv.org/pdf/1608.03983.pdf for details. - - We also support a warmup phase where we linearly increase the learning rate - from some initial learning rate (``--warmup-init-lr``) until the configured - max learning rate (``--lr``). - - During warmup:: - - lrs = torch.linspace(cfg.warmup_init_lr, cfg.lr, cfg.warmup_updates) - lr = lrs[update_num] - - After warmup:: - - lr = cfg.min_lr + 0.5*(cfg.lr - cfg.min_lr)*(1 + cos(t_curr / t_i)) - - where ``t_curr`` is current percentage of updates within the current period - range and ``t_i`` is the current period range, which is scaled by ``t_mul`` - after every iteration. 
- """ - - def __init__(self, cfg: CosineLRScheduleConfig, fairseq_optimizer): - super().__init__(cfg, fairseq_optimizer) - if isinstance(cfg.lr, Collection) and len(cfg.lr) > 1: - raise ValueError( - "Cannot use a fixed learning rate schedule with cosine." - f" Consider --lr-scheduler=fixed instead. ({cfg.lr})" - ) - - self.max_lr = cfg.lr[0] if isinstance(cfg.lr, Collection) else cfg.lr - assert ( - self.max_lr > cfg.min_lr - ), f"max_lr (={cfg.lr}) must be more than min_lr (={cfg.min_lr})" - - warmup_end_lr = self.max_lr - if cfg.warmup_init_lr < 0: - cfg.warmup_init_lr = cfg.min_lr - - self.t_mult = cfg.t_mult - self.period = cfg.lr_period_updates - - if self.period <= 0: - assert ( - cfg.max_update > 0 - ), "Either --max_update or --lr-period-updates must be set" - self.period = cfg.max_update - cfg.warmup_updates - - if cfg.warmup_updates > 0: - # linearly warmup for the first cfg.warmup_updates - self.lr_step = (warmup_end_lr - cfg.warmup_init_lr) / cfg.warmup_updates - else: - self.lr_step = 1 - - self.warmup_updates = cfg.warmup_updates - self.lr_shrink = cfg.lr_shrink - - # initial learning rate - self.lr = cfg.warmup_init_lr - self.optimizer.set_lr(self.lr) - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - super().step(epoch, val_loss) - # we don't change the learning rate at epoch boundaries - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - if num_updates < self.cfg.warmup_updates: - self.lr = self.cfg.warmup_init_lr + num_updates * self.lr_step - else: - curr_updates = num_updates - self.cfg.warmup_updates - if self.t_mult != 1: - i = math.floor( - math.log( - 1 - curr_updates / self.period * (1 - self.t_mult), self.t_mult - ) - ) - t_i = self.t_mult ** i * self.period - t_curr = ( - curr_updates - - (1 - self.t_mult ** i) / (1 - self.t_mult) * self.period - ) - else: - i = math.floor(curr_updates / self.period) - t_i = self.period - t_curr = curr_updates - (self.period * i) - - lr_shrink = self.lr_shrink ** i - min_lr = self.cfg.min_lr * lr_shrink - max_lr = self.max_lr * lr_shrink - - self.lr = min_lr + 0.5 * (max_lr - min_lr) * ( - 1 + math.cos(math.pi * t_curr / t_i) - ) - - self.optimizer.set_lr(self.lr) - return self.lr diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/data.py b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/data.py deleted file mode 100644 index 89a4ea4c9577e6131731444f149eec76978ec260..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/data.py +++ /dev/null @@ -1,168 +0,0 @@ -import glob -import os - -import cv2 -import PIL.Image as Image -import numpy as np - -from torch.utils.data import Dataset -import torch.nn.functional as F - - -def load_image(fname, mode='RGB', return_orig=False): - img = np.array(Image.open(fname).convert(mode)) - if img.ndim == 3: - img = np.transpose(img, (2, 0, 1)) - out_img = img.astype('float32') / 255 - if return_orig: - return out_img, img - else: - return out_img - - -def ceil_modulo(x, mod): - if x % mod == 0: - return x - return (x // mod + 1) * mod - - -def pad_img_to_modulo(img, mod): - channels, height, width = img.shape - out_height = ceil_modulo(height, mod) - out_width = ceil_modulo(width, mod) - return np.pad(img, ((0, 0), (0, out_height - height), (0, out_width - width)), mode='symmetric') - - -def pad_tensor_to_modulo(img, mod): - batch_size, channels, 
height, width = img.shape - out_height = ceil_modulo(height, mod) - out_width = ceil_modulo(width, mod) - return F.pad(img, pad=(0, out_width - width, 0, out_height - height), mode='reflect') - - -def scale_image(img, factor, interpolation=cv2.INTER_AREA): - if img.shape[0] == 1: - img = img[0] - else: - img = np.transpose(img, (1, 2, 0)) - - img = cv2.resize(img, dsize=None, fx=factor, fy=factor, interpolation=interpolation) - - if img.ndim == 2: - img = img[None, ...] - else: - img = np.transpose(img, (2, 0, 1)) - return img - - -class InpaintingDataset(Dataset): - def __init__(self, datadir, img_suffix='.jpg', pad_out_to_modulo=None, scale_factor=None): - self.datadir = datadir - self.mask_filenames = sorted(list(glob.glob(os.path.join(self.datadir, '**', '*mask*.png'), recursive=True))) - self.img_filenames = [fname.rsplit('_mask', 1)[0] + img_suffix for fname in self.mask_filenames] - self.pad_out_to_modulo = pad_out_to_modulo - self.scale_factor = scale_factor - - def __len__(self): - return len(self.mask_filenames) - - def __getitem__(self, i): - image = load_image(self.img_filenames[i], mode='RGB') - mask = load_image(self.mask_filenames[i], mode='L') - result = dict(image=image, mask=mask[None, ...]) - - if self.scale_factor is not None: - result['image'] = scale_image(result['image'], self.scale_factor) - result['mask'] = scale_image(result['mask'], self.scale_factor, interpolation=cv2.INTER_NEAREST) - - if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1: - result['unpad_to_size'] = result['image'].shape[1:] - result['image'] = pad_img_to_modulo(result['image'], self.pad_out_to_modulo) - result['mask'] = pad_img_to_modulo(result['mask'], self.pad_out_to_modulo) - - return result - -class OurInpaintingDataset(Dataset): - def __init__(self, datadir, img_suffix='.jpg', pad_out_to_modulo=None, scale_factor=None): - self.datadir = datadir - self.mask_filenames = sorted(list(glob.glob(os.path.join(self.datadir, 'mask', '**', '*mask*.png'), recursive=True))) - self.img_filenames = [os.path.join(self.datadir, 'img', os.path.basename(fname.rsplit('-', 1)[0].rsplit('_', 1)[0]) + '.png') for fname in self.mask_filenames] - self.pad_out_to_modulo = pad_out_to_modulo - self.scale_factor = scale_factor - - def __len__(self): - return len(self.mask_filenames) - - def __getitem__(self, i): - result = dict(image=load_image(self.img_filenames[i], mode='RGB'), - mask=load_image(self.mask_filenames[i], mode='L')[None, ...]) - - if self.scale_factor is not None: - result['image'] = scale_image(result['image'], self.scale_factor) - result['mask'] = scale_image(result['mask'], self.scale_factor) - - if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1: - result['image'] = pad_img_to_modulo(result['image'], self.pad_out_to_modulo) - result['mask'] = pad_img_to_modulo(result['mask'], self.pad_out_to_modulo) - - return result - -class PrecomputedInpaintingResultsDataset(InpaintingDataset): - def __init__(self, datadir, predictdir, inpainted_suffix='_inpainted.jpg', **kwargs): - super().__init__(datadir, **kwargs) - if not datadir.endswith('/'): - datadir += '/' - self.predictdir = predictdir - self.pred_filenames = [os.path.join(predictdir, os.path.splitext(fname[len(datadir):])[0] + inpainted_suffix) - for fname in self.mask_filenames] - - def __getitem__(self, i): - result = super().__getitem__(i) - result['inpainted'] = load_image(self.pred_filenames[i]) - if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1: - result['inpainted'] = 
pad_img_to_modulo(result['inpainted'], self.pad_out_to_modulo) - return result - -class OurPrecomputedInpaintingResultsDataset(OurInpaintingDataset): - def __init__(self, datadir, predictdir, inpainted_suffix="png", **kwargs): - super().__init__(datadir, **kwargs) - if not datadir.endswith('/'): - datadir += '/' - self.predictdir = predictdir - self.pred_filenames = [os.path.join(predictdir, os.path.basename(os.path.splitext(fname)[0]) + f'_inpainted.{inpainted_suffix}') - for fname in self.mask_filenames] - # self.pred_filenames = [os.path.join(predictdir, os.path.splitext(fname[len(datadir):])[0] + inpainted_suffix) - # for fname in self.mask_filenames] - - def __getitem__(self, i): - result = super().__getitem__(i) - result['inpainted'] = self.file_loader(self.pred_filenames[i]) - - if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1: - result['inpainted'] = pad_img_to_modulo(result['inpainted'], self.pad_out_to_modulo) - return result - -class InpaintingEvalOnlineDataset(Dataset): - def __init__(self, indir, mask_generator, img_suffix='.jpg', pad_out_to_modulo=None, scale_factor=None, **kwargs): - self.indir = indir - self.mask_generator = mask_generator - self.img_filenames = sorted(list(glob.glob(os.path.join(self.indir, '**', f'*{img_suffix}' ), recursive=True))) - self.pad_out_to_modulo = pad_out_to_modulo - self.scale_factor = scale_factor - - def __len__(self): - return len(self.img_filenames) - - def __getitem__(self, i): - img, raw_image = load_image(self.img_filenames[i], mode='RGB', return_orig=True) - mask = self.mask_generator(img, raw_image=raw_image) - result = dict(image=img, mask=mask) - - if self.scale_factor is not None: - result['image'] = scale_image(result['image'], self.scale_factor) - result['mask'] = scale_image(result['mask'], self.scale_factor, interpolation=cv2.INTER_NEAREST) - - if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1: - result['image'] = pad_img_to_modulo(result['image'], self.pad_out_to_modulo) - result['mask'] = pad_img_to_modulo(result['mask'], self.pad_out_to_modulo) - return result \ No newline at end of file diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/utils/__init__.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/utils/__init__.py deleted file mode 100644 index 130d3011b032f91df1a9cf965625e54922f6c81b..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/utils/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from .events import setup_wandb, WandbWriter \ No newline at end of file diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/pspnet_unet_s5-d16.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/pspnet_unet_s5-d16.py deleted file mode 100644 index fcff9ec4f41fad158344ecd77313dc14564f3682..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/pspnet_unet_s5-d16.py +++ /dev/null @@ -1,50 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UNet', - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False), - decode_head=dict( - type='PSPHead', - in_channels=64, - in_index=4, - channels=16, - pool_scales=(1, 2, 3, 6), - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=128, - in_index=3, - channels=64, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide', crop_size=256, stride=170)) diff --git a/spaces/PYTHONOPTIC/FOCUSGUMMY/README.md b/spaces/PYTHONOPTIC/FOCUSGUMMY/README.md deleted file mode 100644 index 28c3d5c75dd5564749dbd8f09a05a2ae53fb1328..0000000000000000000000000000000000000000 --- a/spaces/PYTHONOPTIC/FOCUSGUMMY/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: PYTHONOPTIC -emoji: 🐨 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/file-cache.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/file-cache.go deleted file mode 100644 index 052b454be5ec00dd8a70328d9b808efd95d4366c..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/file-cache.go and /dev/null differ diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/layers/batch_norm.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/layers/batch_norm.py deleted file mode 100644 index 03ce61030aa6f66bd50c1c82431c7b4faed0be15..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/layers/batch_norm.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-import torch -from torch import nn - -import torch.distributed as dist -import maskrcnn_benchmark.utils.comm as comm -from torch.autograd.function import Function - -class FrozenBatchNorm2d(nn.Module): - """ - BatchNorm2d where the batch statistics and the affine parameters - are fixed - """ - - def __init__(self, n): - super(FrozenBatchNorm2d, self).__init__() - self.register_buffer("weight", torch.ones(n)) - self.register_buffer("bias", torch.zeros(n)) - self.register_buffer("running_mean", torch.zeros(n)) - self.register_buffer("running_var", torch.ones(n)) - - def forward(self, x): - scale = self.weight * self.running_var.rsqrt() - bias = self.bias - self.running_mean * scale - scale = scale.reshape(1, -1, 1, 1) - bias = bias.reshape(1, -1, 1, 1) - return x * scale + bias - - -class AllReduce(Function): - @staticmethod - def forward(ctx, input): - input_list = [torch.zeros_like(input) for k in range(dist.get_world_size())] - # Use allgather instead of allreduce since I don't trust in-place operations .. - dist.all_gather(input_list, input, async_op=False) - inputs = torch.stack(input_list, dim=0) - return torch.sum(inputs, dim=0) - - @staticmethod - def backward(ctx, grad_output): - dist.all_reduce(grad_output, async_op=False) - return grad_output - - -class NaiveSyncBatchNorm2d(nn.BatchNorm2d): - """ - In PyTorch<=1.5, ``nn.SyncBatchNorm`` has incorrect gradient - when the batch size on each worker is different. - (e.g., when scale augmentation is used, or when it is applied to mask head). - - This is a slower but correct alternative to `nn.SyncBatchNorm`. - - Note: - There isn't a single definition of Sync BatchNorm. - - When ``stats_mode==""``, this module computes overall statistics by using - statistics of each worker with equal weight. The result is true statistics - of all samples (as if they are all on one worker) only when all workers - have the same (N, H, W). This mode does not support inputs with zero batch size. - - When ``stats_mode=="N"``, this module computes overall statistics by weighting - the statistics of each worker by their ``N``. The result is true statistics - of all samples (as if they are all on one worker) only when all workers - have the same (H, W). It is slower than ``stats_mode==""``. - - Even though the result of this module may not be the true statistics of all samples, - it may still be reasonable because it might be preferrable to assign equal weights - to all workers, regardless of their (H, W) dimension, instead of putting larger weight - on larger images. From preliminary experiments, little difference is found between such - a simplified implementation and an accurate computation of overall mean & variance. - """ - - def __init__(self, *args, stats_mode="", **kwargs): - super().__init__(*args, **kwargs) - assert stats_mode in ["", "N"] - self._stats_mode = stats_mode - - def forward(self, input): - if comm.get_world_size() == 1 or not self.training: - return super().forward(input) - - B, C = input.shape[0], input.shape[1] - - mean = torch.mean(input, dim=[0, 2, 3]) - meansqr = torch.mean(input * input, dim=[0, 2, 3]) - - if self._stats_mode == "": - assert B > 0, 'SyncBatchNorm(stats_mode="") does not support zero batch size.' 
- vec = torch.cat([mean, meansqr], dim=0) - vec = AllReduce.apply(vec) * (1.0 / dist.get_world_size()) - mean, meansqr = torch.split(vec, C) - momentum = self.momentum - else: - if B == 0: - vec = torch.zeros([2 * C + 1], device=mean.device, dtype=mean.dtype) - vec = vec + input.sum() # make sure there is gradient w.r.t input - else: - vec = torch.cat( - [mean, meansqr, torch.ones([1], device=mean.device, dtype=mean.dtype)], dim=0 - ) - vec = AllReduce.apply(vec * B) - - total_batch = vec[-1].detach() - momentum = total_batch.clamp(max=1) * self.momentum # no update if total_batch is 0 - total_batch = torch.max(total_batch, torch.ones_like(total_batch)) # avoid div-by-zero - mean, meansqr, _ = torch.split(vec / total_batch, C) - - var = meansqr - mean * mean - invstd = torch.rsqrt(var + self.eps) - scale = self.weight * invstd - bias = self.bias - mean * scale - scale = scale.reshape(1, -1, 1, 1) - bias = bias.reshape(1, -1, 1, 1) - - self.running_mean += momentum * (mean.detach() - self.running_mean) - self.running_var += momentum * (var.detach() - self.running_var) - return input * scale + bias \ No newline at end of file diff --git a/spaces/Plurigrid/LifeSim/src/components/ui/menubar.tsx b/spaces/Plurigrid/LifeSim/src/components/ui/menubar.tsx deleted file mode 100644 index d57454816cea9b7572ad1ae6ab139d6946c4d5d5..0000000000000000000000000000000000000000 --- a/spaces/Plurigrid/LifeSim/src/components/ui/menubar.tsx +++ /dev/null @@ -1,236 +0,0 @@ -"use client" - -import * as React from "react" -import * as MenubarPrimitive from "@radix-ui/react-menubar" -import { Check, ChevronRight, Circle } from "lucide-react" - -import { cn } from "@/lib/utils" - -const MenubarMenu = MenubarPrimitive.Menu - -const MenubarGroup = MenubarPrimitive.Group - -const MenubarPortal = MenubarPrimitive.Portal - -const MenubarSub = MenubarPrimitive.Sub - -const MenubarRadioGroup = MenubarPrimitive.RadioGroup - -const Menubar = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -Menubar.displayName = MenubarPrimitive.Root.displayName - -const MenubarTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -MenubarTrigger.displayName = MenubarPrimitive.Trigger.displayName - -const MenubarSubTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, children, ...props }, ref) => ( - - {children} - - -)) -MenubarSubTrigger.displayName = MenubarPrimitive.SubTrigger.displayName - -const MenubarSubContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -MenubarSubContent.displayName = MenubarPrimitive.SubContent.displayName - -const MenubarContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->( - ( - { className, align = "start", alignOffset = -4, sideOffset = 8, ...props }, - ref - ) => ( - - - - ) -) -MenubarContent.displayName = MenubarPrimitive.Content.displayName - -const MenubarItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -MenubarItem.displayName = MenubarPrimitive.Item.displayName - -const MenubarCheckboxItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, checked, ...props }, ref) => ( - - - - - - - {children} - -)) -MenubarCheckboxItem.displayName = 
MenubarPrimitive.CheckboxItem.displayName - -const MenubarRadioItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -MenubarRadioItem.displayName = MenubarPrimitive.RadioItem.displayName - -const MenubarLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -MenubarLabel.displayName = MenubarPrimitive.Label.displayName - -const MenubarSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -MenubarSeparator.displayName = MenubarPrimitive.Separator.displayName - -const MenubarShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -MenubarShortcut.displayname = "MenubarShortcut" - -export { - Menubar, - MenubarMenu, - MenubarTrigger, - MenubarContent, - MenubarItem, - MenubarSeparator, - MenubarLabel, - MenubarCheckboxItem, - MenubarRadioGroup, - MenubarRadioItem, - MenubarPortal, - MenubarSubContent, - MenubarSubTrigger, - MenubarGroup, - MenubarSub, - MenubarShortcut, -} diff --git a/spaces/Podtekatel/Avatar2VSK/inference/center_crop.py b/spaces/Podtekatel/Avatar2VSK/inference/center_crop.py deleted file mode 100644 index 5ef5008869aa2882ea8c26b5dc72579b236ef644..0000000000000000000000000000000000000000 --- a/spaces/Podtekatel/Avatar2VSK/inference/center_crop.py +++ /dev/null @@ -1,24 +0,0 @@ -import numpy as np - - -# From albumentations -def center_crop(img: np.ndarray, crop_height: int, crop_width: int): - height, width = img.shape[:2] - if height < crop_height or width < crop_width: - raise ValueError( - "Requested crop size ({crop_height}, {crop_width}) is " - "larger than the image size ({height}, {width})".format( - crop_height=crop_height, crop_width=crop_width, height=height, width=width - ) - ) - x1, y1, x2, y2 = get_center_crop_coords(height, width, crop_height, crop_width) - img = img[y1:y2, x1:x2] - return img - - -def get_center_crop_coords(height: int, width: int, crop_height: int, crop_width: int): - y1 = (height - crop_height) // 2 - y2 = y1 + crop_height - x1 = (width - crop_width) // 2 - x2 = x1 + crop_width - return x1, y1, x2, y2 diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/_base_explorers.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/_base_explorers.py deleted file mode 100644 index d3f26666aa596f7bd2e8695c4f00e7963e978ceb..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/_base_explorers.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from abc import ABC, abstractmethod -import time -import typing as tp -from dora import Explorer -import treetable as tt - - -def get_sheep_ping(sheep) -> tp.Optional[str]: - """Return the amount of time since the Sheep made some update - to its log. 
Returns a str using the relevant time unit.""" - ping = None - if sheep.log is not None and sheep.log.exists(): - delta = time.time() - sheep.log.stat().st_mtime - if delta > 3600 * 24: - ping = f'{delta / (3600 * 24):.1f}d' - elif delta > 3600: - ping = f'{delta / (3600):.1f}h' - elif delta > 60: - ping = f'{delta / 60:.1f}m' - else: - ping = f'{delta:.1f}s' - return ping - - -class BaseExplorer(ABC, Explorer): - """Base explorer for AudioCraft grids. - - All task specific solvers are expected to implement the `get_grid_metrics` - method to specify logic about metrics to display for a given task. - - If additional stages are used, the child explorer must define how to handle - these new stages in the `process_history` and `process_sheep` methods. - """ - def stages(self): - return ["train", "valid", "evaluate"] - - def get_grid_meta(self): - """Returns the list of Meta information to display for each XP/job. - """ - return [ - tt.leaf("index", align=">"), - tt.leaf("name", wrap=140), - tt.leaf("state"), - tt.leaf("sig", align=">"), - tt.leaf("sid", align="<"), - ] - - @abstractmethod - def get_grid_metrics(self): - """Return the metrics that should be displayed in the tracking table. - """ - ... - - def process_sheep(self, sheep, history): - train = { - "epoch": len(history), - } - parts = {"train": train} - for metrics in history: - for key, sub in metrics.items(): - part = parts.get(key, {}) - if 'duration' in sub: - # Convert to minutes for readability. - sub['duration'] = sub['duration'] / 60. - part.update(sub) - parts[key] = part - ping = get_sheep_ping(sheep) - if ping is not None: - for name in self.stages(): - if name not in parts: - parts[name] = {} - # Add the ping to each part for convenience. - parts[name]['ping'] = ping - return parts diff --git a/spaces/Qosmo/video2music-demo/README.md b/spaces/Qosmo/video2music-demo/README.md deleted file mode 100644 index 5d573821cec6d3f61543ff2ee9ccd69f37124592..0000000000000000000000000000000000000000 --- a/spaces/Qosmo/video2music-demo/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Video2music Demo -emoji: 👀 -colorFrom: indigo -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ramse/TTS_Hindi/modules/hifigan/model/discriminator.py b/spaces/Ramse/TTS_Hindi/modules/hifigan/model/discriminator.py deleted file mode 100644 index 77392fb60668b6fb3f7dbaed4f98fcdabee97a48..0000000000000000000000000000000000000000 --- a/spaces/Ramse/TTS_Hindi/modules/hifigan/model/discriminator.py +++ /dev/null @@ -1,80 +0,0 @@ -import torch -import torch.nn as nn - - -class Discriminator(nn.Module): - def __init__(self, ndf = 16, n_layers = 3, downsampling_factor = 4, disc_out = 512): - super(Discriminator, self).__init__() - discriminator = nn.ModuleDict() - discriminator["layer_0"] = nn.Sequential( - nn.ReflectionPad1d(7), - nn.utils.weight_norm(nn.Conv1d(1, ndf, kernel_size=15, stride=1)), - nn.LeakyReLU(0.2, True), - ) - - nf = ndf - stride = downsampling_factor - for n in range(1, n_layers + 1): - nf_prev = nf - nf = min(nf * stride, disc_out) - - discriminator["layer_%d" % n] = nn.Sequential( - nn.utils.weight_norm(nn.Conv1d( - nf_prev, - nf, - kernel_size=stride * 10 + 1, - stride=stride, - padding=stride * 5, - groups=nf_prev // 4, - )), - nn.LeakyReLU(0.2, True), - ) - nf = min(nf * 2, disc_out) - discriminator["layer_%d" % (n_layers + 1)] = nn.Sequential( - nn.utils.weight_norm(nn.Conv1d(nf, disc_out, kernel_size=5, stride=1, 
padding=2)), - nn.LeakyReLU(0.2, True), - ) - - discriminator["layer_%d" % (n_layers + 2)] = nn.utils.weight_norm(nn.Conv1d( - nf, 1, kernel_size=3, stride=1, padding=1 - )) - self.discriminator = discriminator - - def forward(self, x): - ''' - returns: (list of 6 features, discriminator score) - we directly predict score without last sigmoid function - since we're using Least Squares GAN (https://arxiv.org/abs/1611.04076) - ''' - features = list() - for key, module in self.discriminator.items(): - x = module(x) - features.append(x) - return features[-1], features[:-1] - - -if __name__ == '__main__': - model = Discriminator() - ''' - Length of features : 5 - Length of score : 3 - torch.Size([3, 16, 25600]) - torch.Size([3, 64, 6400]) - torch.Size([3, 256, 1600]) - torch.Size([3, 512, 400]) - torch.Size([3, 512, 400]) - torch.Size([3, 1, 400]) -> score - ''' - - x = torch.randn(3, 1, 25600) - print(x.shape) - - features, score = model(x) - print("Length of features : ", len(features)) - print("Length of score : ", len(score)) - for feat in features: - print(feat.shape) - print(score.shape) - - pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad) - print(pytorch_total_params) diff --git a/spaces/Rashid2026/Course-Recommender/README.md b/spaces/Rashid2026/Course-Recommender/README.md deleted file mode 100644 index 27759aaa15b4de119d9c81102da9c9b457a8299c..0000000000000000000000000000000000000000 --- a/spaces/Rashid2026/Course-Recommender/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Course Recommender -emoji: 👀 -colorFrom: blue -colorTo: pink -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/_collections.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/_collections.py deleted file mode 100644 index 98fce8008dc25cb97d026426b47f898fccc0c34a..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/_collections.py +++ /dev/null @@ -1,56 +0,0 @@ -import collections -import itertools - - -# from jaraco.collections 3.5.1 -class DictStack(list, collections.abc.Mapping): - """ - A stack of dictionaries that behaves as a view on those dictionaries, - giving preference to the last. 
- - >>> stack = DictStack([dict(a=1, c=2), dict(b=2, a=2)]) - >>> stack['a'] - 2 - >>> stack['b'] - 2 - >>> stack['c'] - 2 - >>> len(stack) - 3 - >>> stack.push(dict(a=3)) - >>> stack['a'] - 3 - >>> set(stack.keys()) == set(['a', 'b', 'c']) - True - >>> set(stack.items()) == set([('a', 3), ('b', 2), ('c', 2)]) - True - >>> dict(**stack) == dict(stack) == dict(a=3, c=2, b=2) - True - >>> d = stack.pop() - >>> stack['a'] - 2 - >>> d = stack.pop() - >>> stack['a'] - 1 - >>> stack.get('b', None) - >>> 'c' in stack - True - """ - - def __iter__(self): - dicts = list.__iter__(self) - return iter(set(itertools.chain.from_iterable(c.keys() for c in dicts))) - - def __getitem__(self, key): - for scope in reversed(tuple(list.__iter__(self))): - if key in scope: - return scope[key] - raise KeyError(key) - - push = list.append - - def __contains__(self, other): - return collections.abc.Mapping.__contains__(self, other) - - def __len__(self): - return len(list(iter(self))) diff --git a/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/models/deprecated/build_model.py b/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/models/deprecated/build_model.py deleted file mode 100644 index 6b4f6608296c21387b19242681e6e49160c0887e..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/models/deprecated/build_model.py +++ /dev/null @@ -1,786 +0,0 @@ -import torch -import torch.nn as nn -from dkm import * -from .local_corr import LocalCorr -from .corr_channels import NormedCorr -from torchvision.models import resnet as tv_resnet - -dkm_pretrained_urls = { - "DKM": { - "mega_synthetic": "https://github.com/Parskatt/storage/releases/download/dkm_mega_synthetic/dkm_mega_synthetic.pth", - "mega": "https://github.com/Parskatt/storage/releases/download/dkm_mega/dkm_mega.pth", - }, - "DKMv2": { - "outdoor": "https://github.com/Parskatt/storage/releases/download/dkmv2/dkm_v2_outdoor.pth", - "indoor": "https://github.com/Parskatt/storage/releases/download/dkmv2/dkm_v2_indoor.pth", - }, -} - - -def DKM(pretrained=True, version="mega_synthetic", device=None): - if device is None: - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - gp_dim = 256 - dfn_dim = 384 - feat_dim = 256 - coordinate_decoder = DFN( - internal_dim=dfn_dim, - feat_input_modules=nn.ModuleDict( - { - "32": nn.Conv2d(512, feat_dim, 1, 1), - "16": nn.Conv2d(512, feat_dim, 1, 1), - } - ), - pred_input_modules=nn.ModuleDict( - { - "32": nn.Identity(), - "16": nn.Identity(), - } - ), - rrb_d_dict=nn.ModuleDict( - { - "32": RRB(gp_dim + feat_dim, dfn_dim), - "16": RRB(gp_dim + feat_dim, dfn_dim), - } - ), - cab_dict=nn.ModuleDict( - { - "32": CAB(2 * dfn_dim, dfn_dim), - "16": CAB(2 * dfn_dim, dfn_dim), - } - ), - rrb_u_dict=nn.ModuleDict( - { - "32": RRB(dfn_dim, dfn_dim), - "16": RRB(dfn_dim, dfn_dim), - } - ), - terminal_module=nn.ModuleDict( - { - "32": nn.Conv2d(dfn_dim, 3, 1, 1, 0), - "16": nn.Conv2d(dfn_dim, 3, 1, 1, 0), - } - ), - ) - dw = True - hidden_blocks = 8 - kernel_size = 5 - conv_refiner = nn.ModuleDict( - { - "16": ConvRefiner( - 2 * 512, - 1024, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "8": ConvRefiner( - 2 * 512, - 1024, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "4": ConvRefiner( - 2 * 256, - 512, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "2": ConvRefiner( - 2 * 64, - 128, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - 
"1": ConvRefiner( - 2 * 3, - 24, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - } - ) - kernel_temperature = 0.2 - learn_temperature = False - no_cov = True - kernel = CosKernel - only_attention = False - basis = "fourier" - gp32 = GP( - kernel, - T=kernel_temperature, - learn_temperature=learn_temperature, - only_attention=only_attention, - gp_dim=gp_dim, - basis=basis, - no_cov=no_cov, - ) - gp16 = GP( - kernel, - T=kernel_temperature, - learn_temperature=learn_temperature, - only_attention=only_attention, - gp_dim=gp_dim, - basis=basis, - no_cov=no_cov, - ) - gps = nn.ModuleDict({"32": gp32, "16": gp16}) - proj = nn.ModuleDict( - {"16": nn.Conv2d(1024, 512, 1, 1), "32": nn.Conv2d(2048, 512, 1, 1)} - ) - decoder = Decoder(coordinate_decoder, gps, proj, conv_refiner, detach=True) - h, w = 384, 512 - encoder = Encoder( - tv_resnet.resnet50(pretrained=not pretrained), - ) # only load pretrained weights if not loading a pretrained matcher ;) - matcher = RegressionMatcher(encoder, decoder, h=h, w=w).to(device) - if pretrained: - weights = torch.hub.load_state_dict_from_url( - dkm_pretrained_urls["DKM"][version] - ) - matcher.load_state_dict(weights) - return matcher - - -def DKMv2(pretrained=True, version="outdoor", resolution="low", **kwargs): - gp_dim = 256 - dfn_dim = 384 - feat_dim = 256 - coordinate_decoder = DFN( - internal_dim=dfn_dim, - feat_input_modules=nn.ModuleDict( - { - "32": nn.Conv2d(512, feat_dim, 1, 1), - "16": nn.Conv2d(512, feat_dim, 1, 1), - } - ), - pred_input_modules=nn.ModuleDict( - { - "32": nn.Identity(), - "16": nn.Identity(), - } - ), - rrb_d_dict=nn.ModuleDict( - { - "32": RRB(gp_dim + feat_dim, dfn_dim), - "16": RRB(gp_dim + feat_dim, dfn_dim), - } - ), - cab_dict=nn.ModuleDict( - { - "32": CAB(2 * dfn_dim, dfn_dim), - "16": CAB(2 * dfn_dim, dfn_dim), - } - ), - rrb_u_dict=nn.ModuleDict( - { - "32": RRB(dfn_dim, dfn_dim), - "16": RRB(dfn_dim, dfn_dim), - } - ), - terminal_module=nn.ModuleDict( - { - "32": nn.Conv2d(dfn_dim, 3, 1, 1, 0), - "16": nn.Conv2d(dfn_dim, 3, 1, 1, 0), - } - ), - ) - dw = True - hidden_blocks = 8 - kernel_size = 5 - displacement_emb = "linear" - conv_refiner = nn.ModuleDict( - { - "16": ConvRefiner( - 2 * 512 + 128, - 1024 + 128, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - displacement_emb=displacement_emb, - displacement_emb_dim=128, - ), - "8": ConvRefiner( - 2 * 512 + 64, - 1024 + 64, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - displacement_emb=displacement_emb, - displacement_emb_dim=64, - ), - "4": ConvRefiner( - 2 * 256 + 32, - 512 + 32, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - displacement_emb=displacement_emb, - displacement_emb_dim=32, - ), - "2": ConvRefiner( - 2 * 64 + 16, - 128 + 16, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - displacement_emb=displacement_emb, - displacement_emb_dim=16, - ), - "1": ConvRefiner( - 2 * 3 + 6, - 24, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - displacement_emb=displacement_emb, - displacement_emb_dim=6, - ), - } - ) - kernel_temperature = 0.2 - learn_temperature = False - no_cov = True - kernel = CosKernel - only_attention = False - basis = "fourier" - gp32 = GP( - kernel, - T=kernel_temperature, - learn_temperature=learn_temperature, - only_attention=only_attention, - gp_dim=gp_dim, - basis=basis, - no_cov=no_cov, - ) - gp16 = GP( - kernel, - T=kernel_temperature, - learn_temperature=learn_temperature, - 
only_attention=only_attention, - gp_dim=gp_dim, - basis=basis, - no_cov=no_cov, - ) - gps = nn.ModuleDict({"32": gp32, "16": gp16}) - proj = nn.ModuleDict( - {"16": nn.Conv2d(1024, 512, 1, 1), "32": nn.Conv2d(2048, 512, 1, 1)} - ) - decoder = Decoder(coordinate_decoder, gps, proj, conv_refiner, detach=True) - if resolution == "low": - h, w = 384, 512 - elif resolution == "high": - h, w = 480, 640 - encoder = Encoder( - tv_resnet.resnet50(pretrained=not pretrained), - ) # only load pretrained weights if not loading a pretrained matcher ;) - matcher = RegressionMatcher(encoder, decoder, h=h, w=w, **kwargs).to(device) - if pretrained: - try: - weights = torch.hub.load_state_dict_from_url( - dkm_pretrained_urls["DKMv2"][version] - ) - except: - weights = torch.load(dkm_pretrained_urls["DKMv2"][version]) - matcher.load_state_dict(weights) - return matcher - - -def local_corr(pretrained=True, version="mega_synthetic"): - gp_dim = 256 - dfn_dim = 384 - feat_dim = 256 - coordinate_decoder = DFN( - internal_dim=dfn_dim, - feat_input_modules=nn.ModuleDict( - { - "32": nn.Conv2d(512, feat_dim, 1, 1), - "16": nn.Conv2d(512, feat_dim, 1, 1), - } - ), - pred_input_modules=nn.ModuleDict( - { - "32": nn.Identity(), - "16": nn.Identity(), - } - ), - rrb_d_dict=nn.ModuleDict( - { - "32": RRB(gp_dim + feat_dim, dfn_dim), - "16": RRB(gp_dim + feat_dim, dfn_dim), - } - ), - cab_dict=nn.ModuleDict( - { - "32": CAB(2 * dfn_dim, dfn_dim), - "16": CAB(2 * dfn_dim, dfn_dim), - } - ), - rrb_u_dict=nn.ModuleDict( - { - "32": RRB(dfn_dim, dfn_dim), - "16": RRB(dfn_dim, dfn_dim), - } - ), - terminal_module=nn.ModuleDict( - { - "32": nn.Conv2d(dfn_dim, 3, 1, 1, 0), - "16": nn.Conv2d(dfn_dim, 3, 1, 1, 0), - } - ), - ) - dw = True - hidden_blocks = 8 - kernel_size = 5 - conv_refiner = nn.ModuleDict( - { - "16": LocalCorr( - 81, - 81 * 12, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "8": LocalCorr( - 81, - 81 * 12, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "4": LocalCorr( - 81, - 81 * 6, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "2": LocalCorr( - 81, - 81, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "1": ConvRefiner( - 2 * 3, - 24, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - } - ) - kernel_temperature = 0.2 - learn_temperature = False - no_cov = True - kernel = CosKernel - only_attention = False - basis = "fourier" - gp32 = GP( - kernel, - T=kernel_temperature, - learn_temperature=learn_temperature, - only_attention=only_attention, - gp_dim=gp_dim, - basis=basis, - no_cov=no_cov, - ) - gp16 = GP( - kernel, - T=kernel_temperature, - learn_temperature=learn_temperature, - only_attention=only_attention, - gp_dim=gp_dim, - basis=basis, - no_cov=no_cov, - ) - gps = nn.ModuleDict({"32": gp32, "16": gp16}) - proj = nn.ModuleDict( - {"16": nn.Conv2d(1024, 512, 1, 1), "32": nn.Conv2d(2048, 512, 1, 1)} - ) - decoder = Decoder(coordinate_decoder, gps, proj, conv_refiner, detach=True) - h, w = 384, 512 - encoder = Encoder( - tv_resnet.resnet50(pretrained=not pretrained) - ) # only load pretrained weights if not loading a pretrained matcher ;) - matcher = RegressionMatcher(encoder, decoder, h=h, w=w).to(device) - if pretrained: - weights = torch.hub.load_state_dict_from_url( - dkm_pretrained_urls["local_corr"][version] - ) - matcher.load_state_dict(weights) - return matcher - - -def corr_channels(pretrained=True, version="mega_synthetic"): - h, w = 384, 512 - 
gp_dim = (h // 32) * (w // 32), (h // 16) * (w // 16) - dfn_dim = 384 - feat_dim = 256 - coordinate_decoder = DFN( - internal_dim=dfn_dim, - feat_input_modules=nn.ModuleDict( - { - "32": nn.Conv2d(512, feat_dim, 1, 1), - "16": nn.Conv2d(512, feat_dim, 1, 1), - } - ), - pred_input_modules=nn.ModuleDict( - { - "32": nn.Identity(), - "16": nn.Identity(), - } - ), - rrb_d_dict=nn.ModuleDict( - { - "32": RRB(gp_dim[0] + feat_dim, dfn_dim), - "16": RRB(gp_dim[1] + feat_dim, dfn_dim), - } - ), - cab_dict=nn.ModuleDict( - { - "32": CAB(2 * dfn_dim, dfn_dim), - "16": CAB(2 * dfn_dim, dfn_dim), - } - ), - rrb_u_dict=nn.ModuleDict( - { - "32": RRB(dfn_dim, dfn_dim), - "16": RRB(dfn_dim, dfn_dim), - } - ), - terminal_module=nn.ModuleDict( - { - "32": nn.Conv2d(dfn_dim, 3, 1, 1, 0), - "16": nn.Conv2d(dfn_dim, 3, 1, 1, 0), - } - ), - ) - dw = True - hidden_blocks = 8 - kernel_size = 5 - conv_refiner = nn.ModuleDict( - { - "16": ConvRefiner( - 2 * 512, - 1024, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "8": ConvRefiner( - 2 * 512, - 1024, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "4": ConvRefiner( - 2 * 256, - 512, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "2": ConvRefiner( - 2 * 64, - 128, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "1": ConvRefiner( - 2 * 3, - 24, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - } - ) - gp32 = NormedCorr() - gp16 = NormedCorr() - gps = nn.ModuleDict({"32": gp32, "16": gp16}) - proj = nn.ModuleDict( - {"16": nn.Conv2d(1024, 512, 1, 1), "32": nn.Conv2d(2048, 512, 1, 1)} - ) - decoder = Decoder(coordinate_decoder, gps, proj, conv_refiner, detach=True) - h, w = 384, 512 - encoder = Encoder( - tv_resnet.resnet50(pretrained=not pretrained) - ) # only load pretrained weights if not loading a pretrained matcher ;) - matcher = RegressionMatcher(encoder, decoder, h=h, w=w).to(device) - if pretrained: - weights = torch.hub.load_state_dict_from_url( - dkm_pretrained_urls["corr_channels"][version] - ) - matcher.load_state_dict(weights) - return matcher - - -def baseline(pretrained=True, version="mega_synthetic"): - h, w = 384, 512 - gp_dim = (h // 32) * (w // 32), (h // 16) * (w // 16) - dfn_dim = 384 - feat_dim = 256 - coordinate_decoder = DFN( - internal_dim=dfn_dim, - feat_input_modules=nn.ModuleDict( - { - "32": nn.Conv2d(512, feat_dim, 1, 1), - "16": nn.Conv2d(512, feat_dim, 1, 1), - } - ), - pred_input_modules=nn.ModuleDict( - { - "32": nn.Identity(), - "16": nn.Identity(), - } - ), - rrb_d_dict=nn.ModuleDict( - { - "32": RRB(gp_dim[0] + feat_dim, dfn_dim), - "16": RRB(gp_dim[1] + feat_dim, dfn_dim), - } - ), - cab_dict=nn.ModuleDict( - { - "32": CAB(2 * dfn_dim, dfn_dim), - "16": CAB(2 * dfn_dim, dfn_dim), - } - ), - rrb_u_dict=nn.ModuleDict( - { - "32": RRB(dfn_dim, dfn_dim), - "16": RRB(dfn_dim, dfn_dim), - } - ), - terminal_module=nn.ModuleDict( - { - "32": nn.Conv2d(dfn_dim, 3, 1, 1, 0), - "16": nn.Conv2d(dfn_dim, 3, 1, 1, 0), - } - ), - ) - dw = True - hidden_blocks = 8 - kernel_size = 5 - conv_refiner = nn.ModuleDict( - { - "16": LocalCorr( - 81, - 81 * 12, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "8": LocalCorr( - 81, - 81 * 12, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "4": LocalCorr( - 81, - 81 * 6, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "2": LocalCorr( - 81, - 81, - 3, - 
kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "1": ConvRefiner( - 2 * 3, - 24, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - } - ) - gp32 = NormedCorr() - gp16 = NormedCorr() - gps = nn.ModuleDict({"32": gp32, "16": gp16}) - proj = nn.ModuleDict( - {"16": nn.Conv2d(1024, 512, 1, 1), "32": nn.Conv2d(2048, 512, 1, 1)} - ) - decoder = Decoder(coordinate_decoder, gps, proj, conv_refiner, detach=True) - h, w = 384, 512 - encoder = Encoder( - tv_resnet.resnet50(pretrained=not pretrained) - ) # only load pretrained weights if not loading a pretrained matcher ;) - matcher = RegressionMatcher(encoder, decoder, h=h, w=w).to(device) - if pretrained: - weights = torch.hub.load_state_dict_from_url( - dkm_pretrained_urls["baseline"][version] - ) - matcher.load_state_dict(weights) - return matcher - - -def linear(pretrained=True, version="mega_synthetic"): - gp_dim = 256 - dfn_dim = 384 - feat_dim = 256 - coordinate_decoder = DFN( - internal_dim=dfn_dim, - feat_input_modules=nn.ModuleDict( - { - "32": nn.Conv2d(512, feat_dim, 1, 1), - "16": nn.Conv2d(512, feat_dim, 1, 1), - } - ), - pred_input_modules=nn.ModuleDict( - { - "32": nn.Identity(), - "16": nn.Identity(), - } - ), - rrb_d_dict=nn.ModuleDict( - { - "32": RRB(gp_dim + feat_dim, dfn_dim), - "16": RRB(gp_dim + feat_dim, dfn_dim), - } - ), - cab_dict=nn.ModuleDict( - { - "32": CAB(2 * dfn_dim, dfn_dim), - "16": CAB(2 * dfn_dim, dfn_dim), - } - ), - rrb_u_dict=nn.ModuleDict( - { - "32": RRB(dfn_dim, dfn_dim), - "16": RRB(dfn_dim, dfn_dim), - } - ), - terminal_module=nn.ModuleDict( - { - "32": nn.Conv2d(dfn_dim, 3, 1, 1, 0), - "16": nn.Conv2d(dfn_dim, 3, 1, 1, 0), - } - ), - ) - dw = True - hidden_blocks = 8 - kernel_size = 5 - conv_refiner = nn.ModuleDict( - { - "16": ConvRefiner( - 2 * 512, - 1024, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "8": ConvRefiner( - 2 * 512, - 1024, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "4": ConvRefiner( - 2 * 256, - 512, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "2": ConvRefiner( - 2 * 64, - 128, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - "1": ConvRefiner( - 2 * 3, - 24, - 3, - kernel_size=kernel_size, - dw=dw, - hidden_blocks=hidden_blocks, - ), - } - ) - kernel_temperature = 0.2 - learn_temperature = False - no_cov = True - kernel = CosKernel - only_attention = False - basis = "linear" - gp32 = GP( - kernel, - T=kernel_temperature, - learn_temperature=learn_temperature, - only_attention=only_attention, - gp_dim=gp_dim, - basis=basis, - no_cov=no_cov, - ) - gp16 = GP( - kernel, - T=kernel_temperature, - learn_temperature=learn_temperature, - only_attention=only_attention, - gp_dim=gp_dim, - basis=basis, - no_cov=no_cov, - ) - gps = nn.ModuleDict({"32": gp32, "16": gp16}) - proj = nn.ModuleDict( - {"16": nn.Conv2d(1024, 512, 1, 1), "32": nn.Conv2d(2048, 512, 1, 1)} - ) - decoder = Decoder(coordinate_decoder, gps, proj, conv_refiner, detach=True) - h, w = 384, 512 - encoder = Encoder( - tv_resnet.resnet50(pretrained=not pretrained) - ) # only load pretrained weights if not loading a pretrained matcher ;) - matcher = RegressionMatcher(encoder, decoder, h=h, w=w).to(device) - if pretrained: - weights = torch.hub.load_state_dict_from_url( - dkm_pretrained_urls["linear"][version] - ) - matcher.load_state_dict(weights) - return matcher diff --git 
a/spaces/Realcat/image-matching-webui/third_party/TopicFM/scripts/reproduce_test/outdoor.sh b/spaces/Realcat/image-matching-webui/third_party/TopicFM/scripts/reproduce_test/outdoor.sh deleted file mode 100644 index e6217883a1ea9c17edf2ce0ff0ee97d26868b5d9..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/scripts/reproduce_test/outdoor.sh +++ /dev/null @@ -1,29 +0,0 @@ -#!/bin/bash -l - -SCRIPTPATH=$(dirname $(readlink -f "$0")) -PROJECT_DIR="${SCRIPTPATH}/../../" - -# conda activate loftr -export PYTHONPATH=$PROJECT_DIR:$PYTHONPATH -cd $PROJECT_DIR - -data_cfg_path="configs/data/megadepth_test_1500.py" -main_cfg_path="configs/model/outdoor/model_cfg_test.py" -ckpt_path="pretrained/model_best.ckpt" -dump_dir="dump/loftr_ds_outdoor" -profiler_name="inference" -n_nodes=1 # mannually keep this the same with --nodes -n_gpus_per_node=-1 -torch_num_workers=4 -batch_size=1 # per gpu - -python -u ./test.py \ - ${data_cfg_path} \ - ${main_cfg_path} \ - --ckpt_path=${ckpt_path} \ - --dump_dir=${dump_dir} \ - --gpus=${n_gpus_per_node} --num_nodes=${n_nodes} --accelerator="ddp" \ - --batch_size=${batch_size} --num_workers=${torch_num_workers}\ - --profiler_name=${profiler_name} \ - --benchmark - diff --git a/spaces/Rifd/Sdallmodels/style.css b/spaces/Rifd/Sdallmodels/style.css deleted file mode 100644 index 07f8d9fc7f44dc2b3e44d622ef522a614ac7ce03..0000000000000000000000000000000000000000 --- a/spaces/Rifd/Sdallmodels/style.css +++ /dev/null @@ -1,3 +0,0 @@ -.gradio-container { - background-image: linear-gradient(#660099, #000000) !important; - } \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/fovea_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/fovea_head.py deleted file mode 100644 index c8ccea787cba3d092284d4a5e209adaf6521c86a..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/fovea_head.py +++ /dev/null @@ -1,341 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, normal_init -from mmcv.ops import DeformConv2d - -from mmdet.core import multi_apply, multiclass_nms -from ..builder import HEADS -from .anchor_free_head import AnchorFreeHead - -INF = 1e8 - - -class FeatureAlign(nn.Module): - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - deform_groups=4): - super(FeatureAlign, self).__init__() - offset_channels = kernel_size * kernel_size * 2 - self.conv_offset = nn.Conv2d( - 4, deform_groups * offset_channels, 1, bias=False) - self.conv_adaption = DeformConv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - deform_groups=deform_groups) - self.relu = nn.ReLU(inplace=True) - - def init_weights(self): - normal_init(self.conv_offset, std=0.1) - normal_init(self.conv_adaption, std=0.01) - - def forward(self, x, shape): - offset = self.conv_offset(shape) - x = self.relu(self.conv_adaption(x, offset)) - return x - - -@HEADS.register_module() -class FoveaHead(AnchorFreeHead): - """FoveaBox: Beyond Anchor-based Object Detector - https://arxiv.org/abs/1904.03797 - """ - - def __init__(self, - num_classes, - in_channels, - base_edge_list=(16, 32, 64, 128, 256), - scale_ranges=((8, 32), (16, 64), (32, 128), (64, 256), (128, - 512)), - sigma=0.4, - with_deform=False, - deform_groups=4, - **kwargs): - self.base_edge_list = base_edge_list - self.scale_ranges = 
scale_ranges - self.sigma = sigma - self.with_deform = with_deform - self.deform_groups = deform_groups - super().__init__(num_classes, in_channels, **kwargs) - - def _init_layers(self): - # box branch - super()._init_reg_convs() - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - - # cls branch - if not self.with_deform: - super()._init_cls_convs() - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - else: - self.cls_convs = nn.ModuleList() - self.cls_convs.append( - ConvModule( - self.feat_channels, (self.feat_channels * 4), - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - self.cls_convs.append( - ConvModule((self.feat_channels * 4), (self.feat_channels * 4), - 1, - stride=1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - self.feature_adaption = FeatureAlign( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.conv_cls = nn.Conv2d( - int(self.feat_channels * 4), - self.cls_out_channels, - 3, - padding=1) - - def init_weights(self): - super().init_weights() - if self.with_deform: - self.feature_adaption.init_weights() - - def forward_single(self, x): - cls_feat = x - reg_feat = x - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - bbox_pred = self.conv_reg(reg_feat) - if self.with_deform: - cls_feat = self.feature_adaption(cls_feat, bbox_pred.exp()) - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - cls_score = self.conv_cls(cls_feat) - return cls_score, bbox_pred - - def _get_points_single(self, *args, **kwargs): - y, x = super()._get_points_single(*args, **kwargs) - return y + 0.5, x + 0.5 - - def loss(self, - cls_scores, - bbox_preds, - gt_bbox_list, - gt_label_list, - img_metas, - gt_bboxes_ignore=None): - assert len(cls_scores) == len(bbox_preds) - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - points = self.get_points(featmap_sizes, bbox_preds[0].dtype, - bbox_preds[0].device) - num_imgs = cls_scores[0].size(0) - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - for bbox_pred in bbox_preds - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_labels, flatten_bbox_targets = self.get_targets( - gt_bbox_list, gt_label_list, featmap_sizes, points) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_inds = ((flatten_labels >= 0) - & (flatten_labels < self.num_classes)).nonzero().view(-1) - num_pos = len(pos_inds) - - loss_cls = self.loss_cls( - flatten_cls_scores, flatten_labels, avg_factor=num_pos + num_imgs) - if num_pos > 0: - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_bbox_targets = flatten_bbox_targets[pos_inds] - pos_weights = pos_bbox_targets.new_zeros( - pos_bbox_targets.size()) + 1.0 - loss_bbox = self.loss_bbox( - pos_bbox_preds, - pos_bbox_targets, - pos_weights, - avg_factor=num_pos) - else: - loss_bbox = torch.tensor( - 0, - dtype=flatten_bbox_preds.dtype, - device=flatten_bbox_preds.device) - return dict(loss_cls=loss_cls, loss_bbox=loss_bbox) - - def get_targets(self, gt_bbox_list, gt_label_list, featmap_sizes, points): - label_list, bbox_target_list = multi_apply( - self._get_target_single, - gt_bbox_list, - gt_label_list, - featmap_size_list=featmap_sizes, - 
point_list=points) - flatten_labels = [ - torch.cat([ - labels_level_img.flatten() for labels_level_img in labels_level - ]) for labels_level in zip(*label_list) - ] - flatten_bbox_targets = [ - torch.cat([ - bbox_targets_level_img.reshape(-1, 4) - for bbox_targets_level_img in bbox_targets_level - ]) for bbox_targets_level in zip(*bbox_target_list) - ] - flatten_labels = torch.cat(flatten_labels) - flatten_bbox_targets = torch.cat(flatten_bbox_targets) - return flatten_labels, flatten_bbox_targets - - def _get_target_single(self, - gt_bboxes_raw, - gt_labels_raw, - featmap_size_list=None, - point_list=None): - - gt_areas = torch.sqrt((gt_bboxes_raw[:, 2] - gt_bboxes_raw[:, 0]) * - (gt_bboxes_raw[:, 3] - gt_bboxes_raw[:, 1])) - label_list = [] - bbox_target_list = [] - # for each pyramid, find the cls and box target - for base_len, (lower_bound, upper_bound), stride, featmap_size, \ - (y, x) in zip(self.base_edge_list, self.scale_ranges, - self.strides, featmap_size_list, point_list): - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - labels = gt_labels_raw.new_zeros(featmap_size) + self.num_classes - bbox_targets = gt_bboxes_raw.new(featmap_size[0], featmap_size[1], - 4) + 1 - # scale assignment - hit_indices = ((gt_areas >= lower_bound) & - (gt_areas <= upper_bound)).nonzero().flatten() - if len(hit_indices) == 0: - label_list.append(labels) - bbox_target_list.append(torch.log(bbox_targets)) - continue - _, hit_index_order = torch.sort(-gt_areas[hit_indices]) - hit_indices = hit_indices[hit_index_order] - gt_bboxes = gt_bboxes_raw[hit_indices, :] / stride - gt_labels = gt_labels_raw[hit_indices] - half_w = 0.5 * (gt_bboxes[:, 2] - gt_bboxes[:, 0]) - half_h = 0.5 * (gt_bboxes[:, 3] - gt_bboxes[:, 1]) - # valid fovea area: left, right, top, down - pos_left = torch.ceil( - gt_bboxes[:, 0] + (1 - self.sigma) * half_w - 0.5).long().\ - clamp(0, featmap_size[1] - 1) - pos_right = torch.floor( - gt_bboxes[:, 0] + (1 + self.sigma) * half_w - 0.5).long().\ - clamp(0, featmap_size[1] - 1) - pos_top = torch.ceil( - gt_bboxes[:, 1] + (1 - self.sigma) * half_h - 0.5).long().\ - clamp(0, featmap_size[0] - 1) - pos_down = torch.floor( - gt_bboxes[:, 1] + (1 + self.sigma) * half_h - 0.5).long().\ - clamp(0, featmap_size[0] - 1) - for px1, py1, px2, py2, label, (gt_x1, gt_y1, gt_x2, gt_y2) in \ - zip(pos_left, pos_top, pos_right, pos_down, gt_labels, - gt_bboxes_raw[hit_indices, :]): - labels[py1:py2 + 1, px1:px2 + 1] = label - bbox_targets[py1:py2 + 1, px1:px2 + 1, 0] = \ - (stride * x[py1:py2 + 1, px1:px2 + 1] - gt_x1) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 1] = \ - (stride * y[py1:py2 + 1, px1:px2 + 1] - gt_y1) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 2] = \ - (gt_x2 - stride * x[py1:py2 + 1, px1:px2 + 1]) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 3] = \ - (gt_y2 - stride * y[py1:py2 + 1, px1:px2 + 1]) / base_len - bbox_targets = bbox_targets.clamp(min=1. / 16, max=16.) 
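            # A note on the encoding above: each positive cell now stores the
            # (left, top, right, down) distances from its center (stride * x,
            # stride * y) to the GT box edges, normalized by base_len and
            # clamped to [1/16, 16]; the torch.log applied below turns these
            # ratios into the actual regression targets.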
- label_list.append(labels) - bbox_target_list.append(torch.log(bbox_targets)) - return label_list, bbox_target_list - - def get_bboxes(self, - cls_scores, - bbox_preds, - img_metas, - cfg=None, - rescale=None): - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - points = self.get_points( - featmap_sizes, - bbox_preds[0].dtype, - bbox_preds[0].device, - flatten=True) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - det_bboxes = self._get_bboxes_single(cls_score_list, - bbox_pred_list, featmap_sizes, - points, img_shape, - scale_factor, cfg, rescale) - result_list.append(det_bboxes) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - featmap_sizes, - point_list, - img_shape, - scale_factor, - cfg, - rescale=False): - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(point_list) - det_bboxes = [] - det_scores = [] - for cls_score, bbox_pred, featmap_size, stride, base_len, (y, x) \ - in zip(cls_scores, bbox_preds, featmap_sizes, self.strides, - self.base_edge_list, point_list): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4).exp() - nms_pre = cfg.get('nms_pre', -1) - if (nms_pre > 0) and (scores.shape[0] > nms_pre): - max_scores, _ = scores.max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - y = y[topk_inds] - x = x[topk_inds] - x1 = (stride * x - base_len * bbox_pred[:, 0]).\ - clamp(min=0, max=img_shape[1] - 1) - y1 = (stride * y - base_len * bbox_pred[:, 1]).\ - clamp(min=0, max=img_shape[0] - 1) - x2 = (stride * x + base_len * bbox_pred[:, 2]).\ - clamp(min=0, max=img_shape[1] - 1) - y2 = (stride * y + base_len * bbox_pred[:, 3]).\ - clamp(min=0, max=img_shape[0] - 1) - bboxes = torch.stack([x1, y1, x2, y2], -1) - det_bboxes.append(bboxes) - det_scores.append(scores) - det_bboxes = torch.cat(det_bboxes) - if rescale: - det_bboxes /= det_bboxes.new_tensor(scale_factor) - det_scores = torch.cat(det_scores) - padding = det_scores.new_zeros(det_scores.shape[0], 1) - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - det_scores = torch.cat([det_scores, padding], dim=1) - det_bboxes, det_labels = multiclass_nms(det_bboxes, det_scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - return det_bboxes, det_labels diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/non_local.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/non_local.py deleted file mode 100644 index 92d00155ef275c1201ea66bba30470a1785cc5d7..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/non_local.py +++ /dev/null @@ -1,306 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
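# A sketch of what this module implements: for every position i a non-local
# block computes
#     y_i = (1 / C(x)) * sum_j f(x_i, x_j) * g(x_j)
# where f is one of the pairwise functions below ('gaussian',
# 'embedded_gaussian', 'dot_product', 'concatenation') and g is a 1x1-conv
# value embedding; conv_out projects y back to in_channels and adds it to x
# as a residual. Hypothetical usage (shapes only, not taken from this repo):
#     block = NonLocal2d(in_channels=256)
#     out = block(torch.randn(2, 256, 32, 32))  # output keeps the input shape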
-from abc import ABCMeta - -import torch -import torch.nn as nn - -from ..utils import constant_init, normal_init -from .conv_module import ConvModule -from .registry import PLUGIN_LAYERS - - -class _NonLocalNd(nn.Module, metaclass=ABCMeta): - """Basic Non-local module. - - This module is proposed in - "Non-local Neural Networks" - Paper reference: https://arxiv.org/abs/1711.07971 - Code reference: https://github.com/AlexHex7/Non-local_pytorch - - Args: - in_channels (int): Channels of the input feature map. - reduction (int): Channel reduction ratio. Default: 2. - use_scale (bool): Whether to scale pairwise_weight by - `1/sqrt(inter_channels)` when the mode is `embedded_gaussian`. - Default: True. - conv_cfg (None | dict): The config dict for convolution layers. - If not specified, it will use `nn.Conv2d` for convolution layers. - Default: None. - norm_cfg (None | dict): The config dict for normalization layers. - Default: None. (This parameter is only applicable to conv_out.) - mode (str): Options are `gaussian`, `concatenation`, - `embedded_gaussian` and `dot_product`. Default: embedded_gaussian. - """ - - def __init__(self, - in_channels, - reduction=2, - use_scale=True, - conv_cfg=None, - norm_cfg=None, - mode='embedded_gaussian', - **kwargs): - super(_NonLocalNd, self).__init__() - self.in_channels = in_channels - self.reduction = reduction - self.use_scale = use_scale - self.inter_channels = max(in_channels // reduction, 1) - self.mode = mode - - if mode not in [ - 'gaussian', 'embedded_gaussian', 'dot_product', 'concatenation' - ]: - raise ValueError("Mode should be in 'gaussian', 'concatenation', " - f"'embedded_gaussian' or 'dot_product', but got " - f'{mode} instead.') - - # g, theta, phi are defaulted as `nn.ConvNd`. - # Here we use ConvModule for potential usage. 
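        # g provides the value embedding, theta/phi provide the query/key
        # embeddings consumed by the pairwise function, and conv_out maps the
        # aggregated response back to in_channels for the residual sum.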
- self.g = ConvModule( - self.in_channels, - self.inter_channels, - kernel_size=1, - conv_cfg=conv_cfg, - act_cfg=None) - self.conv_out = ConvModule( - self.inter_channels, - self.in_channels, - kernel_size=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - if self.mode != 'gaussian': - self.theta = ConvModule( - self.in_channels, - self.inter_channels, - kernel_size=1, - conv_cfg=conv_cfg, - act_cfg=None) - self.phi = ConvModule( - self.in_channels, - self.inter_channels, - kernel_size=1, - conv_cfg=conv_cfg, - act_cfg=None) - - if self.mode == 'concatenation': - self.concat_project = ConvModule( - self.inter_channels * 2, - 1, - kernel_size=1, - stride=1, - padding=0, - bias=False, - act_cfg=dict(type='ReLU')) - - self.init_weights(**kwargs) - - def init_weights(self, std=0.01, zeros_init=True): - if self.mode != 'gaussian': - for m in [self.g, self.theta, self.phi]: - normal_init(m.conv, std=std) - else: - normal_init(self.g.conv, std=std) - if zeros_init: - if self.conv_out.norm_cfg is None: - constant_init(self.conv_out.conv, 0) - else: - constant_init(self.conv_out.norm, 0) - else: - if self.conv_out.norm_cfg is None: - normal_init(self.conv_out.conv, std=std) - else: - normal_init(self.conv_out.norm, std=std) - - def gaussian(self, theta_x, phi_x): - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - pairwise_weight = torch.matmul(theta_x, phi_x) - pairwise_weight = pairwise_weight.softmax(dim=-1) - return pairwise_weight - - def embedded_gaussian(self, theta_x, phi_x): - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - pairwise_weight = torch.matmul(theta_x, phi_x) - if self.use_scale: - # theta_x.shape[-1] is `self.inter_channels` - pairwise_weight /= theta_x.shape[-1]**0.5 - pairwise_weight = pairwise_weight.softmax(dim=-1) - return pairwise_weight - - def dot_product(self, theta_x, phi_x): - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - pairwise_weight = torch.matmul(theta_x, phi_x) - pairwise_weight /= pairwise_weight.shape[-1] - return pairwise_weight - - def concatenation(self, theta_x, phi_x): - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - h = theta_x.size(2) - w = phi_x.size(3) - theta_x = theta_x.repeat(1, 1, 1, w) - phi_x = phi_x.repeat(1, 1, h, 1) - - concat_feature = torch.cat([theta_x, phi_x], dim=1) - pairwise_weight = self.concat_project(concat_feature) - n, _, h, w = pairwise_weight.size() - pairwise_weight = pairwise_weight.view(n, h, w) - pairwise_weight /= pairwise_weight.shape[-1] - - return pairwise_weight - - def forward(self, x): - # Assume `reduction = 1`, then `inter_channels = C` - # or `inter_channels = C` when `mode="gaussian"` - - # NonLocal1d x: [N, C, H] - # NonLocal2d x: [N, C, H, W] - # NonLocal3d x: [N, C, T, H, W] - n = x.size(0) - - # NonLocal1d g_x: [N, H, C] - # NonLocal2d g_x: [N, HxW, C] - # NonLocal3d g_x: [N, TxHxW, C] - g_x = self.g(x).view(n, self.inter_channels, -1) - g_x = g_x.permute(0, 2, 1) - - # NonLocal1d theta_x: [N, H, C], phi_x: [N, C, H] - # NonLocal2d theta_x: [N, HxW, C], phi_x: [N, C, HxW] - # NonLocal3d theta_x: [N, TxHxW, C], phi_x: [N, C, TxHxW] - if self.mode == 'gaussian': - theta_x = x.view(n, self.in_channels, -1) - theta_x = theta_x.permute(0, 2, 1) 
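            # In 'gaussian' mode the unprojected input serves as both query and
            # key, so the pairwise weight is simply softmax(x^T x); sub_sample
            # only max-pools the key/value side to cut the matmul cost.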
- if self.sub_sample: - phi_x = self.phi(x).view(n, self.in_channels, -1) - else: - phi_x = x.view(n, self.in_channels, -1) - elif self.mode == 'concatenation': - theta_x = self.theta(x).view(n, self.inter_channels, -1, 1) - phi_x = self.phi(x).view(n, self.inter_channels, 1, -1) - else: - theta_x = self.theta(x).view(n, self.inter_channels, -1) - theta_x = theta_x.permute(0, 2, 1) - phi_x = self.phi(x).view(n, self.inter_channels, -1) - - pairwise_func = getattr(self, self.mode) - # NonLocal1d pairwise_weight: [N, H, H] - # NonLocal2d pairwise_weight: [N, HxW, HxW] - # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW] - pairwise_weight = pairwise_func(theta_x, phi_x) - - # NonLocal1d y: [N, H, C] - # NonLocal2d y: [N, HxW, C] - # NonLocal3d y: [N, TxHxW, C] - y = torch.matmul(pairwise_weight, g_x) - # NonLocal1d y: [N, C, H] - # NonLocal2d y: [N, C, H, W] - # NonLocal3d y: [N, C, T, H, W] - y = y.permute(0, 2, 1).contiguous().reshape(n, self.inter_channels, - *x.size()[2:]) - - output = x + self.conv_out(y) - - return output - - -class NonLocal1d(_NonLocalNd): - """1D Non-local module. - - Args: - in_channels (int): Same as `NonLocalND`. - sub_sample (bool): Whether to apply max pooling after pairwise - function (Note that the `sub_sample` is applied on spatial only). - Default: False. - conv_cfg (None | dict): Same as `NonLocalND`. - Default: dict(type='Conv1d'). - """ - - def __init__(self, - in_channels, - sub_sample=False, - conv_cfg=dict(type='Conv1d'), - **kwargs): - super(NonLocal1d, self).__init__( - in_channels, conv_cfg=conv_cfg, **kwargs) - - self.sub_sample = sub_sample - - if sub_sample: - max_pool_layer = nn.MaxPool1d(kernel_size=2) - self.g = nn.Sequential(self.g, max_pool_layer) - if self.mode != 'gaussian': - self.phi = nn.Sequential(self.phi, max_pool_layer) - else: - self.phi = max_pool_layer - - -@PLUGIN_LAYERS.register_module() -class NonLocal2d(_NonLocalNd): - """2D Non-local module. - - Args: - in_channels (int): Same as `NonLocalND`. - sub_sample (bool): Whether to apply max pooling after pairwise - function (Note that the `sub_sample` is applied on spatial only). - Default: False. - conv_cfg (None | dict): Same as `NonLocalND`. - Default: dict(type='Conv2d'). - """ - - _abbr_ = 'nonlocal_block' - - def __init__(self, - in_channels, - sub_sample=False, - conv_cfg=dict(type='Conv2d'), - **kwargs): - super(NonLocal2d, self).__init__( - in_channels, conv_cfg=conv_cfg, **kwargs) - - self.sub_sample = sub_sample - - if sub_sample: - max_pool_layer = nn.MaxPool2d(kernel_size=(2, 2)) - self.g = nn.Sequential(self.g, max_pool_layer) - if self.mode != 'gaussian': - self.phi = nn.Sequential(self.phi, max_pool_layer) - else: - self.phi = max_pool_layer - - -class NonLocal3d(_NonLocalNd): - """3D Non-local module. - - Args: - in_channels (int): Same as `NonLocalND`. - sub_sample (bool): Whether to apply max pooling after pairwise - function (Note that the `sub_sample` is applied on spatial only). - Default: False. - conv_cfg (None | dict): Same as `NonLocalND`. - Default: dict(type='Conv3d'). 
- """ - - def __init__(self, - in_channels, - sub_sample=False, - conv_cfg=dict(type='Conv3d'), - **kwargs): - super(NonLocal3d, self).__init__( - in_channels, conv_cfg=conv_cfg, **kwargs) - self.sub_sample = sub_sample - - if sub_sample: - max_pool_layer = nn.MaxPool3d(kernel_size=(1, 2, 2)) - self.g = nn.Sequential(self.g, max_pool_layer) - if self.mode != 'gaussian': - self.phi = nn.Sequential(self.phi, max_pool_layer) - else: - self.phi = max_pool_layer diff --git a/spaces/Sa-m/YOLO-V7-Custom-Model-Pot-Hole-Detection/utils/datasets.py b/spaces/Sa-m/YOLO-V7-Custom-Model-Pot-Hole-Detection/utils/datasets.py deleted file mode 100644 index b6bb8b02aa706c7ea8536665d908b417134fcd0f..0000000000000000000000000000000000000000 --- a/spaces/Sa-m/YOLO-V7-Custom-Model-Pot-Hole-Detection/utils/datasets.py +++ /dev/null @@ -1,1320 +0,0 @@ -# Dataset utils and dataloaders - -import glob -import logging -import math -import os -import random -import shutil -import time -from itertools import repeat -from multiprocessing.pool import ThreadPool -from pathlib import Path -from threading import Thread - -import cv2 -import numpy as np -import torch -import torch.nn.functional as F -from PIL import Image, ExifTags -from torch.utils.data import Dataset -from tqdm import tqdm - -import pickle -from copy import deepcopy -#from pycocotools import mask as maskUtils -from torchvision.utils import save_image -from torchvision.ops import roi_pool, roi_align, ps_roi_pool, ps_roi_align - -from utils.general import check_requirements, xyxy2xywh, xywh2xyxy, xywhn2xyxy, xyn2xy, segment2box, segments2boxes, \ - resample_segments, clean_str -from utils.torch_utils import torch_distributed_zero_first - -# Parameters -help_url = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data' -img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng', 'webp', 'mpo'] # acceptable image suffixes -vid_formats = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv'] # acceptable video suffixes -logger = logging.getLogger(__name__) - -# Get orientation exif tag -for orientation in ExifTags.TAGS.keys(): - if ExifTags.TAGS[orientation] == 'Orientation': - break - - -def get_hash(files): - # Returns a single hash value of a list of files - return sum(os.path.getsize(f) for f in files if os.path.isfile(f)) - - -def exif_size(img): - # Returns exif-corrected PIL size - s = img.size # (width, height) - try: - rotation = dict(img._getexif().items())[orientation] - if rotation == 6: # rotation 270 - s = (s[1], s[0]) - elif rotation == 8: # rotation 90 - s = (s[1], s[0]) - except: - pass - - return s - - -def create_dataloader(path, imgsz, batch_size, stride, opt, hyp=None, augment=False, cache=False, pad=0.0, rect=False, - rank=-1, world_size=1, workers=8, image_weights=False, quad=False, prefix=''): - # Make sure only the first process in DDP process the dataset first, and the following others can use the cache - with torch_distributed_zero_first(rank): - dataset = LoadImagesAndLabels(path, imgsz, batch_size, - augment=augment, # augment images - hyp=hyp, # augmentation hyperparameters - rect=rect, # rectangular training - cache_images=cache, - single_cls=opt.single_cls, - stride=int(stride), - pad=pad, - image_weights=image_weights, - prefix=prefix) - - batch_size = min(batch_size, len(dataset)) - nw = min([os.cpu_count() // world_size, batch_size if batch_size > 1 else 0, workers]) # number of workers - sampler = torch.utils.data.distributed.DistributedSampler(dataset) if rank != -1 else None - loader = 
torch.utils.data.DataLoader if image_weights else InfiniteDataLoader - # Use torch.utils.data.DataLoader() if dataset.properties will update during training else InfiniteDataLoader() - dataloader = loader(dataset, - batch_size=batch_size, - num_workers=nw, - sampler=sampler, - pin_memory=True, - collate_fn=LoadImagesAndLabels.collate_fn4 if quad else LoadImagesAndLabels.collate_fn) - return dataloader, dataset - - -class InfiniteDataLoader(torch.utils.data.dataloader.DataLoader): - """ Dataloader that reuses workers - - Uses same syntax as vanilla DataLoader - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler)) - self.iterator = super().__iter__() - - def __len__(self): - return len(self.batch_sampler.sampler) - - def __iter__(self): - for i in range(len(self)): - yield next(self.iterator) - - -class _RepeatSampler(object): - """ Sampler that repeats forever - - Args: - sampler (Sampler) - """ - - def __init__(self, sampler): - self.sampler = sampler - - def __iter__(self): - while True: - yield from iter(self.sampler) - - -class LoadImages: # for inference - def __init__(self, path, img_size=640, stride=32): - p = str(Path(path).absolute()) # os-agnostic absolute path - if '*' in p: - files = sorted(glob.glob(p, recursive=True)) # glob - elif os.path.isdir(p): - files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir - elif os.path.isfile(p): - files = [p] # files - else: - raise Exception(f'ERROR: {p} does not exist') - - images = [x for x in files if x.split('.')[-1].lower() in img_formats] - videos = [x for x in files if x.split('.')[-1].lower() in vid_formats] - ni, nv = len(images), len(videos) - - self.img_size = img_size - self.stride = stride - self.files = images + videos - self.nf = ni + nv # number of files - self.video_flag = [False] * ni + [True] * nv - self.mode = 'image' - if any(videos): - self.new_video(videos[0]) # new video - else: - self.cap = None - assert self.nf > 0, f'No images or videos found in {p}. 
' \ - f'Supported formats are:\nimages: {img_formats}\nvideos: {vid_formats}' - - def __iter__(self): - self.count = 0 - return self - - def __next__(self): - if self.count == self.nf: - raise StopIteration - path = self.files[self.count] - - if self.video_flag[self.count]: - # Read video - self.mode = 'video' - ret_val, img0 = self.cap.read() - if not ret_val: - self.count += 1 - self.cap.release() - if self.count == self.nf: # last video - raise StopIteration - else: - path = self.files[self.count] - self.new_video(path) - ret_val, img0 = self.cap.read() - - self.frame += 1 - print(f'video {self.count + 1}/{self.nf} ({self.frame}/{self.nframes}) {path}: ', end='') - - else: - # Read image - self.count += 1 - img0 = cv2.imread(path) # BGR - assert img0 is not None, 'Image Not Found ' + path - #print(f'image {self.count}/{self.nf} {path}: ', end='') - - # Padded resize - img = letterbox(img0, self.img_size, stride=self.stride)[0] - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - - return path, img, img0, self.cap - - def new_video(self, path): - self.frame = 0 - self.cap = cv2.VideoCapture(path) - self.nframes = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) - - def __len__(self): - return self.nf # number of files - - -class LoadWebcam: # for inference - def __init__(self, pipe='0', img_size=640, stride=32): - self.img_size = img_size - self.stride = stride - - if pipe.isnumeric(): - pipe = eval(pipe) # local camera - # pipe = 'rtsp://192.168.1.64/1' # IP camera - # pipe = 'rtsp://username:password@192.168.1.64/1' # IP camera with login - # pipe = 'http://wmccpinetop.axiscam.net/mjpg/video.mjpg' # IP golf camera - - self.pipe = pipe - self.cap = cv2.VideoCapture(pipe) # video capture object - self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 3) # set buffer size - - def __iter__(self): - self.count = -1 - return self - - def __next__(self): - self.count += 1 - if cv2.waitKey(1) == ord('q'): # q to quit - self.cap.release() - cv2.destroyAllWindows() - raise StopIteration - - # Read frame - if self.pipe == 0: # local camera - ret_val, img0 = self.cap.read() - img0 = cv2.flip(img0, 1) # flip left-right - else: # IP camera - n = 0 - while True: - n += 1 - self.cap.grab() - if n % 30 == 0: # skip frames - ret_val, img0 = self.cap.retrieve() - if ret_val: - break - - # Print - assert ret_val, f'Camera Error {self.pipe}' - img_path = 'webcam.jpg' - print(f'webcam {self.count}: ', end='') - - # Padded resize - img = letterbox(img0, self.img_size, stride=self.stride)[0] - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - - return img_path, img, img0, None - - def __len__(self): - return 0 - - -class LoadStreams: # multiple IP or RTSP cameras - def __init__(self, sources='streams.txt', img_size=640, stride=32): - self.mode = 'stream' - self.img_size = img_size - self.stride = stride - - if os.path.isfile(sources): - with open(sources, 'r') as f: - sources = [x.strip() for x in f.read().strip().splitlines() if len(x.strip())] - else: - sources = [sources] - - n = len(sources) - self.imgs = [None] * n - self.sources = [clean_str(x) for x in sources] # clean source names for later - for i, s in enumerate(sources): - # Start the thread to read frames from the video stream - print(f'{i + 1}/{n}: {s}... 
', end='') - url = eval(s) if s.isnumeric() else s - if 'youtube.com/' in str(url) or 'youtu.be/' in str(url): # if source is YouTube video - check_requirements(('pafy', 'youtube_dl')) - import pafy - url = pafy.new(url).getbest(preftype="mp4").url - cap = cv2.VideoCapture(url) - assert cap.isOpened(), f'Failed to open {s}' - w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - self.fps = cap.get(cv2.CAP_PROP_FPS) % 100 - - _, self.imgs[i] = cap.read() # guarantee first frame - thread = Thread(target=self.update, args=([i, cap]), daemon=True) - print(f' success ({w}x{h} at {self.fps:.2f} FPS).') - thread.start() - print('') # newline - - # check for common shapes - s = np.stack([letterbox(x, self.img_size, stride=self.stride)[0].shape for x in self.imgs], 0) # shapes - self.rect = np.unique(s, axis=0).shape[0] == 1 # rect inference if all shapes equal - if not self.rect: - print('WARNING: Different stream shapes detected. For optimal performance supply similarly-shaped streams.') - - def update(self, index, cap): - # Read next stream frame in a daemon thread - n = 0 - while cap.isOpened(): - n += 1 - # _, self.imgs[index] = cap.read() - cap.grab() - if n == 4: # read every 4th frame - success, im = cap.retrieve() - self.imgs[index] = im if success else self.imgs[index] * 0 - n = 0 - time.sleep(1 / self.fps) # wait time - - def __iter__(self): - self.count = -1 - return self - - def __next__(self): - self.count += 1 - img0 = self.imgs.copy() - if cv2.waitKey(1) == ord('q'): # q to quit - cv2.destroyAllWindows() - raise StopIteration - - # Letterbox - img = [letterbox(x, self.img_size, auto=self.rect, stride=self.stride)[0] for x in img0] - - # Stack - img = np.stack(img, 0) - - # Convert - img = img[:, :, :, ::-1].transpose(0, 3, 1, 2) # BGR to RGB, to bsx3x416x416 - img = np.ascontiguousarray(img) - - return self.sources, img, img0, None - - def __len__(self): - return 0 # 1E12 frames = 32 streams at 30 FPS for 30 years - - -def img2label_paths(img_paths): - # Define label paths as a function of image paths - sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings - return ['txt'.join(x.replace(sa, sb, 1).rsplit(x.split('.')[-1], 1)) for x in img_paths] - - -class LoadImagesAndLabels(Dataset): # for training/testing - def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False, - cache_images=False, single_cls=False, stride=32, pad=0.0, prefix=''): - self.img_size = img_size - self.augment = augment - self.hyp = hyp - self.image_weights = image_weights - self.rect = False if image_weights else rect - self.mosaic = self.augment and not self.rect # load 4 images at a time into a mosaic (only during training) - self.mosaic_border = [-img_size // 2, -img_size // 2] - self.stride = stride - self.path = path - #self.albumentations = Albumentations() if augment else None - - try: - f = [] # image files - for p in path if isinstance(path, list) else [path]: - p = Path(p) # os-agnostic - if p.is_dir(): # dir - f += glob.glob(str(p / '**' / '*.*'), recursive=True) - # f = list(p.rglob('**/*.*')) # pathlib - elif p.is_file(): # file - with open(p, 'r') as t: - t = t.read().strip().splitlines() - parent = str(p.parent) + os.sep - f += [x.replace('./', parent) if x.startswith('./') else x for x in t] # local to global path - # f += [p.parent / x.lstrip(os.sep) for x in t] # local to global path (pathlib) - else: - raise Exception(f'{prefix}{p} does not exist') - 
self.img_files = sorted([x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in img_formats]) - # self.img_files = sorted([x for x in f if x.suffix[1:].lower() in img_formats]) # pathlib - assert self.img_files, f'{prefix}No images found' - except Exception as e: - raise Exception(f'{prefix}Error loading data from {path}: {e}\nSee {help_url}') - - # Check cache - self.label_files = img2label_paths(self.img_files) # labels - cache_path = (p if p.is_file() else Path(self.label_files[0]).parent).with_suffix('.cache') # cached labels - if cache_path.is_file(): - cache, exists = torch.load(cache_path), True # load - #if cache['hash'] != get_hash(self.label_files + self.img_files) or 'version' not in cache: # changed - # cache, exists = self.cache_labels(cache_path, prefix), False # re-cache - else: - cache, exists = self.cache_labels(cache_path, prefix), False # cache - - # Display cache - nf, nm, ne, nc, n = cache.pop('results') # found, missing, empty, corrupted, total - if exists: - d = f"Scanning '{cache_path}' images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted" - tqdm(None, desc=prefix + d, total=n, initial=n) # display cache results - assert nf > 0 or not augment, f'{prefix}No labels in {cache_path}. Can not train without labels. See {help_url}' - - # Read cache - cache.pop('hash') # remove hash - cache.pop('version') # remove version - labels, shapes, self.segments = zip(*cache.values()) - self.labels = list(labels) - self.shapes = np.array(shapes, dtype=np.float64) - self.img_files = list(cache.keys()) # update - self.label_files = img2label_paths(cache.keys()) # update - if single_cls: - for x in self.labels: - x[:, 0] = 0 - - n = len(shapes) # number of images - bi = np.floor(np.arange(n) / batch_size).astype(np.int) # batch index - nb = bi[-1] + 1 # number of batches - self.batch = bi # batch index of image - self.n = n - self.indices = range(n) - - # Rectangular Training - if self.rect: - # Sort by aspect ratio - s = self.shapes # wh - ar = s[:, 1] / s[:, 0] # aspect ratio - irect = ar.argsort() - self.img_files = [self.img_files[i] for i in irect] - self.label_files = [self.label_files[i] for i in irect] - self.labels = [self.labels[i] for i in irect] - self.shapes = s[irect] # wh - ar = ar[irect] - - # Set training image shapes - shapes = [[1, 1]] * nb - for i in range(nb): - ari = ar[bi == i] - mini, maxi = ari.min(), ari.max() - if maxi < 1: - shapes[i] = [maxi, 1] - elif mini > 1: - shapes[i] = [1, 1 / mini] - - self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(np.int) * stride - - # Cache images into memory for faster training (WARNING: large datasets may exceed system RAM) - self.imgs = [None] * n - if cache_images: - if cache_images == 'disk': - self.im_cache_dir = Path(Path(self.img_files[0]).parent.as_posix() + '_npy') - self.img_npy = [self.im_cache_dir / Path(f).with_suffix('.npy').name for f in self.img_files] - self.im_cache_dir.mkdir(parents=True, exist_ok=True) - gb = 0 # Gigabytes of cached images - self.img_hw0, self.img_hw = [None] * n, [None] * n - results = ThreadPool(8).imap(lambda x: load_image(*x), zip(repeat(self), range(n))) - pbar = tqdm(enumerate(results), total=n) - for i, x in pbar: - if cache_images == 'disk': - if not self.img_npy[i].exists(): - np.save(self.img_npy[i].as_posix(), x[0]) - gb += self.img_npy[i].stat().st_size - else: - self.imgs[i], self.img_hw0[i], self.img_hw[i] = x - gb += self.imgs[i].nbytes - pbar.desc = f'{prefix}Caching images ({gb / 1E9:.1f}GB)' - pbar.close() - - 
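
Editorial aside, not part of the deleted file: the img2label_paths helper used just above derives each label path from its image path by swapping the /images/ and /labels/ directory components and the file extension. A minimal standalone check, assuming a POSIX-style path and the conventional images/ <-> labels/ layout:

    import os

    def img2label_paths(img_paths):
        # /images/ -> /labels/ and .jpg -> .txt, exactly as in the helper above
        sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep
        return ['txt'.join(x.replace(sa, sb, 1).rsplit(x.split('.')[-1], 1)) for x in img_paths]

    print(img2label_paths(['/data/images/train/0001.jpg']))
    # expected output: ['/data/labels/train/0001.txt']
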
def cache_labels(self, path=Path('./labels.cache'), prefix=''): - # Cache dataset labels, check images and read shapes - x = {} # dict - nm, nf, ne, nc = 0, 0, 0, 0 # number missing, found, empty, duplicate - pbar = tqdm(zip(self.img_files, self.label_files), desc='Scanning images', total=len(self.img_files)) - for i, (im_file, lb_file) in enumerate(pbar): - try: - # verify images - im = Image.open(im_file) - im.verify() # PIL verify - shape = exif_size(im) # image size - segments = [] # instance segments - assert (shape[0] > 9) & (shape[1] > 9), f'image size {shape} <10 pixels' - assert im.format.lower() in img_formats, f'invalid image format {im.format}' - - # verify labels - if os.path.isfile(lb_file): - nf += 1 # label found - with open(lb_file, 'r') as f: - l = [x.split() for x in f.read().strip().splitlines()] - if any([len(x) > 8 for x in l]): # is segment - classes = np.array([x[0] for x in l], dtype=np.float32) - segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in l] # (cls, xy1...) - l = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1) # (cls, xywh) - l = np.array(l, dtype=np.float32) - if len(l): - assert l.shape[1] == 5, 'labels require 5 columns each' - assert (l >= 0).all(), 'negative labels' - assert (l[:, 1:] <= 1).all(), 'non-normalized or out of bounds coordinate labels' - assert np.unique(l, axis=0).shape[0] == l.shape[0], 'duplicate labels' - else: - ne += 1 # label empty - l = np.zeros((0, 5), dtype=np.float32) - else: - nm += 1 # label missing - l = np.zeros((0, 5), dtype=np.float32) - x[im_file] = [l, shape, segments] - except Exception as e: - nc += 1 - print(f'{prefix}WARNING: Ignoring corrupted image and/or label {im_file}: {e}') - - pbar.desc = f"{prefix}Scanning '{path.parent / path.stem}' images and labels... " \ - f"{nf} found, {nm} missing, {ne} empty, {nc} corrupted" - pbar.close() - - if nf == 0: - print(f'{prefix}WARNING: No labels found in {path}. 
See {help_url}') - - x['hash'] = get_hash(self.label_files + self.img_files) - x['results'] = nf, nm, ne, nc, i + 1 - x['version'] = 0.1 # cache version - torch.save(x, path) # save for next time - logging.info(f'{prefix}New cache created: {path}') - return x - - def __len__(self): - return len(self.img_files) - - # def __iter__(self): - # self.count = -1 - # print('ran dataset iter') - # #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF) - # return self - - def __getitem__(self, index): - index = self.indices[index] # linear, shuffled, or image_weights - - hyp = self.hyp - mosaic = self.mosaic and random.random() < hyp['mosaic'] - if mosaic: - # Load mosaic - if random.random() < 0.8: - img, labels = load_mosaic(self, index) - else: - img, labels = load_mosaic9(self, index) - shapes = None - - # MixUp https://arxiv.org/pdf/1710.09412.pdf - if random.random() < hyp['mixup']: - if random.random() < 0.8: - img2, labels2 = load_mosaic(self, random.randint(0, len(self.labels) - 1)) - else: - img2, labels2 = load_mosaic9(self, random.randint(0, len(self.labels) - 1)) - r = np.random.beta(8.0, 8.0) # mixup ratio, alpha=beta=8.0 - img = (img * r + img2 * (1 - r)).astype(np.uint8) - labels = np.concatenate((labels, labels2), 0) - - else: - # Load image - img, (h0, w0), (h, w) = load_image(self, index) - - # Letterbox - shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape - img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment) - shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling - - labels = self.labels[index].copy() - if labels.size: # normalized xywh to pixel xyxy format - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], ratio[0] * w, ratio[1] * h, padw=pad[0], padh=pad[1]) - - if self.augment: - # Augment imagespace - if not mosaic: - img, labels = random_perspective(img, labels, - degrees=hyp['degrees'], - translate=hyp['translate'], - scale=hyp['scale'], - shear=hyp['shear'], - perspective=hyp['perspective']) - - - #img, labels = self.albumentations(img, labels) - - # Augment colorspace - augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v']) - - # Apply cutouts - # if random.random() < 0.9: - # labels = cutout(img, labels) - - if random.random() < hyp['paste_in']: - sample_labels, sample_images, sample_masks = [], [], [] - while len(sample_labels) < 30: - sample_labels_, sample_images_, sample_masks_ = load_samples(self, random.randint(0, len(self.labels) - 1)) - sample_labels += sample_labels_ - sample_images += sample_images_ - sample_masks += sample_masks_ - #print(len(sample_labels)) - if len(sample_labels) == 0: - break - labels = pastein(img, labels, sample_labels, sample_images, sample_masks) - - nL = len(labels) # number of labels - if nL: - labels[:, 1:5] = xyxy2xywh(labels[:, 1:5]) # convert xyxy to xywh - labels[:, [2, 4]] /= img.shape[0] # normalized height 0-1 - labels[:, [1, 3]] /= img.shape[1] # normalized width 0-1 - - if self.augment: - # flip up-down - if random.random() < hyp['flipud']: - img = np.flipud(img) - if nL: - labels[:, 2] = 1 - labels[:, 2] - - # flip left-right - if random.random() < hyp['fliplr']: - img = np.fliplr(img) - if nL: - labels[:, 1] = 1 - labels[:, 1] - - labels_out = torch.zeros((nL, 6)) - if nL: - labels_out[:, 1:] = torch.from_numpy(labels) - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - - return torch.from_numpy(img), labels_out, 
self.img_files[index], shapes - - @staticmethod - def collate_fn(batch): - img, label, path, shapes = zip(*batch) # transposed - for i, l in enumerate(label): - l[:, 0] = i # add target image index for build_targets() - return torch.stack(img, 0), torch.cat(label, 0), path, shapes - - @staticmethod - def collate_fn4(batch): - img, label, path, shapes = zip(*batch) # transposed - n = len(shapes) // 4 - img4, label4, path4, shapes4 = [], [], path[:n], shapes[:n] - - ho = torch.tensor([[0., 0, 0, 1, 0, 0]]) - wo = torch.tensor([[0., 0, 1, 0, 0, 0]]) - s = torch.tensor([[1, 1, .5, .5, .5, .5]]) # scale - for i in range(n): # zidane torch.zeros(16,3,720,1280) # BCHW - i *= 4 - if random.random() < 0.5: - im = F.interpolate(img[i].unsqueeze(0).float(), scale_factor=2., mode='bilinear', align_corners=False)[ - 0].type(img[i].type()) - l = label[i] - else: - im = torch.cat((torch.cat((img[i], img[i + 1]), 1), torch.cat((img[i + 2], img[i + 3]), 1)), 2) - l = torch.cat((label[i], label[i + 1] + ho, label[i + 2] + wo, label[i + 3] + ho + wo), 0) * s - img4.append(im) - label4.append(l) - - for i, l in enumerate(label4): - l[:, 0] = i # add target image index for build_targets() - - return torch.stack(img4, 0), torch.cat(label4, 0), path4, shapes4 - - -# Ancillary functions -------------------------------------------------------------------------------------------------- -def load_image(self, index): - # loads 1 image from dataset, returns img, original hw, resized hw - img = self.imgs[index] - if img is None: # not cached - path = self.img_files[index] - img = cv2.imread(path) # BGR - assert img is not None, 'Image Not Found ' + path - h0, w0 = img.shape[:2] # orig hw - r = self.img_size / max(h0, w0) # resize image to img_size - if r != 1: # always resize down, only resize up if training with augmentation - interp = cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR - img = cv2.resize(img, (int(w0 * r), int(h0 * r)), interpolation=interp) - return img, (h0, w0), img.shape[:2] # img, hw_original, hw_resized - else: - return self.imgs[index], self.img_hw0[index], self.img_hw[index] # img, hw_original, hw_resized - - -def augment_hsv(img, hgain=0.5, sgain=0.5, vgain=0.5): - r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains - hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV)) - dtype = img.dtype # uint8 - - x = np.arange(0, 256, dtype=np.int16) - lut_hue = ((x * r[0]) % 180).astype(dtype) - lut_sat = np.clip(x * r[1], 0, 255).astype(dtype) - lut_val = np.clip(x * r[2], 0, 255).astype(dtype) - - img_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))).astype(dtype) - cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img) # no return needed - - -def hist_equalize(img, clahe=True, bgr=False): - # Equalize histogram on BGR image 'img' with img.shape(n,m,3) and range 0-255 - yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV) - if clahe: - c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)) - yuv[:, :, 0] = c.apply(yuv[:, :, 0]) - else: - yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0]) # equalize Y channel histogram - return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB) # convert YUV image to RGB - - -def load_mosaic(self, index): - # loads images in a 4-mosaic - - labels4, segments4 = [], [] - s = self.img_size - yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y - indices = [index] + random.choices(self.indices, k=3) # 3 additional image 
indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = load_image(self, index) - - # place img in img4 - if i == 0: # top left - img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - elif i == 1: # top right - x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - elif i == 2: # bottom left - x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - elif i == 3: # bottom right - x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - - img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - padw = x1a - x1b - padh = y1a - y1b - - # Labels - labels, segments = self.labels[index].copy(), self.segments[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - labels4.append(labels) - segments4.extend(segments) - - # Concat/clip labels - labels4 = np.concatenate(labels4, 0) - for x in (labels4[:, 1:], *segments4): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img4, labels4 = replicate(img4, labels4) # replicate - - # Augment - #img4, labels4, segments4 = remove_background(img4, labels4, segments4) - #sample_segments(img4, labels4, segments4, probability=self.hyp['copy_paste']) - img4, labels4, segments4 = copy_paste(img4, labels4, segments4, probability=self.hyp['copy_paste']) - img4, labels4 = random_perspective(img4, labels4, segments4, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - - return img4, labels4 - - -def load_mosaic9(self, index): - # loads images in a 9-mosaic - - labels9, segments9 = [], [] - s = self.img_size - indices = [index] + random.choices(self.indices, k=8) # 8 additional image indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = load_image(self, index) - - # place img in img9 - if i == 0: # center - img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - h0, w0 = h, w - c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates - elif i == 1: # top - c = s, s - h, s + w, s - elif i == 2: # top right - c = s + wp, s - h, s + wp + w, s - elif i == 3: # right - c = s + w0, s, s + w0 + w, s + h - elif i == 4: # bottom right - c = s + w0, s + hp, s + w0 + w, s + hp + h - elif i == 5: # bottom - c = s + w0 - w, s + h0, s + w0, s + h0 + h - elif i == 6: # bottom left - c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h - elif i == 7: # left - c = s - w, s + h0 - h, s, s + h0 - elif i == 8: # top left - c = s - w, s + h0 - hp - h, s, s + h0 - hp - - padx, pady = c[:2] - x1, y1, x2, y2 = [max(x, 0) for x in c] # allocate coords - - # Labels - labels, segments = self.labels[index].copy(), self.segments[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padx, pady) for x 
in segments] - labels9.append(labels) - segments9.extend(segments) - - # Image - img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:] # img9[ymin:ymax, xmin:xmax] - hp, wp = h, w # height, width previous - - # Offset - yc, xc = [int(random.uniform(0, s)) for _ in self.mosaic_border] # mosaic center x, y - img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s] - - # Concat/clip labels - labels9 = np.concatenate(labels9, 0) - labels9[:, [1, 3]] -= xc - labels9[:, [2, 4]] -= yc - c = np.array([xc, yc]) # centers - segments9 = [x - c for x in segments9] - - for x in (labels9[:, 1:], *segments9): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img9, labels9 = replicate(img9, labels9) # replicate - - # Augment - #img9, labels9, segments9 = remove_background(img9, labels9, segments9) - img9, labels9, segments9 = copy_paste(img9, labels9, segments9, probability=self.hyp['copy_paste']) - img9, labels9 = random_perspective(img9, labels9, segments9, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - - return img9, labels9 - - -def load_samples(self, index): - # loads images in a 4-mosaic - - labels4, segments4 = [], [] - s = self.img_size - yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y - indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = load_image(self, index) - - # place img in img4 - if i == 0: # top left - img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - elif i == 1: # top right - x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - elif i == 2: # bottom left - x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - elif i == 3: # bottom right - x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - - img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - padw = x1a - x1b - padh = y1a - y1b - - # Labels - labels, segments = self.labels[index].copy(), self.segments[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - labels4.append(labels) - segments4.extend(segments) - - # Concat/clip labels - labels4 = np.concatenate(labels4, 0) - for x in (labels4[:, 1:], *segments4): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img4, labels4 = replicate(img4, labels4) # replicate - - # Augment - #img4, labels4, segments4 = remove_background(img4, labels4, segments4) - sample_labels, sample_images, sample_masks = sample_segments(img4, labels4, segments4, probability=0.5) - - return sample_labels, sample_images, sample_masks - - -def copy_paste(img, labels, segments, probability=0.5): - # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy) - n = len(segments) - if probability and n: - h, w, 
c = img.shape # height, width, channels - im_new = np.zeros(img.shape, np.uint8) - for j in random.sample(range(n), k=round(probability * n)): - l, s = labels[j], segments[j] - box = w - l[3], l[2], w - l[1], l[4] - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - if (ioa < 0.30).all(): # allow 30% obscuration of existing labels - labels = np.concatenate((labels, [[l[0], *box]]), 0) - segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1)) - cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED) - - result = cv2.bitwise_and(src1=img, src2=im_new) - result = cv2.flip(result, 1) # augment segments (flip left-right) - i = result > 0 # pixels to replace - # i[:, :] = result.max(2).reshape(h, w, 1) # act over ch - img[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug - - return img, labels, segments - - -def remove_background(img, labels, segments): - # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy) - n = len(segments) - h, w, c = img.shape # height, width, channels - im_new = np.zeros(img.shape, np.uint8) - img_new = np.ones(img.shape, np.uint8) * 114 - for j in range(n): - cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED) - - result = cv2.bitwise_and(src1=img, src2=im_new) - - i = result > 0 # pixels to replace - img_new[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug - - return img_new, labels, segments - - -def sample_segments(img, labels, segments, probability=0.5): - # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy) - n = len(segments) - sample_labels = [] - sample_images = [] - sample_masks = [] - if probability and n: - h, w, c = img.shape # height, width, channels - for j in random.sample(range(n), k=round(probability * n)): - l, s = labels[j], segments[j] - box = l[1].astype(int).clip(0,w-1), l[2].astype(int).clip(0,h-1), l[3].astype(int).clip(0,w-1), l[4].astype(int).clip(0,h-1) - - #print(box) - if (box[2] <= box[0]) or (box[3] <= box[1]): - continue - - sample_labels.append(l[0]) - - mask = np.zeros(img.shape, np.uint8) - - cv2.drawContours(mask, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED) - sample_masks.append(mask[box[1]:box[3],box[0]:box[2],:]) - - result = cv2.bitwise_and(src1=img, src2=mask) - i = result > 0 # pixels to replace - mask[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug - #print(box) - sample_images.append(mask[box[1]:box[3],box[0]:box[2],:]) - - return sample_labels, sample_images, sample_masks - - -def replicate(img, labels): - # Replicate labels - h, w = img.shape[:2] - boxes = labels[:, 1:].astype(int) - x1, y1, x2, y2 = boxes.T - s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels) - for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices - x1b, y1b, x2b, y2b = boxes[i] - bh, bw = y2b - y1b, x2b - x1b - yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y - x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh] - img[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0) - - return img, labels - - -def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32): - # Resize and pad image while meeting stride-multiple constraints - shape = img.shape[:2] # current shape [height, width] - if isinstance(new_shape, int): - new_shape = (new_shape, 
new_shape) - - # Scale ratio (new / old) - r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) - if not scaleup: # only scale down, do not scale up (for better test mAP) - r = min(r, 1.0) - - # Compute padding - ratio = r, r # width, height ratios - new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) - dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding - if auto: # minimum rectangle - dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding - elif scaleFill: # stretch - dw, dh = 0.0, 0.0 - new_unpad = (new_shape[1], new_shape[0]) - ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios - - dw /= 2 # divide padding into 2 sides - dh /= 2 - - if shape[::-1] != new_unpad: # resize - img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR) - top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) - left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) - img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border - return img, ratio, (dw, dh) - - -def random_perspective(img, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0, - border=(0, 0)): - # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10)) - # targets = [cls, xyxy] - - height = img.shape[0] + border[0] * 2 # shape(h,w,c) - width = img.shape[1] + border[1] * 2 - - # Center - C = np.eye(3) - C[0, 2] = -img.shape[1] / 2 # x translation (pixels) - C[1, 2] = -img.shape[0] / 2 # y translation (pixels) - - # Perspective - P = np.eye(3) - P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y) - P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x) - - # Rotation and Scale - R = np.eye(3) - a = random.uniform(-degrees, degrees) - # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations - s = random.uniform(1 - scale, 1.1 + scale) - # s = 2 ** random.uniform(-scale, scale) - R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s) - - # Shear - S = np.eye(3) - S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg) - S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg) - - # Translation - T = np.eye(3) - T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels) - T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels) - - # Combined rotation matrix - M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT - if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed - if perspective: - img = cv2.warpPerspective(img, M, dsize=(width, height), borderValue=(114, 114, 114)) - else: # affine - img = cv2.warpAffine(img, M[:2], dsize=(width, height), borderValue=(114, 114, 114)) - - # Visualize - # import matplotlib.pyplot as plt - # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel() - # ax[0].imshow(img[:, :, ::-1]) # base - # ax[1].imshow(img2[:, :, ::-1]) # warped - - # Transform label coordinates - n = len(targets) - if n: - use_segments = any(x.any() for x in segments) - new = np.zeros((n, 4)) - if use_segments: # warp segments - segments = resample_segments(segments) # upsample - for i, segment in enumerate(segments): - xy = np.ones((len(segment), 3)) - xy[:, :2] = segment - xy = xy @ M.T # transform - xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2] # perspective rescale or 
affine - - # clip - new[i] = segment2box(xy, width, height) - - else: # warp boxes - xy = np.ones((n * 4, 3)) - xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1 - xy = xy @ M.T # transform - xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8) # perspective rescale or affine - - # create new boxes - x = xy[:, [0, 2, 4, 6]] - y = xy[:, [1, 3, 5, 7]] - new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T - - # clip - new[:, [0, 2]] = new[:, [0, 2]].clip(0, width) - new[:, [1, 3]] = new[:, [1, 3]].clip(0, height) - - # filter candidates - i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10) - targets = targets[i] - targets[:, 1:5] = new[i] - - return img, targets - - -def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n) - # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio - w1, h1 = box1[2] - box1[0], box1[3] - box1[1] - w2, h2 = box2[2] - box2[0], box2[3] - box2[1] - ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio - return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates - - -def bbox_ioa(box1, box2): - # Returns the intersection over box2 area given box1, box2. box1 is 4, box2 is nx4. boxes are x1y1x2y2 - box2 = box2.transpose() - - # Get the coordinates of bounding boxes - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - - # Intersection area - inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \ - (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0) - - # box2 area - box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + 1e-16 - - # Intersection over box2 area - return inter_area / box2_area - - -def cutout(image, labels): - # Applies image cutout augmentation https://arxiv.org/abs/1708.04552 - h, w = image.shape[:2] - - # create random masks - scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction - for s in scales: - mask_h = random.randint(1, int(h * s)) - mask_w = random.randint(1, int(w * s)) - - # box - xmin = max(0, random.randint(0, w) - mask_w // 2) - ymin = max(0, random.randint(0, h) - mask_h // 2) - xmax = min(w, xmin + mask_w) - ymax = min(h, ymin + mask_h) - - # apply random color mask - image[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)] - - # return unobscured labels - if len(labels) and s > 0.03: - box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32) - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - labels = labels[ioa < 0.60] # remove >60% obscured labels - - return labels - - -def pastein(image, labels, sample_labels, sample_images, sample_masks): - # Applies image cutout augmentation https://arxiv.org/abs/1708.04552 - h, w = image.shape[:2] - - # create random masks - scales = [0.75] * 2 + [0.5] * 4 + [0.25] * 4 + [0.125] * 4 + [0.0625] * 6 # image size fraction - for s in scales: - if random.random() < 0.2: - continue - mask_h = random.randint(1, int(h * s)) - mask_w = random.randint(1, int(w * s)) - - # box - xmin = max(0, random.randint(0, w) - mask_w // 2) - ymin = max(0, random.randint(0, h) - mask_h // 2) - xmax = min(w, xmin + mask_w) - ymax = min(h, ymin + mask_h) - - box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32) - if len(labels): - ioa = 
bbox_ioa(box, labels[:, 1:5]) # intersection over area - else: - ioa = np.zeros(1) - - if (ioa < 0.30).all() and len(sample_labels) and (xmax > xmin+20) and (ymax > ymin+20): # allow 30% obscuration of existing labels - sel_ind = random.randint(0, len(sample_labels)-1) - #print(len(sample_labels)) - #print(sel_ind) - #print((xmax-xmin, ymax-ymin)) - #print(image[ymin:ymax, xmin:xmax].shape) - #print([[sample_labels[sel_ind], *box]]) - #print(labels.shape) - hs, ws, cs = sample_images[sel_ind].shape - r_scale = min((ymax-ymin)/hs, (xmax-xmin)/ws) - r_w = int(ws*r_scale) - r_h = int(hs*r_scale) - - if (r_w > 10) and (r_h > 10): - r_mask = cv2.resize(sample_masks[sel_ind], (r_w, r_h)) - r_image = cv2.resize(sample_images[sel_ind], (r_w, r_h)) - temp_crop = image[ymin:ymin+r_h, xmin:xmin+r_w] - m_ind = r_mask > 0 - if m_ind.astype(np.int).sum() > 60: - temp_crop[m_ind] = r_image[m_ind] - #print(sample_labels[sel_ind]) - #print(sample_images[sel_ind].shape) - #print(temp_crop.shape) - box = np.array([xmin, ymin, xmin+r_w, ymin+r_h], dtype=np.float32) - if len(labels): - labels = np.concatenate((labels, [[sample_labels[sel_ind], *box]]), 0) - else: - labels = np.array([[sample_labels[sel_ind], *box]]) - - image[ymin:ymin+r_h, xmin:xmin+r_w] = temp_crop - - return labels - -class Albumentations: - # YOLOv5 Albumentations class (optional, only used if package is installed) - def __init__(self): - self.transform = None - import albumentations as A - - self.transform = A.Compose([ - A.CLAHE(p=0.01), - A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.01), - A.RandomGamma(gamma_limit=[80, 120], p=0.01), - A.Blur(p=0.01), - A.MedianBlur(p=0.01), - A.ToGray(p=0.01), - A.ImageCompression(quality_lower=75, p=0.01),], - bbox_params=A.BboxParams(format='pascal_voc', label_fields=['class_labels'])) - - #logging.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms if x.p)) - - def __call__(self, im, labels, p=1.0): - if self.transform and random.random() < p: - new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0]) # transformed - im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])]) - return im, labels - - -def create_folder(path='./new'): - # Create folder - if os.path.exists(path): - shutil.rmtree(path) # delete output folder - os.makedirs(path) # make new output folder - - -def flatten_recursive(path='../coco'): - # Flatten a recursive directory by bringing all files to top level - new_path = Path(path + '_flat') - create_folder(new_path) - for file in tqdm(glob.glob(str(Path(path)) + '/**/*.*', recursive=True)): - shutil.copyfile(file, new_path / Path(file).name) - - -def extract_boxes(path='../coco/'): # from utils.datasets import *; extract_boxes('../coco128') - # Convert detection dataset into classification dataset, with one directory per class - - path = Path(path) # images dir - shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None # remove existing - files = list(path.rglob('*.*')) - n = len(files) # number of files - for im_file in tqdm(files, total=n): - if im_file.suffix[1:] in img_formats: - # image - im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB - h, w = im.shape[:2] - - # labels - lb_file = Path(img2label_paths([str(im_file)])[0]) - if Path(lb_file).exists(): - with open(lb_file, 'r') as f: - lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - - for j, x in enumerate(lb): - c = int(x[0]) # 
class - f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - if not f.parent.is_dir(): - f.parent.mkdir(parents=True) - - b = x[1:] * [w, h, w, h] # box - # b[2:] = b[2:].max() # rectangle to square - b[2:] = b[2:] * 1.2 + 3 # pad - b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int) - - b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image - b[[1, 3]] = np.clip(b[[1, 3]], 0, h) - assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}' - - -def autosplit(path='../coco', weights=(0.9, 0.1, 0.0), annotated_only=False): - """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - Usage: from utils.datasets import *; autosplit('../coco') - Arguments - path: Path to images directory - weights: Train, val, test weights (list) - annotated_only: Only use images with an annotated txt file - """ - path = Path(path) # images dir - files = sum([list(path.rglob(f"*.{img_ext}")) for img_ext in img_formats], []) # image files only - n = len(files) # number of files - indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split - - txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files - [(path / x).unlink() for x in txt if (path / x).exists()] # remove existing - - print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only) - for i, img in tqdm(zip(indices, files), total=n): - if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label - with open(path / txt[i], 'a') as f: - f.write(str(img) + '\n') # add image to txt file - - -def load_segmentations(self, index): - key = '/work/handsomejw66/coco17/' + self.img_files[index] - #print(key) - # /work/handsomejw66/coco17/ - return self.segs[key] diff --git a/spaces/Semibit/tts-server/Dockerfile b/spaces/Semibit/tts-server/Dockerfile deleted file mode 100644 index 00069499c1991c1fda289ac65bcff61c5b0e1c93..0000000000000000000000000000000000000000 --- a/spaces/Semibit/tts-server/Dockerfile +++ /dev/null @@ -1,19 +0,0 @@ -FROM python:3.11.5-bookworm - -WORKDIR /app - -COPY . . 
- -RUN apt-get update && \ - xargs -a packages.txt apt-get install -y && \ - apt-get clean && \ - rm -rf /var/lib/apt/lists/* && \ - rm packages.txt - - -RUN pip install -r requirements.txt - - -EXPOSE 7860 - -CMD ["python","/app/app.py"] diff --git a/spaces/Severian/ANIMA-7B-Biomimicry-LLM/README.md b/spaces/Severian/ANIMA-7B-Biomimicry-LLM/README.md deleted file mode 100644 index 6875f97c0c1e54930b1ab64034393c2946699c84..0000000000000000000000000000000000000000 --- a/spaces/Severian/ANIMA-7B-Biomimicry-LLM/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: A N I M A 7B - Biomimicry LLM -emoji: 🐲 -colorFrom: green -colorTo: black -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false ---- \ No newline at end of file diff --git a/spaces/Shad0ws/AI-Agent-with-Google-Search-APIs/README.md b/spaces/Shad0ws/AI-Agent-with-Google-Search-APIs/README.md deleted file mode 100644 index 07647f3c1864fd23e533f8bc511c3218eabceaa2..0000000000000000000000000000000000000000 --- a/spaces/Shad0ws/AI-Agent-with-Google-Search-APIs/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI Agent With Google Search APIs -emoji: 🏆 -colorFrom: yellow -colorTo: blue -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Stearns/Soar/pysoarlib/util/parse_wm_printout.py b/spaces/Stearns/Soar/pysoarlib/util/parse_wm_printout.py deleted file mode 100644 index 5f4742b9aa0ede931f968f976277eb01ec3d6363..0000000000000000000000000000000000000000 --- a/spaces/Stearns/Soar/pysoarlib/util/parse_wm_printout.py +++ /dev/null @@ -1,75 +0,0 @@ - -def parse_wm_printout(text): - """ Given a printout of soar's working memory, parses it into a dictionary of wmes, - Where the keys are identifiers, and the values are lists of wme triples rooted at that id - - :param text: The output of a soar print command for working memory - :type text: str - - :returns dict{ str, list[ (str, str, str) ] } - - """ - - ### First: preprocess the output into a string of tokens - tokens = [] - quote = None - for word in text.split(): - # Handle quoted strings (Between | |) - if word[0] == '|': - quote = word - elif quote is not None: - quote += ' ' + word - if quote is not None: - if len(quote) > 1 and quote.endswith('|'): - tokens.append(quote) - quote = None - elif len(quote) > 1 and quote.endswith('|)'): - tokens.append(quote[:-1]) - quote = None - continue - - # Ignore operator preferences - if word in [ '+', '>', '<', '!', '=' ]: - continue - # Ignore activation values [+23.000] - if word.startswith("[+") and (word.endswith("]") or word.endswith("])")): - continue - # Ignore singleton lti's (@12533) - if word.startswith("(@") and word.endswith(")"): - continue - # Strip opening parens but add $ to indicate identifier - if word.startswith("("): - word = '$' + word[1:] - - # Don't care about closing parens - word = word.replace(")", "") - tokens.append(word) - - wmes = dict() - cur_id = None - cur_att = None - cur_wmes = [] - i = 0 - - for token in tokens: - if len(token) == 0: - continue - # Identifier - if token[0] == '$': - cur_id = token[1:] - cur_att = None - cur_wmes = [] - wmes[cur_id] = cur_wmes - # Attribute - elif token[0] == '^': - cur_att = token[1:] - # Value - elif cur_id is None: - print("ERROR: Value " + token + " encountered with no id") - elif cur_att is None: - print("ERROR: Value " + token + " encountered with no attribute") - else: - cur_wmes.append( (cur_id, cur_att, 
token) ) - - return wmes - diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/history.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/history.py deleted file mode 100644 index fd5a8680bf697f2af724d13b2ea43c11f455e8b1..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/history.py +++ /dev/null @@ -1,968 +0,0 @@ -""" History related magics and functionality """ - -# Copyright (c) IPython Development Team. -# Distributed under the terms of the Modified BSD License. - - -import atexit -import datetime -from pathlib import Path -import re -import sqlite3 -import threading - -from traitlets.config.configurable import LoggingConfigurable -from decorator import decorator -from IPython.utils.decorators import undoc -from IPython.paths import locate_profile -from traitlets import ( - Any, - Bool, - Dict, - Instance, - Integer, - List, - Unicode, - Union, - TraitError, - default, - observe, -) - -#----------------------------------------------------------------------------- -# Classes and functions -#----------------------------------------------------------------------------- - -@undoc -class DummyDB(object): - """Dummy DB that will act as a black hole for history. - - Only used in the absence of sqlite""" - def execute(*args, **kwargs): - return [] - - def commit(self, *args, **kwargs): - pass - - def __enter__(self, *args, **kwargs): - pass - - def __exit__(self, *args, **kwargs): - pass - - -@decorator -def only_when_enabled(f, self, *a, **kw): - """Decorator: return an empty list in the absence of sqlite.""" - if not self.enabled: - return [] - else: - return f(self, *a, **kw) - - -# use 16kB as threshold for whether a corrupt history db should be saved -# that should be at least 100 entries or so -_SAVE_DB_SIZE = 16384 - -@decorator -def catch_corrupt_db(f, self, *a, **kw): - """A decorator which wraps HistoryAccessor method calls to catch errors from - a corrupt SQLite database, move the old database out of the way, and create - a new one. - - We avoid clobbering larger databases because this may be triggered due to filesystem issues, - not just a corrupt file. - """ - try: - return f(self, *a, **kw) - except (sqlite3.DatabaseError, sqlite3.OperationalError) as e: - self._corrupt_db_counter += 1 - self.log.error("Failed to open SQLite history %s (%s).", self.hist_file, e) - if self.hist_file != ':memory:': - if self._corrupt_db_counter > self._corrupt_db_limit: - self.hist_file = ':memory:' - self.log.error("Failed to load history too many times, history will not be saved.") - elif self.hist_file.is_file(): - # move the file out of the way - base = str(self.hist_file.parent / self.hist_file.stem) - ext = self.hist_file.suffix - size = self.hist_file.stat().st_size - if size >= _SAVE_DB_SIZE: - # if there's significant content, avoid clobbering - now = datetime.datetime.now().isoformat().replace(':', '.') - newpath = base + '-corrupt-' + now + ext - # don't clobber previous corrupt backups - for i in range(100): - if not Path(newpath).exists(): - break - else: - newpath = base + '-corrupt-' + now + (u'-%i' % i) + ext - else: - # not much content, possibly empty; don't worry about clobbering - # maybe we should just delete it? 
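
Editorial aside, not part of the deleted file: the catch_corrupt_db decorator above recovers from a damaged SQLite history file by renaming it out of the way and reinitialising a fresh database. A minimal standalone sketch of the same idea using only the standard library; the name open_with_recovery is hypothetical:

    import datetime
    import sqlite3
    from pathlib import Path

    def open_with_recovery(hist_file: Path) -> sqlite3.Connection:
        def _open(path):
            db = sqlite3.connect(str(path))
            # touching the schema forces SQLite to read the file and fail on corruption
            db.execute("CREATE TABLE IF NOT EXISTS sessions (session integer primary key)")
            return db
        try:
            return _open(hist_file)
        except (sqlite3.DatabaseError, sqlite3.OperationalError):
            stamp = datetime.datetime.now().isoformat().replace(':', '.')
            backup = hist_file.with_name(f'{hist_file.stem}-corrupt-{stamp}{hist_file.suffix}')
            hist_file.rename(backup)  # keep the damaged file for inspection
            return _open(hist_file)   # a new, empty database is created
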
- newpath = base + '-corrupt' + ext - self.hist_file.rename(newpath) - self.log.error("History file was moved to %s and a new file created.", newpath) - self.init_db() - return [] - else: - # Failed with :memory:, something serious is wrong - raise - - -class HistoryAccessorBase(LoggingConfigurable): - """An abstract class for History Accessors """ - - def get_tail(self, n=10, raw=True, output=False, include_latest=False): - raise NotImplementedError - - def search(self, pattern="*", raw=True, search_raw=True, - output=False, n=None, unique=False): - raise NotImplementedError - - def get_range(self, session, start=1, stop=None, raw=True,output=False): - raise NotImplementedError - - def get_range_by_str(self, rangestr, raw=True, output=False): - raise NotImplementedError - - -class HistoryAccessor(HistoryAccessorBase): - """Access the history database without adding to it. - - This is intended for use by standalone history tools. IPython shells use - HistoryManager, below, which is a subclass of this.""" - - # counter for init_db retries, so we don't keep trying over and over - _corrupt_db_counter = 0 - # after two failures, fallback on :memory: - _corrupt_db_limit = 2 - - # String holding the path to the history file - hist_file = Union( - [Instance(Path), Unicode()], - help="""Path to file to use for SQLite history database. - - By default, IPython will put the history database in the IPython - profile directory. If you would rather share one history among - profiles, you can set this value in each, so that they are consistent. - - Due to an issue with fcntl, SQLite is known to misbehave on some NFS - mounts. If you see IPython hanging, try setting this to something on a - local disk, e.g:: - - ipython --HistoryManager.hist_file=/tmp/ipython_hist.sqlite - - you can also use the specific value `:memory:` (including the colon - at both end but not the back ticks), to avoid creating an history file. - - """, - ).tag(config=True) - - enabled = Bool(True, - help="""enable the SQLite history - - set enabled=False to disable the SQLite history, - in which case there will be no stored history, no SQLite connection, - and no background saving thread. This may be necessary in some - threaded environments where IPython is embedded. - """, - ).tag(config=True) - - connection_options = Dict( - help="""Options for configuring the SQLite connection - - These options are passed as keyword args to sqlite3.connect - when establishing database connections. - """ - ).tag(config=True) - - # The SQLite database - db = Any() - @observe('db') - def _db_changed(self, change): - """validate the db, since it can be an Instance of two different types""" - new = change['new'] - connection_types = (DummyDB, sqlite3.Connection) - if not isinstance(new, connection_types): - msg = "%s.db must be sqlite3 Connection or DummyDB, not %r" % \ - (self.__class__.__name__, new) - raise TraitError(msg) - - def __init__(self, profile="default", hist_file="", **traits): - """Create a new history accessor. - - Parameters - ---------- - profile : str - The name of the profile from which to open history. - hist_file : str - Path to an SQLite history database stored by IPython. If specified, - hist_file overrides profile. - config : :class:`~traitlets.config.loader.Config` - Config object. hist_file can also be set through this. 
- """ - super(HistoryAccessor, self).__init__(**traits) - # defer setting hist_file from kwarg until after init, - # otherwise the default kwarg value would clobber any value - # set by config - if hist_file: - self.hist_file = hist_file - - try: - self.hist_file - except TraitError: - # No one has set the hist_file, yet. - self.hist_file = self._get_hist_file_name(profile) - - self.init_db() - - def _get_hist_file_name(self, profile='default'): - """Find the history file for the given profile name. - - This is overridden by the HistoryManager subclass, to use the shell's - active profile. - - Parameters - ---------- - profile : str - The name of a profile which has a history file. - """ - return Path(locate_profile(profile)) / "history.sqlite" - - @catch_corrupt_db - def init_db(self): - """Connect to the database, and create tables if necessary.""" - if not self.enabled: - self.db = DummyDB() - return - - # use detect_types so that timestamps return datetime objects - kwargs = dict(detect_types=sqlite3.PARSE_DECLTYPES|sqlite3.PARSE_COLNAMES) - kwargs.update(self.connection_options) - self.db = sqlite3.connect(str(self.hist_file), **kwargs) - with self.db: - self.db.execute( - """CREATE TABLE IF NOT EXISTS sessions (session integer - primary key autoincrement, start timestamp, - end timestamp, num_cmds integer, remark text)""" - ) - self.db.execute( - """CREATE TABLE IF NOT EXISTS history - (session integer, line integer, source text, source_raw text, - PRIMARY KEY (session, line))""" - ) - # Output history is optional, but ensure the table's there so it can be - # enabled later. - self.db.execute( - """CREATE TABLE IF NOT EXISTS output_history - (session integer, line integer, output text, - PRIMARY KEY (session, line))""" - ) - # success! reset corrupt db count - self._corrupt_db_counter = 0 - - def writeout_cache(self): - """Overridden by HistoryManager to dump the cache before certain - database lookups.""" - pass - - ## ------------------------------- - ## Methods for retrieving history: - ## ------------------------------- - def _run_sql(self, sql, params, raw=True, output=False, latest=False): - """Prepares and runs an SQL query for the history database. - - Parameters - ---------- - sql : str - Any filtering expressions to go after SELECT ... FROM ... - params : tuple - Parameters passed to the SQL query (to replace "?") - raw, output : bool - See :meth:`get_range` - latest : bool - Select rows with max (session, line) - - Returns - ------- - Tuples as :meth:`get_range` - """ - toget = 'source_raw' if raw else 'source' - sqlfrom = "history" - if output: - sqlfrom = "history LEFT JOIN output_history USING (session, line)" - toget = "history.%s, output_history.output" % toget - if latest: - toget += ", MAX(session * 128 * 1024 + line)" - this_querry = "SELECT session, line, %s FROM %s " % (toget, sqlfrom) + sql - cur = self.db.execute(this_querry, params) - if latest: - cur = (row[:-1] for row in cur) - if output: # Regroup into 3-tuples, and parse JSON - return ((ses, lin, (inp, out)) for ses, lin, inp, out in cur) - return cur - - @only_when_enabled - @catch_corrupt_db - def get_session_info(self, session): - """Get info about a session. - - Parameters - ---------- - session : int - Session number to retrieve. - - Returns - ------- - session_id : int - Session ID number - start : datetime - Timestamp for the start of the session. - end : datetime - Timestamp for the end of the session, or None if IPython crashed. 
- num_cmds : int - Number of commands run, or None if IPython crashed. - remark : unicode - A manually set description. - """ - query = "SELECT * from sessions where session == ?" - return self.db.execute(query, (session,)).fetchone() - - @catch_corrupt_db - def get_last_session_id(self): - """Get the last session ID currently in the database. - - Within IPython, this should be the same as the value stored in - :attr:`HistoryManager.session_number`. - """ - for record in self.get_tail(n=1, include_latest=True): - return record[0] - - @catch_corrupt_db - def get_tail(self, n=10, raw=True, output=False, include_latest=False): - """Get the last n lines from the history database. - - Parameters - ---------- - n : int - The number of lines to get - raw, output : bool - See :meth:`get_range` - include_latest : bool - If False (default), n+1 lines are fetched, and the latest one - is discarded. This is intended to be used where the function - is called by a user command, which it should not return. - - Returns - ------- - Tuples as :meth:`get_range` - """ - self.writeout_cache() - if not include_latest: - n += 1 - cur = self._run_sql( - "ORDER BY session DESC, line DESC LIMIT ?", (n,), raw=raw, output=output - ) - if not include_latest: - return reversed(list(cur)[1:]) - return reversed(list(cur)) - - @catch_corrupt_db - def search(self, pattern="*", raw=True, search_raw=True, - output=False, n=None, unique=False): - """Search the database using unix glob-style matching (wildcards - * and ?). - - Parameters - ---------- - pattern : str - The wildcarded pattern to match when searching - search_raw : bool - If True, search the raw input, otherwise, the parsed input - raw, output : bool - See :meth:`get_range` - n : None or int - If an integer is given, it defines the limit of - returned entries. - unique : bool - When it is true, return only unique entries. - - Returns - ------- - Tuples as :meth:`get_range` - """ - tosearch = "source_raw" if search_raw else "source" - if output: - tosearch = "history." + tosearch - self.writeout_cache() - sqlform = "WHERE %s GLOB ?" % tosearch - params = (pattern,) - if unique: - sqlform += ' GROUP BY {0}'.format(tosearch) - if n is not None: - sqlform += " ORDER BY session DESC, line DESC LIMIT ?" - params += (n,) - elif unique: - sqlform += " ORDER BY session, line" - cur = self._run_sql(sqlform, params, raw=raw, output=output, latest=unique) - if n is not None: - return reversed(list(cur)) - return cur - - @catch_corrupt_db - def get_range(self, session, start=1, stop=None, raw=True,output=False): - """Retrieve input by session. - - Parameters - ---------- - session : int - Session number to retrieve. - start : int - First line to retrieve. - stop : int - End of line range (excluded from output itself). If None, retrieve - to the end of the session. - raw : bool - If True, return untranslated input - output : bool - If True, attempt to include output. This will be 'real' Python - objects for the current session, or text reprs from previous - sessions if db_log_output was enabled at the time. Where no output - is found, None is used. - - Returns - ------- - entries - An iterator over the desired lines. Each line is a 3-tuple, either - (session, line, input) if output is False, or - (session, line, (input, output)) if output is True. - """ - if stop: - lineclause = "line >= ? AND line < ?" - params = (session, start, stop) - else: - lineclause = "line>=?" - params = (session, start) - - return self._run_sql("WHERE session==? 
AND %s" % lineclause, - params, raw=raw, output=output) - - def get_range_by_str(self, rangestr, raw=True, output=False): - """Get lines of history from a string of ranges, as used by magic - commands %hist, %save, %macro, etc. - - Parameters - ---------- - rangestr : str - A string specifying ranges, e.g. "5 ~2/1-4". If empty string is used, - this will return everything from current session's history. - - See the documentation of :func:`%history` for the full details. - - raw, output : bool - As :meth:`get_range` - - Returns - ------- - Tuples as :meth:`get_range` - """ - for sess, s, e in extract_hist_ranges(rangestr): - for line in self.get_range(sess, s, e, raw=raw, output=output): - yield line - - -class HistoryManager(HistoryAccessor): - """A class to organize all history-related functionality in one place. - """ - # Public interface - - # An instance of the IPython shell we are attached to - shell = Instance('IPython.core.interactiveshell.InteractiveShellABC', - allow_none=True) - # Lists to hold processed and raw history. These start with a blank entry - # so that we can index them starting from 1 - input_hist_parsed = List([""]) - input_hist_raw = List([""]) - # A list of directories visited during session - dir_hist = List() - @default('dir_hist') - def _dir_hist_default(self): - try: - return [Path.cwd()] - except OSError: - return [] - - # A dict of output history, keyed with ints from the shell's - # execution count. - output_hist = Dict() - # The text/plain repr of outputs. - output_hist_reprs = Dict() - - # The number of the current session in the history database - session_number = Integer() - - db_log_output = Bool(False, - help="Should the history database include output? (default: no)" - ).tag(config=True) - db_cache_size = Integer(0, - help="Write to database every x commands (higher values save disk access & power).\n" - "Values of 1 or less effectively disable caching." - ).tag(config=True) - # The input and output caches - db_input_cache = List() - db_output_cache = List() - - # History saving in separate thread - save_thread = Instance('IPython.core.history.HistorySavingThread', - allow_none=True) - save_flag = Instance(threading.Event, allow_none=True) - - # Private interface - # Variables used to store the three last inputs from the user. On each new - # history update, we populate the user's namespace with these, shifted as - # necessary. - _i00 = Unicode(u'') - _i = Unicode(u'') - _ii = Unicode(u'') - _iii = Unicode(u'') - - # A regex matching all forms of the exit command, so that we don't store - # them in the history (it's annoying to rewind the first entry and land on - # an exit call). - _exit_re = re.compile(r"(exit|quit)(\s*\(.*\))?$") - - def __init__(self, shell=None, config=None, **traits): - """Create a new history manager associated with a shell instance. - """ - super(HistoryManager, self).__init__(shell=shell, config=config, - **traits) - self.save_flag = threading.Event() - self.db_input_cache_lock = threading.Lock() - self.db_output_cache_lock = threading.Lock() - - try: - self.new_session() - except sqlite3.OperationalError: - self.log.error("Failed to create history session in %s. History will not be saved.", - self.hist_file, exc_info=True) - self.hist_file = ':memory:' - - if self.enabled and self.hist_file != ':memory:': - self.save_thread = HistorySavingThread(self) - self.save_thread.start() - - def _get_hist_file_name(self, profile=None): - """Get default history file name based on the Shell's profile. 
- - The profile parameter is ignored, but must exist for compatibility with - the parent class.""" - profile_dir = self.shell.profile_dir.location - return Path(profile_dir) / "history.sqlite" - - @only_when_enabled - def new_session(self, conn=None): - """Get a new session number.""" - if conn is None: - conn = self.db - - with conn: - cur = conn.execute( - """INSERT INTO sessions VALUES (NULL, ?, NULL, - NULL, '') """, - (datetime.datetime.now(),), - ) - self.session_number = cur.lastrowid - - def end_session(self): - """Close the database session, filling in the end time and line count.""" - self.writeout_cache() - with self.db: - self.db.execute("""UPDATE sessions SET end=?, num_cmds=? WHERE - session==?""", (datetime.datetime.now(), - len(self.input_hist_parsed)-1, self.session_number)) - self.session_number = 0 - - def name_session(self, name): - """Give the current session a name in the history database.""" - with self.db: - self.db.execute("UPDATE sessions SET remark=? WHERE session==?", - (name, self.session_number)) - - def reset(self, new_session=True): - """Clear the session history, releasing all object references, and - optionally open a new session.""" - self.output_hist.clear() - # The directory history can't be completely empty - self.dir_hist[:] = [Path.cwd()] - - if new_session: - if self.session_number: - self.end_session() - self.input_hist_parsed[:] = [""] - self.input_hist_raw[:] = [""] - self.new_session() - - # ------------------------------ - # Methods for retrieving history - # ------------------------------ - def get_session_info(self, session=0): - """Get info about a session. - - Parameters - ---------- - session : int - Session number to retrieve. The current session is 0, and negative - numbers count back from current session, so -1 is the previous session. - - Returns - ------- - session_id : int - Session ID number - start : datetime - Timestamp for the start of the session. - end : datetime - Timestamp for the end of the session, or None if IPython crashed. - num_cmds : int - Number of commands run, or None if IPython crashed. - remark : unicode - A manually set description. - """ - if session <= 0: - session += self.session_number - - return super(HistoryManager, self).get_session_info(session=session) - - @catch_corrupt_db - def get_tail(self, n=10, raw=True, output=False, include_latest=False): - """Get the last n lines from the history database. - - Most recent entry last. - - Completion will be reordered so that that the last ones are when - possible from current session. - - Parameters - ---------- - n : int - The number of lines to get - raw, output : bool - See :meth:`get_range` - include_latest : bool - If False (default), n+1 lines are fetched, and the latest one - is discarded. This is intended to be used where the function - is called by a user command, which it should not return. - - Returns - ------- - Tuples as :meth:`get_range` - """ - self.writeout_cache() - if not include_latest: - n += 1 - # cursor/line/entry - this_cur = list( - self._run_sql( - "WHERE session == ? ORDER BY line DESC LIMIT ? ", - (self.session_number, n), - raw=raw, - output=output, - ) - ) - other_cur = list( - self._run_sql( - "WHERE session != ? 
ORDER BY session DESC, line DESC LIMIT ?", - (self.session_number, n), - raw=raw, - output=output, - ) - ) - - everything = this_cur + other_cur - - everything = everything[:n] - - if not include_latest: - return list(everything)[:0:-1] - return list(everything)[::-1] - - def _get_range_session(self, start=1, stop=None, raw=True, output=False): - """Get input and output history from the current session. Called by - get_range, and takes similar parameters.""" - input_hist = self.input_hist_raw if raw else self.input_hist_parsed - - n = len(input_hist) - if start < 0: - start += n - if not stop or (stop > n): - stop = n - elif stop < 0: - stop += n - - for i in range(start, stop): - if output: - line = (input_hist[i], self.output_hist_reprs.get(i)) - else: - line = input_hist[i] - yield (0, i, line) - - def get_range(self, session=0, start=1, stop=None, raw=True,output=False): - """Retrieve input by session. - - Parameters - ---------- - session : int - Session number to retrieve. The current session is 0, and negative - numbers count back from current session, so -1 is previous session. - start : int - First line to retrieve. - stop : int - End of line range (excluded from output itself). If None, retrieve - to the end of the session. - raw : bool - If True, return untranslated input - output : bool - If True, attempt to include output. This will be 'real' Python - objects for the current session, or text reprs from previous - sessions if db_log_output was enabled at the time. Where no output - is found, None is used. - - Returns - ------- - entries - An iterator over the desired lines. Each line is a 3-tuple, either - (session, line, input) if output is False, or - (session, line, (input, output)) if output is True. - """ - if session <= 0: - session += self.session_number - if session==self.session_number: # Current session - return self._get_range_session(start, stop, raw, output) - return super(HistoryManager, self).get_range(session, start, stop, raw, - output) - - ## ---------------------------- - ## Methods for storing history: - ## ---------------------------- - def store_inputs(self, line_num, source, source_raw=None): - """Store source and raw input in history and create input cache - variables ``_i*``. - - Parameters - ---------- - line_num : int - The prompt number of this input. - source : str - Python input. - source_raw : str, optional - If given, this is the raw input without any IPython transformations - applied to it. If not given, ``source`` is used. - """ - if source_raw is None: - source_raw = source - source = source.rstrip('\n') - source_raw = source_raw.rstrip('\n') - - # do not store exit/quit commands - if self._exit_re.match(source_raw.strip()): - return - - self.input_hist_parsed.append(source) - self.input_hist_raw.append(source_raw) - - with self.db_input_cache_lock: - self.db_input_cache.append((line_num, source, source_raw)) - # Trigger to flush cache and write to DB. - if len(self.db_input_cache) >= self.db_cache_size: - self.save_flag.set() - - # update the auto _i variables - self._iii = self._ii - self._ii = self._i - self._i = self._i00 - self._i00 = source_raw - - # hackish access to user namespace to create _i1,_i2... 
dynamically - new_i = '_i%s' % line_num - to_main = {'_i': self._i, - '_ii': self._ii, - '_iii': self._iii, - new_i : self._i00 } - - if self.shell is not None: - self.shell.push(to_main, interactive=False) - - def store_output(self, line_num): - """If database output logging is enabled, this saves all the - outputs from the indicated prompt number to the database. It's - called by run_cell after code has been executed. - - Parameters - ---------- - line_num : int - The line number from which to save outputs - """ - if (not self.db_log_output) or (line_num not in self.output_hist_reprs): - return - output = self.output_hist_reprs[line_num] - - with self.db_output_cache_lock: - self.db_output_cache.append((line_num, output)) - if self.db_cache_size <= 1: - self.save_flag.set() - - def _writeout_input_cache(self, conn): - with conn: - for line in self.db_input_cache: - conn.execute("INSERT INTO history VALUES (?, ?, ?, ?)", - (self.session_number,)+line) - - def _writeout_output_cache(self, conn): - with conn: - for line in self.db_output_cache: - conn.execute("INSERT INTO output_history VALUES (?, ?, ?)", - (self.session_number,)+line) - - @only_when_enabled - def writeout_cache(self, conn=None): - """Write any entries in the cache to the database.""" - if conn is None: - conn = self.db - - with self.db_input_cache_lock: - try: - self._writeout_input_cache(conn) - except sqlite3.IntegrityError: - self.new_session(conn) - print("ERROR! Session/line number was not unique in", - "database. History logging moved to new session", - self.session_number) - try: - # Try writing to the new session. If this fails, don't - # recurse - self._writeout_input_cache(conn) - except sqlite3.IntegrityError: - pass - finally: - self.db_input_cache = [] - - with self.db_output_cache_lock: - try: - self._writeout_output_cache(conn) - except sqlite3.IntegrityError: - print("!! Session/line number for output was not unique", - "in database. Output will not be stored.") - finally: - self.db_output_cache = [] - - -class HistorySavingThread(threading.Thread): - """This thread takes care of writing history to the database, so that - the UI isn't held up while that happens. - - It waits for the HistoryManager's save_flag to be set, then writes out - the history cache. The main thread is responsible for setting the flag when - the cache size reaches a defined threshold.""" - daemon = True - stop_now = False - enabled = True - def __init__(self, history_manager): - super(HistorySavingThread, self).__init__(name="IPythonHistorySavingThread") - self.history_manager = history_manager - self.enabled = history_manager.enabled - atexit.register(self.stop) - - @only_when_enabled - def run(self): - # We need a separate db connection per thread: - try: - self.db = sqlite3.connect( - str(self.history_manager.hist_file), - **self.history_manager.connection_options, - ) - while True: - self.history_manager.save_flag.wait() - if self.stop_now: - self.db.close() - return - self.history_manager.save_flag.clear() - self.history_manager.writeout_cache(self.db) - except Exception as e: - print(("The history saving thread hit an unexpected error (%s)." - "History will not be written to the database.") % repr(e)) - - def stop(self): - """This can be called from the main thread to safely stop this thread. - - Note that it does not attempt to write out remaining history before - exiting. 
That should be done by calling the HistoryManager's - end_session method.""" - self.stop_now = True - self.history_manager.save_flag.set() - self.join() - - -# To match, e.g. ~5/8-~2/3 -range_re = re.compile(r""" -((?P~?\d+)/)? -(?P\d+)? -((?P[\-:]) - ((?P~?\d+)/)? - (?P\d+))? -$""", re.VERBOSE) - - -def extract_hist_ranges(ranges_str): - """Turn a string of history ranges into 3-tuples of (session, start, stop). - - Empty string results in a `[(0, 1, None)]`, i.e. "everything from current - session". - - Examples - -------- - >>> list(extract_hist_ranges("~8/5-~7/4 2")) - [(-8, 5, None), (-7, 1, 5), (0, 2, 3)] - """ - if ranges_str == "": - yield (0, 1, None) # Everything from current session - return - - for range_str in ranges_str.split(): - rmatch = range_re.match(range_str) - if not rmatch: - continue - start = rmatch.group("start") - if start: - start = int(start) - end = rmatch.group("end") - # If no end specified, get (a, a + 1) - end = int(end) if end else start + 1 - else: # start not specified - if not rmatch.group('startsess'): # no startsess - continue - start = 1 - end = None # provide the entire session hist - - if rmatch.group("sep") == "-": # 1-3 == 1:4 --> [1, 2, 3] - end += 1 - startsess = rmatch.group("startsess") or "0" - endsess = rmatch.group("endsess") or startsess - startsess = int(startsess.replace("~","-")) - endsess = int(endsess.replace("~","-")) - assert endsess >= startsess, "start session must be earlier than end session" - - if endsess == startsess: - yield (startsess, start, end) - continue - # Multiple sessions in one range: - yield (startsess, start, None) - for sess in range(startsess+1, endsess): - yield (sess, 1, None) - yield (endsess, 1, end) - - -def _format_lineno(session, line): - """Helper function to format line numbers properly.""" - if session == 0: - return str(line) - return "%s#%s" % (session, line) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/pretty.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/pretty.py deleted file mode 100644 index 34864507866fcfb03a5797600c412bd6dd4b3346..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/pretty.py +++ /dev/null @@ -1,953 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Python advanced pretty printer. This pretty printer is intended to -replace the old `pprint` python module which does not allow developers -to provide their own pretty print callbacks. - -This module is based on ruby's `prettyprint.rb` library by `Tanaka Akira`. - - -Example Usage -------------- - -To directly print the representation of an object use `pprint`:: - - from pretty import pprint - pprint(complex_object) - -To get a string of the output use `pretty`:: - - from pretty import pretty - string = pretty(complex_object) - - -Extending ---------- - -The pretty library allows developers to add pretty printing rules for their -own objects. This process is straightforward. All you have to do is to -add a `_repr_pretty_` method to your object and call the methods on the -pretty printer passed:: - - class MyObject(object): - - def _repr_pretty_(self, p, cycle): - ... 
- -Here's an example for a class with a simple constructor:: - - class MySimpleObject: - - def __init__(self, a, b, *, c=None): - self.a = a - self.b = b - self.c = c - - def _repr_pretty_(self, p, cycle): - ctor = CallExpression.factory(self.__class__.__name__) - if self.c is None: - p.pretty(ctor(a, b)) - else: - p.pretty(ctor(a, b, c=c)) - -Here is an example implementation of a `_repr_pretty_` method for a list -subclass:: - - class MyList(list): - - def _repr_pretty_(self, p, cycle): - if cycle: - p.text('MyList(...)') - else: - with p.group(8, 'MyList([', '])'): - for idx, item in enumerate(self): - if idx: - p.text(',') - p.breakable() - p.pretty(item) - -The `cycle` parameter is `True` if pretty detected a cycle. You *have* to -react to that or the result is an infinite loop. `p.text()` just adds -non breaking text to the output, `p.breakable()` either adds a whitespace -or breaks here. If you pass it an argument it's used instead of the -default space. `p.pretty` prettyprints another object using the pretty print -method. - -The first parameter to the `group` function specifies the extra indentation -of the next line. In this example the next item will either be on the same -line (if the items are short enough) or aligned with the right edge of the -opening bracket of `MyList`. - -If you just want to indent something you can use the group function -without open / close parameters. You can also use this code:: - - with p.indent(2): - ... - -Inheritance diagram: - -.. inheritance-diagram:: IPython.lib.pretty - :parts: 3 - -:copyright: 2007 by Armin Ronacher. - Portions (c) 2009 by Robert Kern. -:license: BSD License. -""" - -from contextlib import contextmanager -import datetime -import os -import re -import sys -import types -from collections import deque -from inspect import signature -from io import StringIO -from warnings import warn - -from IPython.utils.decorators import undoc -from IPython.utils.py3compat import PYPY - -__all__ = ['pretty', 'pprint', 'PrettyPrinter', 'RepresentationPrinter', - 'for_type', 'for_type_by_name', 'RawText', 'RawStringLiteral', 'CallExpression'] - - -MAX_SEQ_LENGTH = 1000 -_re_pattern_type = type(re.compile('')) - -def _safe_getattr(obj, attr, default=None): - """Safe version of getattr. - - Same as getattr, but will return ``default`` on any Exception, - rather than raising. - """ - try: - return getattr(obj, attr, default) - except Exception: - return default - -@undoc -class CUnicodeIO(StringIO): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warn(("CUnicodeIO is deprecated since IPython 6.0. " - "Please use io.StringIO instead."), - DeprecationWarning, stacklevel=2) - -def _sorted_for_pprint(items): - """ - Sort the given items for pretty printing. Since some predictable - sorting is better than no sorting at all, we sort on the string - representation if normal sorting fails. - """ - items = list(items) - try: - return sorted(items) - except Exception: - try: - return sorted(items, key=str) - except Exception: - return items - -def pretty(obj, verbose=False, max_width=79, newline='\n', max_seq_length=MAX_SEQ_LENGTH): - """ - Pretty print the object's representation. - """ - stream = StringIO() - printer = RepresentationPrinter(stream, verbose, max_width, newline, max_seq_length=max_seq_length) - printer.pretty(obj) - printer.flush() - return stream.getvalue() - - -def pprint(obj, verbose=False, max_width=79, newline='\n', max_seq_length=MAX_SEQ_LENGTH): - """ - Like `pretty` but print to stdout. 
- """ - printer = RepresentationPrinter(sys.stdout, verbose, max_width, newline, max_seq_length=max_seq_length) - printer.pretty(obj) - printer.flush() - sys.stdout.write(newline) - sys.stdout.flush() - -class _PrettyPrinterBase(object): - - @contextmanager - def indent(self, indent): - """with statement support for indenting/dedenting.""" - self.indentation += indent - try: - yield - finally: - self.indentation -= indent - - @contextmanager - def group(self, indent=0, open='', close=''): - """like begin_group / end_group but for the with statement.""" - self.begin_group(indent, open) - try: - yield - finally: - self.end_group(indent, close) - -class PrettyPrinter(_PrettyPrinterBase): - """ - Baseclass for the `RepresentationPrinter` prettyprinter that is used to - generate pretty reprs of objects. Contrary to the `RepresentationPrinter` - this printer knows nothing about the default pprinters or the `_repr_pretty_` - callback method. - """ - - def __init__(self, output, max_width=79, newline='\n', max_seq_length=MAX_SEQ_LENGTH): - self.output = output - self.max_width = max_width - self.newline = newline - self.max_seq_length = max_seq_length - self.output_width = 0 - self.buffer_width = 0 - self.buffer = deque() - - root_group = Group(0) - self.group_stack = [root_group] - self.group_queue = GroupQueue(root_group) - self.indentation = 0 - - def _break_one_group(self, group): - while group.breakables: - x = self.buffer.popleft() - self.output_width = x.output(self.output, self.output_width) - self.buffer_width -= x.width - while self.buffer and isinstance(self.buffer[0], Text): - x = self.buffer.popleft() - self.output_width = x.output(self.output, self.output_width) - self.buffer_width -= x.width - - def _break_outer_groups(self): - while self.max_width < self.output_width + self.buffer_width: - group = self.group_queue.deq() - if not group: - return - self._break_one_group(group) - - def text(self, obj): - """Add literal text to the output.""" - width = len(obj) - if self.buffer: - text = self.buffer[-1] - if not isinstance(text, Text): - text = Text() - self.buffer.append(text) - text.add(obj, width) - self.buffer_width += width - self._break_outer_groups() - else: - self.output.write(obj) - self.output_width += width - - def breakable(self, sep=' '): - """ - Add a breakable separator to the output. This does not mean that it - will automatically break here. If no breaking on this position takes - place the `sep` is inserted which default to one space. - """ - width = len(sep) - group = self.group_stack[-1] - if group.want_break: - self.flush() - self.output.write(self.newline) - self.output.write(' ' * self.indentation) - self.output_width = self.indentation - self.buffer_width = 0 - else: - self.buffer.append(Breakable(sep, width, self)) - self.buffer_width += width - self._break_outer_groups() - - def break_(self): - """ - Explicitly insert a newline into the output, maintaining correct indentation. - """ - group = self.group_queue.deq() - if group: - self._break_one_group(group) - self.flush() - self.output.write(self.newline) - self.output.write(' ' * self.indentation) - self.output_width = self.indentation - self.buffer_width = 0 - - - def begin_group(self, indent=0, open=''): - """ - Begin a group. - The first parameter specifies the indentation for the next line (usually - the width of the opening text), the second the opening text. All - parameters are optional. 
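        In practice this is often driven through the :meth:`group` context
        manager defined above, which pairs it with :meth:`end_group`, e.g.::

            with p.group(8, 'MyList([', '])'):
                ...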
- """ - if open: - self.text(open) - group = Group(self.group_stack[-1].depth + 1) - self.group_stack.append(group) - self.group_queue.enq(group) - self.indentation += indent - - def _enumerate(self, seq): - """like enumerate, but with an upper limit on the number of items""" - for idx, x in enumerate(seq): - if self.max_seq_length and idx >= self.max_seq_length: - self.text(',') - self.breakable() - self.text('...') - return - yield idx, x - - def end_group(self, dedent=0, close=''): - """End a group. See `begin_group` for more details.""" - self.indentation -= dedent - group = self.group_stack.pop() - if not group.breakables: - self.group_queue.remove(group) - if close: - self.text(close) - - def flush(self): - """Flush data that is left in the buffer.""" - for data in self.buffer: - self.output_width += data.output(self.output, self.output_width) - self.buffer.clear() - self.buffer_width = 0 - - -def _get_mro(obj_class): - """ Get a reasonable method resolution order of a class and its superclasses - for both old-style and new-style classes. - """ - if not hasattr(obj_class, '__mro__'): - # Old-style class. Mix in object to make a fake new-style class. - try: - obj_class = type(obj_class.__name__, (obj_class, object), {}) - except TypeError: - # Old-style extension type that does not descend from object. - # FIXME: try to construct a more thorough MRO. - mro = [obj_class] - else: - mro = obj_class.__mro__[1:-1] - else: - mro = obj_class.__mro__ - return mro - - -class RepresentationPrinter(PrettyPrinter): - """ - Special pretty printer that has a `pretty` method that calls the pretty - printer for a python object. - - This class stores processing data on `self` so you must *never* use - this class in a threaded environment. Always lock it or reinstanciate - it. - - Instances also have a verbose flag callbacks can access to control their - output. For example the default instance repr prints all attributes and - methods that are not prefixed by an underscore if the printer is in - verbose mode. - """ - - def __init__(self, output, verbose=False, max_width=79, newline='\n', - singleton_pprinters=None, type_pprinters=None, deferred_pprinters=None, - max_seq_length=MAX_SEQ_LENGTH): - - PrettyPrinter.__init__(self, output, max_width, newline, max_seq_length=max_seq_length) - self.verbose = verbose - self.stack = [] - if singleton_pprinters is None: - singleton_pprinters = _singleton_pprinters.copy() - self.singleton_pprinters = singleton_pprinters - if type_pprinters is None: - type_pprinters = _type_pprinters.copy() - self.type_pprinters = type_pprinters - if deferred_pprinters is None: - deferred_pprinters = _deferred_type_pprinters.copy() - self.deferred_pprinters = deferred_pprinters - - def pretty(self, obj): - """Pretty print the given object.""" - obj_id = id(obj) - cycle = obj_id in self.stack - self.stack.append(obj_id) - self.begin_group() - try: - obj_class = _safe_getattr(obj, '__class__', None) or type(obj) - # First try to find registered singleton printers for the type. 
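            # Resolution order used below: singleton printers keyed by object
            # id (None, True, False, Ellipsis, NotImplemented); then, for each
            # class in the MRO, a registered type printer, a deferred printer
            # keyed by (module, name), a callable _repr_pretty_ defined on that
            # class, or a class-defined __repr__; _default_pprint is the final
            # fallback.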
- try: - printer = self.singleton_pprinters[obj_id] - except (TypeError, KeyError): - pass - else: - return printer(obj, self, cycle) - # Next walk the mro and check for either: - # 1) a registered printer - # 2) a _repr_pretty_ method - for cls in _get_mro(obj_class): - if cls in self.type_pprinters: - # printer registered in self.type_pprinters - return self.type_pprinters[cls](obj, self, cycle) - else: - # deferred printer - printer = self._in_deferred_types(cls) - if printer is not None: - return printer(obj, self, cycle) - else: - # Finally look for special method names. - # Some objects automatically create any requested - # attribute. Try to ignore most of them by checking for - # callability. - if '_repr_pretty_' in cls.__dict__: - meth = cls._repr_pretty_ - if callable(meth): - return meth(obj, self, cycle) - if cls is not object \ - and callable(cls.__dict__.get('__repr__')): - return _repr_pprint(obj, self, cycle) - - return _default_pprint(obj, self, cycle) - finally: - self.end_group() - self.stack.pop() - - def _in_deferred_types(self, cls): - """ - Check if the given class is specified in the deferred type registry. - - Returns the printer from the registry if it exists, and None if the - class is not in the registry. Successful matches will be moved to the - regular type registry for future use. - """ - mod = _safe_getattr(cls, '__module__', None) - name = _safe_getattr(cls, '__name__', None) - key = (mod, name) - printer = None - if key in self.deferred_pprinters: - # Move the printer over to the regular registry. - printer = self.deferred_pprinters.pop(key) - self.type_pprinters[cls] = printer - return printer - - -class Printable(object): - - def output(self, stream, output_width): - return output_width - - -class Text(Printable): - - def __init__(self): - self.objs = [] - self.width = 0 - - def output(self, stream, output_width): - for obj in self.objs: - stream.write(obj) - return output_width + self.width - - def add(self, obj, width): - self.objs.append(obj) - self.width += width - - -class Breakable(Printable): - - def __init__(self, seq, width, pretty): - self.obj = seq - self.width = width - self.pretty = pretty - self.indentation = pretty.indentation - self.group = pretty.group_stack[-1] - self.group.breakables.append(self) - - def output(self, stream, output_width): - self.group.breakables.popleft() - if self.group.want_break: - stream.write(self.pretty.newline) - stream.write(' ' * self.indentation) - return self.indentation - if not self.group.breakables: - self.pretty.group_queue.remove(self.group) - stream.write(self.obj) - return output_width + self.width - - -class Group(Printable): - - def __init__(self, depth): - self.depth = depth - self.breakables = deque() - self.want_break = False - - -class GroupQueue(object): - - def __init__(self, *groups): - self.queue = [] - for group in groups: - self.enq(group) - - def enq(self, group): - depth = group.depth - while depth > len(self.queue) - 1: - self.queue.append([]) - self.queue[depth].append(group) - - def deq(self): - for stack in self.queue: - for idx, group in enumerate(reversed(stack)): - if group.breakables: - del stack[idx] - group.want_break = True - return group - for group in stack: - group.want_break = True - del stack[:] - - def remove(self, group): - try: - self.queue[group.depth].remove(group) - except ValueError: - pass - - -class RawText: - """ Object such that ``p.pretty(RawText(value))`` is the same as ``p.text(value)``. 
- - An example usage of this would be to show a list as binary numbers, using - ``p.pretty([RawText(bin(i)) for i in integers])``. - """ - def __init__(self, value): - self.value = value - - def _repr_pretty_(self, p, cycle): - p.text(self.value) - - -class CallExpression: - """ Object which emits a line-wrapped call expression in the form `__name(*args, **kwargs)` """ - def __init__(__self, __name, *args, **kwargs): - # dunders are to avoid clashes with kwargs, as python's name manging - # will kick in. - self = __self - self.name = __name - self.args = args - self.kwargs = kwargs - - @classmethod - def factory(cls, name): - def inner(*args, **kwargs): - return cls(name, *args, **kwargs) - return inner - - def _repr_pretty_(self, p, cycle): - # dunders are to avoid clashes with kwargs, as python's name manging - # will kick in. - - started = False - def new_item(): - nonlocal started - if started: - p.text(",") - p.breakable() - started = True - - prefix = self.name + "(" - with p.group(len(prefix), prefix, ")"): - for arg in self.args: - new_item() - p.pretty(arg) - for arg_name, arg in self.kwargs.items(): - new_item() - arg_prefix = arg_name + "=" - with p.group(len(arg_prefix), arg_prefix): - p.pretty(arg) - - -class RawStringLiteral: - """ Wrapper that shows a string with a `r` prefix """ - def __init__(self, value): - self.value = value - - def _repr_pretty_(self, p, cycle): - base_repr = repr(self.value) - if base_repr[:1] in 'uU': - base_repr = base_repr[1:] - prefix = 'ur' - else: - prefix = 'r' - base_repr = prefix + base_repr.replace('\\\\', '\\') - p.text(base_repr) - - -def _default_pprint(obj, p, cycle): - """ - The default print function. Used if an object does not provide one and - it's none of the builtin objects. - """ - klass = _safe_getattr(obj, '__class__', None) or type(obj) - if _safe_getattr(klass, '__repr__', None) is not object.__repr__: - # A user-provided repr. Find newlines and replace them with p.break_() - _repr_pprint(obj, p, cycle) - return - p.begin_group(1, '<') - p.pretty(klass) - p.text(' at 0x%x' % id(obj)) - if cycle: - p.text(' ...') - elif p.verbose: - first = True - for key in dir(obj): - if not key.startswith('_'): - try: - value = getattr(obj, key) - except AttributeError: - continue - if isinstance(value, types.MethodType): - continue - if not first: - p.text(',') - p.breakable() - p.text(key) - p.text('=') - step = len(key) + 1 - p.indentation += step - p.pretty(value) - p.indentation -= step - first = False - p.end_group(1, '>') - - -def _seq_pprinter_factory(start, end): - """ - Factory that returns a pprint function useful for sequences. Used by - the default pprint for tuples and lists. - """ - def inner(obj, p, cycle): - if cycle: - return p.text(start + '...' + end) - step = len(start) - p.begin_group(step, start) - for idx, x in p._enumerate(obj): - if idx: - p.text(',') - p.breakable() - p.pretty(x) - if len(obj) == 1 and isinstance(obj, tuple): - # Special case for 1-item tuples. - p.text(',') - p.end_group(step, end) - return inner - - -def _set_pprinter_factory(start, end): - """ - Factory that returns a pprint function useful for sets and frozensets. - """ - def inner(obj, p, cycle): - if cycle: - return p.text(start + '...' + end) - if len(obj) == 0: - # Special case. 
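            # (an empty set or frozenset has no literal form: '{}' would be a
            # dict, so emit the type name with parentheses, e.g. set() or
            # frozenset())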
- p.text(type(obj).__name__ + '()') - else: - step = len(start) - p.begin_group(step, start) - # Like dictionary keys, we will try to sort the items if there aren't too many - if not (p.max_seq_length and len(obj) >= p.max_seq_length): - items = _sorted_for_pprint(obj) - else: - items = obj - for idx, x in p._enumerate(items): - if idx: - p.text(',') - p.breakable() - p.pretty(x) - p.end_group(step, end) - return inner - - -def _dict_pprinter_factory(start, end): - """ - Factory that returns a pprint function used by the default pprint of - dicts and dict proxies. - """ - def inner(obj, p, cycle): - if cycle: - return p.text('{...}') - step = len(start) - p.begin_group(step, start) - keys = obj.keys() - for idx, key in p._enumerate(keys): - if idx: - p.text(',') - p.breakable() - p.pretty(key) - p.text(': ') - p.pretty(obj[key]) - p.end_group(step, end) - return inner - - -def _super_pprint(obj, p, cycle): - """The pprint for the super type.""" - p.begin_group(8, '') - - - -class _ReFlags: - def __init__(self, value): - self.value = value - - def _repr_pretty_(self, p, cycle): - done_one = False - for flag in ('TEMPLATE', 'IGNORECASE', 'LOCALE', 'MULTILINE', 'DOTALL', - 'UNICODE', 'VERBOSE', 'DEBUG'): - if self.value & getattr(re, flag): - if done_one: - p.text('|') - p.text('re.' + flag) - done_one = True - - -def _re_pattern_pprint(obj, p, cycle): - """The pprint function for regular expression patterns.""" - re_compile = CallExpression.factory('re.compile') - if obj.flags: - p.pretty(re_compile(RawStringLiteral(obj.pattern), _ReFlags(obj.flags))) - else: - p.pretty(re_compile(RawStringLiteral(obj.pattern))) - - -def _types_simplenamespace_pprint(obj, p, cycle): - """The pprint function for types.SimpleNamespace.""" - namespace = CallExpression.factory('namespace') - if cycle: - p.pretty(namespace(RawText("..."))) - else: - p.pretty(namespace(**obj.__dict__)) - - -def _type_pprint(obj, p, cycle): - """The pprint for classes and types.""" - # Heap allocated types might not have the module attribute, - # and others may set it to None. - - # Checks for a __repr__ override in the metaclass. Can't compare the - # type(obj).__repr__ directly because in PyPy the representation function - # inherited from type isn't the same type.__repr__ - if [m for m in _get_mro(type(obj)) if "__repr__" in vars(m)][:1] != [type]: - _repr_pprint(obj, p, cycle) - return - - mod = _safe_getattr(obj, '__module__', None) - try: - name = obj.__qualname__ - if not isinstance(name, str): - # This can happen if the type implements __qualname__ as a property - # or other descriptor in Python 2. - raise Exception("Try __name__") - except Exception: - name = obj.__name__ - if not isinstance(name, str): - name = '' - - if mod in (None, '__builtin__', 'builtins', 'exceptions'): - p.text(name) - else: - p.text(mod + '.' + name) - - -def _repr_pprint(obj, p, cycle): - """A pprint that just redirects to the normal repr function.""" - # Find newlines and replace them with p.break_() - output = repr(obj) - lines = output.splitlines() - with p.group(): - for idx, output_line in enumerate(lines): - if idx: - p.break_() - p.text(output_line) - - -def _function_pprint(obj, p, cycle): - """Base pprint for all functions and builtin functions.""" - name = _safe_getattr(obj, '__qualname__', obj.__name__) - mod = obj.__module__ - if mod and mod not in ('__builtin__', 'builtins', 'exceptions'): - name = mod + '.' 
+ name - try: - func_def = name + str(signature(obj)) - except ValueError: - func_def = name - p.text('' % func_def) - - -def _exception_pprint(obj, p, cycle): - """Base pprint for all exceptions.""" - name = getattr(obj.__class__, '__qualname__', obj.__class__.__name__) - if obj.__class__.__module__ not in ('exceptions', 'builtins'): - name = '%s.%s' % (obj.__class__.__module__, name) - - p.pretty(CallExpression(name, *getattr(obj, 'args', ()))) - - -#: the exception base -try: - _exception_base = BaseException -except NameError: - _exception_base = Exception - - -#: printers for builtin types -_type_pprinters = { - int: _repr_pprint, - float: _repr_pprint, - str: _repr_pprint, - tuple: _seq_pprinter_factory('(', ')'), - list: _seq_pprinter_factory('[', ']'), - dict: _dict_pprinter_factory('{', '}'), - set: _set_pprinter_factory('{', '}'), - frozenset: _set_pprinter_factory('frozenset({', '})'), - super: _super_pprint, - _re_pattern_type: _re_pattern_pprint, - type: _type_pprint, - types.FunctionType: _function_pprint, - types.BuiltinFunctionType: _function_pprint, - types.MethodType: _repr_pprint, - types.SimpleNamespace: _types_simplenamespace_pprint, - datetime.datetime: _repr_pprint, - datetime.timedelta: _repr_pprint, - _exception_base: _exception_pprint -} - -# render os.environ like a dict -_env_type = type(os.environ) -# future-proof in case os.environ becomes a plain dict? -if _env_type is not dict: - _type_pprinters[_env_type] = _dict_pprinter_factory('environ{', '}') - -try: - # In PyPy, types.DictProxyType is dict, setting the dictproxy printer - # using dict.setdefault avoids overwriting the dict printer - _type_pprinters.setdefault(types.DictProxyType, - _dict_pprinter_factory('dict_proxy({', '})')) - _type_pprinters[types.ClassType] = _type_pprint - _type_pprinters[types.SliceType] = _repr_pprint -except AttributeError: # Python 3 - _type_pprinters[types.MappingProxyType] = \ - _dict_pprinter_factory('mappingproxy({', '})') - _type_pprinters[slice] = _repr_pprint - -_type_pprinters[range] = _repr_pprint -_type_pprinters[bytes] = _repr_pprint - -#: printers for types specified by name -_deferred_type_pprinters = { -} - -def for_type(typ, func): - """ - Add a pretty printer for a given type. - """ - oldfunc = _type_pprinters.get(typ, None) - if func is not None: - # To support easy restoration of old pprinters, we need to ignore Nones. - _type_pprinters[typ] = func - return oldfunc - -def for_type_by_name(type_module, type_name, func): - """ - Add a pretty printer for a type specified by the module and name of a type - rather than the type object itself. - """ - key = (type_module, type_name) - oldfunc = _deferred_type_pprinters.get(key, None) - if func is not None: - # To support easy restoration of old pprinters, we need to ignore Nones. 
- _deferred_type_pprinters[key] = func - return oldfunc - - -#: printers for the default singletons -_singleton_pprinters = dict.fromkeys(map(id, [None, True, False, Ellipsis, - NotImplemented]), _repr_pprint) - - -def _defaultdict_pprint(obj, p, cycle): - cls_ctor = CallExpression.factory(obj.__class__.__name__) - if cycle: - p.pretty(cls_ctor(RawText("..."))) - else: - p.pretty(cls_ctor(obj.default_factory, dict(obj))) - -def _ordereddict_pprint(obj, p, cycle): - cls_ctor = CallExpression.factory(obj.__class__.__name__) - if cycle: - p.pretty(cls_ctor(RawText("..."))) - elif len(obj): - p.pretty(cls_ctor(list(obj.items()))) - else: - p.pretty(cls_ctor()) - -def _deque_pprint(obj, p, cycle): - cls_ctor = CallExpression.factory(obj.__class__.__name__) - if cycle: - p.pretty(cls_ctor(RawText("..."))) - elif obj.maxlen is not None: - p.pretty(cls_ctor(list(obj), maxlen=obj.maxlen)) - else: - p.pretty(cls_ctor(list(obj))) - -def _counter_pprint(obj, p, cycle): - cls_ctor = CallExpression.factory(obj.__class__.__name__) - if cycle: - p.pretty(cls_ctor(RawText("..."))) - elif len(obj): - p.pretty(cls_ctor(dict(obj.most_common()))) - else: - p.pretty(cls_ctor()) - - -def _userlist_pprint(obj, p, cycle): - cls_ctor = CallExpression.factory(obj.__class__.__name__) - if cycle: - p.pretty(cls_ctor(RawText("..."))) - else: - p.pretty(cls_ctor(obj.data)) - - -for_type_by_name('collections', 'defaultdict', _defaultdict_pprint) -for_type_by_name('collections', 'OrderedDict', _ordereddict_pprint) -for_type_by_name('collections', 'deque', _deque_pprint) -for_type_by_name('collections', 'Counter', _counter_pprint) -for_type_by_name("collections", "UserList", _userlist_pprint) - -if __name__ == '__main__': - from random import randrange - class Foo(object): - def __init__(self): - self.foo = 1 - self.bar = re.compile(r'\s+') - self.blub = dict.fromkeys(range(30), randrange(1, 40)) - self.hehe = 23424.234234 - self.list = ["blub", "blah", self] - - def get_foo(self): - print("foo") - - pprint(Foo(), verbose=True) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/cookiejar.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/cookiejar.py deleted file mode 100644 index 6c88b47e3583430e05ea671af5b6da2a557073ec..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/cookiejar.py +++ /dev/null @@ -1,415 +0,0 @@ -import asyncio -import contextlib -import datetime -import os # noqa -import pathlib -import pickle -import re -from collections import defaultdict -from http.cookies import BaseCookie, Morsel, SimpleCookie -from typing import ( # noqa - DefaultDict, - Dict, - Iterable, - Iterator, - List, - Mapping, - Optional, - Set, - Tuple, - Union, - cast, -) - -from yarl import URL - -from .abc import AbstractCookieJar, ClearCookiePredicate -from .helpers import is_ip_address, next_whole_second -from .typedefs import LooseCookies, PathLike, StrOrURL - -__all__ = ("CookieJar", "DummyCookieJar") - - -CookieItem = Union[str, "Morsel[str]"] - - -class CookieJar(AbstractCookieJar): - """Implements cookie storage adhering to RFC 6265.""" - - DATE_TOKENS_RE = re.compile( - r"[\x09\x20-\x2F\x3B-\x40\x5B-\x60\x7B-\x7E]*" - r"(?P[\x00-\x08\x0A-\x1F\d:a-zA-Z\x7F-\xFF]+)" - ) - - DATE_HMS_TIME_RE = re.compile(r"(\d{1,2}):(\d{1,2}):(\d{1,2})") - - DATE_DAY_OF_MONTH_RE = re.compile(r"(\d{1,2})") - - DATE_MONTH_RE = re.compile( - "(jan)|(feb)|(mar)|(apr)|(may)|(jun)|(jul)|" "(aug)|(sep)|(oct)|(nov)|(dec)", - re.I, - ) - - 
DATE_YEAR_RE = re.compile(r"(\d{2,4})") - - MAX_TIME = datetime.datetime.max.replace(tzinfo=datetime.timezone.utc) - - MAX_32BIT_TIME = datetime.datetime.utcfromtimestamp(2**31 - 1) - - def __init__( - self, - *, - unsafe: bool = False, - quote_cookie: bool = True, - treat_as_secure_origin: Union[StrOrURL, List[StrOrURL], None] = None, - loop: Optional[asyncio.AbstractEventLoop] = None, - ) -> None: - super().__init__(loop=loop) - self._cookies: DefaultDict[Tuple[str, str], SimpleCookie[str]] = defaultdict( - SimpleCookie - ) - self._host_only_cookies: Set[Tuple[str, str]] = set() - self._unsafe = unsafe - self._quote_cookie = quote_cookie - if treat_as_secure_origin is None: - treat_as_secure_origin = [] - elif isinstance(treat_as_secure_origin, URL): - treat_as_secure_origin = [treat_as_secure_origin.origin()] - elif isinstance(treat_as_secure_origin, str): - treat_as_secure_origin = [URL(treat_as_secure_origin).origin()] - else: - treat_as_secure_origin = [ - URL(url).origin() if isinstance(url, str) else url.origin() - for url in treat_as_secure_origin - ] - self._treat_as_secure_origin = treat_as_secure_origin - self._next_expiration = next_whole_second() - self._expirations: Dict[Tuple[str, str, str], datetime.datetime] = {} - # #4515: datetime.max may not be representable on 32-bit platforms - self._max_time = self.MAX_TIME - try: - self._max_time.timestamp() - except OverflowError: - self._max_time = self.MAX_32BIT_TIME - - def save(self, file_path: PathLike) -> None: - file_path = pathlib.Path(file_path) - with file_path.open(mode="wb") as f: - pickle.dump(self._cookies, f, pickle.HIGHEST_PROTOCOL) - - def load(self, file_path: PathLike) -> None: - file_path = pathlib.Path(file_path) - with file_path.open(mode="rb") as f: - self._cookies = pickle.load(f) - - def clear(self, predicate: Optional[ClearCookiePredicate] = None) -> None: - if predicate is None: - self._next_expiration = next_whole_second() - self._cookies.clear() - self._host_only_cookies.clear() - self._expirations.clear() - return - - to_del = [] - now = datetime.datetime.now(datetime.timezone.utc) - for (domain, path), cookie in self._cookies.items(): - for name, morsel in cookie.items(): - key = (domain, path, name) - if ( - key in self._expirations and self._expirations[key] <= now - ) or predicate(morsel): - to_del.append(key) - - for domain, path, name in to_del: - self._host_only_cookies.discard((domain, name)) - key = (domain, path, name) - if key in self._expirations: - del self._expirations[(domain, path, name)] - self._cookies[(domain, path)].pop(name, None) - - next_expiration = min(self._expirations.values(), default=self._max_time) - try: - self._next_expiration = next_expiration.replace( - microsecond=0 - ) + datetime.timedelta(seconds=1) - except OverflowError: - self._next_expiration = self._max_time - - def clear_domain(self, domain: str) -> None: - self.clear(lambda x: self._is_domain_match(domain, x["domain"])) - - def __iter__(self) -> "Iterator[Morsel[str]]": - self._do_expiration() - for val in self._cookies.values(): - yield from val.values() - - def __len__(self) -> int: - return sum(1 for i in self) - - def _do_expiration(self) -> None: - self.clear(lambda x: False) - - def _expire_cookie( - self, when: datetime.datetime, domain: str, path: str, name: str - ) -> None: - self._next_expiration = min(self._next_expiration, when) - self._expirations[(domain, path, name)] = when - - def update_cookies(self, cookies: LooseCookies, response_url: URL = URL()) -> None: - """Update cookies.""" - 
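        # Summary of the handling below:
        #  - cookies arriving from a bare IP host are rejected unless the jar
        #    was created with unsafe=True
        #  - a cookie without a Domain attribute becomes a host-only cookie,
        #    pinned to the response hostname; a leading dot on the domain is
        #    stripped and mismatched domains are skipped
        #  - a missing or relative Path is replaced by the response path,
        #    truncated at its last slash
        #  - Max-Age takes precedence over Expires when both are present;
        #    invalid values clear the attribute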
hostname = response_url.raw_host - - if not self._unsafe and is_ip_address(hostname): - # Don't accept cookies from IPs - return - - if isinstance(cookies, Mapping): - cookies = cookies.items() - - for name, cookie in cookies: - if not isinstance(cookie, Morsel): - tmp: SimpleCookie[str] = SimpleCookie() - tmp[name] = cookie # type: ignore[assignment] - cookie = tmp[name] - - domain = cookie["domain"] - - # ignore domains with trailing dots - if domain.endswith("."): - domain = "" - del cookie["domain"] - - if not domain and hostname is not None: - # Set the cookie's domain to the response hostname - # and set its host-only-flag - self._host_only_cookies.add((hostname, name)) - domain = cookie["domain"] = hostname - - if domain.startswith("."): - # Remove leading dot - domain = domain[1:] - cookie["domain"] = domain - - if hostname and not self._is_domain_match(domain, hostname): - # Setting cookies for different domains is not allowed - continue - - path = cookie["path"] - if not path or not path.startswith("/"): - # Set the cookie's path to the response path - path = response_url.path - if not path.startswith("/"): - path = "/" - else: - # Cut everything from the last slash to the end - path = "/" + path[1 : path.rfind("/")] - cookie["path"] = path - - max_age = cookie["max-age"] - if max_age: - try: - delta_seconds = int(max_age) - try: - max_age_expiration = datetime.datetime.now( - datetime.timezone.utc - ) + datetime.timedelta(seconds=delta_seconds) - except OverflowError: - max_age_expiration = self._max_time - self._expire_cookie(max_age_expiration, domain, path, name) - except ValueError: - cookie["max-age"] = "" - - else: - expires = cookie["expires"] - if expires: - expire_time = self._parse_date(expires) - if expire_time: - self._expire_cookie(expire_time, domain, path, name) - else: - cookie["expires"] = "" - - self._cookies[(domain, path)][name] = cookie - - self._do_expiration() - - def filter_cookies( - self, request_url: URL = URL() - ) -> Union["BaseCookie[str]", "SimpleCookie[str]"]: - """Returns this jar's cookies filtered by their attributes.""" - self._do_expiration() - request_url = URL(request_url) - filtered: Union["SimpleCookie[str]", "BaseCookie[str]"] = ( - SimpleCookie() if self._quote_cookie else BaseCookie() - ) - hostname = request_url.raw_host or "" - request_origin = URL() - with contextlib.suppress(ValueError): - request_origin = request_url.origin() - - is_not_secure = ( - request_url.scheme not in ("https", "wss") - and request_origin not in self._treat_as_secure_origin - ) - - for cookie in self: - name = cookie.key - domain = cookie["domain"] - - # Send shared cookies - if not domain: - filtered[name] = cookie.value - continue - - if not self._unsafe and is_ip_address(hostname): - continue - - if (domain, name) in self._host_only_cookies: - if domain != hostname: - continue - elif not self._is_domain_match(domain, hostname): - continue - - if not self._is_path_match(request_url.path, cookie["path"]): - continue - - if is_not_secure and cookie["secure"]: - continue - - # It's critical we use the Morsel so the coded_value - # (based on cookie version) is preserved - mrsl_val = cast("Morsel[str]", cookie.get(cookie.key, Morsel())) - mrsl_val.set(cookie.key, cookie.value, cookie.coded_value) - filtered[name] = mrsl_val - - return filtered - - @staticmethod - def _is_domain_match(domain: str, hostname: str) -> bool: - """Implements domain matching adhering to RFC 6265.""" - if hostname == domain: - return True - - if not hostname.endswith(domain): - return 
False - - non_matching = hostname[: -len(domain)] - - if not non_matching.endswith("."): - return False - - return not is_ip_address(hostname) - - @staticmethod - def _is_path_match(req_path: str, cookie_path: str) -> bool: - """Implements path matching adhering to RFC 6265.""" - if not req_path.startswith("/"): - req_path = "/" - - if req_path == cookie_path: - return True - - if not req_path.startswith(cookie_path): - return False - - if cookie_path.endswith("/"): - return True - - non_matching = req_path[len(cookie_path) :] - - return non_matching.startswith("/") - - @classmethod - def _parse_date(cls, date_str: str) -> Optional[datetime.datetime]: - """Implements date string parsing adhering to RFC 6265.""" - if not date_str: - return None - - found_time = False - found_day = False - found_month = False - found_year = False - - hour = minute = second = 0 - day = 0 - month = 0 - year = 0 - - for token_match in cls.DATE_TOKENS_RE.finditer(date_str): - - token = token_match.group("token") - - if not found_time: - time_match = cls.DATE_HMS_TIME_RE.match(token) - if time_match: - found_time = True - hour, minute, second = (int(s) for s in time_match.groups()) - continue - - if not found_day: - day_match = cls.DATE_DAY_OF_MONTH_RE.match(token) - if day_match: - found_day = True - day = int(day_match.group()) - continue - - if not found_month: - month_match = cls.DATE_MONTH_RE.match(token) - if month_match: - found_month = True - assert month_match.lastindex is not None - month = month_match.lastindex - continue - - if not found_year: - year_match = cls.DATE_YEAR_RE.match(token) - if year_match: - found_year = True - year = int(year_match.group()) - - if 70 <= year <= 99: - year += 1900 - elif 0 <= year <= 69: - year += 2000 - - if False in (found_day, found_month, found_year, found_time): - return None - - if not 1 <= day <= 31: - return None - - if year < 1601 or hour > 23 or minute > 59 or second > 59: - return None - - return datetime.datetime( - year, month, day, hour, minute, second, tzinfo=datetime.timezone.utc - ) - - -class DummyCookieJar(AbstractCookieJar): - """Implements a dummy cookie storage. - - It can be used with the ClientSession when no cookie processing is needed. 
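    A minimal usage sketch (to be run inside a coroutine; the URL is only a
    placeholder)::

        jar = DummyCookieJar()
        async with ClientSession(cookie_jar=jar) as session:
            await session.get("http://example.com")  # Set-Cookie headers are ignored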
- - """ - - def __init__(self, *, loop: Optional[asyncio.AbstractEventLoop] = None) -> None: - super().__init__(loop=loop) - - def __iter__(self) -> "Iterator[Morsel[str]]": - while False: - yield None - - def __len__(self) -> int: - return 0 - - def clear(self, predicate: Optional[ClearCookiePredicate] = None) -> None: - pass - - def clear_domain(self, domain: str) -> None: - pass - - def update_cookies(self, cookies: LooseCookies, response_url: URL = URL()) -> None: - pass - - def filter_cookies(self, request_url: URL) -> "BaseCookie[str]": - return SimpleCookie() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/_cmp.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/_cmp.py deleted file mode 100644 index d9cbe22cde35ff08abb0f1261f2173091490e02f..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/_cmp.py +++ /dev/null @@ -1,155 +0,0 @@ -# SPDX-License-Identifier: MIT - - -import functools -import types - -from ._make import _make_ne - - -_operation_names = {"eq": "==", "lt": "<", "le": "<=", "gt": ">", "ge": ">="} - - -def cmp_using( - eq=None, - lt=None, - le=None, - gt=None, - ge=None, - require_same_type=True, - class_name="Comparable", -): - """ - Create a class that can be passed into `attrs.field`'s ``eq``, ``order``, - and ``cmp`` arguments to customize field comparison. - - The resulting class will have a full set of ordering methods if at least - one of ``{lt, le, gt, ge}`` and ``eq`` are provided. - - :param Optional[callable] eq: `callable` used to evaluate equality of two - objects. - :param Optional[callable] lt: `callable` used to evaluate whether one - object is less than another object. - :param Optional[callable] le: `callable` used to evaluate whether one - object is less than or equal to another object. - :param Optional[callable] gt: `callable` used to evaluate whether one - object is greater than another object. - :param Optional[callable] ge: `callable` used to evaluate whether one - object is greater than or equal to another object. - - :param bool require_same_type: When `True`, equality and ordering methods - will return `NotImplemented` if objects are not of the same type. - - :param Optional[str] class_name: Name of class. Defaults to 'Comparable'. - - See `comparison` for more details. - - .. versionadded:: 21.1.0 - """ - - body = { - "__slots__": ["value"], - "__init__": _make_init(), - "_requirements": [], - "_is_comparable_to": _is_comparable_to, - } - - # Add operations. - num_order_functions = 0 - has_eq_function = False - - if eq is not None: - has_eq_function = True - body["__eq__"] = _make_operator("eq", eq) - body["__ne__"] = _make_ne() - - if lt is not None: - num_order_functions += 1 - body["__lt__"] = _make_operator("lt", lt) - - if le is not None: - num_order_functions += 1 - body["__le__"] = _make_operator("le", le) - - if gt is not None: - num_order_functions += 1 - body["__gt__"] = _make_operator("gt", gt) - - if ge is not None: - num_order_functions += 1 - body["__ge__"] = _make_operator("ge", ge) - - type_ = types.new_class( - class_name, (object,), {}, lambda ns: ns.update(body) - ) - - # Add same type requirement. - if require_same_type: - type_._requirements.append(_check_same_type) - - # Add total ordering if at least one operation was defined. - if 0 < num_order_functions < 4: - if not has_eq_function: - # functools.total_ordering requires __eq__ to be defined, - # so raise early error here to keep a nice stack. 
- raise ValueError( - "eq must be define is order to complete ordering from " - "lt, le, gt, ge." - ) - type_ = functools.total_ordering(type_) - - return type_ - - -def _make_init(): - """ - Create __init__ method. - """ - - def __init__(self, value): - """ - Initialize object with *value*. - """ - self.value = value - - return __init__ - - -def _make_operator(name, func): - """ - Create operator method. - """ - - def method(self, other): - if not self._is_comparable_to(other): - return NotImplemented - - result = func(self.value, other.value) - if result is NotImplemented: - return NotImplemented - - return result - - method.__name__ = f"__{name}__" - method.__doc__ = ( - f"Return a {_operation_names[name]} b. Computed by attrs." - ) - - return method - - -def _is_comparable_to(self, other): - """ - Check whether `other` is comparable to `self`. - """ - for func in self._requirements: - if not func(self, other): - return False - return True - - -def _check_same_type(self, other): - """ - Return True if *self* and *other* are of the same type, False otherwise. - """ - return other.value.__class__ is self.value.__class__ diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_run_in_console.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_run_in_console.py deleted file mode 100644 index a87a0e4b39673f75b6bb6cce813e8b12a78050e5..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_run_in_console.py +++ /dev/null @@ -1,153 +0,0 @@ -''' -Entry point module to run a file in the interactive console. -''' -import os -import sys -import traceback -from pydevconsole import InterpreterInterface, process_exec_queue, start_console_server, init_mpl_in_console -from _pydev_bundle._pydev_saved_modules import threading, _queue - -from _pydev_bundle import pydev_imports -from _pydevd_bundle.pydevd_utils import save_main_module -from _pydev_bundle.pydev_console_utils import StdIn -from pydevd_file_utils import get_fullname - - -def run_file(file, globals=None, locals=None, is_module=False): - module_name = None - entry_point_fn = None - if is_module: - file, _, entry_point_fn = file.partition(':') - module_name = file - filename = get_fullname(file) - if filename is None: - sys.stderr.write("No module named %s\n" % file) - return - else: - file = filename - - if os.path.isdir(file): - new_target = os.path.join(file, '__main__.py') - if os.path.isfile(new_target): - file = new_target - - if globals is None: - m = save_main_module(file, 'pydev_run_in_console') - - globals = m.__dict__ - try: - globals['__builtins__'] = __builtins__ - except NameError: - pass # Not there on Jython... - - if locals is None: - locals = globals - - if not is_module: - sys.path.insert(0, os.path.split(file)[0]) - - print('Running %s' % file) - try: - if not is_module: - pydev_imports.execfile(file, globals, locals) # execute the script - else: - # treat ':' as a seperator between module and entry point function - # if there is no entry point we run we same as with -m switch. 
Otherwise we perform - # an import and execute the entry point - if entry_point_fn: - mod = __import__(module_name, level=0, fromlist=[entry_point_fn], globals=globals, locals=locals) - func = getattr(mod, entry_point_fn) - func() - else: - # Run with the -m switch - from _pydevd_bundle import pydevd_runpy - pydevd_runpy._run_module_as_main(module_name) - except: - traceback.print_exc() - - return globals - - -def skip_successful_exit(*args): - """ System exit in file shouldn't kill interpreter (i.e. in `timeit`)""" - if len(args) == 1 and args[0] in (0, None): - pass - else: - raise SystemExit(*args) - - -def process_args(argv): - setup_args = {'file': '', 'module': False} - - setup_args['port'] = argv[1] - del argv[1] - setup_args['client_port'] = argv[1] - del argv[1] - - module_flag = "--module" - if module_flag in argv: - i = argv.index(module_flag) - if i != -1: - setup_args['module'] = True - setup_args['file'] = argv[i + 1] - del sys.argv[i] - else: - setup_args['file'] = argv[1] - - del argv[0] - - return setup_args - - -#======================================================================================================================= -# main -#======================================================================================================================= -if __name__ == '__main__': - setup = process_args(sys.argv) - - port = setup['port'] - client_port = setup['client_port'] - file = setup['file'] - is_module = setup['module'] - - from _pydev_bundle import pydev_localhost - - if int(port) == 0 and int(client_port) == 0: - (h, p) = pydev_localhost.get_socket_name() - client_port = p - - host = pydev_localhost.get_localhost() - - # replace exit (see comments on method) - # note that this does not work in jython!!! (sys method can't be replaced). - sys.exit = skip_successful_exit - - connect_status_queue = _queue.Queue() - interpreter = InterpreterInterface(host, int(client_port), threading.current_thread(), connect_status_queue=connect_status_queue) - - server_thread = threading.Thread(target=start_console_server, - name='ServerThread', - args=(host, int(port), interpreter)) - server_thread.daemon = True - server_thread.start() - - sys.stdin = StdIn(interpreter, host, client_port, sys.stdin) - - init_mpl_in_console(interpreter) - - try: - success = connect_status_queue.get(True, 60) - if not success: - raise ValueError() - except: - sys.stderr.write("Console server didn't start\n") - sys.stderr.flush() - sys.exit(1) - - globals = run_file(file, None, None, is_module) - - interpreter.get_namespace().update(globals) - - interpreter.ShowConsole() - - process_exec_queue(interpreter) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/crash.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/crash.py deleted file mode 100644 index a53172e551eb4a79cc52bedd7eebe660c2ccb3ab..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/crash.py +++ /dev/null @@ -1,1853 +0,0 @@ -#!~/.wine/drive_c/Python25/python.exe -# -*- coding: utf-8 -*- - -# Copyright (c) 2009-2014, Mario Vilas -# All rights reserved. 
-# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright -# notice,this list of conditions and the following disclaimer in the -# documentation and/or other materials provided with the distribution. -# * Neither the name of the copyright holder nor the names of its -# contributors may be used to endorse or promote products derived from -# this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. - -""" -Crash dump support. - -@group Crash reporting: - Crash, CrashDictionary - -@group Warnings: - CrashWarning - -@group Deprecated classes: - CrashContainer, CrashTable, CrashTableMSSQL, - VolatileCrashContainer, DummyCrashContainer -""" - -__revision__ = "$Id$" - -__all__ = [ - - # Object that represents a crash in the debugee. - 'Crash', - - # Crash storage. - 'CrashDictionary', - - # Warnings. - 'CrashWarning', - - # Backwards compatibility with WinAppDbg 1.4 and before. - 'CrashContainer', - 'CrashTable', - 'CrashTableMSSQL', - 'VolatileCrashContainer', - 'DummyCrashContainer', -] - -from winappdbg import win32 -from winappdbg import compat -from winappdbg.system import System -from winappdbg.textio import HexDump, CrashDump -from winappdbg.util import StaticClass, MemoryAddresses, PathOperations - -import sys -import os -import time -import zlib -import warnings - -# lazy imports -sql = None -anydbm = None - -#============================================================================== - -# Secure alternative to pickle, use it if present. -try: - import cerealizer - pickle = cerealizer - - # There is no optimization function for cerealized objects. - def optimize(picklestring): - return picklestring - - # There is no HIGHEST_PROTOCOL in cerealizer. - HIGHEST_PROTOCOL = 0 - - # Note: it's important NOT to provide backwards compatibility, otherwise - # it'd be just the same as not having this! - # - # To disable this security upgrade simply uncomment the following line: - # - # raise ImportError("Fallback to pickle for backwards compatibility") - -# If cerealizer is not present fallback to the insecure pickle module. -except ImportError: - - # Faster implementation of the pickle module as a C extension. - try: - import cPickle as pickle - - # If all fails fallback to the classic pickle module. - except ImportError: - import pickle - - # Fetch the highest protocol version. - HIGHEST_PROTOCOL = pickle.HIGHEST_PROTOCOL - - # Try to use the pickle optimizer if found. 
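# A hedged, self-contained sketch of the dumps/loads strategy used by the
# Marshaller class below: pickle, then pickletools.optimize when available,
# then zlib compression. The _example names are illustrative only.
import pickle as _pickle_example
import zlib as _zlib_example
try:
    from pickletools import optimize as _optimize_example
except ImportError:
    def _optimize_example(picklestring):
        return picklestring

def _dumps_example(obj):
    # Serialize, strip redundant pickle opcodes, then compress at level 9.
    raw = _pickle_example.dumps(obj, _pickle_example.HIGHEST_PROTOCOL)
    return _zlib_example.compress(_optimize_example(raw), 9)

def _loads_example(data):
    # Reverse the steps: decompress, then unpickle.
    return _pickle_example.loads(_zlib_example.decompress(data))

assert _loads_example(_dumps_example({"pc": 0x401000})) == {"pc": 0x401000}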
- try: - from pickletools import optimize - except ImportError: - def optimize(picklestring): - return picklestring - -class Marshaller (StaticClass): - """ - Custom pickler for L{Crash} objects. Optimizes the pickled data when using - the standard C{pickle} (or C{cPickle}) module. The pickled data is then - compressed using zlib. - """ - - @staticmethod - def dumps(obj, protocol=HIGHEST_PROTOCOL): - return zlib.compress(optimize(pickle.dumps(obj)), 9) - - @staticmethod - def loads(data): - return pickle.loads(zlib.decompress(data)) - -#============================================================================== - -class CrashWarning (Warning): - """ - An error occurred while gathering crash data. - Some data may be incomplete or missing. - """ - -#============================================================================== - -# Crash object. Must be serializable. -class Crash (object): - """ - Represents a crash, bug, or another interesting event in the debugee. - - @group Basic information: - timeStamp, signature, eventCode, eventName, pid, tid, arch, os, bits, - registers, labelPC, pc, sp, fp - - @group Optional information: - debugString, - modFileName, - lpBaseOfDll, - exceptionCode, - exceptionName, - exceptionDescription, - exceptionAddress, - exceptionLabel, - firstChance, - faultType, - faultAddress, - faultLabel, - isOurBreakpoint, - isSystemBreakpoint, - stackTrace, - stackTracePC, - stackTraceLabels, - stackTracePretty - - @group Extra information: - commandLine, - environment, - environmentData, - registersPeek, - stackRange, - stackFrame, - stackPeek, - faultCode, - faultMem, - faultPeek, - faultDisasm, - memoryMap - - @group Report: - briefReport, fullReport, notesReport, environmentReport, isExploitable - - @group Notes: - addNote, getNotes, iterNotes, hasNotes, clearNotes, notes - - @group Miscellaneous: - fetch_extra_data - - @type timeStamp: float - @ivar timeStamp: Timestamp as returned by time.time(). - - @type signature: object - @ivar signature: Approximately unique signature for the Crash object. - - This signature can be used as an heuristic to determine if two crashes - were caused by the same software error. Ideally it should be treated as - as opaque serializable object that can be tested for equality. - - @type notes: list( str ) - @ivar notes: List of strings, each string is a note. - - @type eventCode: int - @ivar eventCode: Event code as defined by the Win32 API. - - @type eventName: str - @ivar eventName: Event code user-friendly name. - - @type pid: int - @ivar pid: Process global ID. - - @type tid: int - @ivar tid: Thread global ID. - - @type arch: str - @ivar arch: Processor architecture. - - @type os: str - @ivar os: Operating system version. - - May indicate a 64 bit version even if L{arch} and L{bits} indicate 32 - bits. This means the crash occurred inside a WOW64 process. - - @type bits: int - @ivar bits: C{32} or C{64} bits. - - @type commandLine: None or str - @ivar commandLine: Command line for the target process. - - C{None} if unapplicable or unable to retrieve. - - @type environmentData: None or list of str - @ivar environmentData: Environment data for the target process. - - C{None} if unapplicable or unable to retrieve. - - @type environment: None or dict( str S{->} str ) - @ivar environment: Environment variables for the target process. - - C{None} if unapplicable or unable to retrieve. - - @type registers: dict( str S{->} int ) - @ivar registers: Dictionary mapping register names to their values. 
- - @type registersPeek: None or dict( str S{->} str ) - @ivar registersPeek: Dictionary mapping register names to the data they point to. - - C{None} if unapplicable or unable to retrieve. - - @type labelPC: None or str - @ivar labelPC: Label pointing to the program counter. - - C{None} or invalid if unapplicable or unable to retrieve. - - @type debugString: None or str - @ivar debugString: Debug string sent by the debugee. - - C{None} if unapplicable or unable to retrieve. - - @type exceptionCode: None or int - @ivar exceptionCode: Exception code as defined by the Win32 API. - - C{None} if unapplicable or unable to retrieve. - - @type exceptionName: None or str - @ivar exceptionName: Exception code user-friendly name. - - C{None} if unapplicable or unable to retrieve. - - @type exceptionDescription: None or str - @ivar exceptionDescription: Exception description. - - C{None} if unapplicable or unable to retrieve. - - @type exceptionAddress: None or int - @ivar exceptionAddress: Memory address where the exception occured. - - C{None} if unapplicable or unable to retrieve. - - @type exceptionLabel: None or str - @ivar exceptionLabel: Label pointing to the exception address. - - C{None} or invalid if unapplicable or unable to retrieve. - - @type faultType: None or int - @ivar faultType: Access violation type. - Only applicable to memory faults. - Should be one of the following constants: - - - L{win32.ACCESS_VIOLATION_TYPE_READ} - - L{win32.ACCESS_VIOLATION_TYPE_WRITE} - - L{win32.ACCESS_VIOLATION_TYPE_DEP} - - C{None} if unapplicable or unable to retrieve. - - @type faultAddress: None or int - @ivar faultAddress: Access violation memory address. - Only applicable to memory faults. - - C{None} if unapplicable or unable to retrieve. - - @type faultLabel: None or str - @ivar faultLabel: Label pointing to the access violation memory address. - Only applicable to memory faults. - - C{None} if unapplicable or unable to retrieve. - - @type firstChance: None or bool - @ivar firstChance: - C{True} for first chance exceptions, C{False} for second chance. - - C{None} if unapplicable or unable to retrieve. - - @type isOurBreakpoint: bool - @ivar isOurBreakpoint: - C{True} for breakpoints defined by the L{Debug} class, - C{False} otherwise. - - C{None} if unapplicable. - - @type isSystemBreakpoint: bool - @ivar isSystemBreakpoint: - C{True} for known system-defined breakpoints, - C{False} otherwise. - - C{None} if unapplicable. - - @type modFileName: None or str - @ivar modFileName: File name of module where the program counter points to. - - C{None} or invalid if unapplicable or unable to retrieve. - - @type lpBaseOfDll: None or int - @ivar lpBaseOfDll: Base of module where the program counter points to. - - C{None} if unapplicable or unable to retrieve. - - @type stackTrace: None or tuple of tuple( int, int, str ) - @ivar stackTrace: - Stack trace of the current thread as a tuple of - ( frame pointer, return address, module filename ). - - C{None} or empty if unapplicable or unable to retrieve. - - @type stackTracePretty: None or tuple of tuple( int, str ) - @ivar stackTracePretty: - Stack trace of the current thread as a tuple of - ( frame pointer, return location ). - - C{None} or empty if unapplicable or unable to retrieve. - - @type stackTracePC: None or tuple( int... ) - @ivar stackTracePC: Tuple of return addresses in the stack trace. - - C{None} or empty if unapplicable or unable to retrieve. - - @type stackTraceLabels: None or tuple( str... 
) - @ivar stackTraceLabels: - Tuple of labels pointing to the return addresses in the stack trace. - - C{None} or empty if unapplicable or unable to retrieve. - - @type stackRange: tuple( int, int ) - @ivar stackRange: - Stack beginning and end pointers, in memory addresses order. - - C{None} if unapplicable or unable to retrieve. - - @type stackFrame: None or str - @ivar stackFrame: Data pointed to by the stack pointer. - - C{None} or empty if unapplicable or unable to retrieve. - - @type stackPeek: None or dict( int S{->} str ) - @ivar stackPeek: Dictionary mapping stack offsets to the data they point to. - - C{None} or empty if unapplicable or unable to retrieve. - - @type faultCode: None or str - @ivar faultCode: Data pointed to by the program counter. - - C{None} or empty if unapplicable or unable to retrieve. - - @type faultMem: None or str - @ivar faultMem: Data pointed to by the exception address. - - C{None} or empty if unapplicable or unable to retrieve. - - @type faultPeek: None or dict( intS{->} str ) - @ivar faultPeek: Dictionary mapping guessed pointers at L{faultMem} to the data they point to. - - C{None} or empty if unapplicable or unable to retrieve. - - @type faultDisasm: None or tuple of tuple( long, int, str, str ) - @ivar faultDisasm: Dissassembly around the program counter. - - C{None} or empty if unapplicable or unable to retrieve. - - @type memoryMap: None or list of L{win32.MemoryBasicInformation} objects. - @ivar memoryMap: Memory snapshot of the program. May contain the actual - data from the entire process memory if requested. - See L{fetch_extra_data} for more details. - - C{None} or empty if unapplicable or unable to retrieve. - - @type _rowid: int - @ivar _rowid: Row ID in the database. Internally used by the DAO layer. - Only present in crash dumps retrieved from the database. Do not rely - on this property to be present in future versions of WinAppDbg. - """ - - def __init__(self, event): - """ - @type event: L{Event} - @param event: Event object for crash. - """ - - # First of all, take the timestamp. - self.timeStamp = time.time() - - # Notes are initially empty. - self.notes = list() - - # Get the process and thread, but dont't store them in the DB. - process = event.get_process() - thread = event.get_thread() - - # Determine the architecture. - self.os = System.os - self.arch = process.get_arch() - self.bits = process.get_bits() - - # The following properties are always retrieved for all events. - self.eventCode = event.get_event_code() - self.eventName = event.get_event_name() - self.pid = event.get_pid() - self.tid = event.get_tid() - self.registers = dict(thread.get_context()) - self.labelPC = process.get_label_at_address(self.pc) - - # The following properties are only retrieved for some events. 
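        # (All of them are pre-set to None first, presumably so that every
        #  Crash object exposes the same attribute set when serialized, no
        #  matter which kind of debug event produced it; only the relevant
        #  ones are filled in below and in fetch_extra_data().)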
- self.commandLine = None - self.environment = None - self.environmentData = None - self.registersPeek = None - self.debugString = None - self.modFileName = None - self.lpBaseOfDll = None - self.exceptionCode = None - self.exceptionName = None - self.exceptionDescription = None - self.exceptionAddress = None - self.exceptionLabel = None - self.firstChance = None - self.faultType = None - self.faultAddress = None - self.faultLabel = None - self.isOurBreakpoint = None - self.isSystemBreakpoint = None - self.stackTrace = None - self.stackTracePC = None - self.stackTraceLabels = None - self.stackTracePretty = None - self.stackRange = None - self.stackFrame = None - self.stackPeek = None - self.faultCode = None - self.faultMem = None - self.faultPeek = None - self.faultDisasm = None - self.memoryMap = None - - # Get information for debug string events. - if self.eventCode == win32.OUTPUT_DEBUG_STRING_EVENT: - self.debugString = event.get_debug_string() - - # Get information for module load and unload events. - # For create and exit process events, get the information - # for the main module. - elif self.eventCode in (win32.CREATE_PROCESS_DEBUG_EVENT, - win32.EXIT_PROCESS_DEBUG_EVENT, - win32.LOAD_DLL_DEBUG_EVENT, - win32.UNLOAD_DLL_DEBUG_EVENT): - aModule = event.get_module() - self.modFileName = event.get_filename() - if not self.modFileName: - self.modFileName = aModule.get_filename() - self.lpBaseOfDll = event.get_module_base() - if not self.lpBaseOfDll: - self.lpBaseOfDll = aModule.get_base() - - # Get some information for exception events. - # To get the remaining information call fetch_extra_data(). - elif self.eventCode == win32.EXCEPTION_DEBUG_EVENT: - - # Exception information. - self.exceptionCode = event.get_exception_code() - self.exceptionName = event.get_exception_name() - self.exceptionDescription = event.get_exception_description() - self.exceptionAddress = event.get_exception_address() - self.firstChance = event.is_first_chance() - self.exceptionLabel = process.get_label_at_address( - self.exceptionAddress) - if self.exceptionCode in (win32.EXCEPTION_ACCESS_VIOLATION, - win32.EXCEPTION_GUARD_PAGE, - win32.EXCEPTION_IN_PAGE_ERROR): - self.faultType = event.get_fault_type() - self.faultAddress = event.get_fault_address() - self.faultLabel = process.get_label_at_address( - self.faultAddress) - elif self.exceptionCode in (win32.EXCEPTION_BREAKPOINT, - win32.EXCEPTION_SINGLE_STEP): - self.isOurBreakpoint = hasattr(event, 'breakpoint') \ - and event.breakpoint - self.isSystemBreakpoint = \ - process.is_system_defined_breakpoint(self.exceptionAddress) - - # Stack trace. - try: - self.stackTracePretty = thread.get_stack_trace_with_labels() - except Exception: - e = sys.exc_info()[1] - warnings.warn( - "Cannot get stack trace with labels, reason: %s" % str(e), - CrashWarning) - try: - self.stackTrace = thread.get_stack_trace() - stackTracePC = [ ra for (_,ra,_) in self.stackTrace ] - self.stackTracePC = tuple(stackTracePC) - stackTraceLabels = [ process.get_label_at_address(ra) \ - for ra in self.stackTracePC ] - self.stackTraceLabels = tuple(stackTraceLabels) - except Exception: - e = sys.exc_info()[1] - warnings.warn("Cannot get stack trace, reason: %s" % str(e), - CrashWarning) - - def fetch_extra_data(self, event, takeMemorySnapshot = 0): - """ - Fetch extra data from the L{Event} object. - - @note: Since this method may take a little longer to run, it's best to - call it only after you've determined the crash is interesting and - you want to save it. 
- - @type event: L{Event} - @param event: Event object for crash. - - @type takeMemorySnapshot: int - @param takeMemorySnapshot: - Memory snapshot behavior: - - C{0} to take no memory information (default). - - C{1} to take only the memory map. - See L{Process.get_memory_map}. - - C{2} to take a full memory snapshot. - See L{Process.take_memory_snapshot}. - - C{3} to take a live memory snapshot. - See L{Process.generate_memory_snapshot}. - """ - - # Get the process and thread, we'll use them below. - process = event.get_process() - thread = event.get_thread() - - # Get the command line for the target process. - try: - self.commandLine = process.get_command_line() - except Exception: - e = sys.exc_info()[1] - warnings.warn("Cannot get command line, reason: %s" % str(e), - CrashWarning) - - # Get the environment variables for the target process. - try: - self.environmentData = process.get_environment_data() - self.environment = process.parse_environment_data( - self.environmentData) - except Exception: - e = sys.exc_info()[1] - warnings.warn("Cannot get environment, reason: %s" % str(e), - CrashWarning) - - # Data pointed to by registers. - self.registersPeek = thread.peek_pointers_in_registers() - - # Module where execution is taking place. - aModule = process.get_module_at_address(self.pc) - if aModule is not None: - self.modFileName = aModule.get_filename() - self.lpBaseOfDll = aModule.get_base() - - # Contents of the stack frame. - try: - self.stackRange = thread.get_stack_range() - except Exception: - e = sys.exc_info()[1] - warnings.warn("Cannot get stack range, reason: %s" % str(e), - CrashWarning) - try: - self.stackFrame = thread.get_stack_frame() - stackFrame = self.stackFrame - except Exception: - self.stackFrame = thread.peek_stack_data() - stackFrame = self.stackFrame[:64] - if stackFrame: - self.stackPeek = process.peek_pointers_in_data(stackFrame) - - # Code being executed. - self.faultCode = thread.peek_code_bytes() - try: - self.faultDisasm = thread.disassemble_around_pc(32) - except Exception: - e = sys.exc_info()[1] - warnings.warn("Cannot disassemble, reason: %s" % str(e), - CrashWarning) - - # For memory related exceptions, get the memory contents - # of the location that caused the exception to be raised. - if self.eventCode == win32.EXCEPTION_DEBUG_EVENT: - if self.pc != self.exceptionAddress and self.exceptionCode in ( - win32.EXCEPTION_ACCESS_VIOLATION, - win32.EXCEPTION_ARRAY_BOUNDS_EXCEEDED, - win32.EXCEPTION_DATATYPE_MISALIGNMENT, - win32.EXCEPTION_IN_PAGE_ERROR, - win32.EXCEPTION_STACK_OVERFLOW, - win32.EXCEPTION_GUARD_PAGE, - ): - self.faultMem = process.peek(self.exceptionAddress, 64) - if self.faultMem: - self.faultPeek = process.peek_pointers_in_data( - self.faultMem) - - # TODO: maybe add names and versions of DLLs and EXE? - - # Take a snapshot of the process memory. Additionally get the - # memory contents if requested. - if takeMemorySnapshot == 1: - self.memoryMap = process.get_memory_map() - mappedFilenames = process.get_mapped_filenames(self.memoryMap) - for mbi in self.memoryMap: - mbi.filename = mappedFilenames.get(mbi.BaseAddress, None) - mbi.content = None - elif takeMemorySnapshot == 2: - self.memoryMap = process.take_memory_snapshot() - elif takeMemorySnapshot == 3: - self.memoryMap = process.generate_memory_snapshot() - - @property - def pc(self): - """ - Value of the program counter register. 
- - @rtype: int - """ - try: - return self.registers['Eip'] # i386 - except KeyError: - return self.registers['Rip'] # amd64 - - @property - def sp(self): - """ - Value of the stack pointer register. - - @rtype: int - """ - try: - return self.registers['Esp'] # i386 - except KeyError: - return self.registers['Rsp'] # amd64 - - @property - def fp(self): - """ - Value of the frame pointer register. - - @rtype: int - """ - try: - return self.registers['Ebp'] # i386 - except KeyError: - return self.registers['Rbp'] # amd64 - - def __str__(self): - return self.fullReport() - - def key(self): - """ - Alias of L{signature}. Deprecated since WinAppDbg 1.5. - """ - warnings.warn("Crash.key() method was deprecated in WinAppDbg 1.5", - DeprecationWarning) - return self.signature - - @property - def signature(self): - if self.labelPC: - pc = self.labelPC - else: - pc = self.pc - if self.stackTraceLabels: - trace = self.stackTraceLabels - else: - trace = self.stackTracePC - return ( - self.arch, - self.eventCode, - self.exceptionCode, - pc, - trace, - self.debugString, - ) - # TODO - # add the name and version of the binary where the crash happened? - - def isExploitable(self): - """ - Guess how likely is it that the bug causing the crash can be leveraged - into an exploitable vulnerability. - - @note: Don't take this as an equivalent of a real exploitability - analysis, that can only be done by a human being! This is only - a guideline, useful for example to sort crashes - placing the most - interesting ones at the top. - - @see: The heuristics are similar to those of the B{!exploitable} - extension for I{WinDBG}, which can be downloaded from here: - - U{http://www.codeplex.com/msecdbg} - - @rtype: tuple( str, str, str ) - @return: The first element of the tuple is the result of the analysis, - being one of the following: - - - Not an exception - - Not exploitable - - Not likely exploitable - - Unknown - - Probably exploitable - - Exploitable - - The second element of the tuple is a code to identify the matched - heuristic rule. - - The third element of the tuple is a description string of the - reason behind the result. 
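# A hedged sketch, not part of WinAppDbg, of using the verdict returned by
# isExploitable() to sort crashes most-interesting-first. The ordering of the
# result strings below is an assumption based on the list documented above.
_EXPLOITABILITY_ORDER_EXAMPLE = [
    "Exploitable",
    "Probably exploitable",
    "Unknown",
    "Not likely exploitable",
    "Not exploitable",
    "Not an exception",
]

def _rank_example(verdict):
    # verdict is the (result, code, description) tuple described above.
    result, _code, _description = verdict
    return _EXPLOITABILITY_ORDER_EXAMPLE.index(result)

assert _rank_example(("Exploitable", "GSViolation", "...")) < \
       _rank_example(("Not likely exploitable", "DivideByZero", "..."))
# Typical use: crashes.sort(key=lambda crash: _rank_example(crash.isExploitable()))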
- """ - - # Terminal rules - - if self.eventCode != win32.EXCEPTION_DEBUG_EVENT: - return ("Not an exception", "NotAnException", "The event is not an exception.") - - if self.stackRange and self.pc is not None and self.stackRange[0] <= self.pc < self.stackRange[1]: - return ("Exploitable", "StackCodeExecution", "Code execution from the stack is considered exploitable.") - - # This rule is NOT from !exploitable - if self.stackRange and self.sp is not None and not (self.stackRange[0] <= self.sp < self.stackRange[1]): - return ("Exploitable", "StackPointerCorruption", "Stack pointer corruption is considered exploitable.") - - if self.exceptionCode == win32.EXCEPTION_ILLEGAL_INSTRUCTION: - return ("Exploitable", "IllegalInstruction", "An illegal instruction exception indicates that the attacker controls execution flow.") - - if self.exceptionCode == win32.EXCEPTION_PRIV_INSTRUCTION: - return ("Exploitable", "PrivilegedInstruction", "A privileged instruction exception indicates that the attacker controls execution flow.") - - if self.exceptionCode == win32.EXCEPTION_GUARD_PAGE: - return ("Exploitable", "GuardPage", "A guard page violation indicates a stack overflow has occured, and the stack of another thread was reached (possibly the overflow length is not controlled by the attacker).") - - if self.exceptionCode == win32.STATUS_STACK_BUFFER_OVERRUN: - return ("Exploitable", "GSViolation", "An overrun of a protected stack buffer has been detected. This is considered exploitable, and must be fixed.") - - if self.exceptionCode == win32.STATUS_HEAP_CORRUPTION: - return ("Exploitable", "HeapCorruption", "Heap Corruption has been detected. This is considered exploitable, and must be fixed.") - - if self.exceptionCode == win32.EXCEPTION_ACCESS_VIOLATION: - nearNull = self.faultAddress is None or MemoryAddresses.align_address_to_page_start(self.faultAddress) == 0 - controlFlow = self.__is_control_flow() - blockDataMove = self.__is_block_data_move() - if self.faultType == win32.EXCEPTION_EXECUTE_FAULT: - if nearNull: - return ("Probably exploitable", "DEPViolation", "User mode DEP access violations are probably exploitable if near NULL.") - else: - return ("Exploitable", "DEPViolation", "User mode DEP access violations are exploitable.") - elif self.faultType == win32.EXCEPTION_WRITE_FAULT: - if nearNull: - return ("Probably exploitable", "WriteAV", "User mode write access violations that are near NULL are probably exploitable.") - else: - return ("Exploitable", "WriteAV", "User mode write access violations that are not near NULL are exploitable.") - elif self.faultType == win32.EXCEPTION_READ_FAULT: - if self.faultAddress == self.pc: - if nearNull: - return ("Probably exploitable", "ReadAVonIP", "Access violations at the instruction pointer are probably exploitable if near NULL.") - else: - return ("Exploitable", "ReadAVonIP", "Access violations at the instruction pointer are exploitable if not near NULL.") - if controlFlow: - if nearNull: - return ("Probably exploitable", "ReadAVonControlFlow", "Access violations near null in control flow instructions are considered probably exploitable.") - else: - return ("Exploitable", "ReadAVonControlFlow", "Access violations not near null in control flow instructions are considered exploitable.") - if blockDataMove: - return ("Probably exploitable", "ReadAVonBlockMove", "This is a read access violation in a block data move, and is therefore classified as probably exploitable.") - - # Rule: Tainted information used to control branch addresses is considered 
probably exploitable - # Rule: Tainted information used to control the target of a later write is probably exploitable - - # Non terminal rules - - # XXX TODO add rule to check if code is in writeable memory (probably exploitable) - - # XXX TODO maybe we should be returning a list of tuples instead? - - result = ("Unknown", "Unknown", "Exploitability unknown.") - - if self.exceptionCode == win32.EXCEPTION_ACCESS_VIOLATION: - if self.faultType == win32.EXCEPTION_READ_FAULT: - if nearNull: - result = ("Not likely exploitable", "ReadAVNearNull", "This is a user mode read access violation near null, and is probably not exploitable.") - - elif self.exceptionCode == win32.EXCEPTION_INT_DIVIDE_BY_ZERO: - result = ("Not likely exploitable", "DivideByZero", "This is an integer divide by zero, and is probably not exploitable.") - - elif self.exceptionCode == win32.EXCEPTION_FLT_DIVIDE_BY_ZERO: - result = ("Not likely exploitable", "DivideByZero", "This is a floating point divide by zero, and is probably not exploitable.") - - elif self.exceptionCode in (win32.EXCEPTION_BREAKPOINT, win32.STATUS_WX86_BREAKPOINT): - result = ("Unknown", "Breakpoint", "While a breakpoint itself is probably not exploitable, it may also be an indication that an attacker is testing a target. In either case breakpoints should not exist in production code.") - - # Rule: If the stack contains unknown symbols in user mode, call that out - # Rule: Tainted information used to control the source of a later block move unknown, but called out explicitly - # Rule: Tainted information used as an argument to a function is an unknown risk, but called out explicitly - # Rule: Tainted information used to control branch selection is an unknown risk, but called out explicitly - - return result - - def __is_control_flow(self): - """ - Private method to tell if the instruction pointed to by the program - counter is a control flow instruction. - - Currently only works for x86 and amd64 architectures. - """ - jump_instructions = ( - 'jmp', 'jecxz', 'jcxz', - 'ja', 'jnbe', 'jae', 'jnb', 'jb', 'jnae', 'jbe', 'jna', 'jc', 'je', - 'jz', 'jnc', 'jne', 'jnz', 'jnp', 'jpo', 'jp', 'jpe', 'jg', 'jnle', - 'jge', 'jnl', 'jl', 'jnge', 'jle', 'jng', 'jno', 'jns', 'jo', 'js' - ) - call_instructions = ( 'call', 'ret', 'retn' ) - loop_instructions = ( 'loop', 'loopz', 'loopnz', 'loope', 'loopne' ) - control_flow_instructions = call_instructions + loop_instructions + \ - jump_instructions - isControlFlow = False - instruction = None - if self.pc is not None and self.faultDisasm: - for disasm in self.faultDisasm: - if disasm[0] == self.pc: - instruction = disasm[2].lower().strip() - break - if instruction: - for x in control_flow_instructions: - if x in instruction: - isControlFlow = True - break - return isControlFlow - - def __is_block_data_move(self): - """ - Private method to tell if the instruction pointed to by the program - counter is a block data move instruction. - - Currently only works for x86 and amd64 architectures. - """ - block_data_move_instructions = ('movs', 'stos', 'lods') - isBlockDataMove = False - instruction = None - if self.pc is not None and self.faultDisasm: - for disasm in self.faultDisasm: - if disasm[0] == self.pc: - instruction = disasm[2].lower().strip() - break - if instruction: - for x in block_data_move_instructions: - if x in instruction: - isBlockDataMove = True - break - return isBlockDataMove - - def briefReport(self): - """ - @rtype: str - @return: Short description of the event. 
- """ - if self.exceptionCode is not None: - if self.exceptionCode == win32.EXCEPTION_BREAKPOINT: - if self.isOurBreakpoint: - what = "Breakpoint hit" - elif self.isSystemBreakpoint: - what = "System breakpoint hit" - else: - what = "Assertion failed" - elif self.exceptionDescription: - what = self.exceptionDescription - elif self.exceptionName: - what = self.exceptionName - else: - what = "Exception %s" % \ - HexDump.integer(self.exceptionCode, self.bits) - if self.firstChance: - chance = 'first' - else: - chance = 'second' - if self.exceptionLabel: - where = self.exceptionLabel - elif self.exceptionAddress: - where = HexDump.address(self.exceptionAddress, self.bits) - elif self.labelPC: - where = self.labelPC - else: - where = HexDump.address(self.pc, self.bits) - msg = "%s (%s chance) at %s" % (what, chance, where) - elif self.debugString is not None: - if self.labelPC: - where = self.labelPC - else: - where = HexDump.address(self.pc, self.bits) - msg = "Debug string from %s: %r" % (where, self.debugString) - else: - if self.labelPC: - where = self.labelPC - else: - where = HexDump.address(self.pc, self.bits) - msg = "%s (%s) at %s" % ( - self.eventName, - HexDump.integer(self.eventCode, self.bits), - where - ) - return msg - - def fullReport(self, bShowNotes = True): - """ - @type bShowNotes: bool - @param bShowNotes: C{True} to show the user notes, C{False} otherwise. - - @rtype: str - @return: Long description of the event. - """ - msg = self.briefReport() - msg += '\n' - - if self.bits == 32: - width = 16 - else: - width = 8 - - if self.eventCode == win32.EXCEPTION_DEBUG_EVENT: - (exploitability, expcode, expdescription) = self.isExploitable() - msg += '\nSecurity risk level: %s\n' % exploitability - msg += ' %s\n' % expdescription - - if bShowNotes and self.notes: - msg += '\nNotes:\n' - msg += self.notesReport() - - if self.commandLine: - msg += '\nCommand line: %s\n' % self.commandLine - - if self.environment: - msg += '\nEnvironment:\n' - msg += self.environmentReport() - - if not self.labelPC: - base = HexDump.address(self.lpBaseOfDll, self.bits) - if self.modFileName: - fn = PathOperations.pathname_to_filename(self.modFileName) - msg += '\nRunning in %s (%s)\n' % (fn, base) - else: - msg += '\nRunning in module at %s\n' % base - - if self.registers: - msg += '\nRegisters:\n' - msg += CrashDump.dump_registers(self.registers) - if self.registersPeek: - msg += '\n' - msg += CrashDump.dump_registers_peek(self.registers, - self.registersPeek, - width = width) - - if self.faultDisasm: - msg += '\nCode disassembly:\n' - msg += CrashDump.dump_code(self.faultDisasm, self.pc, - bits = self.bits) - - if self.stackTrace: - msg += '\nStack trace:\n' - if self.stackTracePretty: - msg += CrashDump.dump_stack_trace_with_labels( - self.stackTracePretty, - bits = self.bits) - else: - msg += CrashDump.dump_stack_trace(self.stackTrace, - bits = self.bits) - - if self.stackFrame: - if self.stackPeek: - msg += '\nStack pointers:\n' - msg += CrashDump.dump_stack_peek(self.stackPeek, width = width) - msg += '\nStack dump:\n' - msg += HexDump.hexblock(self.stackFrame, self.sp, - bits = self.bits, width = width) - - if self.faultCode and not self.modFileName: - msg += '\nCode dump:\n' - msg += HexDump.hexblock(self.faultCode, self.pc, - bits = self.bits, width = width) - - if self.faultMem: - if self.faultPeek: - msg += '\nException address pointers:\n' - msg += CrashDump.dump_data_peek(self.faultPeek, - self.exceptionAddress, - bits = self.bits, - width = width) - msg += '\nException address dump:\n' 
- msg += HexDump.hexblock(self.faultMem, self.exceptionAddress, - bits = self.bits, width = width) - - if self.memoryMap: - msg += '\nMemory map:\n' - mappedFileNames = dict() - for mbi in self.memoryMap: - if hasattr(mbi, 'filename') and mbi.filename: - mappedFileNames[mbi.BaseAddress] = mbi.filename - msg += CrashDump.dump_memory_map(self.memoryMap, mappedFileNames, - bits = self.bits) - - if not msg.endswith('\n\n'): - if not msg.endswith('\n'): - msg += '\n' - msg += '\n' - return msg - - def environmentReport(self): - """ - @rtype: str - @return: The process environment variables, - merged and formatted for a report. - """ - msg = '' - if self.environment: - for key, value in compat.iteritems(self.environment): - msg += ' %s=%s\n' % (key, value) - return msg - - def notesReport(self): - """ - @rtype: str - @return: All notes, merged and formatted for a report. - """ - msg = '' - if self.notes: - for n in self.notes: - n = n.strip('\n') - if '\n' in n: - n = n.strip('\n') - msg += ' * %s\n' % n.pop(0) - for x in n: - msg += ' %s\n' % x - else: - msg += ' * %s\n' % n - return msg - - def addNote(self, msg): - """ - Add a note to the crash event. - - @type msg: str - @param msg: Note text. - """ - self.notes.append(msg) - - def clearNotes(self): - """ - Clear the notes of this crash event. - """ - self.notes = list() - - def getNotes(self): - """ - Get the list of notes of this crash event. - - @rtype: list( str ) - @return: List of notes. - """ - return self.notes - - def iterNotes(self): - """ - Iterate the notes of this crash event. - - @rtype: listiterator - @return: Iterator of the list of notes. - """ - return self.notes.__iter__() - - def hasNotes(self): - """ - @rtype: bool - @return: C{True} if there are notes for this crash event. - """ - return bool( self.notes ) - -#============================================================================== - -class CrashContainer (object): - """ - Old crash dump persistencer using a DBM database. - Doesn't support duplicate crashes. - - @warning: - DBM database support is provided for backwards compatibility with older - versions of WinAppDbg. New applications should not use this class. - Also, DBM databases in Python suffer from multiple problems that can - easily be avoided by switching to a SQL database. - - @see: If you really must use a DBM database, try the standard C{shelve} - module instead: U{http://docs.python.org/library/shelve.html} - - @group Marshalling configuration: - optimizeKeys, optimizeValues, compressKeys, compressValues, escapeKeys, - escapeValues, binaryKeys, binaryValues - - @type optimizeKeys: bool - @cvar optimizeKeys: Ignored by the current implementation. - - Up to WinAppDbg 1.4 this setting caused the database keys to be - optimized when pickled with the standard C{pickle} module. - - But with a DBM database backend that causes inconsistencies, since the - same key can be serialized into multiple optimized pickles, thus losing - uniqueness. - - @type optimizeValues: bool - @cvar optimizeValues: C{True} to optimize the marshalling of keys, C{False} - otherwise. Only used with the C{pickle} module, ignored when using the - more secure C{cerealizer} module. - - @type compressKeys: bool - @cvar compressKeys: C{True} to compress keys when marshalling, C{False} - to leave them uncompressed. - - @type compressValues: bool - @cvar compressValues: C{True} to compress values when marshalling, C{False} - to leave them uncompressed. 
- - @type escapeKeys: bool - @cvar escapeKeys: C{True} to escape keys when marshalling, C{False} - to leave them uncompressed. - - @type escapeValues: bool - @cvar escapeValues: C{True} to escape values when marshalling, C{False} - to leave them uncompressed. - - @type binaryKeys: bool - @cvar binaryKeys: C{True} to marshall keys to binary format (the Python - C{buffer} type), C{False} to use text marshalled keys (C{str} type). - - @type binaryValues: bool - @cvar binaryValues: C{True} to marshall values to binary format (the Python - C{buffer} type), C{False} to use text marshalled values (C{str} type). - """ - - optimizeKeys = False - optimizeValues = True - compressKeys = False - compressValues = True - escapeKeys = False - escapeValues = False - binaryKeys = False - binaryValues = False - - def __init__(self, filename = None, allowRepeatedKeys = False): - """ - @type filename: str - @param filename: (Optional) File name for crash database. - If no filename is specified, the container is volatile. - - Volatile containers are stored only in memory and - destroyed when they go out of scope. - - @type allowRepeatedKeys: bool - @param allowRepeatedKeys: - Currently not supported, always use C{False}. - """ - if allowRepeatedKeys: - raise NotImplementedError() - self.__filename = filename - if filename: - global anydbm - if not anydbm: - import anydbm - self.__db = anydbm.open(filename, 'c') - self.__keys = dict([ (self.unmarshall_key(mk), mk) - for mk in self.__db.keys() ]) - else: - self.__db = dict() - self.__keys = dict() - - def remove_key(self, key): - """ - Removes the given key from the set of known keys. - - @type key: L{Crash} key. - @param key: Key to remove. - """ - del self.__keys[key] - - def marshall_key(self, key): - """ - Marshalls a Crash key to be used in the database. - - @see: L{__init__} - - @type key: L{Crash} key. - @param key: Key to convert. - - @rtype: str or buffer - @return: Converted key. - """ - if key in self.__keys: - return self.__keys[key] - skey = pickle.dumps(key, protocol = 0) - if self.compressKeys: - skey = zlib.compress(skey, zlib.Z_BEST_COMPRESSION) - if self.escapeKeys: - skey = skey.encode('hex') - if self.binaryKeys: - skey = buffer(skey) - self.__keys[key] = skey - return skey - - def unmarshall_key(self, key): - """ - Unmarshalls a Crash key read from the database. - - @type key: str or buffer - @param key: Key to convert. - - @rtype: L{Crash} key. - @return: Converted key. - """ - key = str(key) - if self.escapeKeys: - key = key.decode('hex') - if self.compressKeys: - key = zlib.decompress(key) - key = pickle.loads(key) - return key - - def marshall_value(self, value, storeMemoryMap = False): - """ - Marshalls a Crash object to be used in the database. - By default the C{memoryMap} member is B{NOT} stored here. - - @warning: Setting the C{storeMemoryMap} argument to C{True} can lead to - a severe performance penalty! - - @type value: L{Crash} - @param value: Object to convert. - - @type storeMemoryMap: bool - @param storeMemoryMap: C{True} to store the memory map, C{False} - otherwise. - - @rtype: str - @return: Converted object. 
- """ - if hasattr(value, 'memoryMap'): - crash = value - memoryMap = crash.memoryMap - try: - crash.memoryMap = None - if storeMemoryMap and memoryMap is not None: - # convert the generator to a list - crash.memoryMap = list(memoryMap) - if self.optimizeValues: - value = pickle.dumps(crash, protocol = HIGHEST_PROTOCOL) - value = optimize(value) - else: - value = pickle.dumps(crash, protocol = 0) - finally: - crash.memoryMap = memoryMap - del memoryMap - del crash - if self.compressValues: - value = zlib.compress(value, zlib.Z_BEST_COMPRESSION) - if self.escapeValues: - value = value.encode('hex') - if self.binaryValues: - value = buffer(value) - return value - - def unmarshall_value(self, value): - """ - Unmarshalls a Crash object read from the database. - - @type value: str - @param value: Object to convert. - - @rtype: L{Crash} - @return: Converted object. - """ - value = str(value) - if self.escapeValues: - value = value.decode('hex') - if self.compressValues: - value = zlib.decompress(value) - value = pickle.loads(value) - return value - - # The interface is meant to be similar to a Python set. - # However it may not be necessary to implement all of the set methods. - # Other methods like get, has_key, iterkeys and itervalues - # are dictionary-like. - - def __len__(self): - """ - @rtype: int - @return: Count of known keys. - """ - return len(self.__keys) - - def __bool__(self): - """ - @rtype: bool - @return: C{False} if there are no known keys. - """ - return bool(self.__keys) - - def __contains__(self, crash): - """ - @type crash: L{Crash} - @param crash: Crash object. - - @rtype: bool - @return: - C{True} if a Crash object with the same key is in the container. - """ - return self.has_key( crash.key() ) - - def has_key(self, key): - """ - @type key: L{Crash} key. - @param key: Key to find. - - @rtype: bool - @return: C{True} if the key is present in the set of known keys. - """ - return key in self.__keys - - def iterkeys(self): - """ - @rtype: iterator - @return: Iterator of known L{Crash} keys. - """ - return compat.iterkeys(self.__keys) - - class __CrashContainerIterator (object): - """ - Iterator of Crash objects. Returned by L{CrashContainer.__iter__}. - """ - - def __init__(self, container): - """ - @type container: L{CrashContainer} - @param container: Crash set to iterate. - """ - # It's important to keep a reference to the CrashContainer, - # rather than it's underlying database. - # Otherwise the destructor of CrashContainer may close the - # database while we're still iterating it. - # - # TODO: lock the database when iterating it. - # - self.__container = container - self.__keys_iter = compat.iterkeys(container) - - def next(self): - """ - @rtype: L{Crash} - @return: A B{copy} of a Crash object in the L{CrashContainer}. - @raise StopIteration: No more items left. - """ - key = self.__keys_iter.next() - return self.__container.get(key) - - def __del__(self): - "Class destructor. Closes the database when this object is destroyed." - try: - if self.__filename: - self.__db.close() - except: - pass - - def __iter__(self): - """ - @see: L{itervalues} - @rtype: iterator - @return: Iterator of the contained L{Crash} objects. - """ - return self.itervalues() - - def itervalues(self): - """ - @rtype: iterator - @return: Iterator of the contained L{Crash} objects. - - @warning: A B{copy} of each object is returned, - so any changes made to them will be lost. - - To preserve changes do the following: - 1. Keep a reference to the object. - 2. Delete the object from the set. - 3. 
Modify the object and add it again. - """ - return self.__CrashContainerIterator(self) - - def add(self, crash): - """ - Adds a new crash to the container. - If the crash appears to be already known, it's ignored. - - @see: L{Crash.key} - - @type crash: L{Crash} - @param crash: Crash object to add. - """ - if crash not in self: - key = crash.key() - skey = self.marshall_key(key) - data = self.marshall_value(crash, storeMemoryMap = True) - self.__db[skey] = data - - def __delitem__(self, key): - """ - Removes a crash from the container. - - @type key: L{Crash} unique key. - @param key: Key of the crash to get. - """ - skey = self.marshall_key(key) - del self.__db[skey] - self.remove_key(key) - - def remove(self, crash): - """ - Removes a crash from the container. - - @type crash: L{Crash} - @param crash: Crash object to remove. - """ - del self[ crash.key() ] - - def get(self, key): - """ - Retrieves a crash from the container. - - @type key: L{Crash} unique key. - @param key: Key of the crash to get. - - @rtype: L{Crash} object. - @return: Crash matching the given key. - - @see: L{iterkeys} - @warning: A B{copy} of each object is returned, - so any changes made to them will be lost. - - To preserve changes do the following: - 1. Keep a reference to the object. - 2. Delete the object from the set. - 3. Modify the object and add it again. - """ - skey = self.marshall_key(key) - data = self.__db[skey] - crash = self.unmarshall_value(data) - return crash - - def __getitem__(self, key): - """ - Retrieves a crash from the container. - - @type key: L{Crash} unique key. - @param key: Key of the crash to get. - - @rtype: L{Crash} object. - @return: Crash matching the given key. - - @see: L{iterkeys} - @warning: A B{copy} of each object is returned, - so any changes made to them will be lost. - - To preserve changes do the following: - 1. Keep a reference to the object. - 2. Delete the object from the set. - 3. Modify the object and add it again. - """ - return self.get(key) - -#============================================================================== - -class CrashDictionary(object): - """ - Dictionary-like persistence interface for L{Crash} objects. - - Currently the only implementation is through L{sql.CrashDAO}. - """ - - def __init__(self, url, creator = None, allowRepeatedKeys = True): - """ - @type url: str - @param url: Connection URL of the crash database. - See L{sql.CrashDAO.__init__} for more details. - - @type creator: callable - @param creator: (Optional) Callback function that creates the SQL - database connection. - - Normally it's not necessary to use this argument. However in some - odd cases you may need to customize the database connection, for - example when using the integrated authentication in MSSQL. - - @type allowRepeatedKeys: bool - @param allowRepeatedKeys: - If C{True} all L{Crash} objects are stored. - - If C{False} any L{Crash} object with the same signature as a - previously existing object will be ignored. - """ - global sql - if sql is None: - from winappdbg import sql - self._allowRepeatedKeys = allowRepeatedKeys - self._dao = sql.CrashDAO(url, creator) - - def add(self, crash): - """ - Adds a new crash to the container. - - @note: - When the C{allowRepeatedKeys} parameter of the constructor - is set to C{False}, duplicated crashes are ignored. - - @see: L{Crash.key} - - @type crash: L{Crash} - @param crash: Crash object to add. - """ - self._dao.add(crash, self._allowRepeatedKeys) - - def get(self, key): - """ - Retrieves a crash from the container. 
- - @type key: L{Crash} signature. - @param key: Heuristic signature of the crash to get. - - @rtype: L{Crash} object. - @return: Crash matching the given signature. If more than one is found, - retrieve the newest one. - - @see: L{iterkeys} - @warning: A B{copy} of each object is returned, - so any changes made to them will be lost. - - To preserve changes do the following: - 1. Keep a reference to the object. - 2. Delete the object from the set. - 3. Modify the object and add it again. - """ - found = self._dao.find(signature=key, limit=1, order=-1) - if not found: - raise KeyError(key) - return found[0] - - def __iter__(self): - """ - @rtype: iterator - @return: Iterator of the contained L{Crash} objects. - """ - offset = 0 - limit = 10 - while 1: - found = self._dao.find(offset=offset, limit=limit) - if not found: - break - offset += len(found) - for crash in found: - yield crash - - def itervalues(self): - """ - @rtype: iterator - @return: Iterator of the contained L{Crash} objects. - """ - return self.__iter__() - - def iterkeys(self): - """ - @rtype: iterator - @return: Iterator of the contained L{Crash} heuristic signatures. - """ - for crash in self: - yield crash.signature # FIXME this gives repeated results! - - def __contains__(self, crash): - """ - @type crash: L{Crash} - @param crash: Crash object. - - @rtype: bool - @return: C{True} if the Crash object is in the container. - """ - return self._dao.count(signature=crash.signature) > 0 - - def has_key(self, key): - """ - @type key: L{Crash} signature. - @param key: Heuristic signature of the crash to get. - - @rtype: bool - @return: C{True} if a matching L{Crash} object is in the container. - """ - return self._dao.count(signature=key) > 0 - - def __len__(self): - """ - @rtype: int - @return: Count of L{Crash} elements in the container. - """ - return self._dao.count() - - def __bool__(self): - """ - @rtype: bool - @return: C{False} if the container is empty. - """ - return bool( len(self) ) - -class CrashTable(CrashDictionary): - """ - Old crash dump persistencer using a SQLite database. - - @warning: - Superceded by L{CrashDictionary} since WinAppDbg 1.5. - New applications should not use this class. - """ - - def __init__(self, location = None, allowRepeatedKeys = True): - """ - @type location: str - @param location: (Optional) Location of the crash database. - If the location is a filename, it's an SQLite database file. - - If no location is specified, the container is volatile. - Volatile containers are stored only in memory and - destroyed when they go out of scope. - - @type allowRepeatedKeys: bool - @param allowRepeatedKeys: - If C{True} all L{Crash} objects are stored. - - If C{False} any L{Crash} object with the same signature as a - previously existing object will be ignored. - """ - warnings.warn( - "The %s class is deprecated since WinAppDbg 1.5." % self.__class__, - DeprecationWarning) - if location: - url = "sqlite:///%s" % location - else: - url = "sqlite://" - super(CrashTable, self).__init__(url, allowRepeatedKeys) - -class CrashTableMSSQL (CrashDictionary): - """ - Old crash dump persistencer using a Microsoft SQL Server database. - - @warning: - Superceded by L{CrashDictionary} since WinAppDbg 1.5. - New applications should not use this class. - """ - - def __init__(self, location = None, allowRepeatedKeys = True): - """ - @type location: str - @param location: Location of the crash database. - It must be an ODBC connection string. 
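# A hedged sketch (the values are illustrative) of the connection URLs these
# deprecated wrappers build: SQLite for CrashTable above, ODBC through pyodbc
# for CrashTableMSSQL below.
try:
    from urllib.parse import quote_plus  # Python 3
except ImportError:
    from urllib import quote_plus        # Python 2

file_url = "sqlite:///C:\\crashes\\test.db"  # file-backed SQLite database
volatile_url = "sqlite://"                   # in-memory, volatile container
odbc = "DRIVER={SQL Server};SERVER=localhost;DATABASE=crashes;Trusted_Connection=yes"
mssql_url = "mssql+pyodbc:///?odbc_connect=" + quote_plus(odbc)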
- - @type allowRepeatedKeys: bool - @param allowRepeatedKeys: - If C{True} all L{Crash} objects are stored. - - If C{False} any L{Crash} object with the same signature as a - previously existing object will be ignored. - """ - warnings.warn( - "The %s class is deprecated since WinAppDbg 1.5." % self.__class__, - DeprecationWarning) - import urllib - url = "mssql+pyodbc:///?odbc_connect=" + urllib.quote_plus(location) - super(CrashTableMSSQL, self).__init__(url, allowRepeatedKeys) - -class VolatileCrashContainer (CrashTable): - """ - Old in-memory crash dump storage. - - @warning: - Superceded by L{CrashDictionary} since WinAppDbg 1.5. - New applications should not use this class. - """ - - def __init__(self, allowRepeatedKeys = True): - """ - Volatile containers are stored only in memory and - destroyed when they go out of scope. - - @type allowRepeatedKeys: bool - @param allowRepeatedKeys: - If C{True} all L{Crash} objects are stored. - - If C{False} any L{Crash} object with the same key as a - previously existing object will be ignored. - """ - super(VolatileCrashContainer, self).__init__( - allowRepeatedKeys=allowRepeatedKeys) - -class DummyCrashContainer(object): - """ - Fakes a database of volatile Crash objects, - trying to mimic part of it's interface, but - doesn't actually store anything. - - Normally applications don't need to use this. - - @see: L{CrashDictionary} - """ - - def __init__(self, allowRepeatedKeys = True): - """ - Fake containers don't store L{Crash} objects, but they implement the - interface properly. - - @type allowRepeatedKeys: bool - @param allowRepeatedKeys: - Mimics the duplicate filter behavior found in real containers. - """ - self.__keys = set() - self.__count = 0 - self.__allowRepeatedKeys = allowRepeatedKeys - - def __contains__(self, crash): - """ - @type crash: L{Crash} - @param crash: Crash object. - - @rtype: bool - @return: C{True} if the Crash object is in the container. - """ - return crash.signature in self.__keys - - def __len__(self): - """ - @rtype: int - @return: Count of L{Crash} elements in the container. - """ - if self.__allowRepeatedKeys: - return self.__count - return len( self.__keys ) - - def __bool__(self): - """ - @rtype: bool - @return: C{False} if the container is empty. - """ - return bool( len(self) ) - - def add(self, crash): - """ - Adds a new crash to the container. - - @note: - When the C{allowRepeatedKeys} parameter of the constructor - is set to C{False}, duplicated crashes are ignored. - - @see: L{Crash.key} - - @type crash: L{Crash} - @param crash: Crash object to add. - """ - self.__keys.add( crash.signature ) - self.__count += 1 - - def get(self, key): - """ - This method is not supported. - """ - raise NotImplementedError() - - def has_key(self, key): - """ - @type key: L{Crash} signature. - @param key: Heuristic signature of the crash to get. - - @rtype: bool - @return: C{True} if a matching L{Crash} object is in the container. - """ - return self.__keys.has_key( key ) - - def iterkeys(self): - """ - @rtype: iterator - @return: Iterator of the contained L{Crash} object keys. - - @see: L{get} - @warning: A B{copy} of each object is returned, - so any changes made to them will be lost. - - To preserve changes do the following: - 1. Keep a reference to the object. - 2. Delete the object from the set. - 3. Modify the object and add it again. - """ - return iter(self.__keys) - -#============================================================================== -# Register the Crash class with the secure serializer. 
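# A hedged illustration of the registration pattern used just below; the
# assumption is that cerealizer, when installed, only serializes classes that
# were explicitly registered with it, unlike pickle.
try:
    import cerealizer

    class _ExampleNote(object):
        def __init__(self, text=""):
            self.text = text

    cerealizer.register(_ExampleNote)
    _roundtripped = cerealizer.loads(cerealizer.dumps(_ExampleNote("hi")))
    assert _roundtripped.text == "hi"
except ImportError:
    pass  # cerealizer not installed; the module falls back to pickle above.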
- -try: - cerealizer.register(Crash) - cerealizer.register(win32.MemoryBasicInformation) -except NameError: - pass diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/common/sockets.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/common/sockets.py deleted file mode 100644 index 453dbf1aa7754e393272f01f67ae81f531907b98..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/common/sockets.py +++ /dev/null @@ -1,129 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See LICENSE in the project root -# for license information. - -import socket -import sys -import threading - -from debugpy.common import log -from debugpy.common.util import hide_thread_from_debugger - - -def create_server(host, port=0, backlog=socket.SOMAXCONN, timeout=None): - """Return a local server socket listening on the given port.""" - - assert backlog > 0 - if host is None: - host = "127.0.0.1" - if port is None: - port = 0 - - try: - server = _new_sock() - if port != 0: - # If binding to a specific port, make sure that the user doesn't have - # to wait until the OS times out the socket to be able to use that port - # again.if the server or the adapter crash or are force-killed. - if sys.platform == "win32": - server.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1) - else: - try: - server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - except (AttributeError, OSError): - pass # Not available everywhere - server.bind((host, port)) - if timeout is not None: - server.settimeout(timeout) - server.listen(backlog) - except Exception: - server.close() - raise - return server - - -def create_client(): - """Return a client socket that may be connected to a remote address.""" - return _new_sock() - - -def _new_sock(): - sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP) - - # Set TCP keepalive on an open socket. - # It activates after 1 second (TCP_KEEPIDLE,) of idleness, - # then sends a keepalive ping once every 3 seconds (TCP_KEEPINTVL), - # and closes the connection after 5 failed ping (TCP_KEEPCNT), or 15 seconds - try: - sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1) - except (AttributeError, OSError): - pass # May not be available everywhere. - try: - sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 1) - except (AttributeError, OSError): - pass # May not be available everywhere. - try: - sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 3) - except (AttributeError, OSError): - pass # May not be available everywhere. - try: - sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5) - except (AttributeError, OSError): - pass # May not be available everywhere. - return sock - - -def shut_down(sock, how=socket.SHUT_RDWR): - """Shut down the given socket.""" - sock.shutdown(how) - - -def close_socket(sock): - """Shutdown and close the socket.""" - try: - shut_down(sock) - except Exception: - pass - sock.close() - - -def serve(name, handler, host, port=0, backlog=socket.SOMAXCONN, timeout=None): - """Accepts TCP connections on the specified host and port, and invokes the - provided handler function for every new connection. - - Returns the created server socket. 
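# A hedged, self-contained usage sketch of the helpers above (not part of
# debugpy itself): create a loopback server socket, connect a client to it,
# and exchange one message.
def _loopback_example():
    listener = create_server("127.0.0.1", 0)  # port 0 asks the OS for a free port
    host, port = listener.getsockname()
    client = create_client()
    client.connect((host, port))
    conn, _addr = listener.accept()
    conn.sendall(b"ping")
    assert client.recv(4) == b"ping"
    for s in (conn, client, listener):
        close_socket(s)
# _loopback_example()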
- """ - - assert backlog > 0 - - try: - listener = create_server(host, port, backlog, timeout) - except Exception: - log.reraise_exception( - "Error listening for incoming {0} connections on {1}:{2}:", name, host, port - ) - host, port = listener.getsockname() - log.info("Listening for incoming {0} connections on {1}:{2}...", name, host, port) - - def accept_worker(): - while True: - try: - sock, (other_host, other_port) = listener.accept() - except (OSError, socket.error): - # Listener socket has been closed. - break - - log.info( - "Accepted incoming {0} connection from {1}:{2}.", - name, - other_host, - other_port, - ) - handler(sock) - - thread = threading.Thread(target=accept_worker) - thread.daemon = True - hide_thread_from_debugger(thread) - thread.start() - - return listener diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/utils/pos_embed.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/utils/pos_embed.py deleted file mode 100644 index aa11d60db65fa98c140e7d75bdf985ff7ece8f18..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/utils/pos_embed.py +++ /dev/null @@ -1,122 +0,0 @@ -# -------------------------------------------------------- -# Position embedding utils -# -------------------------------------------------------- - -from typing import Tuple - -import numpy as np -import torch - - -# -------------------------------------------------------- -# 2D sine-cosine position embedding -# References: -# Transformer: https://github.com/tensorflow/models/blob/master/official/nlp/transformer/model_utils.py -# MoCo v3: https://github.com/facebookresearch/moco-v3 -# -------------------------------------------------------- -def get_2d_sincos_pos_embed(embed_dim, grid_size, cls_token=False): - """ - grid_size: int of the grid height and width - return: - pos_embed: [grid_size*grid_size, embed_dim] or [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token) - """ - grid_h = np.arange(grid_size, dtype=np.float32) - grid_w = np.arange(grid_size, dtype=np.float32) - grid = np.meshgrid(grid_w, grid_h) # here w goes first - grid = np.stack(grid, axis=0) - - grid = grid.reshape([2, 1, grid_size, grid_size]) - pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid) - if cls_token: - pos_embed = np.concatenate([np.zeros([1, embed_dim]), pos_embed], axis=0) - return pos_embed - - -def get_2d_sincos_pos_embed_from_grid(embed_dim, grid): - assert embed_dim % 2 == 0 - - # use half of dimensions to encode grid_h - emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0]) # (H*W, D/2) - emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1]) # (H*W, D/2) - - emb = np.concatenate([emb_h, emb_w], axis=1) # (H*W, D) - return emb - - -def get_1d_sincos_pos_embed_from_grid(embed_dim, pos): - """ - embed_dim: output dimension for each position - pos: a list of positions to be encoded: size (M,) - out: (M, D) - """ - assert embed_dim % 2 == 0 - omega = np.arange(embed_dim // 2, dtype=np.float) - omega /= embed_dim / 2.0 - omega = 1.0 / 10000 ** omega # (D/2,) - - pos = pos.reshape(-1) # (M,) - out = np.einsum("m,d->md", pos, omega) # (M, D/2), outer product - - emb_sin = np.sin(out) # (M, D/2) - emb_cos = np.cos(out) # (M, D/2) - - emb = np.concatenate([emb_sin, emb_cos], axis=1) # (M, D) - return emb - - -# -------------------------------------------------------- -# Interpolate position embeddings for high-resolution -# References: -# DeiT: https://github.com/facebookresearch/deit -# 
-------------------------------------------------------- -def interpolate_pos_embed(model, checkpoint_model, pos_embed_key): - if pos_embed_key in checkpoint_model: - pos_embed_checkpoint = checkpoint_model[pos_embed_key] - embedding_size = pos_embed_checkpoint.shape[-1] - num_patches = model.num_patches - if pos_embed_key.startswith("decoder"): - num_extra_tokens = model.decoder_pos_embed.shape[-2] - num_patches - else: - num_extra_tokens = model.pos_embed.shape[-2] - num_patches - # height (== width) for the checkpoint position embedding - orig_size = int((pos_embed_checkpoint.shape[-2] - num_extra_tokens) ** 0.5) - # height (== width) for the new position embedding - new_size = int(num_patches ** 0.5) - # class_token and dist_token are kept unchanged - if orig_size != new_size: - print( - "Position interpolate from %dx%d to %dx%d" - % (orig_size, orig_size, new_size, new_size) - ) - extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens] - # only the position tokens are interpolated - pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:] - pos_tokens = pos_tokens.reshape( - -1, orig_size, orig_size, embedding_size - ).permute(0, 3, 1, 2) - pos_tokens = torch.nn.functional.interpolate( - pos_tokens, - size=(new_size, new_size), - mode="bicubic", - align_corners=False, - ) - pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2) - new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1) - checkpoint_model[pos_embed_key] = new_pos_embed - - -def interpolate_pos_embed_online( - pos_embed, orig_size: Tuple[int], new_size: Tuple[int], num_extra_tokens: int -): - extra_tokens = pos_embed[:, :num_extra_tokens] - pos_tokens = pos_embed[:, num_extra_tokens:] - embedding_size = pos_tokens.shape[-1] - pos_tokens = pos_tokens.reshape( - -1, orig_size[0], orig_size[1], embedding_size - ).permute(0, 3, 1, 2) - pos_tokens = torch.nn.functional.interpolate( - pos_tokens, size=new_size, mode="bicubic", align_corners=False, - ) - pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2) - new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1) - return new_pos_embed diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/deeplabv3_unet_s5-d16.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/deeplabv3_unet_s5-d16.py deleted file mode 100644 index 0cd262999d8b2cb8e14a5c32190ae73f479d8e81..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/deeplabv3_unet_s5-d16.py +++ /dev/null @@ -1,50 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UNet', - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False), - decode_head=dict( - type='ASPPHead', - in_channels=64, - in_index=4, - channels=16, - dilations=(1, 12, 24, 36), - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=128, - in_index=3, - channels=64, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - 
num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide', crop_size=256, stride=170)) diff --git a/spaces/TEnngal/bingo/src/components/ui/separator.tsx b/spaces/TEnngal/bingo/src/components/ui/separator.tsx deleted file mode 100644 index 6c55e0b2ca8e2436658a06748aadbff7cd700db0..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/components/ui/separator.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SeparatorPrimitive from '@radix-ui/react-separator' - -import { cn } from '@/lib/utils' - -const Separator = React.forwardRef< - React.ElementRef<typeof SeparatorPrimitive.Root>, - React.ComponentPropsWithoutRef<typeof SeparatorPrimitive.Root> ->( - ( - { className, orientation = 'horizontal', decorative = true, ...props }, - ref - ) => ( - <SeparatorPrimitive.Root - ref={ref} - decorative={decorative} - orientation={orientation} - className={cn( - 'shrink-0 bg-border', - orientation === 'horizontal' ? 'h-[1px] w-full' : 'h-full w-[1px]', - className - )} - {...props} - /> - ) -) -Separator.displayName = SeparatorPrimitive.Root.displayName - -export { Separator } diff --git a/spaces/TRI-ML/risk_biased_prediction/risk_biased/models/cvae_decoder.py b/spaces/TRI-ML/risk_biased_prediction/risk_biased/models/cvae_decoder.py deleted file mode 100644 index 055bd6881b5b01551a4311be2c095108ecbade30..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/risk_biased/models/cvae_decoder.py +++ /dev/null @@ -1,388 +0,0 @@ -from einops import rearrange, repeat -import torch -import torch.nn as nn - -from risk_biased.models.cvae_params import CVAEParams -from risk_biased.models.nn_blocks import ( - MCG, - MAB, - MHB, - SequenceDecoderLSTM, - SequenceDecoderMLP, - SequenceEncoderLSTM, - SequenceEncoderMLP, - SequenceEncoderMaskedLSTM, -) - - -class DecoderNN(nn.Module): - """Decoder neural network that decodes input tensors into a single output tensor. - It contains an interaction layer that (re-)computes the interactions between the agents in the scene. - This implies that a given latent sample for one agent will affect the predictions of the other agents too. 
- - Args: - params: dataclass defining the necessary parameters - - """ - - def __init__( - self, - params: CVAEParams, - ) -> None: - super().__init__() - self.dt = params.dt - self.state_dim = params.state_dim - self.dynamic_state_dim = params.dynamic_state_dim - self.hidden_dim = params.hidden_dim - self.num_steps_future = params.num_steps_future - self.latent_dim = params.latent_dim - - if params.sequence_encoder_type == "MLP": - self._agent_encoder_past = SequenceEncoderMLP( - params.state_dim, - params.hidden_dim, - params.num_hidden_layers, - params.num_steps, - params.is_mlp_residual, - ) - elif params.sequence_encoder_type == "LSTM": - self._agent_encoder_past = SequenceEncoderLSTM( - params.state_dim, params.hidden_dim - ) - elif params.sequence_encoder_type == "maskedLSTM": - self._agent_encoder_past = SequenceEncoderMaskedLSTM( - params.state_dim, params.hidden_dim - ) - else: - raise RuntimeError( - f"Got sequence encoder type {params.sequence_decoder_type} but only knows one of: 'MLP', 'LSTM', 'maskedLSTM' " - ) - - self._combine_z_past = nn.Linear( - params.hidden_dim + params.latent_dim, params.hidden_dim - ) - - if params.interaction_type == "Attention" or params.interaction_type == "MAB": - self._interaction = MAB( - params.hidden_dim, params.num_attention_heads, params.num_blocks - ) - elif ( - params.interaction_type == "ContextGating" - or params.interaction_type == "MCG" - ): - self._interaction = MCG( - params.hidden_dim, - params.mcg_dim_expansion, - params.mcg_num_layers, - params.num_blocks, - params.is_mlp_residual, - ) - elif params.interaction_type == "Hybrid" or params.interaction_type == "MHB": - self._interaction = MHB( - params.hidden_dim, - params.num_attention_heads, - params.mcg_dim_expansion, - params.mcg_num_layers, - params.num_blocks, - params.is_mlp_residual, - ) - else: - self._interaction = lambda x, *args, **kwargs: x - - if params.sequence_decoder_type == "MLP": - self._decoder = SequenceDecoderMLP( - params.hidden_dim, - params.num_hidden_layers, - params.num_steps_future, - params.is_mlp_residual, - ) - elif params.sequence_decoder_type == "LSTM": - self._decoder = SequenceDecoderLSTM(params.hidden_dim) - elif params.sequence_decoder_type == "maskedLSTM": - self._decoder = SequenceDecoderLSTM(params.hidden_dim) - else: - raise RuntimeError( - f"Got sequence decoder type {params.sequence_decoder_type} but only knows one of: 'MLP', 'LSTM', 'maskedLSTM' " - ) - - def forward( - self, - z_samples: torch.Tensor, - mask_z: torch.Tensor, - x: torch.Tensor, - mask_x: torch.Tensor, - encoded_absolute: torch.Tensor, - encoded_map: torch.Tensor, - mask_map: torch.Tensor, - ) -> torch.Tensor: - """Forward function that decodes input tensors into an output tensor of size - (batch_size, num_agents, (n_samples), num_steps_future, state_dim) - - Args: - z_samples: (batch_size, num_agents, (n_samples), latent_dim) tensor of history - mask_z: (batch_size, num_agents) tensor of bool mask - x: (batch_size, num_agents, num_steps, state_dim) tensor of history for all agents - mask_x: (batch_size, num_agents, num_steps) tensor of bool mask - encoded_absolute: (batch_size, num_agents, feature_size) tensor of the encoded absolute agent positions - encoded_map: (batch_size, num_objects, map_feature_dim) tensor of encoded map objects - mask_map: (batch_size, num_objects) tensor of bool mask - - Returns: - (batch_size, num_agents, (n_samples), num_steps_future, state_dim) output tensor - """ - - encoded_x = self._agent_encoder_past(x, mask_x) - squeeze_output_sample_dim 
= False - if z_samples.ndim == 3: - batch_size, num_agents, latent_dim = z_samples.shape - num_samples = 1 - z_samples = rearrange(z_samples, "b a l -> b a () l") - squeeze_output_sample_dim = True - else: - batch_size, num_agents, num_samples, latent_dim = z_samples.shape - mask_z = repeat(mask_z, "b a -> (b s) a", s=num_samples) - mask_map = repeat(mask_map, "b o -> (b s) o", s=num_samples) - encoded_x = repeat(encoded_x, "b a l -> (b s) a l", s=num_samples) - encoded_absolute = repeat( - encoded_absolute, "b a l -> (b s) a l", s=num_samples - ) - encoded_map = repeat(encoded_map, "b o l -> (b s) o l", s=num_samples) - - z_samples = rearrange(z_samples, "b a s l -> (b s) a l") - - h = self._combine_z_past(torch.cat([z_samples, encoded_x], dim=-1)) - - h = self._interaction(h, mask_z, encoded_absolute, encoded_map, mask_map) - - h = self._decoder(h, self.num_steps_future) - - if not squeeze_output_sample_dim: - h = rearrange(h, "(b s) a t l -> b a s t l", b=batch_size, s=num_samples) - - return h - - -class CVAEAccelerationDecoder(nn.Module): - """Decoder architecture for conditional variational autoencoder - - Args: - model: decoder neural network that transforms input tensors to an output sequence - """ - - def __init__( - self, - model: nn.Module, - ) -> None: - super().__init__() - self._model = model - self._output_layer = nn.Linear(model.hidden_dim, 2) - - def forward( - self, - z_samples: torch.Tensor, - mask_z: torch.Tensor, - x: torch.Tensor, - mask_x: torch.Tensor, - encoded_absolute: torch.Tensor, - encoded_map: torch.Tensor, - mask_map: torch.Tensor, - offset: torch.Tensor, - ) -> torch.Tensor: - """Forward function that decodes input tensors into an output tensor of size - (batch_size, num_agents, (n_samples), num_steps_future, state_dim=5) - It first predicts accelerations that are doubly integrated to produce the output - state sequence with positions angles and velocities (x, y, theta, vx, vy) or (x, y, vx, vy) or (x, y) - - Args: - z_samples: (batch_size, num_agents, (n_samples), latent_dim) tensor of history - mask_z: (batch_size, num_agents) tensor of bool mask - x: (batch_size, num_agents, num_steps, state_dim) tensor of history for all agents - mask_x: (batch_size, num_agents, num_steps) tensor of bool mask - encoded_absolute: (batch_size, num_agents, feature_size) tensor of the encoded absolute agent positions - encoded_map: (batch_size, num_objects, map_feature_dim) tensor of encoded map objects - mask_map: (batch_size, num_objects) tensor of bool mask - - Returns: - (batch_size, num_agents, (n_samples), num_steps_future, state_dim) output tensor. Sample dimension - does not exist if z_samples is a 2D tensor. 
- """ - - h = self._model( - z_samples, mask_z, x, mask_x, encoded_absolute, encoded_map, mask_map - ) - h = self._output_layer(h) - - dt = self._model.dt - initial_position = x[..., -1:, :2].clone() - # If shape is 5 it should be (x, y, angle, vx, vy) - if offset.shape[-1] == 5: - initial_velocity = offset[..., 3:5].clone().unsqueeze(-2) - # else if shape is 4 it should be (x, y, vx, vy) - elif offset.shape[-1] == 4: - initial_velocity = offset[..., 2:4].clone().unsqueeze(-2) - elif x.shape[-1] == 5: - initial_velocity = x[..., -1:, 3:5].clone() - elif x.shape[-1] == 4: - initial_velocity = x[..., -1:, 2:4].clone() - else: - initial_velocity = (x[..., -1:, :] - x[..., -2:-1, :]) / dt - - output = torch.zeros( - (*h.shape[:-1], self._model.dynamic_state_dim), device=h.device - ) - # There might be a sample dimension in the output tensor, then adapt the shape of initial position and velocity - if output.ndim == 5: - initial_position = initial_position.unsqueeze(-3) - initial_velocity = initial_velocity.unsqueeze(-3) - - if self._model.dynamic_state_dim == 5: - output[..., 3:5] = h.cumsum(-2) * dt - output[..., :2] = (output[..., 3:5].clone() + initial_velocity).cumsum( - -2 - ) * dt + initial_position - output[..., 2] = torch.atan2(output[..., 4].clone(), output[..., 3].clone()) - elif self._model.dynamic_state_dim == 4: - output[..., 2:4] = h.cumsum(-2) * dt - output[..., :2] = (output[..., 2:4].clone() + initial_velocity).cumsum( - -2 - ) * dt + initial_position - else: - velocity = h.cumsum(-2) * dt - output = (velocity.clone() + initial_velocity).cumsum( - -2 - ) * dt + initial_position - return output - - -class CVAEParametrizedDecoder(nn.Module): - """Decoder architecture for conditional variational autoencoder - - Args: - model: decoder neural network that transforms input tensors to an output sequence - """ - - def __init__( - self, - model: nn.Module, - ) -> None: - super().__init__() - self._model = model - self._order = 3 - self._output_layer = nn.Linear( - model.hidden_dim * model.num_steps_future, - 2 * self._order + model.num_steps_future, - ) - - def polynomial(self, x: torch.Tensor, params: torch.Tensor): - """Polynomial function that takes a tensor of shape (batch_size, num_agents, (n_samples), num_steps_future) and - a parameter tensor of shape (batch_size, num_agents, (n_samples), self._order*2) and returns a tensor of shape (batch_size, num_agents, (n_samples), num_steps_future) - """ - h = x.clone() - squeeze = False - if h.ndim == 3: - h = h.unsqueeze(2) - params = params.unsqueeze(2) - squeeze = True - h = repeat( - h, - "batch agents samples sequence -> batch agents samples sequence two order", - order=self._order, - two=2, - ).cumprod(-1) - h = h * params.view(*params.shape[:-1], 1, 2, self._order) - h = h.sum(-1) - if squeeze: - h = h.squeeze(2) - return h - - def dpolynomial(self, x: torch.Tensor, params: torch.Tensor): - """Derivative of the polynomial function that takes a tensor of shape (batch_size, num_agents, (n_samples), num_steps_future) and - a parameter tensor of shape (batch_size, num_agents, (n_samples), self._order*2) and returns a tensor of shape (batch_size, num_agents, (n_samples), num_steps_future) - """ - h = x.clone() - squeeze = False - if h.ndim == 3: - h = h.unsqueeze(2) - params = params.unsqueeze(2) - squeeze = True - h = repeat( - h, - "batch agents samples sequence -> batch agents samples sequence two order", - order=self._order - 1, - two=2, - ) - h = torch.cat((torch.ones_like(h[..., :1]), h.cumprod(-1)), -1) - h = h * 
params.view(*params.shape[:-1], 1, 2, self._order) - h = h * torch.arange(self._order).view(*([1] * params.ndim), -1).to(x.device) - h = h.sum(-1) - if squeeze: - h = h.squeeze(2) - return h - - def forward( - self, - z_samples: torch.Tensor, - mask_z: torch.Tensor, - x: torch.Tensor, - mask_x: torch.Tensor, - encoded_absolute: torch.Tensor, - encoded_map: torch.Tensor, - mask_map: torch.Tensor, - offset: torch.Tensor, - ) -> torch.Tensor: - """Forward function that decodes input tensors into an output tensor of size - (batch_size, num_agents, (n_samples), num_steps_future, state_dim=5) - It first predicts accelerations that are doubly integrated to produce the output - state sequence with positions angles and velocities (x, y, theta, vx, vy) or (x, y, vx, vy) or (x, y) - - Args: - z_samples: (batch_size, num_agents, (n_samples), latent_dim) tensor of history - mask_z: (batch_size, num_agents) tensor of bool mask - x: (batch_size, num_agents, num_steps, state_dim) tensor of history for all agents - mask_x: (batch_size, num_agents, num_steps) tensor of bool mask - encoded_absolute: (batch_size, num_agents, feature_size) tensor of the encoded absolute agent positions - encoded_map: (batch_size, num_objects, map_feature_dim) tensor of encoded map objects - mask_map: (batch_size, num_objects) tensor of bool mask - - Returns: - (batch_size, num_agents, (n_samples), num_steps_future, state_dim) output tensor. Sample dimension - does not exist if z_samples is a 2D tensor. - """ - - squeeze_output_sample_dim = z_samples.ndim == 3 - batch_size = z_samples.shape[0] - - h = self._model( - z_samples, mask_z, x, mask_x, encoded_absolute, encoded_map, mask_map - ) - if squeeze_output_sample_dim: - h = rearrange( - h, "batch agents sequence features -> batch agents (sequence features)" - ) - else: - h = rearrange( - h, - "(batch samples) agents sequence features -> batch agents samples (sequence features)", - batch=batch_size, - ) - h = self._output_layer(h) - - output = torch.zeros( - ( - *h.shape[:-1], - self._model.num_steps_future, - self._model.dynamic_state_dim, - ), - device=h.device, - ) - params = h[..., : 2 * self._order] - dldt = torch.relu(h[..., 2 * self._order :]) - distance = dldt.cumsum(-2) - output[..., :2] = self.polynomial(distance, params) - if self._model.dynamic_state_dim == 5: - output[..., 3:5] = dldt * self.dpolynomial(distance, params) - output[..., 2] = torch.atan2(output[..., 4].clone(), output[..., 3].clone()) - elif self._model.dynamic_state_dim == 4: - output[..., 2:4] = dldt * self.dpolynomial(distance, params) - - return output diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/version.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/version.py deleted file mode 100644 index c7c8bb6ff4f8ed84e466a66cac6b953b901626ea..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/version.py +++ /dev/null @@ -1,739 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2012-2017 The Python Software Foundation. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -""" -Implementation of a flexible versioning scheme providing support for PEP-440, -setuptools-compatible and semantic versioning. 
-""" - -import logging -import re - -from .compat import string_types -from .util import parse_requirement - -__all__ = ['NormalizedVersion', 'NormalizedMatcher', - 'LegacyVersion', 'LegacyMatcher', - 'SemanticVersion', 'SemanticMatcher', - 'UnsupportedVersionError', 'get_scheme'] - -logger = logging.getLogger(__name__) - - -class UnsupportedVersionError(ValueError): - """This is an unsupported version.""" - pass - - -class Version(object): - def __init__(self, s): - self._string = s = s.strip() - self._parts = parts = self.parse(s) - assert isinstance(parts, tuple) - assert len(parts) > 0 - - def parse(self, s): - raise NotImplementedError('please implement in a subclass') - - def _check_compatible(self, other): - if type(self) != type(other): - raise TypeError('cannot compare %r and %r' % (self, other)) - - def __eq__(self, other): - self._check_compatible(other) - return self._parts == other._parts - - def __ne__(self, other): - return not self.__eq__(other) - - def __lt__(self, other): - self._check_compatible(other) - return self._parts < other._parts - - def __gt__(self, other): - return not (self.__lt__(other) or self.__eq__(other)) - - def __le__(self, other): - return self.__lt__(other) or self.__eq__(other) - - def __ge__(self, other): - return self.__gt__(other) or self.__eq__(other) - - # See http://docs.python.org/reference/datamodel#object.__hash__ - def __hash__(self): - return hash(self._parts) - - def __repr__(self): - return "%s('%s')" % (self.__class__.__name__, self._string) - - def __str__(self): - return self._string - - @property - def is_prerelease(self): - raise NotImplementedError('Please implement in subclasses.') - - -class Matcher(object): - version_class = None - - # value is either a callable or the name of a method - _operators = { - '<': lambda v, c, p: v < c, - '>': lambda v, c, p: v > c, - '<=': lambda v, c, p: v == c or v < c, - '>=': lambda v, c, p: v == c or v > c, - '==': lambda v, c, p: v == c, - '===': lambda v, c, p: v == c, - # by default, compatible => >=. - '~=': lambda v, c, p: v == c or v > c, - '!=': lambda v, c, p: v != c, - } - - # this is a method only to support alternative implementations - # via overriding - def parse_requirement(self, s): - return parse_requirement(s) - - def __init__(self, s): - if self.version_class is None: - raise ValueError('Please specify a version class') - self._string = s = s.strip() - r = self.parse_requirement(s) - if not r: - raise ValueError('Not valid: %r' % s) - self.name = r.name - self.key = self.name.lower() # for case-insensitive comparisons - clist = [] - if r.constraints: - # import pdb; pdb.set_trace() - for op, s in r.constraints: - if s.endswith('.*'): - if op not in ('==', '!='): - raise ValueError('\'.*\' not allowed for ' - '%r constraints' % op) - # Could be a partial version (e.g. for '2.*') which - # won't parse as a version, so keep it as a string - vn, prefix = s[:-2], True - # Just to check that vn is a valid version - self.version_class(vn) - else: - # Should parse as a version, so we can create an - # instance for the comparison - vn, prefix = self.version_class(s), False - clist.append((op, vn, prefix)) - self._parts = tuple(clist) - - def match(self, version): - """ - Check if the provided version matches the constraints. - - :param version: The version to match against this instance. - :type version: String or :class:`Version` instance. 
- """ - if isinstance(version, string_types): - version = self.version_class(version) - for operator, constraint, prefix in self._parts: - f = self._operators.get(operator) - if isinstance(f, string_types): - f = getattr(self, f) - if not f: - msg = ('%r not implemented ' - 'for %s' % (operator, self.__class__.__name__)) - raise NotImplementedError(msg) - if not f(version, constraint, prefix): - return False - return True - - @property - def exact_version(self): - result = None - if len(self._parts) == 1 and self._parts[0][0] in ('==', '==='): - result = self._parts[0][1] - return result - - def _check_compatible(self, other): - if type(self) != type(other) or self.name != other.name: - raise TypeError('cannot compare %s and %s' % (self, other)) - - def __eq__(self, other): - self._check_compatible(other) - return self.key == other.key and self._parts == other._parts - - def __ne__(self, other): - return not self.__eq__(other) - - # See http://docs.python.org/reference/datamodel#object.__hash__ - def __hash__(self): - return hash(self.key) + hash(self._parts) - - def __repr__(self): - return "%s(%r)" % (self.__class__.__name__, self._string) - - def __str__(self): - return self._string - - -PEP440_VERSION_RE = re.compile(r'^v?(\d+!)?(\d+(\.\d+)*)((a|b|c|rc)(\d+))?' - r'(\.(post)(\d+))?(\.(dev)(\d+))?' - r'(\+([a-zA-Z\d]+(\.[a-zA-Z\d]+)?))?$') - - -def _pep_440_key(s): - s = s.strip() - m = PEP440_VERSION_RE.match(s) - if not m: - raise UnsupportedVersionError('Not a valid version: %s' % s) - groups = m.groups() - nums = tuple(int(v) for v in groups[1].split('.')) - while len(nums) > 1 and nums[-1] == 0: - nums = nums[:-1] - - if not groups[0]: - epoch = 0 - else: - epoch = int(groups[0][:-1]) - pre = groups[4:6] - post = groups[7:9] - dev = groups[10:12] - local = groups[13] - if pre == (None, None): - pre = () - else: - pre = pre[0], int(pre[1]) - if post == (None, None): - post = () - else: - post = post[0], int(post[1]) - if dev == (None, None): - dev = () - else: - dev = dev[0], int(dev[1]) - if local is None: - local = () - else: - parts = [] - for part in local.split('.'): - # to ensure that numeric compares as > lexicographic, avoid - # comparing them directly, but encode a tuple which ensures - # correct sorting - if part.isdigit(): - part = (1, int(part)) - else: - part = (0, part) - parts.append(part) - local = tuple(parts) - if not pre: - # either before pre-release, or final release and after - if not post and dev: - # before pre-release - pre = ('a', -1) # to sort before a0 - else: - pre = ('z',) # to sort after all pre-releases - # now look at the state of post and dev. - if not post: - post = ('_',) # sort before 'a' - if not dev: - dev = ('final',) - - #print('%s -> %s' % (s, m.groups())) - return epoch, nums, pre, post, dev, local - - -_normalized_key = _pep_440_key - - -class NormalizedVersion(Version): - """A rational version. - - Good: - 1.2 # equivalent to "1.2.0" - 1.2.0 - 1.2a1 - 1.2.3a2 - 1.2.3b1 - 1.2.3c1 - 1.2.3.4 - TODO: fill this out - - Bad: - 1 # minimum two numbers - 1.2a # release level must have a release serial - 1.2.3b - """ - def parse(self, s): - result = _normalized_key(s) - # _normalized_key loses trailing zeroes in the release - # clause, since that's needed to ensure that X.Y == X.Y.0 == X.Y.0.0 - # However, PEP 440 prefix matching needs it: for example, - # (~= 1.4.5.0) matches differently to (~= 1.4.5.0.0). 
- m = PEP440_VERSION_RE.match(s) # must succeed - groups = m.groups() - self._release_clause = tuple(int(v) for v in groups[1].split('.')) - return result - - PREREL_TAGS = set(['a', 'b', 'c', 'rc', 'dev']) - - @property - def is_prerelease(self): - return any(t[0] in self.PREREL_TAGS for t in self._parts if t) - - -def _match_prefix(x, y): - x = str(x) - y = str(y) - if x == y: - return True - if not x.startswith(y): - return False - n = len(y) - return x[n] == '.' - - -class NormalizedMatcher(Matcher): - version_class = NormalizedVersion - - # value is either a callable or the name of a method - _operators = { - '~=': '_match_compatible', - '<': '_match_lt', - '>': '_match_gt', - '<=': '_match_le', - '>=': '_match_ge', - '==': '_match_eq', - '===': '_match_arbitrary', - '!=': '_match_ne', - } - - def _adjust_local(self, version, constraint, prefix): - if prefix: - strip_local = '+' not in constraint and version._parts[-1] - else: - # both constraint and version are - # NormalizedVersion instances. - # If constraint does not have a local component, - # ensure the version doesn't, either. - strip_local = not constraint._parts[-1] and version._parts[-1] - if strip_local: - s = version._string.split('+', 1)[0] - version = self.version_class(s) - return version, constraint - - def _match_lt(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - if version >= constraint: - return False - release_clause = constraint._release_clause - pfx = '.'.join([str(i) for i in release_clause]) - return not _match_prefix(version, pfx) - - def _match_gt(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - if version <= constraint: - return False - release_clause = constraint._release_clause - pfx = '.'.join([str(i) for i in release_clause]) - return not _match_prefix(version, pfx) - - def _match_le(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - return version <= constraint - - def _match_ge(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - return version >= constraint - - def _match_eq(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - if not prefix: - result = (version == constraint) - else: - result = _match_prefix(version, constraint) - return result - - def _match_arbitrary(self, version, constraint, prefix): - return str(version) == str(constraint) - - def _match_ne(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - if not prefix: - result = (version != constraint) - else: - result = not _match_prefix(version, constraint) - return result - - def _match_compatible(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - if version == constraint: - return True - if version < constraint: - return False -# if not prefix: -# return True - release_clause = constraint._release_clause - if len(release_clause) > 1: - release_clause = release_clause[:-1] - pfx = '.'.join([str(i) for i in release_clause]) - return _match_prefix(version, pfx) - -_REPLACEMENTS = ( - (re.compile('[.+-]$'), ''), # remove trailing puncts - (re.compile(r'^[.](\d)'), r'0.\1'), # .N -> 0.N at start - (re.compile('^[.-]'), ''), # remove leading puncts - (re.compile(r'^\((.*)\)$'), r'\1'), # remove parentheses - 
(re.compile(r'^v(ersion)?\s*(\d+)'), r'\2'), # remove leading v(ersion) - (re.compile(r'^r(ev)?\s*(\d+)'), r'\2'), # remove leading v(ersion) - (re.compile('[.]{2,}'), '.'), # multiple runs of '.' - (re.compile(r'\b(alfa|apha)\b'), 'alpha'), # misspelt alpha - (re.compile(r'\b(pre-alpha|prealpha)\b'), - 'pre.alpha'), # standardise - (re.compile(r'\(beta\)$'), 'beta'), # remove parentheses -) - -_SUFFIX_REPLACEMENTS = ( - (re.compile('^[:~._+-]+'), ''), # remove leading puncts - (re.compile('[,*")([\\]]'), ''), # remove unwanted chars - (re.compile('[~:+_ -]'), '.'), # replace illegal chars - (re.compile('[.]{2,}'), '.'), # multiple runs of '.' - (re.compile(r'\.$'), ''), # trailing '.' -) - -_NUMERIC_PREFIX = re.compile(r'(\d+(\.\d+)*)') - - -def _suggest_semantic_version(s): - """ - Try to suggest a semantic form for a version for which - _suggest_normalized_version couldn't come up with anything. - """ - result = s.strip().lower() - for pat, repl in _REPLACEMENTS: - result = pat.sub(repl, result) - if not result: - result = '0.0.0' - - # Now look for numeric prefix, and separate it out from - # the rest. - #import pdb; pdb.set_trace() - m = _NUMERIC_PREFIX.match(result) - if not m: - prefix = '0.0.0' - suffix = result - else: - prefix = m.groups()[0].split('.') - prefix = [int(i) for i in prefix] - while len(prefix) < 3: - prefix.append(0) - if len(prefix) == 3: - suffix = result[m.end():] - else: - suffix = '.'.join([str(i) for i in prefix[3:]]) + result[m.end():] - prefix = prefix[:3] - prefix = '.'.join([str(i) for i in prefix]) - suffix = suffix.strip() - if suffix: - #import pdb; pdb.set_trace() - # massage the suffix. - for pat, repl in _SUFFIX_REPLACEMENTS: - suffix = pat.sub(repl, suffix) - - if not suffix: - result = prefix - else: - sep = '-' if 'dev' in suffix else '+' - result = prefix + sep + suffix - if not is_semver(result): - result = None - return result - - -def _suggest_normalized_version(s): - """Suggest a normalized version close to the given version string. - - If you have a version string that isn't rational (i.e. NormalizedVersion - doesn't like it) then you might be able to get an equivalent (or close) - rational version from this function. - - This does a number of simple normalizations to the given string, based - on observation of versions currently in use on PyPI. Given a dump of - those version during PyCon 2009, 4287 of them: - - 2312 (53.93%) match NormalizedVersion without change - with the automatic suggestion - - 3474 (81.04%) match when using this suggestion method - - @param s {str} An irrational version string. - @returns A rational version string, or None, if couldn't determine one. 
- """ - try: - _normalized_key(s) - return s # already rational - except UnsupportedVersionError: - pass - - rs = s.lower() - - # part of this could use maketrans - for orig, repl in (('-alpha', 'a'), ('-beta', 'b'), ('alpha', 'a'), - ('beta', 'b'), ('rc', 'c'), ('-final', ''), - ('-pre', 'c'), - ('-release', ''), ('.release', ''), ('-stable', ''), - ('+', '.'), ('_', '.'), (' ', ''), ('.final', ''), - ('final', '')): - rs = rs.replace(orig, repl) - - # if something ends with dev or pre, we add a 0 - rs = re.sub(r"pre$", r"pre0", rs) - rs = re.sub(r"dev$", r"dev0", rs) - - # if we have something like "b-2" or "a.2" at the end of the - # version, that is probably beta, alpha, etc - # let's remove the dash or dot - rs = re.sub(r"([abc]|rc)[\-\.](\d+)$", r"\1\2", rs) - - # 1.0-dev-r371 -> 1.0.dev371 - # 0.1-dev-r79 -> 0.1.dev79 - rs = re.sub(r"[\-\.](dev)[\-\.]?r?(\d+)$", r".\1\2", rs) - - # Clean: 2.0.a.3, 2.0.b1, 0.9.0~c1 - rs = re.sub(r"[.~]?([abc])\.?", r"\1", rs) - - # Clean: v0.3, v1.0 - if rs.startswith('v'): - rs = rs[1:] - - # Clean leading '0's on numbers. - #TODO: unintended side-effect on, e.g., "2003.05.09" - # PyPI stats: 77 (~2%) better - rs = re.sub(r"\b0+(\d+)(?!\d)", r"\1", rs) - - # Clean a/b/c with no version. E.g. "1.0a" -> "1.0a0". Setuptools infers - # zero. - # PyPI stats: 245 (7.56%) better - rs = re.sub(r"(\d+[abc])$", r"\g<1>0", rs) - - # the 'dev-rNNN' tag is a dev tag - rs = re.sub(r"\.?(dev-r|dev\.r)\.?(\d+)$", r".dev\2", rs) - - # clean the - when used as a pre delimiter - rs = re.sub(r"-(a|b|c)(\d+)$", r"\1\2", rs) - - # a terminal "dev" or "devel" can be changed into ".dev0" - rs = re.sub(r"[\.\-](dev|devel)$", r".dev0", rs) - - # a terminal "dev" can be changed into ".dev0" - rs = re.sub(r"(?![\.\-])dev$", r".dev0", rs) - - # a terminal "final" or "stable" can be removed - rs = re.sub(r"(final|stable)$", "", rs) - - # The 'r' and the '-' tags are post release tags - # 0.4a1.r10 -> 0.4a1.post10 - # 0.9.33-17222 -> 0.9.33.post17222 - # 0.9.33-r17222 -> 0.9.33.post17222 - rs = re.sub(r"\.?(r|-|-r)\.?(\d+)$", r".post\2", rs) - - # Clean 'r' instead of 'dev' usage: - # 0.9.33+r17222 -> 0.9.33.dev17222 - # 1.0dev123 -> 1.0.dev123 - # 1.0.git123 -> 1.0.dev123 - # 1.0.bzr123 -> 1.0.dev123 - # 0.1a0dev.123 -> 0.1a0.dev123 - # PyPI stats: ~150 (~4%) better - rs = re.sub(r"\.?(dev|git|bzr)\.?(\d+)$", r".dev\2", rs) - - # Clean '.pre' (normalized from '-pre' above) instead of 'c' usage: - # 0.2.pre1 -> 0.2c1 - # 0.2-c1 -> 0.2c1 - # 1.0preview123 -> 1.0c123 - # PyPI stats: ~21 (0.62%) better - rs = re.sub(r"\.?(pre|preview|-c)(\d+)$", r"c\g<2>", rs) - - # Tcl/Tk uses "px" for their post release markers - rs = re.sub(r"p(\d+)$", r".post\1", rs) - - try: - _normalized_key(rs) - except UnsupportedVersionError: - rs = None - return rs - -# -# Legacy version processing (distribute-compatible) -# - -_VERSION_PART = re.compile(r'([a-z]+|\d+|[\.-])', re.I) -_VERSION_REPLACE = { - 'pre': 'c', - 'preview': 'c', - '-': 'final-', - 'rc': 'c', - 'dev': '@', - '': None, - '.': None, -} - - -def _legacy_key(s): - def get_parts(s): - result = [] - for p in _VERSION_PART.split(s.lower()): - p = _VERSION_REPLACE.get(p, p) - if p: - if '0' <= p[:1] <= '9': - p = p.zfill(8) - else: - p = '*' + p - result.append(p) - result.append('*final') - return result - - result = [] - for p in get_parts(s): - if p.startswith('*'): - if p < '*final': - while result and result[-1] == '*final-': - result.pop() - while result and result[-1] == '00000000': - result.pop() - result.append(p) - return 
tuple(result) - - -class LegacyVersion(Version): - def parse(self, s): - return _legacy_key(s) - - @property - def is_prerelease(self): - result = False - for x in self._parts: - if (isinstance(x, string_types) and x.startswith('*') and - x < '*final'): - result = True - break - return result - - -class LegacyMatcher(Matcher): - version_class = LegacyVersion - - _operators = dict(Matcher._operators) - _operators['~='] = '_match_compatible' - - numeric_re = re.compile(r'^(\d+(\.\d+)*)') - - def _match_compatible(self, version, constraint, prefix): - if version < constraint: - return False - m = self.numeric_re.match(str(constraint)) - if not m: - logger.warning('Cannot compute compatible match for version %s ' - ' and constraint %s', version, constraint) - return True - s = m.groups()[0] - if '.' in s: - s = s.rsplit('.', 1)[0] - return _match_prefix(version, s) - -# -# Semantic versioning -# - -_SEMVER_RE = re.compile(r'^(\d+)\.(\d+)\.(\d+)' - r'(-[a-z0-9]+(\.[a-z0-9-]+)*)?' - r'(\+[a-z0-9]+(\.[a-z0-9-]+)*)?$', re.I) - - -def is_semver(s): - return _SEMVER_RE.match(s) - - -def _semantic_key(s): - def make_tuple(s, absent): - if s is None: - result = (absent,) - else: - parts = s[1:].split('.') - # We can't compare ints and strings on Python 3, so fudge it - # by zero-filling numeric values so simulate a numeric comparison - result = tuple([p.zfill(8) if p.isdigit() else p for p in parts]) - return result - - m = is_semver(s) - if not m: - raise UnsupportedVersionError(s) - groups = m.groups() - major, minor, patch = [int(i) for i in groups[:3]] - # choose the '|' and '*' so that versions sort correctly - pre, build = make_tuple(groups[3], '|'), make_tuple(groups[5], '*') - return (major, minor, patch), pre, build - - -class SemanticVersion(Version): - def parse(self, s): - return _semantic_key(s) - - @property - def is_prerelease(self): - return self._parts[1][0] != '|' - - -class SemanticMatcher(Matcher): - version_class = SemanticVersion - - -class VersionScheme(object): - def __init__(self, key, matcher, suggester=None): - self.key = key - self.matcher = matcher - self.suggester = suggester - - def is_valid_version(self, s): - try: - self.matcher.version_class(s) - result = True - except UnsupportedVersionError: - result = False - return result - - def is_valid_matcher(self, s): - try: - self.matcher(s) - result = True - except UnsupportedVersionError: - result = False - return result - - def is_valid_constraint_list(self, s): - """ - Used for processing some metadata fields - """ - # See issue #140. Be tolerant of a single trailing comma. 
- if s.endswith(','): - s = s[:-1] - return self.is_valid_matcher('dummy_name (%s)' % s) - - def suggest(self, s): - if self.suggester is None: - result = None - else: - result = self.suggester(s) - return result - -_SCHEMES = { - 'normalized': VersionScheme(_normalized_key, NormalizedMatcher, - _suggest_normalized_version), - 'legacy': VersionScheme(_legacy_key, LegacyMatcher, lambda self, s: s), - 'semantic': VersionScheme(_semantic_key, SemanticMatcher, - _suggest_semantic_version), -} - -_SCHEMES['default'] = _SCHEMES['normalized'] - - -def get_scheme(name): - if name not in _SCHEMES: - raise ValueError('unknown scheme name: %r' % name) - return _SCHEMES[name] diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/warnings.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/warnings.py deleted file mode 100644 index 4ea782e5099d0cd36416d1031d4a0fa199b24f60..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/warnings.py +++ /dev/null @@ -1,104 +0,0 @@ -"""Provide basic warnings used by setuptools modules. - -Using custom classes (other than ``UserWarning``) allow users to set -``PYTHONWARNINGS`` filters to run tests and prepare for upcoming changes in -setuptools. -""" - -import os -import warnings -from datetime import date -from inspect import cleandoc -from textwrap import indent -from typing import Optional, Tuple - -_DueDate = Tuple[int, int, int] # time tuple -_INDENT = 8 * " " -_TEMPLATE = f"""{80 * '*'}\n{{details}}\n{80 * '*'}""" - - -class SetuptoolsWarning(UserWarning): - """Base class in ``setuptools`` warning hierarchy.""" - - @classmethod - def emit( - cls, - summary: Optional[str] = None, - details: Optional[str] = None, - due_date: Optional[_DueDate] = None, - see_docs: Optional[str] = None, - see_url: Optional[str] = None, - stacklevel: int = 2, - **kwargs - ): - """Private: reserved for ``setuptools`` internal use only""" - # Default values: - summary_ = summary or getattr(cls, "_SUMMARY", None) or "" - details_ = details or getattr(cls, "_DETAILS", None) or "" - due_date = due_date or getattr(cls, "_DUE_DATE", None) - docs_ref = see_docs or getattr(cls, "_SEE_DOCS", None) - docs_url = docs_ref and f"https://setuptools.pypa.io/en/latest/{docs_ref}" - see_url = see_url or getattr(cls, "_SEE_URL", None) - due = date(*due_date) if due_date else None - - text = cls._format(summary_, details_, due, see_url or docs_url, kwargs) - if due and due < date.today() and _should_enforce(): - raise cls(text) - warnings.warn(text, cls, stacklevel=stacklevel + 1) - - @classmethod - def _format( - cls, - summary: str, - details: str, - due_date: Optional[date] = None, - see_url: Optional[str] = None, - format_args: Optional[dict] = None, - ): - """Private: reserved for ``setuptools`` internal use only""" - today = date.today() - summary = cleandoc(summary).format_map(format_args or {}) - possible_parts = [ - cleandoc(details).format_map(format_args or {}), - ( - f"\nBy {due_date:%Y-%b-%d}, you need to update your project and remove " - "deprecated calls\nor your builds will no longer be supported." - if due_date and due_date > today else None - ), - ( - "\nThis deprecation is overdue, please update your project and remove " - "deprecated\ncalls to avoid build errors in the future." - if due_date and due_date < today else None - ), - (f"\nSee {see_url} for details." 
if see_url else None) - - ] - parts = [x for x in possible_parts if x] - if parts: - body = indent(_TEMPLATE.format(details="\n".join(parts)), _INDENT) - return "\n".join([summary, "!!\n", body, "\n!!"]) - return summary - - -class InformationOnly(SetuptoolsWarning): - """Currently there is no clear way of displaying messages to the users - that use the setuptools backend directly via ``pip``. - The only thing that might work is a warning, although it is not the - most appropriate tool for the job... - - See pypa/packaging-problems#558. - """ - - -class SetuptoolsDeprecationWarning(SetuptoolsWarning): - """ - Base class for warning deprecations in ``setuptools`` - - This class is not derived from ``DeprecationWarning``, and as such is - visible by default. - """ - - -def _should_enforce(): - enforce = os.getenv("SETUPTOOLS_ENFORCE_DEPRECATION", "false").lower() - return enforce in ("true", "on", "ok", "1") diff --git a/spaces/TangibleAI/mathtext/mathtext/plot_calls.py b/spaces/TangibleAI/mathtext/mathtext/plot_calls.py deleted file mode 100644 index fec4c2ec565160b27a0670c8933604ea2145d0d2..0000000000000000000000000000000000000000 --- a/spaces/TangibleAI/mathtext/mathtext/plot_calls.py +++ /dev/null @@ -1,116 +0,0 @@ -import math -from datetime import datetime - -import matplotlib.pyplot as plt -import pandas as pd - -pd.set_option('display.max_columns', None) -pd.set_option('display.max_rows', None) - -log_files = [ - 'call_history_sentiment_1_bash.csv', - 'call_history_text2int_1_bash.csv', -] - -for log_file in log_files: - path_ = f"./data/{log_file}" - df = pd.read_csv(filepath_or_buffer=path_, sep=";") - df["finished_ts"] = df["finished"].apply( - lambda x: datetime.strptime(x, "%Y-%m-%d %H:%M:%S.%f").timestamp()) - df["started_ts"] = df["started"].apply( - lambda x: datetime.strptime(x, "%Y-%m-%d %H:%M:%S.%f").timestamp()) - df["elapsed"] = df["finished_ts"] - df["started_ts"] - - df["success"] = df["outputs"].apply(lambda x: 0 if "Time-out" in x else 1) - - student_numbers = sorted(df['active_students'].unique()) - - bins_dict = dict() # bins size for each group - min_finished_dict = dict() # zero time for each group - - for student_number in student_numbers: - # for each student group calculates bins size and zero time - min_finished = df["finished_ts"][df["active_students"] == student_number].min() - max_finished = df["finished_ts"][df["active_students"] == student_number].max() - bins = math.ceil(max_finished - min_finished) - bins_dict.update({student_number: bins}) - min_finished_dict.update({student_number: min_finished}) - print(f"student number: {student_number}") - print(f"min finished: {min_finished}") - print(f"max finished: {max_finished}") - print(f"bins finished seconds: {bins}, minutes: {bins / 60}") - - df["time_line"] = None - for student_number in student_numbers: - # calculates time-line for each student group - df["time_line"] = df.apply( - lambda x: x["finished_ts"] - min_finished_dict[student_number] - if x["active_students"] == student_number - else x["time_line"], - axis=1 - ) - - # creates a '.csv' from the dataframe - df.to_csv(f"./data/processed_{log_file}", index=False, sep=";") - - result = df.groupby(['active_students', 'success']) \ - .agg({ - 'elapsed': ['mean', 'median', 'min', 'max'], - 'success': ['count'], - }) - - print(f"Results for {log_file}") - print(result, "\n") - - title = None - if "sentiment" in log_file.lower(): - title = "API result for 'sentiment-analysis' endpoint" - elif "text2int" in log_file.lower(): - title = "API result for 
'text2int' endpoint" - - for student_number in student_numbers: - # Prints percentage of the successful and failed calls - try: - failed_calls = result.loc[(student_number, 0), 'success'][0] - except: - failed_calls = 0 - successful_calls = result.loc[(student_number, 1), 'success'][0] - percentage = (successful_calls / (failed_calls + successful_calls)) * 100 - print(f"Percentage of successful API calls for {student_number} students: {percentage.__round__(2)}") - - rows = len(student_numbers) - - fig, axs = plt.subplots(rows, 2) # (rows, columns) - - for index, student_number in enumerate(student_numbers): - # creates a boxplot for each test group - data = df[df["active_students"] == student_number] - axs[index][0].boxplot(x=data["elapsed"]) # axs[row][column] - # axs[index][0].set_title(f'Boxplot for {student_number} students') - axs[index][0].set_xlabel(f'student number {student_number}') - axs[index][0].set_ylabel('Elapsed time (s)') - - # creates a histogram for each test group - axs[index][1].hist(x=data["elapsed"], bins=25) # axs[row][column] - # axs[index][1].set_title(f'Histogram for {student_number} students') - axs[index][1].set_xlabel('seconds') - axs[index][1].set_ylabel('Count of API calls') - - fig.suptitle(title, fontsize=16) - - fig, axs = plt.subplots(rows, 1) # (rows, columns) - - for index, student_number in enumerate(student_numbers): - # creates a histogram and shows API calls on a timeline for each test group - data = df[df["active_students"] == student_number] - - print(data["time_line"].head(10)) - - axs[index].hist(x=data["time_line"], bins=bins_dict[student_number]) # axs[row][column] - # axs[index][1].set_title(f'Histogram for {student_number} students') - axs[index].set_xlabel('seconds') - axs[index].set_ylabel('Count of API calls') - - fig.suptitle(title, fontsize=16) - -plt.show() diff --git a/spaces/ThirdEyeData/TagDiciphering/keypoint_ops.py b/spaces/ThirdEyeData/TagDiciphering/keypoint_ops.py deleted file mode 100644 index cfcaa529b4d457ecee32496327706a64aa303185..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/TagDiciphering/keypoint_ops.py +++ /dev/null @@ -1,366 +0,0 @@ -# Copyright 2017 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Keypoint operations. - -Keypoints are represented as tensors of shape [num_instances, num_keypoints, 2], -where the last dimension holds rank 2 tensors of the form [y, x] representing -the coordinates of the keypoint. -""" -import numpy as np -import tensorflow as tf - - -def scale(keypoints, y_scale, x_scale, scope=None): - """Scales keypoint coordinates in x and y dimensions. - - Args: - keypoints: a tensor of shape [num_instances, num_keypoints, 2] - y_scale: (float) scalar tensor - x_scale: (float) scalar tensor - scope: name scope. 
- - Returns: - new_keypoints: a tensor of shape [num_instances, num_keypoints, 2] - """ - with tf.name_scope(scope, 'Scale'): - y_scale = tf.cast(y_scale, tf.float32) - x_scale = tf.cast(x_scale, tf.float32) - new_keypoints = keypoints * [[[y_scale, x_scale]]] - return new_keypoints - - -def clip_to_window(keypoints, window, scope=None): - """Clips keypoints to a window. - - This op clips any input keypoints to a window. - - Args: - keypoints: a tensor of shape [num_instances, num_keypoints, 2] - window: a tensor of shape [4] representing the [y_min, x_min, y_max, x_max] - window to which the op should clip the keypoints. - scope: name scope. - - Returns: - new_keypoints: a tensor of shape [num_instances, num_keypoints, 2] - """ - with tf.name_scope(scope, 'ClipToWindow'): - y, x = tf.split(value=keypoints, num_or_size_splits=2, axis=2) - win_y_min, win_x_min, win_y_max, win_x_max = tf.unstack(window) - y = tf.maximum(tf.minimum(y, win_y_max), win_y_min) - x = tf.maximum(tf.minimum(x, win_x_max), win_x_min) - new_keypoints = tf.concat([y, x], 2) - return new_keypoints - - -def prune_outside_window(keypoints, window, scope=None): - """Prunes keypoints that fall outside a given window. - - This function replaces keypoints that fall outside the given window with nan. - See also clip_to_window which clips any keypoints that fall outside the given - window. - - Args: - keypoints: a tensor of shape [num_instances, num_keypoints, 2] - window: a tensor of shape [4] representing the [y_min, x_min, y_max, x_max] - window outside of which the op should prune the keypoints. - scope: name scope. - - Returns: - new_keypoints: a tensor of shape [num_instances, num_keypoints, 2] - """ - with tf.name_scope(scope, 'PruneOutsideWindow'): - y, x = tf.split(value=keypoints, num_or_size_splits=2, axis=2) - win_y_min, win_x_min, win_y_max, win_x_max = tf.unstack(window) - - valid_indices = tf.logical_and( - tf.logical_and(y >= win_y_min, y <= win_y_max), - tf.logical_and(x >= win_x_min, x <= win_x_max)) - - new_y = tf.where(valid_indices, y, np.nan * tf.ones_like(y)) - new_x = tf.where(valid_indices, x, np.nan * tf.ones_like(x)) - new_keypoints = tf.concat([new_y, new_x], 2) - - return new_keypoints - - -def change_coordinate_frame(keypoints, window, scope=None): - """Changes coordinate frame of the keypoints to be relative to window's frame. - - Given a window of the form [y_min, x_min, y_max, x_max], changes keypoint - coordinates from keypoints of shape [num_instances, num_keypoints, 2] - to be relative to this window. - - An example use case is data augmentation: where we are given groundtruth - keypoints and would like to randomly crop the image to some window. In this - case we need to change the coordinate frame of each groundtruth keypoint to be - relative to this new window. - - Args: - keypoints: a tensor of shape [num_instances, num_keypoints, 2] - window: a tensor of shape [4] representing the [y_min, x_min, y_max, x_max] - window we should change the coordinate frame to. - scope: name scope. - - Returns: - new_keypoints: a tensor of shape [num_instances, num_keypoints, 2] - """ - with tf.name_scope(scope, 'ChangeCoordinateFrame'): - win_height = window[2] - window[0] - win_width = window[3] - window[1] - new_keypoints = scale(keypoints - [window[0], window[1]], 1.0 / win_height, - 1.0 / win_width) - return new_keypoints - - -def keypoints_to_enclosing_bounding_boxes(keypoints): - """Creates enclosing bounding boxes from keypoints. 
- - Args: - keypoints: a [num_instances, num_keypoints, 2] float32 tensor with keypoints - in [y, x] format. - - Returns: - A [num_instances, 4] float32 tensor that tightly covers all the keypoints - for each instance. - """ - ymin = tf.math.reduce_min(keypoints[:, :, 0], axis=1) - xmin = tf.math.reduce_min(keypoints[:, :, 1], axis=1) - ymax = tf.math.reduce_max(keypoints[:, :, 0], axis=1) - xmax = tf.math.reduce_max(keypoints[:, :, 1], axis=1) - return tf.stack([ymin, xmin, ymax, xmax], axis=1) - - -def to_normalized_coordinates(keypoints, height, width, - check_range=True, scope=None): - """Converts absolute keypoint coordinates to normalized coordinates in [0, 1]. - - Usually one uses the dynamic shape of the image or conv-layer tensor: - keypoints = keypoint_ops.to_normalized_coordinates(keypoints, - tf.shape(images)[1], - tf.shape(images)[2]), - - This function raises an assertion failed error at graph execution time when - the maximum coordinate is smaller than 1.01 (which means that coordinates are - already normalized). The value 1.01 is to deal with small rounding errors. - - Args: - keypoints: A tensor of shape [num_instances, num_keypoints, 2]. - height: Maximum value for y coordinate of absolute keypoint coordinates. - width: Maximum value for x coordinate of absolute keypoint coordinates. - check_range: If True, checks if the coordinates are normalized. - scope: name scope. - - Returns: - tensor of shape [num_instances, num_keypoints, 2] with normalized - coordinates in [0, 1]. - """ - with tf.name_scope(scope, 'ToNormalizedCoordinates'): - height = tf.cast(height, tf.float32) - width = tf.cast(width, tf.float32) - - if check_range: - max_val = tf.reduce_max(keypoints) - max_assert = tf.Assert(tf.greater(max_val, 1.01), - ['max value is lower than 1.01: ', max_val]) - with tf.control_dependencies([max_assert]): - width = tf.identity(width) - - return scale(keypoints, 1.0 / height, 1.0 / width) - - -def to_absolute_coordinates(keypoints, height, width, - check_range=True, scope=None): - """Converts normalized keypoint coordinates to absolute pixel coordinates. - - This function raises an assertion failed error when the maximum keypoint - coordinate value is larger than 1.01 (in which case coordinates are already - absolute). - - Args: - keypoints: A tensor of shape [num_instances, num_keypoints, 2] - height: Maximum value for y coordinate of absolute keypoint coordinates. - width: Maximum value for x coordinate of absolute keypoint coordinates. - check_range: If True, checks if the coordinates are normalized or not. - scope: name scope. - - Returns: - tensor of shape [num_instances, num_keypoints, 2] with absolute coordinates - in terms of the image size. - - """ - with tf.name_scope(scope, 'ToAbsoluteCoordinates'): - height = tf.cast(height, tf.float32) - width = tf.cast(width, tf.float32) - - # Ensure range of input keypoints is correct. - if check_range: - max_val = tf.reduce_max(keypoints) - max_assert = tf.Assert(tf.greater_equal(1.01, max_val), - ['maximum keypoint coordinate value is larger ' - 'than 1.01: ', max_val]) - with tf.control_dependencies([max_assert]): - width = tf.identity(width) - - return scale(keypoints, height, width) - - -def flip_horizontal(keypoints, flip_point, flip_permutation, scope=None): - """Flips the keypoints horizontally around the flip_point. - - This operation flips the x coordinate for each keypoint around the flip_point - and also permutes the keypoints in a manner specified by flip_permutation. 
- - Args: - keypoints: a tensor of shape [num_instances, num_keypoints, 2] - flip_point: (float) scalar tensor representing the x coordinate to flip the - keypoints around. - flip_permutation: rank 1 int32 tensor containing the keypoint flip - permutation. This specifies the mapping from original keypoint indices - to the flipped keypoint indices. This is used primarily for keypoints - that are not reflection invariant. E.g. Suppose there are 3 keypoints - representing ['head', 'right_eye', 'left_eye'], then a logical choice for - flip_permutation might be [0, 2, 1] since we want to swap the 'left_eye' - and 'right_eye' after a horizontal flip. - scope: name scope. - - Returns: - new_keypoints: a tensor of shape [num_instances, num_keypoints, 2] - """ - with tf.name_scope(scope, 'FlipHorizontal'): - keypoints = tf.transpose(keypoints, [1, 0, 2]) - keypoints = tf.gather(keypoints, flip_permutation) - v, u = tf.split(value=keypoints, num_or_size_splits=2, axis=2) - u = flip_point * 2.0 - u - new_keypoints = tf.concat([v, u], 2) - new_keypoints = tf.transpose(new_keypoints, [1, 0, 2]) - return new_keypoints - - -def flip_vertical(keypoints, flip_point, flip_permutation, scope=None): - """Flips the keypoints vertically around the flip_point. - - This operation flips the y coordinate for each keypoint around the flip_point - and also permutes the keypoints in a manner specified by flip_permutation. - - Args: - keypoints: a tensor of shape [num_instances, num_keypoints, 2] - flip_point: (float) scalar tensor representing the y coordinate to flip the - keypoints around. - flip_permutation: rank 1 int32 tensor containing the keypoint flip - permutation. This specifies the mapping from original keypoint indices - to the flipped keypoint indices. This is used primarily for keypoints - that are not reflection invariant. E.g. Suppose there are 3 keypoints - representing ['head', 'right_eye', 'left_eye'], then a logical choice for - flip_permutation might be [0, 2, 1] since we want to swap the 'left_eye' - and 'right_eye' after a horizontal flip. - scope: name scope. - - Returns: - new_keypoints: a tensor of shape [num_instances, num_keypoints, 2] - """ - with tf.name_scope(scope, 'FlipVertical'): - keypoints = tf.transpose(keypoints, [1, 0, 2]) - keypoints = tf.gather(keypoints, flip_permutation) - v, u = tf.split(value=keypoints, num_or_size_splits=2, axis=2) - v = flip_point * 2.0 - v - new_keypoints = tf.concat([v, u], 2) - new_keypoints = tf.transpose(new_keypoints, [1, 0, 2]) - return new_keypoints - - -def rot90(keypoints, scope=None): - """Rotates the keypoints counter-clockwise by 90 degrees. - - Args: - keypoints: a tensor of shape [num_instances, num_keypoints, 2] - scope: name scope. - - Returns: - new_keypoints: a tensor of shape [num_instances, num_keypoints, 2] - """ - with tf.name_scope(scope, 'Rot90'): - keypoints = tf.transpose(keypoints, [1, 0, 2]) - v, u = tf.split(value=keypoints[:, :, ::-1], num_or_size_splits=2, axis=2) - v = 1.0 - v - new_keypoints = tf.concat([v, u], 2) - new_keypoints = tf.transpose(new_keypoints, [1, 0, 2]) - return new_keypoints - - -def keypoint_weights_from_visibilities(keypoint_visibilities, - per_keypoint_weights=None): - """Returns a keypoint weights tensor. - - During training, it is often beneficial to consider only those keypoints that - are labeled. This function returns a weights tensor that combines default - per-keypoint weights, as well as the visibilities of individual keypoints. 
- - The returned tensor satisfies: - keypoint_weights[i, k] = per_keypoint_weights[k] * keypoint_visibilities[i, k] - where per_keypoint_weights[k] is set to 1 if not provided. - - Args: - keypoint_visibilities: A [num_instances, num_keypoints] boolean tensor - indicating whether a keypoint is labeled (and perhaps even visible). - per_keypoint_weights: A list or 1-d tensor of length `num_keypoints` with - per-keypoint weights. If None, will use 1 for each visible keypoint - weight. - - Returns: - A [num_instances, num_keypoints] float32 tensor with keypoint weights. Those - keypoints deemed visible will have the provided per-keypoint weight, and - all others will be set to zero. - """ - if per_keypoint_weights is None: - num_keypoints = keypoint_visibilities.shape.as_list()[1] - per_keypoint_weight_mult = tf.ones((1, num_keypoints,), dtype=tf.float32) - else: - per_keypoint_weight_mult = tf.expand_dims(per_keypoint_weights, axis=0) - return per_keypoint_weight_mult * tf.cast(keypoint_visibilities, tf.float32) - - -def set_keypoint_visibilities(keypoints, initial_keypoint_visibilities=None): - """Sets keypoint visibilities based on valid/invalid keypoints. - - Some keypoint operations set invisible keypoints (e.g. cropped keypoints) to - NaN, without affecting any keypoint "visibility" variables. This function is - used to update (or create) keypoint visibilities to agree with visible / - invisible keypoint coordinates. - - Args: - keypoints: a float32 tensor of shape [num_instances, num_keypoints, 2]. - initial_keypoint_visibilities: a boolean tensor of shape - [num_instances, num_keypoints]. If provided, will maintain the visibility - designation of a keypoint, so long as the corresponding coordinates are - not NaN. If not provided, will create keypoint visibilities directly from - the values in `keypoints` (i.e. NaN coordinates map to False, otherwise - they map to True). - - Returns: - keypoint_visibilities: a bool tensor of shape [num_instances, num_keypoints] - indicating whether a keypoint is visible or not. - """ - if initial_keypoint_visibilities is not None: - keypoint_visibilities = tf.cast(initial_keypoint_visibilities, tf.bool) - else: - keypoint_visibilities = tf.ones_like(keypoints[:, :, 0], dtype=tf.bool) - - keypoints_with_nan = tf.math.reduce_any(tf.math.is_nan(keypoints), axis=2) - keypoint_visibilities = tf.where( - keypoints_with_nan, - tf.zeros_like(keypoint_visibilities, dtype=tf.bool), - keypoint_visibilities) - return keypoint_visibilities diff --git a/spaces/ThunderJames/PhotoRealistic/README.md b/spaces/ThunderJames/PhotoRealistic/README.md deleted file mode 100644 index a993b54b8606e3f3e11c0298e09836f2403940ce..0000000000000000000000000000000000000000 --- a/spaces/ThunderJames/PhotoRealistic/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: PhotoRealistic -emoji: 🌖 -colorFrom: green -colorTo: gray -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Tuana/what-would-mother-say/README.md b/spaces/Tuana/what-would-mother-say/README.md deleted file mode 100644 index 46974eb92f74f149274525a58cc182aadd98930b..0000000000000000000000000000000000000000 --- a/spaces/Tuana/what-would-mother-say/README.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -title: What would mother say? -emoji: 🫶 -colorFrom: pink -colorTo: yellow -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -# What would mother say? 
- -This app includes a Haystack agent with access to 2 tools: -- `MastodonRetriever`: Useful for when you need to retrieve the latest posts from a username to get an understanding of their style -- `WebSearch`: Useful for when you need to research the latest about a new topic - -We build an Agent that aims to first understand the style in which a username posts. Then, it uses the WebSearch tool to gain knowledge on a topic that the LLM may not have info on, to generate a post in the user's style about that topic. -### Try it out on [🤗 Spaces](https://huggingface.co/spaces/Tuana/what-would-mother-say) - -##### A showcase of a Haystack Agent with a custom `TwitterRetriever` Node and a `WebQAPipeline` as tools. - -**Custom Haystack Node** - -This repo contains a streamlit application that, given a query about what a certain Twitter username would post on a given topic, generates a post in their style (or tries to). It does so by using a custom Haystack node I've built called the [`MastodonFetcher`](https://haystack.deepset.ai/integrations/mastodon-fetcher). - -**Custom PromptTemplates** - -It's been built with [Haystack](https://haystack.deepset.ai) using the [`Agent`](https://docs.haystack.deepset.ai/docs/agent) and by creating a custom [`PromptTemplate`](https://docs.haystack.deepset.ai/docs/prompt_node#templates). - -All the prompt templates used in this demo, both for the `WebQAPipeline` and the `Agent`, can be found in `./prompts`. - -image - -## To learn more about the Agent - -Check out our tutorial on the Conversational Agent [here](https://haystack.deepset.ai/tutorials/24_building_chat_app) - -## Installation and Running -1. Install requirements: -`pip install -r requirements.txt` -2. Run the streamlit app: -`streamlit run app.py` -3. Create a `.env` and add your Twitter Bearer token, OpenAI Key, and SerperDev Key: - -`TWITTER_BEARER_TOKEN` - -`SERPER_KEY` - -`OPENAI_API_KEY` - -This will start up the app on `localhost:8501` where you will find a simple search bar. - -#### The Haystack Community is on [Discord](https://discord.com/invite/VBpFzsgRVF) diff --git a/spaces/Vegecken/sovits4dzl/preprocess_hubert_f0.py b/spaces/Vegecken/sovits4dzl/preprocess_hubert_f0.py deleted file mode 100644 index 29a1c7ee028fefbe7905d235447d98cda34ce840..0000000000000000000000000000000000000000 --- a/spaces/Vegecken/sovits4dzl/preprocess_hubert_f0.py +++ /dev/null @@ -1,62 +0,0 @@ -import math -import multiprocessing -import os -import argparse -from random import shuffle - -import torch -from glob import glob -from tqdm import tqdm - -import utils -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import librosa -import numpy as np - -hps = utils.get_hparams_from_file("configs/config.json") -sampling_rate = hps.data.sampling_rate -hop_length = hps.data.hop_length - - -def process_one(filename, hmodel): - # print(filename) - wav, sr = librosa.load(filename, sr=sampling_rate) - soft_path = filename + ".soft.pt" - if not os.path.exists(soft_path): - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - wav16k = librosa.resample(wav, orig_sr=sampling_rate, target_sr=16000) - wav16k = torch.from_numpy(wav16k).to(device) - c = utils.get_hubert_content(hmodel, wav_16k_tensor=wav16k) - torch.save(c.cpu(), soft_path) - f0_path = filename + ".f0.npy" - if not os.path.exists(f0_path): - f0 = utils.compute_f0_dio(wav, sampling_rate=sampling_rate, hop_length=hop_length) - np.save(f0_path, f0) - - -def process_batch(filenames): - print("Loading hubert for content...") - device = 
"cuda" if torch.cuda.is_available() else "cpu" - hmodel = utils.get_hubert_model().to(device) - print("Loaded hubert.") - for filename in tqdm(filenames): - process_one(filename, hmodel) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--in_dir", type=str, default="dataset/44k", help="path to input dir") - - args = parser.parse_args() - filenames = glob(f'{args.in_dir}/*/*.wav', recursive=True) # [:10] - shuffle(filenames) - multiprocessing.set_start_method('spawn') - - num_processes = 1 - chunk_size = int(math.ceil(len(filenames) / num_processes)) - chunks = [filenames[i:i + chunk_size] for i in range(0, len(filenames), chunk_size)] - print([len(c) for c in chunks]) - processes = [multiprocessing.Process(target=process_batch, args=(chunk,)) for chunk in chunks] - for p in processes: - p.start() diff --git a/spaces/Wayben/ChatGPT/modules/utils.py b/spaces/Wayben/ChatGPT/modules/utils.py deleted file mode 100644 index a4dfab86fef9d4f1bab12d027b3589e274755153..0000000000000000000000000000000000000000 --- a/spaces/Wayben/ChatGPT/modules/utils.py +++ /dev/null @@ -1,424 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type -import logging -import json -import os -import datetime -import hashlib -import csv -import requests -import re -import html - -import gradio as gr -from pypinyin import lazy_pinyin -import tiktoken -import mdtex2html -from markdown import markdown -from pygments import highlight -from pygments.lexers import get_lexer_by_name -from pygments.formatters import HtmlFormatter - -from modules.presets import * -import modules.shared as shared - -logging.basicConfig( - level=logging.INFO, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - - -def count_token(message): - encoding = tiktoken.get_encoding("cl100k_base") - input_str = f"role: {message['role']}, content: {message['content']}" - length = len(encoding.encode(input_str)) - return length - - -def markdown_to_html_with_syntax_highlight(md_str): - def replacer(match): - lang = match.group(1) or "text" - code = match.group(2) - - try: - lexer = get_lexer_by_name(lang, stripall=True) - except ValueError: - lexer = get_lexer_by_name("text", stripall=True) - - formatter = HtmlFormatter() - highlighted_code = highlight(code, lexer, formatter) - - return f'
    {highlighted_code}
    ' - - code_block_pattern = r"```(\w+)?\n([\s\S]+?)\n```" - md_str = re.sub(code_block_pattern, replacer, md_str, flags=re.MULTILINE) - - html_str = markdown(md_str) - return html_str - - -def normalize_markdown(md_text: str) -> str: - lines = md_text.split("\n") - normalized_lines = [] - inside_list = False - - for i, line in enumerate(lines): - if re.match(r"^(\d+\.|-|\*|\+)\s", line.strip()): - if not inside_list and i > 0 and lines[i - 1].strip() != "": - normalized_lines.append("") - inside_list = True - normalized_lines.append(line) - elif inside_list and line.strip() == "": - if i < len(lines) - 1 and not re.match( - r"^(\d+\.|-|\*|\+)\s", lines[i + 1].strip() - ): - normalized_lines.append(line) - continue - else: - inside_list = False - normalized_lines.append(line) - - return "\n".join(normalized_lines) - - -def convert_mdtext(md_text): - code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL) - inline_code_pattern = re.compile(r"`(.*?)`", re.DOTALL) - code_blocks = code_block_pattern.findall(md_text) - non_code_parts = code_block_pattern.split(md_text)[::2] - - result = [] - for non_code, code in zip(non_code_parts, code_blocks + [""]): - if non_code.strip(): - non_code = normalize_markdown(non_code) - if inline_code_pattern.search(non_code): - result.append(markdown(non_code, extensions=["tables"])) - else: - result.append(mdtex2html.convert(non_code, extensions=["tables"])) - if code.strip(): - # _, code = detect_language(code) # 暂时去除代码高亮功能,因为在大段代码的情况下会出现问题 - # code = code.replace("\n\n", "\n") # 暂时去除代码中的空行,因为在大段代码的情况下会出现问题 - code = f"\n```{code}\n\n```" - code = markdown_to_html_with_syntax_highlight(code) - result.append(code) - result = "".join(result) - result += ALREADY_CONVERTED_MARK - return result - - -def convert_asis(userinput): - return f"

    {html.escape(userinput)}

    "+ALREADY_CONVERTED_MARK - -def detect_converted_mark(userinput): - if userinput.endswith(ALREADY_CONVERTED_MARK): - return True - else: - return False - - -def detect_language(code): - if code.startswith("\n"): - first_line = "" - else: - first_line = code.strip().split("\n", 1)[0] - language = first_line.lower() if first_line else "" - code_without_language = code[len(first_line) :].lstrip() if first_line else code - return language, code_without_language - - -def construct_text(role, text): - return {"role": role, "content": text} - - -def construct_user(text): - return construct_text("user", text) - - -def construct_system(text): - return construct_text("system", text) - - -def construct_assistant(text): - return construct_text("assistant", text) - - -def construct_token_message(token, stream=False): - return f"Token 计数: {token}" - - -def delete_last_conversation(chatbot, history, previous_token_count): - if len(chatbot) > 0 and standard_error_msg in chatbot[-1][1]: - logging.info("由于包含报错信息,只删除chatbot记录") - chatbot.pop() - return chatbot, history - if len(history) > 0: - logging.info("删除了一组对话历史") - history.pop() - history.pop() - if len(chatbot) > 0: - logging.info("删除了一组chatbot对话") - chatbot.pop() - if len(previous_token_count) > 0: - logging.info("删除了一组对话的token计数记录") - previous_token_count.pop() - return ( - chatbot, - history, - previous_token_count, - construct_token_message(sum(previous_token_count)), - ) - - -def save_file(filename, system, history, chatbot): - logging.info("保存对话历史中……") - os.makedirs(HISTORY_DIR, exist_ok=True) - if filename.endswith(".json"): - json_s = {"system": system, "history": history, "chatbot": chatbot} - print(json_s) - with open(os.path.join(HISTORY_DIR, filename), "w") as f: - json.dump(json_s, f) - elif filename.endswith(".md"): - md_s = f"system: \n- {system} \n" - for data in history: - md_s += f"\n{data['role']}: \n- {data['content']} \n" - with open(os.path.join(HISTORY_DIR, filename), "w", encoding="utf8") as f: - f.write(md_s) - logging.info("保存对话历史完毕") - return os.path.join(HISTORY_DIR, filename) - - -def save_chat_history(filename, system, history, chatbot): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, system, history, chatbot) - - -def export_markdown(filename, system, history, chatbot): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, system, history, chatbot) - - -def load_chat_history(filename, system, history, chatbot): - logging.info("加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.info("加载对话历史完毕") - return filename, json_s["system"], json_s["history"], json_s["chatbot"] - except FileNotFoundError: - logging.info("没有找到对话历史文件,不执行任何操作") - return filename, system, history, chatbot - - -def sorted_by_pinyin(list): - return sorted(list, key=lambda char: lazy_pinyin(char)[0][0]) - - -def get_file_names(dir, plain=False, filetypes=[".json"]): - logging.info(f"获取文件名列表,目录为{dir},文件类型为{filetypes},是否为纯文本列表{plain}") - files = [] - try: - 
for type in filetypes: - files += [f for f in os.listdir(dir) if f.endswith(type)] - except FileNotFoundError: - files = [] - files = sorted_by_pinyin(files) - if files == []: - files = [""] - if plain: - return files - else: - return gr.Dropdown.update(choices=files) - - -def get_history_names(plain=False): - logging.info("获取历史记录文件名列表") - return get_file_names(HISTORY_DIR, plain) - - -def load_template(filename, mode=0): - logging.info(f"加载模板文件{filename},模式为{mode}(0为返回字典和下拉菜单,1为返回下拉菜单,2为返回字典)") - lines = [] - logging.info("Loading template...") - if filename.endswith(".json"): - with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f: - lines = json.load(f) - lines = [[i["act"], i["prompt"]] for i in lines] - else: - with open( - os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8" - ) as csvfile: - reader = csv.reader(csvfile) - lines = list(reader) - lines = lines[1:] - if mode == 1: - return sorted_by_pinyin([row[0] for row in lines]) - elif mode == 2: - return {row[0]: row[1] for row in lines} - else: - choices = sorted_by_pinyin([row[0] for row in lines]) - return {row[0]: row[1] for row in lines}, gr.Dropdown.update( - choices=choices, value=choices[0] - ) - - -def get_template_names(plain=False): - logging.info("获取模板文件名列表") - return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json"]) - - -def get_template_content(templates, selection, original_system_prompt): - logging.info(f"应用模板中,选择为{selection},原始系统提示为{original_system_prompt}") - try: - return templates[selection] - except: - return original_system_prompt - - -def reset_state(): - logging.info("重置状态") - return [], [], [], construct_token_message(0) - - -def reset_textbox(): - logging.debug("重置文本框") - return gr.update(value="") - - -def reset_default(): - newurl = shared.state.reset_api_url() - os.environ.pop("HTTPS_PROXY", None) - os.environ.pop("https_proxy", None) - return gr.update(value=newurl), gr.update(value=""), "API URL 和代理已重置" - - -def change_api_url(url): - shared.state.set_api_url(url) - msg = f"API地址更改为了{url}" - logging.info(msg) - return msg - - -def change_proxy(proxy): - os.environ["HTTPS_PROXY"] = proxy - msg = f"代理更改为了{proxy}" - logging.info(msg) - return msg - - -def hide_middle_chars(s): - if len(s) <= 8: - return s - else: - head = s[:4] - tail = s[-4:] - hidden = "*" * (len(s) - 8) - return head + hidden + tail - - -def submit_key(key): - key = key.strip() - msg = f"API密钥更改为了{hide_middle_chars(key)}" - logging.info(msg) - return key, msg - - -def sha1sum(filename): - sha1 = hashlib.sha1() - sha1.update(filename.encode("utf-8")) - return sha1.hexdigest() - - -def replace_today(prompt): - today = datetime.datetime.today().strftime("%Y-%m-%d") - return prompt.replace("{current_date}", today) - - -def get_geoip(): - response = requests.get("https://ipapi.co/json/", timeout=5) - try: - data = response.json() - except: - data = {"error": True, "reason": "连接ipapi失败"} - if "error" in data.keys(): - logging.warning(f"无法获取IP地址信息。\n{data}") - if data["reason"] == "RateLimited": - return ( - f"获取IP地理位置失败,因为达到了检测IP的速率限制。聊天功能可能仍然可用,但请注意,如果您的IP地址在不受支持的地区,您可能会遇到问题。" - ) - else: - return f"获取IP地理位置失败。原因:{data['reason']}。你仍然可以使用聊天功能。" - else: - country = data["country_name"] - if country == "China": - text = "**您的IP区域:中国。请立即检查代理设置,在不受支持的地区使用API可能导致账号被封禁。**" - else: - text = f"您的IP区域:{country}。" - logging.info(text) - return text - - -def find_n(lst, max_num): - n = len(lst) - total = sum(lst) - - if total < max_num: - return n - - for i in range(len(lst)): - if total - lst[i] < max_num: 
- return n - i - 1 - total = total - lst[i] - return 1 - - -def start_outputing(): - logging.debug("显示取消按钮,隐藏发送按钮") - return gr.Button.update(visible=False), gr.Button.update(visible=True) - - -def end_outputing(): - return ( - gr.Button.update(visible=True), - gr.Button.update(visible=False), - ) - - -def cancel_outputing(): - logging.info("中止输出……") - shared.state.interrupt() - -def transfer_input(inputs): - # 一次性返回,降低延迟 - textbox = reset_textbox() - outputing = start_outputing() - return inputs, gr.update(value="") diff --git a/spaces/Wootang01/vocabulary_categorizer_two/README.md b/spaces/Wootang01/vocabulary_categorizer_two/README.md deleted file mode 100644 index 0210c7708f7929bab2d8cf015993803d47bfdf64..0000000000000000000000000000000000000000 --- a/spaces/Wootang01/vocabulary_categorizer_two/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Vocabulary_categorizer_two -emoji: 💩 -colorFrom: blue -colorTo: red -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Xenova/react-translator/index.html b/spaces/Xenova/react-translator/index.html deleted file mode 100644 index b93f7aa3d580d92b8fe36fe759393b9b63d3b2a1..0000000000000000000000000000000000000000 --- a/spaces/Xenova/react-translator/index.html +++ /dev/null @@ -1,14 +0,0 @@ - - - - - - Transformers.js - Sample react application - - - - -
    - - - diff --git a/spaces/Xhaheen/Face-Real-ESRGAN/arch_util.py b/spaces/Xhaheen/Face-Real-ESRGAN/arch_util.py deleted file mode 100644 index 90e18463b983f645e0bd189d55ade4b627c5418e..0000000000000000000000000000000000000000 --- a/spaces/Xhaheen/Face-Real-ESRGAN/arch_util.py +++ /dev/null @@ -1,197 +0,0 @@ -import math -import torch -from torch import nn as nn -from torch.nn import functional as F -from torch.nn import init as init -from torch.nn.modules.batchnorm import _BatchNorm - -@torch.no_grad() -def default_init_weights(module_list, scale=1, bias_fill=0, **kwargs): - """Initialize network weights. - - Args: - module_list (list[nn.Module] | nn.Module): Modules to be initialized. - scale (float): Scale initialized weights, especially for residual - blocks. Default: 1. - bias_fill (float): The value to fill bias. Default: 0 - kwargs (dict): Other arguments for initialization function. - """ - if not isinstance(module_list, list): - module_list = [module_list] - for module in module_list: - for m in module.modules(): - if isinstance(m, nn.Conv2d): - init.kaiming_normal_(m.weight, **kwargs) - m.weight.data *= scale - if m.bias is not None: - m.bias.data.fill_(bias_fill) - elif isinstance(m, nn.Linear): - init.kaiming_normal_(m.weight, **kwargs) - m.weight.data *= scale - if m.bias is not None: - m.bias.data.fill_(bias_fill) - elif isinstance(m, _BatchNorm): - init.constant_(m.weight, 1) - if m.bias is not None: - m.bias.data.fill_(bias_fill) - - -def make_layer(basic_block, num_basic_block, **kwarg): - """Make layers by stacking the same blocks. - - Args: - basic_block (nn.module): nn.module class for basic block. - num_basic_block (int): number of blocks. - - Returns: - nn.Sequential: Stacked blocks in nn.Sequential. - """ - layers = [] - for _ in range(num_basic_block): - layers.append(basic_block(**kwarg)) - return nn.Sequential(*layers) - - -class ResidualBlockNoBN(nn.Module): - """Residual block without BN. - - It has a style of: - ---Conv-ReLU-Conv-+- - |________________| - - Args: - num_feat (int): Channel number of intermediate features. - Default: 64. - res_scale (float): Residual scale. Default: 1. - pytorch_init (bool): If set to True, use pytorch default init, - otherwise, use default_init_weights. Default: False. - """ - - def __init__(self, num_feat=64, res_scale=1, pytorch_init=False): - super(ResidualBlockNoBN, self).__init__() - self.res_scale = res_scale - self.conv1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True) - self.conv2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True) - self.relu = nn.ReLU(inplace=True) - - if not pytorch_init: - default_init_weights([self.conv1, self.conv2], 0.1) - - def forward(self, x): - identity = x - out = self.conv2(self.relu(self.conv1(x))) - return identity + out * self.res_scale - - -class Upsample(nn.Sequential): - """Upsample module. - - Args: - scale (int): Scale factor. Supported scales: 2^n and 3. - num_feat (int): Channel number of intermediate features. - """ - - def __init__(self, scale, num_feat): - m = [] - if (scale & (scale - 1)) == 0: # scale = 2^n - for _ in range(int(math.log(scale, 2))): - m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(2)) - elif scale == 3: - m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(3)) - else: - raise ValueError(f'scale {scale} is not supported. 
' 'Supported scales: 2^n and 3.') - super(Upsample, self).__init__(*m) - - -def flow_warp(x, flow, interp_mode='bilinear', padding_mode='zeros', align_corners=True): - """Warp an image or feature map with optical flow. - - Args: - x (Tensor): Tensor with size (n, c, h, w). - flow (Tensor): Tensor with size (n, h, w, 2), normal value. - interp_mode (str): 'nearest' or 'bilinear'. Default: 'bilinear'. - padding_mode (str): 'zeros' or 'border' or 'reflection'. - Default: 'zeros'. - align_corners (bool): Before pytorch 1.3, the default value is - align_corners=True. After pytorch 1.3, the default value is - align_corners=False. Here, we use the True as default. - - Returns: - Tensor: Warped image or feature map. - """ - assert x.size()[-2:] == flow.size()[1:3] - _, _, h, w = x.size() - # create mesh grid - grid_y, grid_x = torch.meshgrid(torch.arange(0, h).type_as(x), torch.arange(0, w).type_as(x)) - grid = torch.stack((grid_x, grid_y), 2).float() # W(x), H(y), 2 - grid.requires_grad = False - - vgrid = grid + flow - # scale grid to [-1,1] - vgrid_x = 2.0 * vgrid[:, :, :, 0] / max(w - 1, 1) - 1.0 - vgrid_y = 2.0 * vgrid[:, :, :, 1] / max(h - 1, 1) - 1.0 - vgrid_scaled = torch.stack((vgrid_x, vgrid_y), dim=3) - output = F.grid_sample(x, vgrid_scaled, mode=interp_mode, padding_mode=padding_mode, align_corners=align_corners) - - # TODO, what if align_corners=False - return output - - -def resize_flow(flow, size_type, sizes, interp_mode='bilinear', align_corners=False): - """Resize a flow according to ratio or shape. - - Args: - flow (Tensor): Precomputed flow. shape [N, 2, H, W]. - size_type (str): 'ratio' or 'shape'. - sizes (list[int | float]): the ratio for resizing or the final output - shape. - 1) The order of ratio should be [ratio_h, ratio_w]. For - downsampling, the ratio should be smaller than 1.0 (i.e., ratio - < 1.0). For upsampling, the ratio should be larger than 1.0 (i.e., - ratio > 1.0). - 2) The order of output_size should be [out_h, out_w]. - interp_mode (str): The mode of interpolation for resizing. - Default: 'bilinear'. - align_corners (bool): Whether align corners. Default: False. - - Returns: - Tensor: Resized flow. - """ - _, _, flow_h, flow_w = flow.size() - if size_type == 'ratio': - output_h, output_w = int(flow_h * sizes[0]), int(flow_w * sizes[1]) - elif size_type == 'shape': - output_h, output_w = sizes[0], sizes[1] - else: - raise ValueError(f'Size type should be ratio or shape, but got type {size_type}.') - - input_flow = flow.clone() - ratio_h = output_h / flow_h - ratio_w = output_w / flow_w - input_flow[:, 0, :, :] *= ratio_w - input_flow[:, 1, :, :] *= ratio_h - resized_flow = F.interpolate( - input=input_flow, size=(output_h, output_w), mode=interp_mode, align_corners=align_corners) - return resized_flow - - -# TODO: may write a cpp file -def pixel_unshuffle(x, scale): - """ Pixel unshuffle. - - Args: - x (Tensor): Input feature with shape (b, c, hh, hw). - scale (int): Downsample ratio. - - Returns: - Tensor: the pixel unshuffled feature. 
- """ - b, c, hh, hw = x.size() - out_channel = c * (scale**2) - assert hh % scale == 0 and hw % scale == 0 - h = hh // scale - w = hw // scale - x_view = x.view(b, c, h, scale, w, scale) - return x_view.permute(0, 1, 3, 5, 2, 4).reshape(b, out_channel, h, w) \ No newline at end of file diff --git a/spaces/Zeebra/chatGPT_whisper_AI_voice_assistant/chatGPT_whisper_AI_voice_assistant/config.py b/spaces/Zeebra/chatGPT_whisper_AI_voice_assistant/chatGPT_whisper_AI_voice_assistant/config.py deleted file mode 100644 index 9fe5d3ad4a5df62cdf31c240c88539c65c5b5151..0000000000000000000000000000000000000000 --- a/spaces/Zeebra/chatGPT_whisper_AI_voice_assistant/chatGPT_whisper_AI_voice_assistant/config.py +++ /dev/null @@ -1,3 +0,0 @@ -API_KEYS = { - 'openai':'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', -} \ No newline at end of file diff --git a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2/model_download/yolov5_model_p5_all.sh b/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2/model_download/yolov5_model_p5_all.sh deleted file mode 100644 index a8e11f6c73445e2e7855d7b62c2b8ebbb7236e9d..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2/model_download/yolov5_model_p5_all.sh +++ /dev/null @@ -1,8 +0,0 @@ -cd ./yolov5 - -# 下载YOLOv5模型 -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5n.pt -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5m.pt -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5l.pt -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5x.pt \ No newline at end of file diff --git a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/utils/dataloaders.py b/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/utils/dataloaders.py deleted file mode 100644 index c1ad1f1a4b833df0c62ee702559a52c2a7aaa197..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/utils/dataloaders.py +++ /dev/null @@ -1,1129 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Dataloaders and dataset utils -""" - -import contextlib -import glob -import hashlib -import json -import math -import os -import random -import shutil -import time -from itertools import repeat -from multiprocessing.pool import Pool, ThreadPool -from pathlib import Path -from threading import Thread -from urllib.parse import urlparse -from zipfile import ZipFile - -import numpy as np -import torch -import torch.nn.functional as F -import torchvision -import yaml -from PIL import ExifTags, Image, ImageOps -from torch.utils.data import DataLoader, Dataset, dataloader, distributed -from tqdm import tqdm - -from utils.augmentations import (Albumentations, augment_hsv, classify_albumentations, classify_transforms, copy_paste, - letterbox, mixup, random_perspective) -from utils.general import (DATASETS_DIR, LOGGER, NUM_THREADS, check_dataset, check_requirements, check_yaml, clean_str, - cv2, is_colab, is_kaggle, segments2boxes, xyn2xy, xywh2xyxy, xywhn2xyxy, xyxy2xywhn) -from utils.torch_utils import torch_distributed_zero_first - -# Parameters -HELP_URL = 'See https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data' -IMG_FORMATS = 'bmp', 'dng', 'jpeg', 'jpg', 'mpo', 'png', 'tif', 'tiff', 'webp', 'pfm' # include image suffixes -VID_FORMATS = 'asf', 'avi', 'gif', 'm4v', 'mkv', 'mov', 'mp4', 'mpeg', 'mpg', 'ts', 'wmv' # include video suffixes -BAR_FORMAT = 
'{l_bar}{bar:10}{r_bar}{bar:-10b}' # tqdm bar format -LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html -PIN_MEMORY = str(os.getenv('PIN_MEMORY', True)).lower() == 'true' # global pin_memory for dataloaders - -# Get orientation exif tag -for orientation in ExifTags.TAGS.keys(): - if ExifTags.TAGS[orientation] == 'Orientation': - break - - -def get_hash(paths): - # Returns a single hash value of a list of paths (files or dirs) - size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes - h = hashlib.md5(str(size).encode()) # hash sizes - h.update(''.join(paths).encode()) # hash paths - return h.hexdigest() # return hash - - -def exif_size(img): - # Returns exif-corrected PIL size - s = img.size # (width, height) - with contextlib.suppress(Exception): - rotation = dict(img._getexif().items())[orientation] - if rotation in [6, 8]: # rotation 270 or 90 - s = (s[1], s[0]) - return s - - -def exif_transpose(image): - """ - Transpose a PIL image accordingly if it has an EXIF Orientation tag. - Inplace version of https://github.com/python-pillow/Pillow/blob/master/src/PIL/ImageOps.py exif_transpose() - - :param image: The image to transpose. - :return: An image. - """ - exif = image.getexif() - orientation = exif.get(0x0112, 1) # default 1 - if orientation > 1: - method = { - 2: Image.FLIP_LEFT_RIGHT, - 3: Image.ROTATE_180, - 4: Image.FLIP_TOP_BOTTOM, - 5: Image.TRANSPOSE, - 6: Image.ROTATE_270, - 7: Image.TRANSVERSE, - 8: Image.ROTATE_90}.get(orientation) - if method is not None: - image = image.transpose(method) - del exif[0x0112] - image.info["exif"] = exif.tobytes() - return image - - -def seed_worker(worker_id): - # Set dataloader worker seed https://pytorch.org/docs/stable/notes/randomness.html#dataloader - worker_seed = torch.initial_seed() % 2 ** 32 - np.random.seed(worker_seed) - random.seed(worker_seed) - - -def create_dataloader(path, - imgsz, - batch_size, - stride, - single_cls=False, - hyp=None, - augment=False, - cache=False, - pad=0.0, - rect=False, - rank=-1, - workers=8, - image_weights=False, - quad=False, - prefix='', - shuffle=False): - if rect and shuffle: - LOGGER.warning('WARNING: --rect is incompatible with DataLoader shuffle, setting shuffle=False') - shuffle = False - with torch_distributed_zero_first(rank): # init dataset *.cache only once if DDP - dataset = LoadImagesAndLabels( - path, - imgsz, - batch_size, - augment=augment, # augmentation - hyp=hyp, # hyperparameters - rect=rect, # rectangular batches - cache_images=cache, - single_cls=single_cls, - stride=int(stride), - pad=pad, - image_weights=image_weights, - prefix=prefix) - - batch_size = min(batch_size, len(dataset)) - nd = torch.cuda.device_count() # number of CUDA devices - nw = min([os.cpu_count() // max(nd, 1), batch_size if batch_size > 1 else 0, workers]) # number of workers - sampler = None if rank == -1 else distributed.DistributedSampler(dataset, shuffle=shuffle) - loader = DataLoader if image_weights else InfiniteDataLoader # only DataLoader allows for attribute updates - generator = torch.Generator() - generator.manual_seed(0) - return loader(dataset, - batch_size=batch_size, - shuffle=shuffle and sampler is None, - num_workers=nw, - sampler=sampler, - pin_memory=PIN_MEMORY, - collate_fn=LoadImagesAndLabels.collate_fn4 if quad else LoadImagesAndLabels.collate_fn, - worker_init_fn=seed_worker, - generator=generator), dataset - - -class InfiniteDataLoader(dataloader.DataLoader): - """ Dataloader that reuses workers - - Uses same syntax as 
vanilla DataLoader - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler)) - self.iterator = super().__iter__() - - def __len__(self): - return len(self.batch_sampler.sampler) - - def __iter__(self): - for _ in range(len(self)): - yield next(self.iterator) - - -class _RepeatSampler: - """ Sampler that repeats forever - - Args: - sampler (Sampler) - """ - - def __init__(self, sampler): - self.sampler = sampler - - def __iter__(self): - while True: - yield from iter(self.sampler) - - -class LoadImages: - # YOLOv5 image/video dataloader, i.e. `python detect.py --source image.jpg/vid.mp4` - def __init__(self, path, img_size=640, stride=32, auto=True, transforms=None, vid_stride=1): - files = [] - for p in sorted(path) if isinstance(path, (list, tuple)) else [path]: - p = str(Path(p).resolve()) - if '*' in p: - files.extend(sorted(glob.glob(p, recursive=True))) # glob - elif os.path.isdir(p): - files.extend(sorted(glob.glob(os.path.join(p, '*.*')))) # dir - elif os.path.isfile(p): - files.append(p) # files - else: - raise FileNotFoundError(f'{p} does not exist') - - images = [x for x in files if x.split('.')[-1].lower() in IMG_FORMATS] - videos = [x for x in files if x.split('.')[-1].lower() in VID_FORMATS] - ni, nv = len(images), len(videos) - - self.img_size = img_size - self.stride = stride - self.files = images + videos - self.nf = ni + nv # number of files - self.video_flag = [False] * ni + [True] * nv - self.mode = 'image' - self.auto = auto - self.transforms = transforms # optional - self.vid_stride = vid_stride # video frame-rate stride - if any(videos): - self._new_video(videos[0]) # new video - else: - self.cap = None - assert self.nf > 0, f'No images or videos found in {p}. 
' \ - f'Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}' - - def __iter__(self): - self.count = 0 - return self - - def __next__(self): - if self.count == self.nf: - raise StopIteration - path = self.files[self.count] - - if self.video_flag[self.count]: - # Read video - self.mode = 'video' - ret_val, im0 = self.cap.read() - self.cap.set(cv2.CAP_PROP_POS_FRAMES, self.vid_stride * (self.frame + 1)) # read at vid_stride - while not ret_val: - self.count += 1 - self.cap.release() - if self.count == self.nf: # last video - raise StopIteration - path = self.files[self.count] - self._new_video(path) - ret_val, im0 = self.cap.read() - - self.frame += 1 - # im0 = self._cv2_rotate(im0) # for use if cv2 autorotation is False - s = f'video {self.count + 1}/{self.nf} ({self.frame}/{self.frames}) {path}: ' - - else: - # Read image - self.count += 1 - im0 = cv2.imread(path) # BGR - assert im0 is not None, f'Image Not Found {path}' - s = f'image {self.count}/{self.nf} {path}: ' - - if self.transforms: - im = self.transforms(im0) # transforms - else: - im = letterbox(im0, self.img_size, stride=self.stride, auto=self.auto)[0] # padded resize - im = im.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB - im = np.ascontiguousarray(im) # contiguous - - return path, im, im0, self.cap, s - - def _new_video(self, path): - # Create a new video capture object - self.frame = 0 - self.cap = cv2.VideoCapture(path) - self.frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT) / self.vid_stride) - self.orientation = int(self.cap.get(cv2.CAP_PROP_ORIENTATION_META)) # rotation degrees - # self.cap.set(cv2.CAP_PROP_ORIENTATION_AUTO, 0) # disable https://github.com/ultralytics/yolov5/issues/8493 - - def _cv2_rotate(self, im): - # Rotate a cv2 video manually - if self.orientation == 0: - return cv2.rotate(im, cv2.ROTATE_90_CLOCKWISE) - elif self.orientation == 180: - return cv2.rotate(im, cv2.ROTATE_90_COUNTERCLOCKWISE) - elif self.orientation == 90: - return cv2.rotate(im, cv2.ROTATE_180) - return im - - def __len__(self): - return self.nf # number of files - - -class LoadStreams: - # YOLOv5 streamloader, i.e. `python detect.py --source 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP streams` - def __init__(self, sources='streams.txt', img_size=640, stride=32, auto=True, transforms=None, vid_stride=1): - torch.backends.cudnn.benchmark = True # faster for fixed-size inference - self.mode = 'stream' - self.img_size = img_size - self.stride = stride - self.vid_stride = vid_stride # video frame-rate stride - sources = Path(sources).read_text().rsplit() if Path(sources).is_file() else [sources] - n = len(sources) - self.sources = [clean_str(x) for x in sources] # clean source names for later - self.imgs, self.fps, self.frames, self.threads = [None] * n, [0] * n, [0] * n, [None] * n - for i, s in enumerate(sources): # index, source - # Start thread to read frames from video stream - st = f'{i + 1}/{n}: {s}... ' - if urlparse(s).hostname in ('www.youtube.com', 'youtube.com', 'youtu.be'): # if source is YouTube video - check_requirements(('pafy', 'youtube_dl==2020.12.2')) - import pafy - s = pafy.new(s).getbest(preftype="mp4").url # YouTube URL - s = eval(s) if s.isnumeric() else s # i.e. s = '0' local webcam - if s == 0: - assert not is_colab(), '--source 0 webcam unsupported on Colab. Rerun command in a local environment.' - assert not is_kaggle(), '--source 0 webcam unsupported on Kaggle. Rerun command in a local environment.' 
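- # open the stream and read its width, height and FPS; the lines below fall back to 30 FPS and an infinite frame count when the stream reports no metadata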
- cap = cv2.VideoCapture(s) - assert cap.isOpened(), f'{st}Failed to open {s}' - w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - fps = cap.get(cv2.CAP_PROP_FPS) # warning: may return 0 or nan - self.frames[i] = max(int(cap.get(cv2.CAP_PROP_FRAME_COUNT)), 0) or float('inf') # infinite stream fallback - self.fps[i] = max((fps if math.isfinite(fps) else 0) % 100, 0) or 30 # 30 FPS fallback - - _, self.imgs[i] = cap.read() # guarantee first frame - self.threads[i] = Thread(target=self.update, args=([i, cap, s]), daemon=True) - LOGGER.info(f"{st} Success ({self.frames[i]} frames {w}x{h} at {self.fps[i]:.2f} FPS)") - self.threads[i].start() - LOGGER.info('') # newline - - # check for common shapes - s = np.stack([letterbox(x, img_size, stride=stride, auto=auto)[0].shape for x in self.imgs]) - self.rect = np.unique(s, axis=0).shape[0] == 1 # rect inference if all shapes equal - self.auto = auto and self.rect - self.transforms = transforms # optional - if not self.rect: - LOGGER.warning('WARNING: Stream shapes differ. For optimal performance supply similarly-shaped streams.') - - def update(self, i, cap, stream): - # Read stream `i` frames in daemon thread - n, f = 0, self.frames[i] # frame number, frame array - while cap.isOpened() and n < f: - n += 1 - cap.grab() # .read() = .grab() followed by .retrieve() - if n % self.vid_stride == 0: - success, im = cap.retrieve() - if success: - self.imgs[i] = im - else: - LOGGER.warning('WARNING: Video stream unresponsive, please check your IP camera connection.') - self.imgs[i] = np.zeros_like(self.imgs[i]) - cap.open(stream) # re-open stream if signal was lost - time.sleep(0.0) # wait time - - def __iter__(self): - self.count = -1 - return self - - def __next__(self): - self.count += 1 - if not all(x.is_alive() for x in self.threads) or cv2.waitKey(1) == ord('q'): # q to quit - cv2.destroyAllWindows() - raise StopIteration - - im0 = self.imgs.copy() - if self.transforms: - im = np.stack([self.transforms(x) for x in im0]) # transforms - else: - im = np.stack([letterbox(x, self.img_size, stride=self.stride, auto=self.auto)[0] for x in im0]) # resize - im = im[..., ::-1].transpose((0, 3, 1, 2)) # BGR to RGB, BHWC to BCHW - im = np.ascontiguousarray(im) # contiguous - - return self.sources, im, im0, None, '' - - def __len__(self): - return len(self.sources) # 1E12 frames = 32 streams at 30 FPS for 30 years - - -def img2label_paths(img_paths): - # Define label paths as a function of image paths - sa, sb = f'{os.sep}images{os.sep}', f'{os.sep}labels{os.sep}' # /images/, /labels/ substrings - return [sb.join(x.rsplit(sa, 1)).rsplit('.', 1)[0] + '.txt' for x in img_paths] - - -class LoadImagesAndLabels(Dataset): - # YOLOv5 train_loader/val_loader, loads images and labels for training and validation - cache_version = 0.6 # dataset labels *.cache version - rand_interp_methods = [cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4] - - def __init__(self, - path, - img_size=640, - batch_size=16, - augment=False, - hyp=None, - rect=False, - image_weights=False, - cache_images=False, - single_cls=False, - stride=32, - pad=0.0, - prefix=''): - self.img_size = img_size - self.augment = augment - self.hyp = hyp - self.image_weights = image_weights - self.rect = False if image_weights else rect - self.mosaic = self.augment and not self.rect # load 4 images at a time into a mosaic (only during training) - self.mosaic_border = [-img_size // 2, -img_size // 2] - self.stride = stride - self.path = 
path - self.albumentations = Albumentations() if augment else None - - try: - f = [] # image files - for p in path if isinstance(path, list) else [path]: - p = Path(p) # os-agnostic - if p.is_dir(): # dir - f += glob.glob(str(p / '**' / '*.*'), recursive=True) - # f = list(p.rglob('*.*')) # pathlib - elif p.is_file(): # file - with open(p) as t: - t = t.read().strip().splitlines() - parent = str(p.parent) + os.sep - f += [x.replace('./', parent) if x.startswith('./') else x for x in t] # local to global path - # f += [p.parent / x.lstrip(os.sep) for x in t] # local to global path (pathlib) - else: - raise FileNotFoundError(f'{prefix}{p} does not exist') - self.im_files = sorted(x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in IMG_FORMATS) - # self.img_files = sorted([x for x in f if x.suffix[1:].lower() in IMG_FORMATS]) # pathlib - assert self.im_files, f'{prefix}No images found' - except Exception as e: - raise Exception(f'{prefix}Error loading data from {path}: {e}\n{HELP_URL}') - - # Check cache - self.label_files = img2label_paths(self.im_files) # labels - cache_path = (p if p.is_file() else Path(self.label_files[0]).parent).with_suffix('.cache') - try: - cache, exists = np.load(cache_path, allow_pickle=True).item(), True # load dict - assert cache['version'] == self.cache_version # matches current version - assert cache['hash'] == get_hash(self.label_files + self.im_files) # identical hash - except Exception: - cache, exists = self.cache_labels(cache_path, prefix), False # run cache ops - - # Display cache - nf, nm, ne, nc, n = cache.pop('results') # found, missing, empty, corrupt, total - if exists and LOCAL_RANK in {-1, 0}: - d = f"Scanning '{cache_path}' images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupt" - tqdm(None, desc=prefix + d, total=n, initial=n, bar_format=BAR_FORMAT) # display cache results - if cache['msgs']: - LOGGER.info('\n'.join(cache['msgs'])) # display warnings - assert nf > 0 or not augment, f'{prefix}No labels found in {cache_path}, can not start training. {HELP_URL}' - - # Read cache - [cache.pop(k) for k in ('hash', 'version', 'msgs')] # remove items - labels, shapes, self.segments = zip(*cache.values()) - nl = len(np.concatenate(labels, 0)) # number of labels - assert nl > 0 or not augment, f'{prefix}All labels empty in {cache_path}, can not start training. 
{HELP_URL}' - self.labels = list(labels) - self.shapes = np.array(shapes) - self.im_files = list(cache.keys()) # update - self.label_files = img2label_paths(cache.keys()) # update - n = len(shapes) # number of images - bi = np.floor(np.arange(n) / batch_size).astype(np.int) # batch index - nb = bi[-1] + 1 # number of batches - self.batch = bi # batch index of image - self.n = n - self.indices = range(n) - - # Update labels - include_class = [] # filter labels to include only these classes (optional) - include_class_array = np.array(include_class).reshape(1, -1) - for i, (label, segment) in enumerate(zip(self.labels, self.segments)): - if include_class: - j = (label[:, 0:1] == include_class_array).any(1) - self.labels[i] = label[j] - if segment: - self.segments[i] = segment[j] - if single_cls: # single-class training, merge all classes into 0 - self.labels[i][:, 0] = 0 - if segment: - self.segments[i][:, 0] = 0 - - # Rectangular Training - if self.rect: - # Sort by aspect ratio - s = self.shapes # wh - ar = s[:, 1] / s[:, 0] # aspect ratio - irect = ar.argsort() - self.im_files = [self.im_files[i] for i in irect] - self.label_files = [self.label_files[i] for i in irect] - self.labels = [self.labels[i] for i in irect] - self.shapes = s[irect] # wh - ar = ar[irect] - - # Set training image shapes - shapes = [[1, 1]] * nb - for i in range(nb): - ari = ar[bi == i] - mini, maxi = ari.min(), ari.max() - if maxi < 1: - shapes[i] = [maxi, 1] - elif mini > 1: - shapes[i] = [1, 1 / mini] - - self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(np.int) * stride - - # Cache images into RAM/disk for faster training (WARNING: large datasets may exceed system resources) - self.ims = [None] * n - self.npy_files = [Path(f).with_suffix('.npy') for f in self.im_files] - if cache_images: - gb = 0 # Gigabytes of cached images - self.im_hw0, self.im_hw = [None] * n, [None] * n - fcn = self.cache_images_to_disk if cache_images == 'disk' else self.load_image - results = ThreadPool(NUM_THREADS).imap(fcn, range(n)) - pbar = tqdm(enumerate(results), total=n, bar_format=BAR_FORMAT, disable=LOCAL_RANK > 0) - for i, x in pbar: - if cache_images == 'disk': - gb += self.npy_files[i].stat().st_size - else: # 'ram' - self.ims[i], self.im_hw0[i], self.im_hw[i] = x # im, hw_orig, hw_resized = load_image(self, i) - gb += self.ims[i].nbytes - pbar.desc = f'{prefix}Caching images ({gb / 1E9:.1f}GB {cache_images})' - pbar.close() - - def cache_labels(self, path=Path('./labels.cache'), prefix=''): - # Cache dataset labels, check images and read shapes - x = {} # dict - nm, nf, ne, nc, msgs = 0, 0, 0, 0, [] # number missing, found, empty, corrupt, messages - desc = f"{prefix}Scanning '{path.parent / path.stem}' images and labels..." - with Pool(NUM_THREADS) as pool: - pbar = tqdm(pool.imap(verify_image_label, zip(self.im_files, self.label_files, repeat(prefix))), - desc=desc, - total=len(self.im_files), - bar_format=BAR_FORMAT) - for im_file, lb, shape, segments, nm_f, nf_f, ne_f, nc_f, msg in pbar: - nm += nm_f - nf += nf_f - ne += ne_f - nc += nc_f - if im_file: - x[im_file] = [lb, shape, segments] - if msg: - msgs.append(msg) - pbar.desc = f"{desc}{nf} found, {nm} missing, {ne} empty, {nc} corrupt" - - pbar.close() - if msgs: - LOGGER.info('\n'.join(msgs)) - if nf == 0: - LOGGER.warning(f'{prefix}WARNING: No labels found in {path}. 
{HELP_URL}') - x['hash'] = get_hash(self.label_files + self.im_files) - x['results'] = nf, nm, ne, nc, len(self.im_files) - x['msgs'] = msgs # warnings - x['version'] = self.cache_version # cache version - try: - np.save(path, x) # save cache for next time - path.with_suffix('.cache.npy').rename(path) # remove .npy suffix - LOGGER.info(f'{prefix}New cache created: {path}') - except Exception as e: - LOGGER.warning(f'{prefix}WARNING: Cache directory {path.parent} is not writeable: {e}') # not writeable - return x - - def __len__(self): - return len(self.im_files) - - # def __iter__(self): - # self.count = -1 - # print('ran dataset iter') - # #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF) - # return self - - def __getitem__(self, index): - index = self.indices[index] # linear, shuffled, or image_weights - - hyp = self.hyp - mosaic = self.mosaic and random.random() < hyp['mosaic'] - if mosaic: - # Load mosaic - img, labels = self.load_mosaic(index) - shapes = None - - # MixUp augmentation - if random.random() < hyp['mixup']: - img, labels = mixup(img, labels, *self.load_mosaic(random.randint(0, self.n - 1))) - - else: - # Load image - img, (h0, w0), (h, w) = self.load_image(index) - - # Letterbox - shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape - img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment) - shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling - - labels = self.labels[index].copy() - if labels.size: # normalized xywh to pixel xyxy format - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], ratio[0] * w, ratio[1] * h, padw=pad[0], padh=pad[1]) - - if self.augment: - img, labels = random_perspective(img, - labels, - degrees=hyp['degrees'], - translate=hyp['translate'], - scale=hyp['scale'], - shear=hyp['shear'], - perspective=hyp['perspective']) - - nl = len(labels) # number of labels - if nl: - labels[:, 1:5] = xyxy2xywhn(labels[:, 1:5], w=img.shape[1], h=img.shape[0], clip=True, eps=1E-3) - - if self.augment: - # Albumentations - img, labels = self.albumentations(img, labels) - nl = len(labels) # update after albumentations - - # HSV color-space - augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v']) - - # Flip up-down - if random.random() < hyp['flipud']: - img = np.flipud(img) - if nl: - labels[:, 2] = 1 - labels[:, 2] - - # Flip left-right - if random.random() < hyp['fliplr']: - img = np.fliplr(img) - if nl: - labels[:, 1] = 1 - labels[:, 1] - - # Cutouts - # labels = cutout(img, labels, p=0.5) - # nl = len(labels) # update after cutout - - labels_out = torch.zeros((nl, 6)) - if nl: - labels_out[:, 1:] = torch.from_numpy(labels) - - # Convert - img = img.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB - img = np.ascontiguousarray(img) - - return torch.from_numpy(img), labels_out, self.im_files[index], shapes - - def load_image(self, i): - # Loads 1 image from dataset index 'i', returns (im, original hw, resized hw) - im, f, fn = self.ims[i], self.im_files[i], self.npy_files[i], - if im is None: # not cached in RAM - if fn.exists(): # load npy - im = np.load(fn) - else: # read image - im = cv2.imread(f) # BGR - assert im is not None, f'Image Not Found {f}' - h0, w0 = im.shape[:2] # orig hw - r = self.img_size / max(h0, w0) # ratio - if r != 1: # if sizes are not equal - interp = cv2.INTER_LINEAR if (self.augment or r > 1) else cv2.INTER_AREA - im = cv2.resize(im, (int(w0 * r), int(h0 * r)), interpolation=interp) - return im, 
(h0, w0), im.shape[:2] # im, hw_original, hw_resized - return self.ims[i], self.im_hw0[i], self.im_hw[i] # im, hw_original, hw_resized - - def cache_images_to_disk(self, i): - # Saves an image as an *.npy file for faster loading - f = self.npy_files[i] - if not f.exists(): - np.save(f.as_posix(), cv2.imread(self.im_files[i])) - - def load_mosaic(self, index): - # YOLOv5 4-mosaic loader. Loads 1 image + 3 random images into a 4-image mosaic - labels4, segments4 = [], [] - s = self.img_size - yc, xc = (int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border) # mosaic center x, y - indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices - random.shuffle(indices) - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = self.load_image(index) - - # place img in img4 - if i == 0: # top left - img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - elif i == 1: # top right - x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - elif i == 2: # bottom left - x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - elif i == 3: # bottom right - x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - - img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - padw = x1a - x1b - padh = y1a - y1b - - # Labels - labels, segments = self.labels[index].copy(), self.segments[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - labels4.append(labels) - segments4.extend(segments) - - # Concat/clip labels - labels4 = np.concatenate(labels4, 0) - for x in (labels4[:, 1:], *segments4): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img4, labels4 = replicate(img4, labels4) # replicate - - # Augment - img4, labels4, segments4 = copy_paste(img4, labels4, segments4, p=self.hyp['copy_paste']) - img4, labels4 = random_perspective(img4, - labels4, - segments4, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - - return img4, labels4 - - def load_mosaic9(self, index): - # YOLOv5 9-mosaic loader. 
Loads 1 image + 8 random images into a 9-image mosaic - labels9, segments9 = [], [] - s = self.img_size - indices = [index] + random.choices(self.indices, k=8) # 8 additional image indices - random.shuffle(indices) - hp, wp = -1, -1 # height, width previous - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = self.load_image(index) - - # place img in img9 - if i == 0: # center - img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - h0, w0 = h, w - c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates - elif i == 1: # top - c = s, s - h, s + w, s - elif i == 2: # top right - c = s + wp, s - h, s + wp + w, s - elif i == 3: # right - c = s + w0, s, s + w0 + w, s + h - elif i == 4: # bottom right - c = s + w0, s + hp, s + w0 + w, s + hp + h - elif i == 5: # bottom - c = s + w0 - w, s + h0, s + w0, s + h0 + h - elif i == 6: # bottom left - c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h - elif i == 7: # left - c = s - w, s + h0 - h, s, s + h0 - elif i == 8: # top left - c = s - w, s + h0 - hp - h, s, s + h0 - hp - - padx, pady = c[:2] - x1, y1, x2, y2 = (max(x, 0) for x in c) # allocate coords - - # Labels - labels, segments = self.labels[index].copy(), self.segments[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padx, pady) for x in segments] - labels9.append(labels) - segments9.extend(segments) - - # Image - img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:] # img9[ymin:ymax, xmin:xmax] - hp, wp = h, w # height, width previous - - # Offset - yc, xc = (int(random.uniform(0, s)) for _ in self.mosaic_border) # mosaic center x, y - img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s] - - # Concat/clip labels - labels9 = np.concatenate(labels9, 0) - labels9[:, [1, 3]] -= xc - labels9[:, [2, 4]] -= yc - c = np.array([xc, yc]) # centers - segments9 = [x - c for x in segments9] - - for x in (labels9[:, 1:], *segments9): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img9, labels9 = replicate(img9, labels9) # replicate - - # Augment - img9, labels9 = random_perspective(img9, - labels9, - segments9, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - - return img9, labels9 - - @staticmethod - def collate_fn(batch): - im, label, path, shapes = zip(*batch) # transposed - for i, lb in enumerate(label): - lb[:, 0] = i # add target image index for build_targets() - return torch.stack(im, 0), torch.cat(label, 0), path, shapes - - @staticmethod - def collate_fn4(batch): - im, label, path, shapes = zip(*batch) # transposed - n = len(shapes) // 4 - im4, label4, path4, shapes4 = [], [], path[:n], shapes[:n] - - ho = torch.tensor([[0.0, 0, 0, 1, 0, 0]]) - wo = torch.tensor([[0.0, 0, 1, 0, 0, 0]]) - s = torch.tensor([[1, 1, 0.5, 0.5, 0.5, 0.5]]) # scale - for i in range(n): # zidane torch.zeros(16,3,720,1280) # BCHW - i *= 4 - if random.random() < 0.5: - im1 = F.interpolate(im[i].unsqueeze(0).float(), scale_factor=2.0, mode='bilinear', - align_corners=False)[0].type(im[i].type()) - lb = label[i] - else: - im1 = torch.cat((torch.cat((im[i], im[i + 1]), 1), torch.cat((im[i + 2], im[i + 3]), 1)), 2) - lb = torch.cat((label[i], label[i + 1] + ho, label[i + 2] + wo, label[i + 3] + ho + wo), 0) * s - im4.append(im1) - label4.append(lb) - - for i, lb in 
enumerate(label4): - lb[:, 0] = i # add target image index for build_targets() - - return torch.stack(im4, 0), torch.cat(label4, 0), path4, shapes4 - - -# Ancillary functions -------------------------------------------------------------------------------------------------- -def flatten_recursive(path=DATASETS_DIR / 'coco128'): - # Flatten a recursive directory by bringing all files to top level - new_path = Path(f'{str(path)}_flat') - if os.path.exists(new_path): - shutil.rmtree(new_path) # delete output folder - os.makedirs(new_path) # make new output folder - for file in tqdm(glob.glob(f'{str(Path(path))}/**/*.*', recursive=True)): - shutil.copyfile(file, new_path / Path(file).name) - - -def extract_boxes(path=DATASETS_DIR / 'coco128'): # from utils.dataloaders import *; extract_boxes() - # Convert detection dataset into classification dataset, with one directory per class - path = Path(path) # images dir - shutil.rmtree(path / 'classification') if (path / 'classification').is_dir() else None # remove existing - files = list(path.rglob('*.*')) - n = len(files) # number of files - for im_file in tqdm(files, total=n): - if im_file.suffix[1:] in IMG_FORMATS: - # image - im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB - h, w = im.shape[:2] - - # labels - lb_file = Path(img2label_paths([str(im_file)])[0]) - if Path(lb_file).exists(): - with open(lb_file) as f: - lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - - for j, x in enumerate(lb): - c = int(x[0]) # class - f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - if not f.parent.is_dir(): - f.parent.mkdir(parents=True) - - b = x[1:] * [w, h, w, h] # box - # b[2:] = b[2:].max() # rectangle to square - b[2:] = b[2:] * 1.2 + 3 # pad - b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int) - - b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image - b[[1, 3]] = np.clip(b[[1, 3]], 0, h) - assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}' - - -def autosplit(path=DATASETS_DIR / 'coco128/images', weights=(0.9, 0.1, 0.0), annotated_only=False): - """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - Usage: from utils.dataloaders import *; autosplit() - Arguments - path: Path to images directory - weights: Train, val, test weights (list, tuple) - annotated_only: Only use images with an annotated txt file - """ - path = Path(path) # images dir - files = sorted(x for x in path.rglob('*.*') if x.suffix[1:].lower() in IMG_FORMATS) # image files only - n = len(files) # number of files - random.seed(0) # for reproducibility - indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split - - txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files - for x in txt: - if (path.parent / x).exists(): - (path.parent / x).unlink() # remove existing - - print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only) - for i, img in tqdm(zip(indices, files), total=n): - if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label - with open(path.parent / txt[i], 'a') as f: - f.write(f'./{img.relative_to(path.parent).as_posix()}' + '\n') # add image to txt file - - -def verify_image_label(args): - # Verify one image-label pair - im_file, lb_file, prefix = args - nm, nf, ne, nc, msg, segments = 0, 0, 0, 0, '', [] # number (missing, found, empty, corrupt), message, segments - try: - # verify 
images - im = Image.open(im_file) - im.verify() # PIL verify - shape = exif_size(im) # image size - assert (shape[0] > 9) & (shape[1] > 9), f'image size {shape} <10 pixels' - assert im.format.lower() in IMG_FORMATS, f'invalid image format {im.format}' - if im.format.lower() in ('jpg', 'jpeg'): - with open(im_file, 'rb') as f: - f.seek(-2, 2) - if f.read() != b'\xff\xd9': # corrupt JPEG - ImageOps.exif_transpose(Image.open(im_file)).save(im_file, 'JPEG', subsampling=0, quality=100) - msg = f'{prefix}WARNING: {im_file}: corrupt JPEG restored and saved' - - # verify labels - if os.path.isfile(lb_file): - nf = 1 # label found - with open(lb_file) as f: - lb = [x.split() for x in f.read().strip().splitlines() if len(x)] - if any(len(x) > 6 for x in lb): # is segment - classes = np.array([x[0] for x in lb], dtype=np.float32) - segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in lb] # (cls, xy1...) - lb = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1) # (cls, xywh) - lb = np.array(lb, dtype=np.float32) - nl = len(lb) - if nl: - assert lb.shape[1] == 5, f'labels require 5 columns, {lb.shape[1]} columns detected' - assert (lb >= 0).all(), f'negative label values {lb[lb < 0]}' - assert (lb[:, 1:] <= 1).all(), f'non-normalized or out of bounds coordinates {lb[:, 1:][lb[:, 1:] > 1]}' - _, i = np.unique(lb, axis=0, return_index=True) - if len(i) < nl: # duplicate row check - lb = lb[i] # remove duplicates - if segments: - segments = [segments[x] for x in i] - msg = f'{prefix}WARNING: {im_file}: {nl - len(i)} duplicate labels removed' - else: - ne = 1 # label empty - lb = np.zeros((0, 5), dtype=np.float32) - else: - nm = 1 # label missing - lb = np.zeros((0, 5), dtype=np.float32) - return im_file, lb, shape, segments, nm, nf, ne, nc, msg - except Exception as e: - nc = 1 - msg = f'{prefix}WARNING: {im_file}: ignoring corrupt image/label: {e}' - return [None, None, None, None, nm, nf, ne, nc, msg] - - -class HUBDatasetStats(): - """ Return dataset statistics dictionary with images and instances counts per split per class - To run in parent directory: export PYTHONPATH="$PWD/yolov5" - Usage1: from utils.dataloaders import *; HUBDatasetStats('coco128.yaml', autodownload=True) - Usage2: from utils.dataloaders import *; HUBDatasetStats('path/to/coco128_with_yaml.zip') - Arguments - path: Path to data.yaml or data.zip (with data.yaml inside data.zip) - autodownload: Attempt to download dataset if not found locally - """ - - def __init__(self, path='coco128.yaml', autodownload=False): - # Initialize class - zipped, data_dir, yaml_path = self._unzip(Path(path)) - try: - with open(check_yaml(yaml_path), errors='ignore') as f: - data = yaml.safe_load(f) # data dict - if zipped: - data['path'] = data_dir - except Exception as e: - raise Exception("error/HUB/dataset_stats/yaml_load") from e - - check_dataset(data, autodownload) # download dataset if missing - self.hub_dir = Path(data['path'] + '-hub') - self.im_dir = self.hub_dir / 'images' - self.im_dir.mkdir(parents=True, exist_ok=True) # makes /images - self.stats = {'nc': data['nc'], 'names': list(data['names'].values())} # statistics dictionary - self.data = data - - @staticmethod - def _find_yaml(dir): - # Return data.yaml file - files = list(dir.glob('*.yaml')) or list(dir.rglob('*.yaml')) # try root level first and then recursive - assert files, f'No *.yaml file found in {dir}' - if len(files) > 1: - files = [f for f in files if f.stem == dir.stem] # prefer *.yaml files that match dir name - assert files, f'Multiple 
*.yaml files found in {dir}, only 1 *.yaml file allowed' - assert len(files) == 1, f'Multiple *.yaml files found: {files}, only 1 *.yaml file allowed in {dir}' - return files[0] - - def _unzip(self, path): - # Unzip data.zip - if not str(path).endswith('.zip'): # path is data.yaml - return False, None, path - assert Path(path).is_file(), f'Error unzipping {path}, file not found' - ZipFile(path).extractall(path=path.parent) # unzip - dir = path.with_suffix('') # dataset directory == zip name - assert dir.is_dir(), f'Error unzipping {path}, {dir} not found. path/to/abc.zip MUST unzip to path/to/abc/' - return True, str(dir), self._find_yaml(dir) # zipped, data_dir, yaml_path - - def _hub_ops(self, f, max_dim=1920): - # HUB ops for 1 image 'f': resize and save at reduced quality in /dataset-hub for web/app viewing - f_new = self.im_dir / Path(f).name # dataset-hub image filename - try: # use PIL - im = Image.open(f) - r = max_dim / max(im.height, im.width) # ratio - if r < 1.0: # image too large - im = im.resize((int(im.width * r), int(im.height * r))) - im.save(f_new, 'JPEG', quality=50, optimize=True) # save - except Exception as e: # use OpenCV - print(f'WARNING: HUB ops PIL failure {f}: {e}') - im = cv2.imread(f) - im_height, im_width = im.shape[:2] - r = max_dim / max(im_height, im_width) # ratio - if r < 1.0: # image too large - im = cv2.resize(im, (int(im_width * r), int(im_height * r)), interpolation=cv2.INTER_AREA) - cv2.imwrite(str(f_new), im) - - def get_json(self, save=False, verbose=False): - # Return dataset JSON for Ultralytics HUB - def _round(labels): - # Update labels to integer class and 6 decimal place floats - return [[int(c), *(round(x, 4) for x in points)] for c, *points in labels] - - for split in 'train', 'val', 'test': - if self.data.get(split) is None: - self.stats[split] = None # i.e. no test set - continue - dataset = LoadImagesAndLabels(self.data[split]) # load dataset - x = np.array([ - np.bincount(label[:, 0].astype(int), minlength=self.data['nc']) - for label in tqdm(dataset.labels, total=dataset.n, desc='Statistics')]) # shape(128x80) - self.stats[split] = { - 'instance_stats': { - 'total': int(x.sum()), - 'per_class': x.sum(0).tolist()}, - 'image_stats': { - 'total': dataset.n, - 'unlabelled': int(np.all(x == 0, 1).sum()), - 'per_class': (x > 0).sum(0).tolist()}, - 'labels': [{ - str(Path(k).name): _round(v.tolist())} for k, v in zip(dataset.im_files, dataset.labels)]} - - # Save, print and return - if save: - stats_path = self.hub_dir / 'stats.json' - print(f'Saving {stats_path.resolve()}...') - with open(stats_path, 'w') as f: - json.dump(self.stats, f) # save stats.json - if verbose: - print(json.dumps(self.stats, indent=2, sort_keys=False)) - return self.stats - - def process_images(self): - # Compress images for Ultralytics HUB - for split in 'train', 'val', 'test': - if self.data.get(split) is None: - continue - dataset = LoadImagesAndLabels(self.data[split]) # load dataset - desc = f'{split} images' - for _ in tqdm(ThreadPool(NUM_THREADS).imap(self._hub_ops, dataset.im_files), total=dataset.n, desc=desc): - pass - print(f'Done. All images saved to {self.im_dir}') - return self.im_dir - - -# Classification dataloaders ------------------------------------------------------------------------------------------- -class ClassificationDataset(torchvision.datasets.ImageFolder): - """ - YOLOv5 Classification Dataset. 
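-    Set cache=True/'ram' to keep decoded images in memory, or cache='disk' to store
-    .npy copies next to the image files (see __getitem__). Images are read with
-    OpenCV in BGR order.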
- Arguments - root: Dataset path - transform: torchvision transforms, used by default - album_transform: Albumentations transforms, used if installed - """ - - def __init__(self, root, augment, imgsz, cache=False): - super().__init__(root=root) - self.torch_transforms = classify_transforms(imgsz) - self.album_transforms = classify_albumentations(augment, imgsz) if augment else None - self.cache_ram = cache is True or cache == 'ram' - self.cache_disk = cache == 'disk' - self.samples = [list(x) + [Path(x[0]).with_suffix('.npy'), None] for x in self.samples] # file, index, npy, im - - def __getitem__(self, i): - f, j, fn, im = self.samples[i] # filename, index, filename.with_suffix('.npy'), image - if self.cache_ram and im is None: - im = self.samples[i][3] = cv2.imread(f) - elif self.cache_disk: - if not fn.exists(): # load npy - np.save(fn.as_posix(), cv2.imread(f)) - im = np.load(fn) - else: # read image - im = cv2.imread(f) # BGR - if self.album_transforms: - sample = self.album_transforms(image=cv2.cvtColor(im, cv2.COLOR_BGR2RGB))["image"] - else: - sample = self.torch_transforms(im) - return sample, j - - -def create_classification_dataloader(path, - imgsz=224, - batch_size=16, - augment=True, - cache=False, - rank=-1, - workers=8, - shuffle=True): - # Returns Dataloader object to be used with YOLOv5 Classifier - with torch_distributed_zero_first(rank): # init dataset *.cache only once if DDP - dataset = ClassificationDataset(root=path, imgsz=imgsz, augment=augment, cache=cache) - batch_size = min(batch_size, len(dataset)) - nd = torch.cuda.device_count() - nw = min([os.cpu_count() // max(nd, 1), batch_size if batch_size > 1 else 0, workers]) - sampler = None if rank == -1 else distributed.DistributedSampler(dataset, shuffle=shuffle) - generator = torch.Generator() - generator.manual_seed(0) - return InfiniteDataLoader(dataset, - batch_size=batch_size, - shuffle=shuffle and sampler is None, - num_workers=nw, - sampler=sampler, - pin_memory=PIN_MEMORY, - worker_init_fn=seed_worker, - generator=generator) # or DataLoader(persistent_workers=True) diff --git a/spaces/ZhaoYoujia/ImageRecognition/app.py b/spaces/ZhaoYoujia/ImageRecognition/app.py deleted file mode 100644 index e3349ece20f15218152617107586e03d3a696956..0000000000000000000000000000000000000000 --- a/spaces/ZhaoYoujia/ImageRecognition/app.py +++ /dev/null @@ -1,99 +0,0 @@ -from langchain.agents import load_tools -from langchain.agents import initialize_agent -from langchain.agents import AgentType -from langchain.llms import OpenAI -from langchain.chat_models import AzureChatOpenAI -from langchain.chains.conversation.memory import ConversationBufferWindowMemory -from langchain.chains import LLMChain -from langchain.prompts import PromptTemplate - -import os -import gradio as gr - -OPENAI_API_KEY=os.getenv("OPENAI_API_KEY") -OPENAI_API_BASE=os.getenv("OPENAI_API_BASE") -DEP_NAME=os.getenv("deployment_name") - -llm=AzureChatOpenAI(deployment_name=DEP_NAME,openai_api_base=OPENAI_API_BASE,openai_api_key=OPENAI_API_KEY,openai_api_version="2023-03-15-preview",model_name="gpt-3.5-turbo") - -import torch -from transformers import BlipProcessor, BlipForConditionalGeneration - -image_to_text_model = "Salesforce/blip-image-captioning-large" -device = 'cuda' if torch.cuda.is_available() else 'cpu' - -processor = BlipProcessor.from_pretrained(image_to_text_model) -model = BlipForConditionalGeneration.from_pretrained(image_to_text_model).to(device) - -from transformers.models.oneformer.modeling_oneformer import OneFormerModelOutput -import 
requests -from PIL import Image - -def describeImage(image_url): -# image_object = Image.open(requests.get(image_url, stream=True).raw).convert('RGB') - image_object = Image.open(image_url).convert('RGB') - # image - inputs = processor(image_object, return_tensors="pt").to(device) - outputs = model.generate(**inputs) - return processor.decode(outputs[0], skip_special_tokens=True) - -from langchain.tools import BaseTool - -class DescribeImageTool(BaseTool): - name = "Describe Image Tool" - description = 'use this tool to describe an image.' - - def _run(self, url: str): - description = describeImage(url) - return description - - def _arun(self, query: str): - raise NotImplementedError("Async operation not supported yet") - -tools = [DescribeImageTool()] - -agent = initialize_agent( - agent='chat-conversational-react-description', - tools=tools, - llm=llm, - verbose=True, - max_iterations=3, - early_stopping_method='generate', - memory=ConversationBufferWindowMemory( - memory_key='chat_history', - k=5, - return_messages=True - ) -) - -def enToChinese(english): - pp = "Please translate the following sentence from English to Chinese:{english}" - prompt = PromptTemplate( - input_variables=["english"], - template=pp - ) - llchain=LLMChain(llm=llm,prompt=prompt) - return llchain.run(english) - - -def chToEnglish(chinese): - pp = "Please translate the following sentence from Chinese to English:{chinese}" - prompt = PromptTemplate( - input_variables=["chinese"], - template=pp - ) - llchain=LLMChain(llm=llm,prompt=prompt) - return llchain.run(chinese) - -def image_recognition(image_url,input): - input = chToEnglish(input) - return enToChinese(agent(f"{input}:\n{image_url}")['output']) - -with gr.Blocks() as demo: - image_url = gr.Image(type="filepath",label="请选择一张图片") - input = gr.Textbox(label='问题', placeholder="", lines=1) - output = gr.Textbox(label='答案', placeholder="", lines=2,interactive=False) - submit = gr.Button('提问',variant="primary") - submit.click(image_recognition,inputs=[image_url,input],outputs=output) - -demo.launch() \ No newline at end of file diff --git a/spaces/Zitang/Self-attention-based-V1MT-motion-model/flow_tools.py b/spaces/Zitang/Self-attention-based-V1MT-motion-model/flow_tools.py deleted file mode 100644 index 002359ac545a5e63154ae2d4f956912468999e7e..0000000000000000000000000000000000000000 --- a/spaces/Zitang/Self-attention-based-V1MT-motion-model/flow_tools.py +++ /dev/null @@ -1,773 +0,0 @@ -import matplotlib.pyplot as plt -import torch -import cv2 -import numpy as np -from matplotlib.colors import hsv_to_rgb -import torch.nn.functional as tf -from PIL import Image -from os.path import * -from io import BytesIO - -cv2.setNumThreads(0) -cv2.ocl.setUseOpenCL(False) -TAG_CHAR = np.array([202021.25], np.float32) - - -def load_flow(path): - # if path.endswith('.png'): - # # for KITTI which uses 16bit PNG images - # # see 'https://github.com/ClementPinard/FlowNetPytorch/blob/master/datasets/KITTI.py' - # # The -1 is here to specify not to change the image depth (16bit), and is compatible - # # with both OpenCV2 and OpenCV3 - # flo_file = cv2.imread(path, -1) - # flo_img = flo_file[:, :, 2:0:-1].astype(np.float32) - # invalid = (flo_file[:, :, 0] == 0) # mask - # flo_img = flo_img - 32768 - # flo_img = flo_img / 64 - # flo_img[np.abs(flo_img) < 1e-10] = 1e-10 - # flo_img[invalid, :] = 0 - # return flo_img - if path.endswith('.png'): - # this method is only for the flow data generated by self-rendering - # read json file and get "forward" and "backward" flow - import json - 
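# NOTE: structure assumed from the keys read below --
-        # data_ranges.json: {"forward_flow": {"max": <float-as-str>, "min": <float-as-str>}, ...};
-        # the 16-bit PNG is rescaled from [0, 65535] back to [min, max] before use.
-        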
path_range = path.replace(path.name, 'data_ranges.json') - with open(path_range, 'r') as f: - flow_dict = json.load(f) - flow_forward = flow_dict['forward_flow'] - # get the max and min value of the flow - max_value = float(flow_forward["max"]) - min_value = float(flow_forward["min"]) - # read the flow data - flow_file = cv2.imread(path, -1).astype(np.float32) - # scale the flow data - flow_file = flow_file * (max_value - min_value) / 65535 + min_value - # only keep the last two channels - flow_file = flow_file[:, :, 1:] - return flow_file - - # scaling = {"min": min_value.item(), "max": max_value.item()} - # data = (data - min_value) * 65535 / (max_value - min_value) - # data = data.astype(np.uint16) - - elif path.endswith('.flo'): - with open(path, 'rb') as f: - magic = np.fromfile(f, np.float32, count=1) - assert (202021.25 == magic), 'Magic number incorrect. Invalid .flo file' - h = np.fromfile(f, np.int32, count=1)[0] - w = np.fromfile(f, np.int32, count=1)[0] - data = np.fromfile(f, np.float32, count=2 * w * h) - # Reshape data into 3D array (columns, rows, bands) - data2D = np.resize(data, (w, h, 2)) - return data2D - elif path.endswith('.pfm'): - file = open(path, 'rb') - - color = None - width = None - height = None - scale = None - endian = None - header = file.readline().rstrip() - if header == b'PF': - color = True - elif header == b'Pf': - color = False - else: - raise Exception('Not a PFM file.') - - dim_match = re.match(rb'^(\d+)\s(\d+)\s$', file.readline()) - if dim_match: - width, height = map(int, dim_match.groups()) - else: - raise Exception('Malformed PFM header.') - - scale = float(file.readline().rstrip()) - if scale < 0: # little-endian - endian = '<' - scale = -scale - else: - endian = '>' # big-endian - data = np.fromfile(file, endian + 'f') - shape = (height, width, 3) if color else (height, width) - data = np.reshape(data, shape) - data = np.flipud(data).astype(np.float32) - if len(data.shape) == 2: - return data - else: - return data[:, :, :-1] - elif path.endswith('.bin') or path.endswith('.raw'): - return np.load(path) - else: - raise NotImplementedError("flow type") - - -def make_colorwheel(): - """ - Generates a color wheel for optical flow visualization as presented in: - Baker et al. "A Database and Evaluation Methodology for Optical Flow" (ICCV, 2007) - URL: http://vision.middlebury.edu/flow/flowEval-iccv07.pdf - - Code follows the original C++ source code of Daniel Scharstein. - Code follows the the Matlab source code of Deqing Sun. 
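-    The wheel has RY + YG + GC + CB + BM + MR = 15 + 6 + 4 + 11 + 13 + 6 = 55 entries.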
- - Returns: - np.ndarray: Color wheel - """ - - RY = 15 - YG = 6 - GC = 4 - CB = 11 - BM = 13 - MR = 6 - - ncols = RY + YG + GC + CB + BM + MR - colorwheel = np.zeros((ncols, 3)) - col = 0 - - # RY - colorwheel[0:RY, 0] = 255 - colorwheel[0:RY, 1] = np.floor(255 * np.arange(0, RY) / RY) - col = col + RY - # YG - colorwheel[col:col + YG, 0] = 255 - np.floor(255 * np.arange(0, YG) / YG) - colorwheel[col:col + YG, 1] = 255 - col = col + YG - # GC - colorwheel[col:col + GC, 1] = 255 - colorwheel[col:col + GC, 2] = np.floor(255 * np.arange(0, GC) / GC) - col = col + GC - # CB - colorwheel[col:col + CB, 1] = 255 - np.floor(255 * np.arange(CB) / CB) - colorwheel[col:col + CB, 2] = 255 - col = col + CB - # BM - colorwheel[col:col + BM, 2] = 255 - colorwheel[col:col + BM, 0] = np.floor(255 * np.arange(0, BM) / BM) - col = col + BM - # MR - colorwheel[col:col + MR, 2] = 255 - np.floor(255 * np.arange(MR) / MR) - colorwheel[col:col + MR, 0] = 255 - return colorwheel - - -def flow_uv_to_colors(u, v, convert_to_bgr=False): - """ - Applies the flow color wheel to (possibly clipped) flow components u and v. - - According to the C++ source code of Daniel Scharstein - According to the Matlab source code of Deqing Sun - - Args: - u (np.ndarray): Input horizontal flow of shape [H,W] - v (np.ndarray): Input vertical flow of shape [H,W] - convert_to_bgr (bool, optional): Convert output image to BGR. Defaults to False. - - Returns: - np.ndarray: Flow visualization image of shape [H,W,3] - """ - flow_image = np.zeros((u.shape[0], u.shape[1], 3), np.uint8) - colorwheel = make_colorwheel() # shape [55x3] - ncols = colorwheel.shape[0] - rad = np.sqrt(np.square(u) + np.square(v)) - a = np.arctan2(-v, -u) / np.pi - fk = (a + 1) / 2 * (ncols - 1) - k0 = np.floor(fk).astype(np.int32) - k1 = k0 + 1 - k1[k1 == ncols] = 0 - f = fk - k0 - for i in range(colorwheel.shape[1]): - tmp = colorwheel[:, i] - col0 = tmp[k0] / 255.0 - col1 = tmp[k1] / 255.0 - col = (1 - f) * col0 + f * col1 - idx = (rad <= 1) - col[idx] = 1 - rad[idx] * (1 - col[idx]) - col[~idx] = col[~idx] * 0.75 # out of range - # Note the 2-i => BGR instead of RGB - ch_idx = 2 - i if convert_to_bgr else i - flow_image[:, :, ch_idx] = np.floor(255 * col) - return flow_image - - -# absolut color flow -def flow_to_image(flow, max_flow=256): - if max_flow is not None: - max_flow = max(max_flow, 1.) - else: - max_flow = np.max(flow) - - n = 8 - u, v = flow[:, :, 0], flow[:, :, 1] - mag = np.sqrt(np.square(u) + np.square(v)) - angle = np.arctan2(v, u) - im_h = np.mod(angle / (2 * np.pi) + 1, 1) - im_s = np.clip(mag * n / max_flow, a_min=0, a_max=1) - im_v = np.clip(n - im_s, a_min=0, a_max=1) - im = hsv_to_rgb(np.stack([im_h, im_s, im_v], 2)) - return (im * 255).astype(np.uint8) - - -# relative color -def flow_to_image_relative(flow_uv, clip_flow=None, convert_to_bgr=False): - """ - Expects a two dimensional flow image of shape. - - Args: - flow_uv (np.ndarray): Flow UV image of shape [H,W,2] - clip_flow (float, optional): Clip maximum of flow values. Defaults to None. - convert_to_bgr (bool, optional): Convert output image to BGR. Defaults to False. 
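-    Note: u and v are normalised by the per-image maximum flow magnitude, so colours
-    are only comparable within a single image (flow_to_image uses a fixed max_flow instead).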
- - Returns: - np.ndarray: Flow visualization image of shape [H,W,3] - """ - assert flow_uv.ndim == 3, 'input flow must have three dimensions' - assert flow_uv.shape[2] == 2, 'input flow must have shape [H,W,2]' - if clip_flow is not None: - flow_uv = np.clip(flow_uv, 0, clip_flow) - u = flow_uv[:, :, 0] - v = flow_uv[:, :, 1] - rad = np.sqrt(np.square(u) + np.square(v)) - rad_max = np.max(rad) - epsilon = 1e-5 - u = u / (rad_max + epsilon) - v = v / (rad_max + epsilon) - return flow_uv_to_colors(u, v, convert_to_bgr) - - -def resize_flow(flow, new_shape): - _, _, h, w = flow.shape - new_h, new_w = new_shape - flow = torch.nn.functional.interpolate(flow, (new_h, new_w), - mode='bilinear', align_corners=True) - scale_h, scale_w = h / float(new_h), w / float(new_w) - flow[:, 0] /= scale_w - flow[:, 1] /= scale_h - return flow - - -def evaluate_flow_api(gt_flows, pred_flows): - if len(gt_flows.shape) == 3: - gt_flows = gt_flows.unsqueeze(0) - if len(pred_flows.shape) == 3: - pred_flows = pred_flows.unsqueeze(0) - pred_flows = pred_flows.detach().cpu().numpy().transpose([0, 2, 3, 1]) - gt_flows = gt_flows.detach().cpu().numpy().transpose([0, 2, 3, 1]) - return evaluate_flow(gt_flows, pred_flows) - - -def evaluate_flow(gt_flows, pred_flows, moving_masks=None): - # credit "undepthflow/eval/evaluate_flow.py" - def calculate_error_rate(epe_map, gt_flow, mask): - bad_pixels = np.logical_and( - epe_map * mask > 3, - epe_map * mask / np.maximum( - np.sqrt(np.sum(np.square(gt_flow), axis=2)), 1e-10) > 0.05) - return bad_pixels.sum() / mask.sum() * 100. - - error, error_noc, error_occ, error_move, error_static, error_rate = \ - 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 - error_move_rate, error_static_rate = 0.0, 0.0 - B = len(gt_flows) - for gt_flow, pred_flow, i in zip(gt_flows, pred_flows, range(B)): - H, W = gt_flow.shape[:2] - - h, w = pred_flow.shape[:2] - pred_flow = np.copy(pred_flow) - pred_flow[:, :, 0] = pred_flow[:, :, 0] / w * W - pred_flow[:, :, 1] = pred_flow[:, :, 1] / h * H - - flo_pred = cv2.resize(pred_flow, (W, H), interpolation=cv2.INTER_LINEAR) - - epe_map = np.sqrt( - np.sum(np.square(flo_pred[:, :, :2] - gt_flow[:, :, :2]), - axis=2)) - if gt_flow.shape[-1] == 2: - error += np.mean(epe_map) - - elif gt_flow.shape[-1] == 4: - error += np.sum(epe_map * gt_flow[:, :, 2]) / np.sum(gt_flow[:, :, 2]) - noc_mask = gt_flow[:, :, -1] - error_noc += np.sum(epe_map * noc_mask) / np.sum(noc_mask) - - error_occ += np.sum(epe_map * (gt_flow[:, :, 2] - noc_mask)) / max( - np.sum(gt_flow[:, :, 2] - noc_mask), 1.0) - - error_rate += calculate_error_rate(epe_map, gt_flow[:, :, 0:2], - gt_flow[:, :, 2]) - - if moving_masks is not None: - move_mask = moving_masks[i] - - error_move_rate += calculate_error_rate( - epe_map, gt_flow[:, :, 0:2], gt_flow[:, :, 2] * move_mask) - error_static_rate += calculate_error_rate( - epe_map, gt_flow[:, :, 0:2], - gt_flow[:, :, 2] * (1.0 - move_mask)) - - error_move += np.sum(epe_map * gt_flow[:, :, 2] * - move_mask) / np.sum(gt_flow[:, :, 2] * - move_mask) - error_static += np.sum(epe_map * gt_flow[:, :, 2] * ( - 1.0 - move_mask)) / np.sum(gt_flow[:, :, 2] * - (1.0 - move_mask)) - - if gt_flows[0].shape[-1] == 4: - res = [error / B, error_noc / B, error_occ / B, error_rate / B] - if moving_masks is not None: - res += [error_move / B, error_static / B] - return res - else: - return [error / B] - - -class InputPadder: - """ Pads images such that dimensions are divisible by 32 """ - - def __init__(self, dims, mode='sintel'): - self.ht, self.wd = dims[-2:] - pad_ht = (((self.ht // 
16) + 1) * 16 - self.ht) % 16 - pad_wd = (((self.wd // 16) + 1) * 16 - self.wd) % 16 - if mode == 'sintel': - self._pad = [pad_wd // 2, pad_wd - pad_wd // 2, pad_ht // 2, pad_ht - pad_ht // 2] - else: - self._pad = [pad_wd // 2, pad_wd - pad_wd // 2, 0, pad_ht] - - def pad(self, inputs): - return [tf.pad(x, self._pad, mode='replicate') for x in inputs] - - def unpad(self, x): - ht, wd = x.shape[-2:] - c = [self._pad[2], ht - self._pad[3], self._pad[0], wd - self._pad[1]] - - return x[..., c[0]:c[1], c[2]:c[3]] - - -class ImageInputZoomer: - """ Pads images such that dimensions are divisible by 32 """ - - def __init__(self, dims, factor=32): - self.ht, self.wd = dims[-2:] - hf = self.ht % factor - wf = self.wd % factor - pad_ht = (self.ht // factor + 1) * factor if hf > (factor / 2) else (self.ht // factor) * factor - pad_wd = (self.wd // factor + 1) * factor if wf > (factor / 2) else (self.wd // factor) * factor - self.size = [pad_wd, pad_ht] - - def zoom(self, inputs): - return [ - torch.from_numpy(cv2.resize(x.cpu().numpy().transpose(1, 2, 0), dsize=self.size, - interpolation=cv2.INTER_CUBIC).transpose(2, 0, 1)) for x in inputs] - - def unzoom(self, inputs): - return [cv2.resize(x.cpu().squeeze().numpy().transpose(1, 2, 0), dsize=(self.wd, self.ht), - interpolation=cv2.INTER_CUBIC) for x in inputs] - - -def readFlow(fn): - """ Read .flo file in Middlebury format""" - # Code adapted from: - # http://stackoverflow.com/questions/28013200/reading-middlebury-flow-files-with-python-bytes-array-numpy - - # WARNING: this will work on little-endian architectures (eg Intel x86) only! - # print 'fn = %s'%(fn) - with open(fn, 'rb') as f: - magic = np.fromfile(f, np.float32, count=1) - if 202021.25 != magic: - print('Magic number incorrect. Invalid .flo file') - return None - else: - w = np.fromfile(f, np.int32, count=1) - h = np.fromfile(f, np.int32, count=1) - # print 'Reading %d x %d flo file\n' % (w, h) - data = np.fromfile(f, np.float32, count=2 * int(w) * int(h)) - # Reshape data into 3D array (columns, rows, bands) - # The reshape here is for visualization, the original code is (w,h,2) - return np.resize(data, (int(h), int(w), 2)) - - -import re - - -def readPFM(file): - file = open(file, 'rb') - - color = None - width = None - height = None - scale = None - endian = None - - header = file.readline().rstrip() - if header == b'PF': - color = True - elif header == b'Pf': - color = False - else: - raise Exception('Not a PFM file.') - - dim_match = re.match(rb'^(\d+)\s(\d+)\s$', file.readline()) - if dim_match: - width, height = map(int, dim_match.groups()) - else: - raise Exception('Malformed PFM header.') - - scale = float(file.readline().rstrip()) - if scale < 0: # little-endian - endian = '<' - scale = -scale - else: - endian = '>' # big-endian - - data = np.fromfile(file, endian + 'f') - shape = (height, width, 3) if color else (height, width) - - data = np.reshape(data, shape) - data = np.flipud(data) - return data - - -def writeFlow(filename, uv, v=None): - """ Write optical flow to file. - - If v is None, uv is assumed to contain both u and v channels, - stacked in depth. - Original code by Deqing Sun, adapted from Daniel Scharstein. 
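-    File layout: float32 magic 202021.25, int32 width, int32 height, then
-    interleaved float32 u/v values in row-major order.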
- """ - nBands = 2 - - if v is None: - assert (uv.ndim == 3) - assert (uv.shape[2] == 2) - u = uv[:, :, 0] - v = uv[:, :, 1] - else: - u = uv - - assert (u.shape == v.shape) - height, width = u.shape - f = open(filename, 'wb') - # write the header - f.write(TAG_CHAR) - np.array(width).astype(np.int32).tofile(f) - np.array(height).astype(np.int32).tofile(f) - # arrange into matrix form - tmp = np.zeros((height, width * nBands)) - tmp[:, np.arange(width) * 2] = u - tmp[:, np.arange(width) * 2 + 1] = v - tmp.astype(np.float32).tofile(f) - f.close() - - -def readFlowKITTI(filename): - flow = cv2.imread(filename, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR) - flow = flow[:, :, ::-1].astype(np.float32) - flow, valid = flow[:, :, :2], flow[:, :, 2] - flow = (flow - 2 ** 15) / 64.0 - return flow, valid - - -def readDispKITTI(filename): - disp = cv2.imread(filename, cv2.IMREAD_ANYDEPTH) / 256.0 - valid = disp > 0.0 - flow = np.stack([-disp, np.zeros_like(disp)], -1) - return flow, valid - - -def writeFlowKITTI(filename, uv): - uv = 64.0 * uv + 2 ** 15 - valid = np.ones([uv.shape[0], uv.shape[1], 1]) - uv = np.concatenate([uv, valid], axis=-1).astype(np.uint16) - cv2.imwrite(filename, uv[..., ::-1]) - - -def read_gen(file_name, pil=False): - ext = splitext(file_name)[-1] - if ext == '.png' or ext == '.jpeg' or ext == '.ppm' or ext == '.jpg': - return Image.open(file_name) - elif ext == '.bin' or ext == '.raw': - return np.load(file_name) - elif ext == '.flo': - return readFlow(file_name).astype(np.float32) - elif ext == '.pfm': - flow = readPFM(file_name).astype(np.float32) - if len(flow.shape) == 2: - return flow - else: - return flow[:, :, :-1] - return [] - - -def flow_error_image_np(flow_pred, flow_gt, mask_occ, mask_noc=None, log_colors=True): - """Visualize the error between two flows as 3-channel color image. - Adapted from the KITTI C++ devkit. - Args: - flow_pred: prediction flow of shape [ height, width, 2]. - flow_gt: ground truth - mask_occ: flow validity mask of shape [num_batch, height, width, 1]. - Equals 1 at (occluded and non-occluded) valid pixels. - mask_noc: Is 1 only at valid pixels which are not occluded. 
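-        log_colors: If True (default), colour the error with the logarithmic
-            KITTI colour map; otherwise use a linear map in which errors in
-            occluded areas show up in red.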
- """ - # mask_noc = tf.ones(tf.shape(mask_occ)) if mask_noc is None else mask_noc - mask_noc = np.ones(mask_occ.shape) if mask_noc is None else mask_noc - diff_sq = (flow_pred - flow_gt) ** 2 - # diff = tf.sqrt(tf.reduce_sum(diff_sq, [3], keep_dims=True)) - diff = np.sqrt(np.sum(diff_sq, axis=2, keepdims=True)) - if log_colors: - height, width, _ = flow_pred.shape - # num_batch, height, width, _ = tf.unstack(tf.shape(flow_1)) - colormap = [ - [0, 0.0625, 49, 54, 149], - [0.0625, 0.125, 69, 117, 180], - [0.125, 0.25, 116, 173, 209], - [0.25, 0.5, 171, 217, 233], - [0.5, 1, 224, 243, 248], - [1, 2, 254, 224, 144], - [2, 4, 253, 174, 97], - [4, 8, 244, 109, 67], - [8, 16, 215, 48, 39], - [16, 1000000000.0, 165, 0, 38]] - colormap = np.asarray(colormap, dtype=np.float32) - colormap[:, 2:5] = colormap[:, 2:5] / 255 - # mag = tf.sqrt(tf.reduce_sum(tf.square(flow_2), 3, keep_dims=True)) - tempp = np.square(flow_gt) - # temp = np.sum(tempp, axis=2, keep_dims=True) - # mag = np.sqrt(temp) - mag = np.sqrt(np.sum(tempp, axis=2, keepdims=True)) - # error = tf.minimum(diff / 3, 20 * diff / mag) - error = np.minimum(diff / 3, 20 * diff / (mag + 1e-7)) - im = np.zeros([height, width, 3]) - for i in range(colormap.shape[0]): - colors = colormap[i, :] - cond = np.logical_and(np.greater_equal(error, colors[0]), np.less(error, colors[1])) - # temp=np.tile(cond, [1, 1, 3]) - im = np.where(np.tile(cond, [1, 1, 3]), np.ones([height, width, 1]) * colors[2:5], im) - # temp=np.cast(mask_noc, np.bool) - # im = np.where(np.tile(np.cast(mask_noc, np.bool), [1, 1, 3]), im, im * 0.5) - im = np.where(np.tile(mask_noc == 1, [1, 1, 3]), im, im * 0.5) - im = im * mask_occ - else: - error = (np.minimum(diff, 5) / 5) * mask_occ - im_r = error # errors in occluded areas will be red - im_g = error * mask_noc - im_b = error * mask_noc - im = np.concatenate([im_r, im_g, im_b], axis=2) - # im = np.concatenate(axis=2, values=[im_r, im_g, im_b]) - return im[:, :, ::-1] - - -def viz_img_seq(img_list=[], flow_list=[], batch_index=0, if_debug=True): - '''visulize image sequence from cuda''' - if if_debug: - - assert len(img_list) != 0 - if len(img_list[0].shape) == 3: - img_list = [np.expand_dims(img, axis=0) for img in img_list] - elif img_list[0].shape[1] == 1: - img_list = [img1[batch_index].detach().cpu().permute(1, 2, 0).numpy() for img1 in img_list] - img_list = [cv2.cvtColor(flo * 255, cv2.COLOR_GRAY2BGR) for flo in img_list] - elif img_list[0].shape[1] == 2: - img_list = [img1[batch_index].detach().cpu().permute(1, 2, 0).numpy() for img1 in img_list] - img_list = [flow_to_image_relative(flo) / 255.0 for flo in img_list] - else: - img_list = [img1[batch_index].detach().cpu().permute(1, 2, 0).numpy() for img1 in img_list] - - if len(flow_list) == 0: - flow_list = [np.zeros_like(img) for img in img_list] - elif len(flow_list[0].shape) == 3: - flow_list = [np.expand_dims(img, axis=0) for img in flow_list] - elif flow_list[0].shape[1] == 1: - flow_list = [img1[batch_index].detach().cpu().permute(1, 2, 0).numpy() for img1 in flow_list] - flow_list = [cv2.cvtColor(flo * 255, cv2.COLOR_GRAY2BGR) for flo in flow_list] - elif flow_list[0].shape[1] == 2: - flow_list = [img1[batch_index].detach().cpu().permute(1, 2, 0).numpy() for img1 in flow_list] - flow_list = [flow_to_image_relative(flo) / 255.0 for flo in flow_list] - else: - flow_list = [img1[batch_index].detach().cpu().permute(1, 2, 0).numpy() for img1 in flow_list] - - if img_list[0].max() > 10: - img_list = [img / 255.0 for img in img_list] - if flow_list[0].max() > 10: - 
flow_list = [img / 255.0 for img in flow_list] - - while len(img_list) > len(flow_list): - flow_list.append(np.zeros_like(flow_list[-1])) - while len(flow_list) > len(img_list): - img_list.append(np.zeros_like(img_list[-1])) - img_flo = np.concatenate([flow_list[0], img_list[0]], axis=0) - # map flow to rgb image - for i in range(1, len(img_list)): - temp = np.concatenate([flow_list[i], img_list[i]], axis=0) - img_flo = np.concatenate([img_flo, temp], axis=1) - cv2.imshow('image', img_flo[:, :, [2, 1, 0]]) - cv2.waitKey() - else: - return - - -def plt_show_img_flow(img_list=[], flow_list=[], batch_index=0): - assert len(img_list) != 0 - if len(img_list[0].shape) == 3: - img_list = [np.expand_dims(img, axis=0) for img in img_list] - elif img_list[0].shape[1] == 1: - img_list = [img1[batch_index].detach().cpu().permute(1, 2, 0).numpy() for img1 in img_list] - img_list = [cv2.cvtColor(flo * 255, cv2.COLOR_GRAY2BGR) for flo in img_list] - elif img_list[0].shape[1] == 2: - img_list = [img1[batch_index].detach().cpu().permute(1, 2, 0).numpy() for img1 in img_list] - img_list = [flow_to_image_relative(flo) / 255.0 for flo in img_list] - else: - img_list = [img1[batch_index].detach().cpu().permute(1, 2, 0).numpy() for img1 in img_list] - - assert flow_list[0].shape[1] == 2 - flow_vec = [img1[batch_index].detach().cpu().permute(1, 2, 0).numpy() for img1 in flow_list] - flow_list = [flow_to_image_relative(flo) / 255.0 for flo in flow_vec] - - col = len(flow_list) // 2 - fig = plt.figure(figsize=(10, 8)) - for i in range(len(flow_list)): - ax1 = fig.add_subplot(2, col, i + 1) - plot_quiver(ax1, flow=flow_vec[i], mask=flow_list[i], spacing=(30 * flow_list[i].shape[0]) // 512) - if i == len(flow_list) - 1: - plt.title("Final Flow Result") - else: - plt.title("Flow from decoder (Layer %d)" % i) - plt.xticks([]) - plt.yticks([]) - plt.tight_layout() - - # save image to buffer - buf = BytesIO() - plt.savefig(buf, format='png') - buf.seek(0) - # convert buffer to image - img = Image.open(buf) - # convert image to numpy array - img = np.asarray(img) - return img - - -def plt_attention(attention, h, w): - col = len(attention) // 2 - fig = plt.figure(figsize=(10, 5)) - - for i in range(len(attention)): - viz = attention[i][0, :, :, h, w].detach().cpu().numpy() - # viz = viz[7:-7, 7:-7] - if i == 0: - viz_all = viz - else: - viz_all = viz_all + viz - - ax1 = fig.add_subplot(2, col + 1, i + 1) - img = ax1.imshow(viz, cmap="rainbow", interpolation="bilinear") - plt.colorbar(img, ax=ax1) - ax1.scatter(h, w, color='red') - plt.title("Attention of Iteration %d" % (i + 1)) - - ax1 = fig.add_subplot(2, col + 1, 2 * (col + 1)) - img = ax1.imshow(viz_all, cmap="rainbow", interpolation="bilinear") - plt.colorbar(img, ax=ax1) - ax1.scatter(h, w, color='red') - plt.title("Mean Attention") - plt.show() - - -def plot_quiver(ax, flow, spacing, mask=None, show_win=None, margin=0, **kwargs): - """Plots less dense quiver field. 
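-    Note: the spacing argument is currently overridden to 50 px inside the function body.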
- - Args: - ax: Matplotlib axis - flow: motion vectors - spacing: space (px) between each arrow in grid - margin: width (px) of enclosing region without arrows - kwargs: quiver kwargs (default: angles="xy", scale_units="xy") - """ - h, w, *_ = flow.shape - spacing = 50 - if show_win is None: - nx = int((w - 2 * margin) / spacing) - ny = int((h - 2 * margin) / spacing) - x = np.linspace(margin, w - margin - 1, nx, dtype=np.int64) - y = np.linspace(margin, h - margin - 1, ny, dtype=np.int64) - else: - h0, h1, w0, w1 = *show_win, - h0 = int(h0 * h) - h1 = int(h1 * h) - w0 = int(w0 * w) - w1 = int(w1 * w) - num_h = (h1 - h0) // spacing - num_w = (w1 - w0) // spacing - y = np.linspace(h0, h1, num_h, dtype=np.int64) - x = np.linspace(w0, w1, num_w, dtype=np.int64) - - flow = flow[np.ix_(y, x)] - u = flow[:, :, 0] - v = flow[:, :, 1] * -1 # ---------- - - kwargs = {**dict(angles="xy", scale_units="xy"), **kwargs} - if mask is not None: - ax.imshow(mask) - # ax.quiver(x, y, u, v, color="black", scale=10, width=0.010, headwidth=5, minlength=0.5) # bigger is short - ax.quiver(x, y, u, v, color="black") # bigger is short - x_gird, y_gird = np.meshgrid(x, y) - ax.scatter(x_gird, y_gird, c="black", s=(h + w) // 50) - ax.scatter(x_gird, y_gird, c="black", s=(h + w) // 100) - ax.set_ylim(sorted(ax.get_ylim(), reverse=True)) - ax.set_aspect("equal") - - -def save_img_seq(img_list, batch_index=0, name='img', if_debug=False): - if if_debug: - temp = img_list[0] - size = temp.shape - fourcc = cv2.VideoWriter_fourcc(*'mp4v') - out = cv2.VideoWriter(name + '_flow.mp4', fourcc, 22, (size[-1], size[-2])) - if img_list[0].shape[1] == 2: - image_list = [] - flow_vec = [img1[batch_index].detach().cpu().permute(1, 2, 0).numpy() for img1 in img_list] - flow_viz = [flow_to_image_relative(flo) for flo in flow_vec] - # for index, img in enumerate(flow_viz): - # image_list.append(viz(flow_viz[index], flow_vec[index], flow_viz[index])) - img_list = flow_viz - if img_list[0].shape[1] == 3: - img_list = [img1[batch_index].detach().cpu().permute(1, 2, 0).numpy() * 255.0 for img1 in img_list] - if img_list[0].shape[1] == 1: - img_list = [img1[batch_index].detach().cpu().permute(1, 2, 0).numpy() for img1 in img_list] - img_list = [cv2.cvtColor(flo * 255, cv2.COLOR_GRAY2BGR) for flo in img_list] - - for index, img in enumerate(img_list): - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - cv2.imwrite(name + '_%d.png' % index, img) - out.write(img.astype(np.uint8)) - out.release() - else: - return - - -from io import BytesIO - - -def viz(flo, flow_vec, - image): - fig, axes = plt.subplots(1, 2, figsize=(10, 5), dpi=500) - ax1 = axes[0] - plot_quiver(ax1, flow=flow_vec, mask=flo, spacing=40) - ax1.set_title('flow all') - - ax1 = axes[1] - ax1.imshow(image) - ax1.set_title('image') - - plt.tight_layout() - # eliminate the x and y-axis - plt.axis('off') - # save figure into a buffer - buf = BytesIO() - plt.savefig(buf, format='png', dpi=200) - buf.seek(0) - # convert to numpy array - im = np.array(Image.open(buf)) - buf.close() - plt.close() - return im diff --git "a/spaces/a-v-bely/russian-task-generator/pages/4_\360\237\223\235_\320\236\320\275\320\273\320\260\320\271\320\275-\321\202\320\265\321\201\321\202 (\321\215\320\272\321\201\320\277\320\265\321\200\320\270\320\274\320\265\320\275\321\202).py" "b/spaces/a-v-bely/russian-task-generator/pages/4_\360\237\223\235_\320\236\320\275\320\273\320\260\320\271\320\275-\321\202\320\265\321\201\321\202 
(\321\215\320\272\321\201\320\277\320\265\321\200\320\270\320\274\320\265\320\275\321\202).py" deleted file mode 100644 index 7bda8111583b56729e6e282006e0e1bc86332f53..0000000000000000000000000000000000000000 --- "a/spaces/a-v-bely/russian-task-generator/pages/4_\360\237\223\235_\320\236\320\275\320\273\320\260\320\271\320\275-\321\202\320\265\321\201\321\202 (\321\215\320\272\321\201\320\277\320\265\321\200\320\270\320\274\320\265\320\275\321\202).py" +++ /dev/null @@ -1,67 +0,0 @@ -import datetime -import pandas as pd -import streamlit as st -from utilities_database.user_database_utils import save_data_in_database -from utilities_database.user_database_widgets import user_save_text_table - -st.set_page_config(page_title='Онлайн-тест', layout="wide", page_icon=':ru:') -if st.session_state.get('-ONLINE_TEST_READY-') and st.session_state.get('-LOGGED_IN_BOOL-'): - INSTRUCTION = st.expander(label='**ИНСТРУКЦИЯ**', expanded=True) - INSTRUCTION.markdown( - 'Уважаемые пользователи, предлагаем Вам заполнить опросник по оценке качества созданных заданий. ' - '\n\nНиже находится анкета с заданиями в таблице.' - '\n\n- В **первом столбце** приводится ответ - слово, удаленное из оригинального текста.' - '\n\n- Отметьте во **втором столбце**, уместно ли создавать задание с данным словом.' - '\n\n- В **третьем столбце** приведены подобранные программой дистракторы.' - '\n\n- Введите в **четвертый столбец** дистракторы (целиком или букву), которые, по Вашему мнению, ' - '**:red[не уместны]**. ' - '\n\n**:green[Уместными дистракторами]** мы предлагаем считать те, которые одновременно удовлетворяют ' - 'следующим условиям в рамках языкового уровня, для которого они созданы:' - '\n\n1. не слишком очевидно являются неправильными вариантами (*варить суп/стол*);' - '\n\n2. и при этом не могут быть полноценной заменой удаленного слова (*варить суп/кашу*)' - ) - result = st.session_state.get('RESULT') - if result is None: - st.error('Не можем ничего загрузить! 
Вы ничего не просили!') - st.stop() - tasks = result['TASKS_ONLY'] - answers = result['KEYS_ONLY_RAW'] - len_answers = len(answers) - st.header('Онлайн-тест') - ONLINE_TEST = st.form('Онлайн тест') - ONLINE_TEST.write(result['TEXT_WITH_GAPS'].replace('_', '\_')) - BAD_DISTRACTORS_AND_ANSWERS_temp = ONLINE_TEST.experimental_data_editor( - pd.DataFrame([{"Задание №": i+1, - "Ответ": [answers[i][1]], - "Задание уместно": False, - "Дистракторы": tasks[i][1], - "Неуместные дистракторы": ''} - for i in range(len(tasks))]), - num_rows="fixed", - height=45*len_answers, - use_container_width=True) - COMMENTS = ONLINE_TEST.text_input(label='**Прокомментировать**', - placeholder='Напишите комментарий') - SUBMIT = ONLINE_TEST.form_submit_button('READY') - if SUBMIT: - points = test_mark = 'Teacher' - appropriate_tasks = BAD_DISTRACTORS_AND_ANSWERS_temp["Задание уместно"].values.tolist() - inappropriate_distractors = BAD_DISTRACTORS_AND_ANSWERS_temp["Неуместные дистракторы"].values.tolist() - RETURN_TEST_DATA = [{'ANSWER': answers[i], - 'APPROPRIATE_TASK': appropriate_tasks[i], - 'INAPPROPRIATE_DISTRACTORS': inappropriate_distractors[i]} for i in range(len_answers)] - save_data_in_database(user_task_database=user_save_text_table, - save_type='online_test', - save_name=st.session_state['-UPLOAD_CLOUD_FILE_NAME-'], - cefr_level=st.session_state['-LOADED_CEFR_LEVEL-'], - time_stamp=str(datetime.datetime.now())[:-7], - creator_name=st.session_state.get('-USER_NAME-'), - test_taker_name=st.session_state.get('-USER_NAME-'), - test_taker_answers=RETURN_TEST_DATA, - generated_result=result, - test_taker_result={'Баллов': points, 'Всего': len_answers, 'Оценка': test_mark}, - comments=COMMENTS) -elif st.session_state.get('-LOGGED_IN_BOOL-'): - st.warning('**Не можем ничего загрузить! Вы ничего не просили!**') -else: - st.warning('**Войдите или зарегистрируйтесь**') diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/apis/inference.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/apis/inference.py deleted file mode 100644 index 90bc1c0c68525734bd6793f07c15fe97d3c8342c..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/apis/inference.py +++ /dev/null @@ -1,136 +0,0 @@ -import matplotlib.pyplot as plt -import annotator.uniformer.mmcv as mmcv -import torch -from annotator.uniformer.mmcv.parallel import collate, scatter -from annotator.uniformer.mmcv.runner import load_checkpoint - -from annotator.uniformer.mmseg.datasets.pipelines import Compose -from annotator.uniformer.mmseg.models import build_segmentor - - -def init_segmentor(config, checkpoint=None, device='cuda:0'): - """Initialize a segmentor from config file. - - Args: - config (str or :obj:`mmcv.Config`): Config file path or the config - object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - device (str, optional) CPU/CUDA device option. Default 'cuda:0'. - Use 'cpu' for loading model on CPU. - Returns: - nn.Module: The constructed segmentor. 
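-    Example (sketch only; config/checkpoint paths are placeholders):
-        >>> model = init_segmentor('configs/my_config.py', 'my_checkpoint.pth', device='cuda:0')
-        >>> result = inference_segmentor(model, 'demo.png')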
- """ - if isinstance(config, str): - config = mmcv.Config.fromfile(config) - elif not isinstance(config, mmcv.Config): - raise TypeError('config must be a filename or Config object, ' - 'but got {}'.format(type(config))) - config.model.pretrained = None - config.model.train_cfg = None - model = build_segmentor(config.model, test_cfg=config.get('test_cfg')) - if checkpoint is not None: - checkpoint = load_checkpoint(model, checkpoint, map_location='cpu') - model.CLASSES = checkpoint['meta']['CLASSES'] - model.PALETTE = checkpoint['meta']['PALETTE'] - model.cfg = config # save the config in the model for convenience - model.to(device) - model.eval() - return model - - -class LoadImage: - """A simple pipeline to load image.""" - - def __call__(self, results): - """Call function to load images into results. - - Args: - results (dict): A result dict contains the file name - of the image to be read. - - Returns: - dict: ``results`` will be returned containing loaded image. - """ - - if isinstance(results['img'], str): - results['filename'] = results['img'] - results['ori_filename'] = results['img'] - else: - results['filename'] = None - results['ori_filename'] = None - img = mmcv.imread(results['img']) - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - return results - - -def inference_segmentor(model, img): - """Inference image(s) with the segmentor. - - Args: - model (nn.Module): The loaded segmentor. - imgs (str/ndarray or list[str/ndarray]): Either image files or loaded - images. - - Returns: - (list[Tensor]): The segmentation result. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = [LoadImage()] + cfg.data.test.pipeline[1:] - test_pipeline = Compose(test_pipeline) - # prepare data - data = dict(img=img) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - data['img_metas'] = [i.data[0] for i in data['img_metas']] - - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result - - -def show_result_pyplot(model, - img, - result, - palette=None, - fig_size=(15, 10), - opacity=0.5, - title='', - block=True): - """Visualize the segmentation results on the image. - - Args: - model (nn.Module): The loaded segmentor. - img (str or np.ndarray): Image filename or loaded image. - result (list): The segmentation result. - palette (list[list[int]]] | None): The palette of segmentation - map. If None is given, random palette will be generated. - Default: None - fig_size (tuple): Figure size of the pyplot figure. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. - title (str): The title of pyplot figure. - Default is ''. - block (bool): Whether to block the pyplot figure. - Default is True. 
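-    Returns:
-        np.ndarray: The image with the segmentation overlaid, in RGB order
-            (the pyplot display calls in the body are commented out).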
- """ - if hasattr(model, 'module'): - model = model.module - img = model.show_result( - img, result, palette=palette, show=False, opacity=opacity) - # plt.figure(figsize=fig_size) - # plt.imshow(mmcv.bgr2rgb(img)) - # plt.title(title) - # plt.tight_layout() - # plt.show(block=block) - return mmcv.bgr2rgb(img) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/version.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/version.py deleted file mode 100644 index 1cce4e50bd692d4002e3cac3c545a3fb2efe95d0..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/version.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -__version__ = '1.3.17' - - -def parse_version_info(version_str: str, length: int = 4) -> tuple: - """Parse a version string into a tuple. - - Args: - version_str (str): The version string. - length (int): The maximum number of version levels. Default: 4. - - Returns: - tuple[int | str]: The version info, e.g., "1.3.0" is parsed into - (1, 3, 0, 0, 0, 0), and "2.0.0rc1" is parsed into - (2, 0, 0, 0, 'rc', 1) (when length is set to 4). - """ - from packaging.version import parse - version = parse(version_str) - assert version.release, f'failed to parse version {version_str}' - release = list(version.release) - release = release[:length] - if len(release) < length: - release = release + [0] * (length - len(release)) - if version.is_prerelease: - release.extend(list(version.pre)) - elif version.is_postrelease: - release.extend(list(version.post)) - else: - release.extend([0, 0]) - return tuple(release) - - -version_info = tuple(int(x) for x in __version__.split('.')[:3]) - -__all__ = ['__version__', 'version_info', 'parse_version_info'] diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/offscreen.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/offscreen.py deleted file mode 100644 index 340142983006cdc6f51b6d114e9b2b294aa4a919..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/offscreen.py +++ /dev/null @@ -1,160 +0,0 @@ -"""Wrapper for offscreen rendering. - -Author: Matthew Matl -""" -import os - -from .renderer import Renderer -from .constants import RenderFlags - - -class OffscreenRenderer(object): - """A wrapper for offscreen rendering. - - Parameters - ---------- - viewport_width : int - The width of the main viewport, in pixels. - viewport_height : int - The height of the main viewport, in pixels. - point_size : float - The size of screen-space points in pixels. - """ - - def __init__(self, viewport_width, viewport_height, point_size=1.0): - self.viewport_width = viewport_width - self.viewport_height = viewport_height - self.point_size = point_size - - self._platform = None - self._renderer = None - self._create() - - @property - def viewport_width(self): - """int : The width of the main viewport, in pixels. - """ - return self._viewport_width - - @viewport_width.setter - def viewport_width(self, value): - self._viewport_width = int(value) - - @property - def viewport_height(self): - """int : The height of the main viewport, in pixels. - """ - return self._viewport_height - - @viewport_height.setter - def viewport_height(self, value): - self._viewport_height = int(value) - - @property - def point_size(self): - """float : The pixel size of points in point clouds. 
- """ - return self._point_size - - @point_size.setter - def point_size(self, value): - self._point_size = float(value) - - def render(self, scene, flags=RenderFlags.NONE, seg_node_map=None): - """Render a scene with the given set of flags. - - Parameters - ---------- - scene : :class:`Scene` - A scene to render. - flags : int - A bitwise or of one or more flags from :class:`.RenderFlags`. - seg_node_map : dict - A map from :class:`.Node` objects to (3,) colors for each. - If specified along with flags set to :attr:`.RenderFlags.SEG`, - the color image will be a segmentation image. - - Returns - ------- - color_im : (h, w, 3) uint8 or (h, w, 4) uint8 - The color buffer in RGB format, or in RGBA format if - :attr:`.RenderFlags.RGBA` is set. - Not returned if flags includes :attr:`.RenderFlags.DEPTH_ONLY`. - depth_im : (h, w) float32 - The depth buffer in linear units. - """ - self._platform.make_current() - # If platform does not support dynamically-resizing framebuffers, - # destroy it and restart it - if (self._platform.viewport_height != self.viewport_height or - self._platform.viewport_width != self.viewport_width): - if not self._platform.supports_framebuffers(): - self.delete() - self._create() - - self._platform.make_current() - self._renderer.viewport_width = self.viewport_width - self._renderer.viewport_height = self.viewport_height - self._renderer.point_size = self.point_size - - if self._platform.supports_framebuffers(): - flags |= RenderFlags.OFFSCREEN - retval = self._renderer.render(scene, flags, seg_node_map) - else: - self._renderer.render(scene, flags, seg_node_map) - depth = self._renderer.read_depth_buf() - if flags & RenderFlags.DEPTH_ONLY: - retval = depth - else: - color = self._renderer.read_color_buf() - retval = color, depth - - # Make the platform not current - self._platform.make_uncurrent() - return retval - - def delete(self): - """Free all OpenGL resources. 
- """ - self._platform.make_current() - self._renderer.delete() - self._platform.delete_context() - del self._renderer - del self._platform - self._renderer = None - self._platform = None - import gc - gc.collect() - - def _create(self): - if 'PYOPENGL_PLATFORM' not in os.environ: - from pyrender.platforms.pyglet_platform import PygletPlatform - self._platform = PygletPlatform(self.viewport_width, - self.viewport_height) - elif os.environ['PYOPENGL_PLATFORM'] == 'egl': - from pyrender.platforms import egl - device_id = int(os.environ.get('EGL_DEVICE_ID', '0')) - egl_device = egl.get_device_by_index(device_id) - self._platform = egl.EGLPlatform(self.viewport_width, - self.viewport_height, - device=egl_device) - elif os.environ['PYOPENGL_PLATFORM'] == 'osmesa': - from pyrender.platforms.osmesa import OSMesaPlatform - self._platform = OSMesaPlatform(self.viewport_width, - self.viewport_height) - else: - raise ValueError('Unsupported PyOpenGL platform: {}'.format( - os.environ['PYOPENGL_PLATFORM'] - )) - self._platform.init_context() - self._platform.make_current() - self._renderer = Renderer(self.viewport_width, self.viewport_height) - - def __del__(self): - try: - self.delete() - except Exception: - pass - - -__all__ = ['OffscreenRenderer'] diff --git a/spaces/adirik/stylemc-demo/encoder4editing/models/stylegan2/model.py b/spaces/adirik/stylemc-demo/encoder4editing/models/stylegan2/model.py deleted file mode 100644 index 54870486c6ef5a0d34e8e63b94ba5e3ac6e68944..0000000000000000000000000000000000000000 --- a/spaces/adirik/stylemc-demo/encoder4editing/models/stylegan2/model.py +++ /dev/null @@ -1,673 +0,0 @@ -import math -import random -import torch -from torch import nn -from torch.nn import functional as F - -from models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = 
nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch 
* in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu' - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = 
ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - return_features=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - ): - if not input_is_latent: - styles = [self.style(s) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f'noise_{i}') for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - image = skip - - if return_latents: - return image, latent - elif return_features: - return image, out - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - 
pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out diff --git a/spaces/ajitrajasekharan/NER-Biomedical-PHI-Ensemble/batched_main_NER.py b/spaces/ajitrajasekharan/NER-Biomedical-PHI-Ensemble/batched_main_NER.py deleted file mode 100644 index 97e5e5a5aa719cab76f688457682f5a5bf40e67f..0000000000000000000000000000000000000000 --- a/spaces/ajitrajasekharan/NER-Biomedical-PHI-Ensemble/batched_main_NER.py +++ /dev/null @@ -1,910 +0,0 @@ -import pdb -import config_utils as cf -import requests -import sys -import urllib.parse -import numpy as np -from collections import OrderedDict -import argparse -from common import * -import json - -#WORD_POS = 1 -#TAG_POS = 2 -#MASK_TAG = "__entity__" -DEFAULT_CONFIG = "./config.json" -DISPATCH_MASK_TAG = "entity" -DESC_HEAD = "PIVOT_DESCRIPTORS:" -#TYPE2_AMB = "AMB2-" -TYPE2_AMB = "" -DUMMY_DESCS=10 -DEFAULT_ENTITY_MAP = "entity_types_consolidated.txt" - -#RESET_POS_TAG='RESET' -SPECIFIC_TAG=":__entity__" - - -def softmax(x): - """Compute softmax values for each sets of scores in x.""" - e_x = np.exp(x - np.max(x)) - return e_x / e_x.sum(axis=0) # only 
difference - -#def softmax(x): -# """Compute softmax values for each sets of scores in x.""" -# return np.exp(x) / np.sum(np.exp(x), axis=0) - - -#noun_tags = ['NFP','JJ','NN','FW','NNS','NNPS','JJS','JJR','NNP','POS','CD'] -#cap_tags = ['NFP','JJ','NN','FW','NNS','NNPS','JJS','JJR','NNP','PRP'] - -def read_common_descs(file_name): - common_descs = {} - with open(file_name) as fp: - for line in fp: - common_descs[line.strip()] = 1 - print("Common descs for filtering read:",len(common_descs)) - return common_descs - -def read_entity_map(file_name): - emap = {} - with open(file_name) as fp: - for line in fp: - line = line.rstrip('\n') - entities = line.split() - if (len(entities) == 1): - assert(entities[0] not in emap) - emap[entities[0]] = entities[0] - else: - assert(len(entities) == 2) - entity_arr = entities[1].split('/') - if (entities[0] not in emap): - emap[entities[0]] = entities[0] - for entity in entity_arr: - assert(entity not in emap) - emap[entity] = entities[0] - print("Entity map:",len(emap)) - return emap - -class UnsupNER: - def __init__(self,config_file): - print("NER service handler started") - base_path = cf.read_config(config_file)["BASE_PATH"] if ("BASE_PATH" in cf.read_config(config_file)) else "./" - self.pos_server_url = cf.read_config(config_file)["POS_SERVER_URL"] - self.desc_server_url = cf.read_config(config_file)["DESC_SERVER_URL"] - self.entity_server_url = cf.read_config(config_file)["ENTITY_SERVER_URL"] - self.common_descs = read_common_descs(cf.read_config(config_file)["COMMON_DESCS_FILE"]) - self.entity_map = read_entity_map(cf.read_config(config_file)["EMAP_FILE"]) - self.rfp = open(base_path + "log_results.txt","a") - self.dfp = open(base_path + "log_debug.txt","a") - self.algo_ci_tag_fp = open(base_path + "algorthimic_ci_tags.txt","a") - print(self.pos_server_url) - print(self.desc_server_url) - print(self.entity_server_url) - np.set_printoptions(suppress=True) #this suppresses exponential representation when np is used to round - if (cf.read_config(config_file)["SUPPRESS_UNTAGGED"] == "1"): - self.suppress_untagged = True - else: - self.suppress_untagged = False #This is disabled in full debug text mode - - - #This is bad hack for prototyping - parsing from text output as opposed to json - def extract_POS(self,text): - arr = text.split('\n') - if (len(arr) > 0): - start_pos = 0 - for i,line in enumerate(arr): - if (len(line) > 0): - start_pos += 1 - continue - else: - break - #print(arr[start_pos:]) - terms_arr = [] - for i,line in enumerate(arr[start_pos:]): - terms = line.split('\t') - if (len(terms) == 5): - #print(terms) - terms_arr.append(terms) - return terms_arr - - def normalize_casing(self,sent): - sent_arr = sent.split() - ret_sent_arr = [] - for i,word in enumerate(sent_arr): - if (len(word) > 1): - norm_word = word[0] + word[1:].lower() - else: - norm_word = word[0] - ret_sent_arr.append(norm_word) - return ' '.join(ret_sent_arr) - - #Full sentence tag call also generates json output. 
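    # Illustrative usage sketch (not part of the original file): assuming `ner` is a
    # constructed UnsupNER instance and `desc_obj` is the descriptor/entity payload that
    # the descriptor server normally returns for the sentence, the service entry point
    # below yields a JSON string with the tagged result, e.g.
    #   json_str = ner.tag_sentence_service("Parkinsons:__entity__ is treated with levodopa", desc_obj)
    #   result = json.loads(json_str)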
- def tag_sentence_service(self,text,desc_obj): - ret_str = self.tag_sentence(text,self.rfp,self.dfp,True,desc_obj) - return ret_str - - def dictify_ner_response(self,ner_str): - arr = ner_str.split('\n') - ret_dict = OrderedDict() - count = 1 - ref_indices_arr = [] - for line in arr: - terms = line.split() - if (len(terms) == 2): - ret_dict[count] = {"term":terms[0],"e":terms[1]} - if (terms[1] != "O" and terms[1].startswith("B_")): - ref_indices_arr.append(count) - count += 1 - elif (len(terms) == 1): - ret_dict[count] = {"term":"empty","e":terms[0]} - if (terms[0] != "O" and terms[0].startswith("B_")): - ref_indices_arr.append(count) - count += 1 - if (len(ret_dict) > 3): #algorithmic harvesting of CI labels for human verification and adding to bootstrap list - self.algo_ci_tag_fp.write("SENT:" + ner_str.replace('\n',' ') + "\n") - out = terms[0].replace('[',' ').replace(']','').split()[-1] - out = '_'.join(out.split('_')[1:]) if out.startswith("B_") else out - print(out) - self.algo_ci_tag_fp.write(ret_dict[count-2]["term"] + " " + out + "\n") - self.algo_ci_tag_fp.flush() - else: - assert(len(terms) == 0) #If not empty something is not right - return ret_dict,ref_indices_arr - - def blank_entity_sentence(self,sent,dfp): - value = True if sent.endswith(" :__entity__\n") else False - if (value == True): - print("\n\n**************** Skipping CI prediction in pooling for sent:",sent) - dfp.write("\n\n**************** Skipping CI prediction in pooling for sent:" + sent + "\n") - return value - - def pool_confidences(self,ci_entities,ci_confidences,ci_subtypes,cs_entities,cs_confidences,cs_subtypes,debug_str_arr,sent,dfp): - main_classes = {} - assert(len(cs_entities) == len(cs_confidences)) - assert(len(cs_subtypes) == len(cs_entities)) - assert(len(ci_entities) == len(ci_confidences)) - assert(len(ci_subtypes) == len(ci_entities)) - #Pool entity classes across CI and CS - is_blank_statement = self.blank_entity_sentence(sent,dfp) #Do not pool CI confidences of the sentences of the form " is a entity". These sentences are sent for purely algo harvesting of CS terms. CI predictions will add noise. - if (not is_blank_statement): #Do not pool CI confidences of the sentences of the form " is a entity". These sentences are sent for purely algo harvesting of CS terms. CI predictions will add noise. 
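            # Worked example with made-up numbers (illustrative only, not from the original
            # code): the loops below sum CI and CS confidences per base entity class, and
            # convert_positive_nums_to_dist then renormalizes the totals, e.g.
            #   CI:  GENE 0.6, DISEASE 0.4     CS:  GENE 0.3, DRUG 0.7
            #   summed: GENE 0.9, DISEASE 0.4, DRUG 0.7
            #   normalized: GENE 0.45, DISEASE 0.20, DRUG 0.35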
- for e,c in zip(ci_entities,ci_confidences): - e_base = e.split('[')[0] - main_classes[e_base] = float(c) - for e,c in zip(cs_entities,cs_confidences): - e_base = e.split('[')[0] - if (e_base in main_classes): - main_classes[e_base] += float(c) - else: - main_classes[e_base] = float(c) - final_sorted_d = OrderedDict(sorted(main_classes.items(), key=lambda kv: kv[1], reverse=True)) - main_dist = self.convert_positive_nums_to_dist(final_sorted_d) - main_classes_arr = list(final_sorted_d.keys()) - #print("\nIn pooling confidences") - #print(main_classes_arr) - #print(main_dist) - #Pool subtypes across CI and CS for a particular entity class - subtype_factors = {} - for e_class in final_sorted_d: - if e_class in cs_subtypes: - stypes = cs_subtypes[e_class] - if (e_class not in subtype_factors): - subtype_factors[e_class] = {} - for st in stypes: - if (st in subtype_factors[e_class]): - subtype_factors[e_class][st] += stypes[st] - else: - subtype_factors[e_class][st] = stypes[st] - if (is_blank_statement): - continue - if e_class in ci_subtypes: - stypes = ci_subtypes[e_class] - if (e_class not in subtype_factors): - subtype_factors[e_class] = {} - for st in stypes: - if (st in subtype_factors[e_class]): - subtype_factors[e_class][st] += stypes[st] - else: - subtype_factors[e_class][st] = stypes[st] - sorted_subtype_factors = {} - for e_class in subtype_factors: - stypes = subtype_factors[e_class] - final_sorted_d = OrderedDict(sorted(stypes.items(), key=lambda kv: kv[1], reverse=True)) - stypes_dist = self.convert_positive_nums_to_dist(final_sorted_d) - stypes_class_arr = list(final_sorted_d.keys()) - sorted_subtype_factors[e_class] = {"stypes":stypes_class_arr,"dist":stypes_dist} - pooled_results = OrderedDict() - assert(len(main_classes_arr) == len(main_dist)) - d_str_arr = [] - d_str_arr.append("\n***CONSOLIDATED ENTITY:") - for e,c in zip(main_classes_arr,main_dist): - pooled_results[e] = {"e":e,"confidence":c} - d_str_arr.append(e + " " + str(c)) - stypes_dict = sorted_subtype_factors[e] - pooled_st = OrderedDict() - for st,sd in zip(stypes_dict["stypes"],stypes_dict["dist"]): - pooled_st[st] = sd - pooled_results[e]["stypes"] = pooled_st - debug_str_arr.append(' '.join(d_str_arr)) - print(' '.join(d_str_arr)) - return pooled_results - - - - - - - - - - def init_entity_info(self,entity_info_dict,index): - curr_term_dict = OrderedDict() - entity_info_dict[index] = curr_term_dict - curr_term_dict["ci"] = OrderedDict() - curr_term_dict["ci"]["entities"] = [] - curr_term_dict["ci"]["descs"] = [] - curr_term_dict["cs"] = OrderedDict() - curr_term_dict["cs"]["entities"] = [] - curr_term_dict["cs"]["descs"] = [] - - - - - #This now does specific tagging if there is a __entity__ in sentence; else does full tagging. TBD. - #TBD. Make response params same regardlesss of output format. Now it is different - def tag_sentence(self,sent,rfp,dfp,json_output,desc_obj): - print("Input: ", sent) - dfp.write("\n\n++++-------------------------------\n") - dfp.write("NER_INPUT: " + sent + "\n") - debug_str_arr = [] - entity_info_dict = OrderedDict() - #url = self.desc_server_url + sent.replace('"','\'') - #r = self.dispatch_request(url) - #if (r is None): - # print("Empty response. 
Desc server is probably down: ",self.desc_server_url) - # return json.loads("[]") - #main_obj = json.loads(r.text) - main_obj = desc_obj - #print(json.dumps(main_obj,indent=4)) - #Find CI predictions for ALL masked predictios in sentence - ci_predictions,orig_ci_entities = self.find_ci_entities(main_obj,debug_str_arr,entity_info_dict) #ci_entities is the same info as ci_predictions except packed differently for output - #Find CS predictions for ALL masked predictios in sentence. Use the CI predictions from previous step to - #pool - detected_entities_arr,ner_str,full_pooled_results,orig_cs_entities = self.find_cs_entities(sent,main_obj,rfp,dfp,debug_str_arr,ci_predictions,entity_info_dict) - assert(len(detected_entities_arr) == len(entity_info_dict)) - print("--------") - if (json_output): - if (len(detected_entities_arr) != len(entity_info_dict)): - if (len(entity_info_dict) == 0): - self.init_entity_info(entity_info_dict,index) - entity_info_dict[1]["cs"]["entities"].append([{"e":"O","confidence":1}]) - entity_info_dict[1]["ci"]["entities"].append([{"e":"O","confidence":1}]) - ret_dict,ref_indices_arr = self.dictify_ner_response(ner_str) #Convert ner string to a dictionary for json output - assert(len(ref_indices_arr) == len(detected_entities_arr)) - assert(len(entity_info_dict) == len(detected_entities_arr)) - cs_aux_dict = OrderedDict() - ci_aux_dict = OrderedDict() - cs_aux_orig_entities = OrderedDict() - ci_aux_orig_entities = OrderedDict() - pooled_pred_dict = OrderedDict() - count = 0 - assert(len(full_pooled_results) == len(detected_entities_arr)) - assert(len(full_pooled_results) == len(orig_cs_entities)) - assert(len(full_pooled_results) == len(orig_ci_entities)) - for e,c,p,o,i in zip(detected_entities_arr,entity_info_dict,full_pooled_results,orig_cs_entities,orig_ci_entities): - val = entity_info_dict[c] - #cs_aux_dict[ref_indices_arr[count]] = {"e":e,"cs_distribution":val["cs"]["entities"],"cs_descs":val["cs"]["descs"]} - pooled_pred_dict[ref_indices_arr[count]] = {"e": e, "cs_distribution": list(p.values())} - cs_aux_dict[ref_indices_arr[count]] = {"e":e,"cs_descs":val["cs"]["descs"]} - #ci_aux_dict[ref_indices_arr[count]] = {"ci_distribution":val["ci"]["entities"],"ci_descs":val["ci"]["descs"]} - ci_aux_dict[ref_indices_arr[count]] = {"ci_descs":val["ci"]["descs"]} - cs_aux_orig_entities[ref_indices_arr[count]] = {"e":e,"cs_distribution": o} - ci_aux_orig_entities[ref_indices_arr[count]] = {"e":e,"cs_distribution": i} - count += 1 - #print(ret_dict) - #print(aux_dict) - final_ret_dict = {"total_terms_count":len(ret_dict),"detected_entity_phrases_count":len(detected_entities_arr),"ner":ret_dict,"entity_distribution":pooled_pred_dict,"cs_prediction_details":cs_aux_dict,"ci_prediction_details":ci_aux_dict,"orig_cs_prediction_details":cs_aux_orig_entities,"orig_ci_prediction_details":ci_aux_orig_entities,"debug":debug_str_arr} - json_str = json.dumps(final_ret_dict,indent = 4) - #print (json_str) - #with open("single_debug.txt","w") as fp: - # fp.write(json_str) - - dfp.write('\n'.join(debug_str_arr)) - dfp.write("\n\nEND-------------------------------\n") - dfp.flush() - return json_str - else: - print(detected_entities_arr) - debug_str_arr.append("NER_FINAL_RESULTS: " + ' '.join(detected_entities_arr)) - print("--------") - dfp.write('\n'.join(debug_str_arr)) - dfp.write("\n\nEND-------------------------------\n") - dfp.flush() - return detected_entities_arr,span_arr,terms_arr,ner_str,debug_str_arr - - def masked_word_first_letter_capitalize(self,entity): - arr = 
entity.split() - ret_arr = [] - for term in arr: - if (len(term) > 1 and term[0].islower() and term[1].islower()): - ret_arr.append(term[0].upper() + term[1:]) - else: - ret_arr.append(term) - return ' '.join(ret_arr) - - - def gen_single_phrase_sentences(self,terms_arr,masked_sent_arr,span_arr,rfp,dfp): - sentence_template = "%s is a entity" - print(span_arr) - sentences = [] - singleton_spans_arr = [] - run_index = 0 - entity = "" - singleton_span = [] - while (run_index < len(span_arr)): - if (span_arr[run_index] == 1): - while (run_index < len(span_arr)): - if (span_arr[run_index] == 1): - #print(terms_arr[run_index][WORD_POS],end=' ') - if (len(entity) == 0): - entity = terms_arr[run_index][WORD_POS] - else: - entity = entity + " " + terms_arr[run_index][WORD_POS] - singleton_span.append(1) - run_index += 1 - else: - break - #print() - for i in sentence_template.split(): - if (i != "%s"): - singleton_span.append(0) - entity = self.masked_word_first_letter_capitalize(entity) - sentence = sentence_template % entity - sentences.append(sentence) - singleton_spans_arr.append(singleton_span) - print(sentence) - print(singleton_span) - entity = "" - singleton_span = [] - else: - run_index += 1 - return sentences,singleton_spans_arr - - - def find_ci_entities(self,main_obj,debug_str_arr,entity_info_dict): - ci_predictions = [] - orig_ci_confidences = [] - term_index = 1 - batch_obj = main_obj["descs_and_entities"] - for key in batch_obj: - masked_sent = batch_obj[key]["ci_prediction"]["sentence"] - print("\n**CI: ", masked_sent) - debug_str_arr.append(masked_sent) - #entity_info_dict["masked_sent"].append(masked_sent) - inp_arr = batch_obj[key]["ci_prediction"]["descs"] - descs = self.get_descriptors_for_masked_position(inp_arr) - self.init_entity_info(entity_info_dict,term_index) - entities,confidences,subtypes = self.get_entities_for_masked_position(inp_arr,descs,debug_str_arr,entity_info_dict[term_index]["ci"]) - ci_predictions.append({"entities":entities,"confidences":confidences,"subtypes":subtypes}) - orig_ci_confidences.append(self.pack_confidences(entities,confidences)) #this is sent for ensemble server to detect cross predictions. CS predicitons are more reflective of cross over than consolidated predictions, since CI may overwhelm CS - term_index += 1 - return ci_predictions,orig_ci_confidences - - - def pack_confidences(self,cs_entities,cs_confidences): - assert(len(cs_entities) == len(cs_confidences)) - orig_cs_arr = [] - for e,c in zip(cs_entities,cs_confidences): - print(e,c) - e_split = e.split('[') - e_main = e_split[0] - if (len(e_split) > 1): - e_sub = e_split[1].split(',')[0].rstrip(']') - if (e_main != e_sub): - e = e_main + '[' + e_sub + ']' - else: - e = e_main - else: - e = e_main - orig_cs_arr.append({"e":e,"confidence":c}) - return orig_cs_arr - - - #We have multiple masked versions of a single sentence. 
Tag each one of them - #and create a complete tagged version for a sentence - def find_cs_entities(self,sent,main_obj,rfp,dfp,debug_str_arr,ci_predictions,entity_info_dict): - #print(sent) - batch_obj = main_obj["descs_and_entities"] - dfp.write(sent + "\n") - term_index = 1 - detected_entities_arr = [] - full_pooled_results = [] - orig_cs_confidences = [] - for index,key in enumerate(batch_obj): - position_info = batch_obj[key]["cs_prediction"]["descs"] - ci_entities = ci_predictions[index]["entities"] - ci_confidences = ci_predictions[index]["confidences"] - ci_subtypes = ci_predictions[index]["subtypes"] - debug_str_arr.append("\n++++++ nth Masked term : " + str(key)) - #dfp.write(key + "\n") - masked_sent = batch_obj[key]["cs_prediction"]["sentence"] - print("\n**CS: ",masked_sent) - descs = self.get_descriptors_for_masked_position(position_info) - #dfp.write(str(descs) + "\n") - if (len(descs) > 0): - cs_entities,cs_confidences,cs_subtypes = self.get_entities_for_masked_position(position_info,descs,debug_str_arr,entity_info_dict[term_index]["cs"]) - else: - cs_entities = [] - cs_confidences = [] - cs_subtypes = [] - #dfp.write(str(cs_entities) + "\n") - pooled_results = self.pool_confidences(ci_entities,ci_confidences,ci_subtypes,cs_entities,cs_confidences,cs_subtypes,debug_str_arr,sent,dfp) - self.fill_detected_entities(detected_entities_arr,pooled_results) #just picks the top prediction - full_pooled_results.append(pooled_results) - orig_cs_confidences.append(self.pack_confidences(cs_entities,cs_confidences)) #this is sent for ensemble server to detect cross predictions. CS predicitons are more reflective of cross over than consolidated predictions, since CI may overwhelm CS - #self.old_resolve_entities(i,singleton_entities,detected_entities_arr) #This decides how to pick entities given CI and CS predictions - term_index += 1 - #out of the full loop over sentences. 
Now create NER sentence - terms_arr = main_obj["terms_arr"] - span_arr = main_obj["span_arr"] - ner_str = self.emit_sentence_entities(sent,terms_arr,detected_entities_arr,span_arr,rfp) #just outputs results in NER Conll format - dfp.flush() - return detected_entities_arr,ner_str,full_pooled_results,orig_cs_confidences - - - def fill_detected_entities(self,detected_entities_arr,entities): - if (len(entities) > 0): - top_e_class = next(iter(entities)) - top_subtype = next(iter(entities[top_e_class]["stypes"])) - if (top_e_class != top_subtype): - top_prediction = top_e_class + "[" + top_subtype + "]" - else: - top_prediction = top_e_class - detected_entities_arr.append(top_prediction) - else: - detected_entities_arr.append("OTHER") - - - def fill_detected_entities_old(self,detected_entities_arr,entities,pan_arr): - entities_dict = {} - count = 1 - for i in entities: - cand = i.split("-") - for j in cand: - terms = j.split("/") - for k in terms: - if (k not in entities_dict): - entities_dict[k] = 1.0/count - else: - entities_dict[k] += 1.0/count - count += 1 - final_sorted_d = OrderedDict(sorted(entities_dict.items(), key=lambda kv: kv[1], reverse=True)) - first = "OTHER" - for first in final_sorted_d: - break - detected_entities_arr.append(first) - - #Contextual entity is picked as first candidate before context independent candidate - def old_resolve_entities(self,index,singleton_entities,detected_entities_arr): - if (singleton_entities[index].split('[')[0] != detected_entities_arr[index].split('[')[0]): - if (singleton_entities[index].split('[')[0] != "OTHER" and detected_entities_arr[index].split('[')[0] != "OTHER"): - detected_entities_arr[index] = detected_entities_arr[index] + "/" + singleton_entities[index] - elif (detected_entities_arr[index].split('[')[0] == "OTHER"): - detected_entities_arr[index] = singleton_entities[index] - else: - pass - else: - #this is the case when both CI and CS entity type match. Since the subtypes are already ordered, just merge(CS/CI,CS/CI...) 
the two picking unique subtypes - main_entity = detected_entities_arr[index].split('[')[0] - cs_arr = detected_entities_arr[index].split('[')[1].rstrip(']').split(',') - ci_arr = singleton_entities[index].split('[')[1].rstrip(']').split(',') - cs_arr_len = len(cs_arr) - ci_arr_len = len(ci_arr) - max_len = ci_arr_len if ci_arr_len > cs_arr_len else cs_arr_len - merged_unique_subtype_dict = OrderedDict() - for i in range(cs_arr_len): - if (i < cs_arr_len and cs_arr[i] not in merged_unique_subtype_dict): - merged_unique_subtype_dict[cs_arr[i]] = 1 - if (i < ci_arr_len and ci_arr[i] not in merged_unique_subtype_dict): - merged_unique_subtype_dict[ci_arr[i]] = 1 - new_subtypes_str = ','.join(list(merged_unique_subtype_dict.keys())) - detected_entities_arr[index] = main_entity + '[' + new_subtypes_str + ']' - - - - - - - def emit_sentence_entities(self,sent,terms_arr,detected_entities_arr,span_arr,rfp): - print("Final result") - ret_str = "" - for i,term in enumerate(terms_arr): - print(term,' ',end='') - print() - sent_arr = sent.split() - assert(len(terms_arr) == len(span_arr)) - entity_index = 0 - i = 0 - in_span = False - while (i < len(span_arr)): - if (span_arr[i] == 0): - tag = "O" - if (in_span): - in_span = False - entity_index += 1 - else: - if (in_span): - tag = "I_" + detected_entities_arr[entity_index] - else: - in_span = True - tag = "B_" + detected_entities_arr[entity_index] - rfp.write(terms_arr[i] + ' ' + tag + "\n") - ret_str = ret_str + terms_arr[i] + ' ' + tag + "\n" - print(tag + ' ',end='') - i += 1 - print() - rfp.write("\n") - ret_str += "\n" - rfp.flush() - return ret_str - - - - - - def get_descriptors_for_masked_position(self,inp_arr): - desc_arr = [] - for i in range(len(inp_arr)): - desc_arr.append(inp_arr[i]["desc"]) - desc_arr.append(inp_arr[i]["v"]) - return desc_arr - - def dispatch_request(self,url): - max_retries = 10 - attempts = 0 - while True: - try: - r = requests.get(url,timeout=1000) - if (r.status_code == 200): - return r - except: - print("Request:", url, " failed. Retrying...") - attempts += 1 - if (attempts >= max_retries): - print("Request:", url, " failed") - break - - def convert_positive_nums_to_dist(self,final_sorted_d): - factors = list(final_sorted_d.values()) #convert dict values to an array - factors = list(map(float, factors)) - total = float(sum(factors)) - if (total == 0): - total = 1 - factors[0] = 1 #just make the sum 100%. This a boundary case for numbers for instance - factors = np.array(factors) - #factors = softmax(factors) - factors = factors/total - factors = np.round(factors,4) - return factors - - def get_desc_weights_total(self,count,desc_weights): - i = 0 - total = 0 - while (i < count): - total += float(desc_weights[i+1]) - i += 2 - total = 1 if total == 0 else total - return total - - - def aggregate_entities(self,entities,desc_weights,debug_str_arr,entity_info_dict_entities): - ''' Given a masked position, whose entity we are trying to determine, - First get descriptors for that postion 2*N array [desc1,score1,desc2,score2,...] - Then for each descriptor, get entity predictions which is an array 2*N of the form [e1,score1,e2,score2,...] where e1 could be DRUG/DISEASE and score1 is 10/8 etc. - In this function we aggregate each unique entity prediction (e.g. DISEASE) by summing up its weighted scores across all N predictions. 
- The result factor array is normalized to create a probability distribution - ''' - count = len(entities) - assert(count %2 == 0) - aggregate_entities = {} - i = 0 - subtypes = {} - while (i < count): - #entities[i] contains entity names and entities[i+] contains counts. Example PROTEIN/GENE/PERSON is i and 10/4/7 is i+1 - curr_counts = entities[i+1].split('/') #this is one of the N predictions - this single prediction is itself a list of entities - trunc_e,trunc_counts = self.map_entities(entities[i].split('/'),curr_counts,subtypes) # Aggregate the subtype entities for this predictions. Subtypes aggregation is **across** the N predictions - #Also trunc_e contains the consolidated entity names. - assert(len(trunc_e) <= len(curr_counts)) # can be less if untagged is skipped - assert(len(trunc_e) == len(trunc_counts)) - trunc_counts = softmax(trunc_counts) #this normalization is done to reduce the effect of absolute count of certain labeled entities, while aggregating the entity vectors across descriptors - curr_counts_sum = sum(map(int,trunc_counts)) #Using truncated count - curr_counts_sum = 1 if curr_counts_sum == 0 else curr_counts_sum - for j in range(len(trunc_e)): #this is iterating through the current instance of all *consolidated* tagged entity predictons (that is except UNTAGGED_ENTITY) - if (self.skip_untagged(trunc_e[j])): - continue - if (trunc_e[j] not in aggregate_entities): - aggregate_entities[trunc_e[j]] = (float(trunc_counts[j]))*float(desc_weights[i+1]) - #aggregate_entities[trunc_e[j]] = (float(trunc_counts[j])/curr_counts_sum)*float(desc_weights[i+1]) - #aggregate_entities[trunc_e[j]] = float(desc_weights[i+1]) - else: - aggregate_entities[trunc_e[j]] += (float(trunc_counts[j]))*float(desc_weights[i+1]) - #aggregate_entities[trunc_e[j]] += (float(trunc_counts[j])/curr_counts_sum)*float(desc_weights[i+1]) - #aggregate_entities[trunc_e[j]] += float(desc_weights[i+1]) - i += 2 - final_sorted_d = OrderedDict(sorted(aggregate_entities.items(), key=lambda kv: kv[1], reverse=True)) - if (len(final_sorted_d) == 0): #Case where all terms are tagged OTHER - final_sorted_d = {"OTHER":1} - subtypes["OTHER"] = {"OTHER":1} - factors = self.convert_positive_nums_to_dist(final_sorted_d) - ret_entities = list(final_sorted_d.keys()) - confidences = factors.tolist() - print(ret_entities) - sorted_subtypes = self.sort_subtypes(subtypes) - ret_entities = self.update_entities_with_subtypes(ret_entities,sorted_subtypes) - print(ret_entities) - debug_str_arr.append(" ") - debug_str_arr.append(' '.join(ret_entities)) - print(confidences) - assert(len(confidences) == len(ret_entities)) - arr = [] - for e,c in zip(ret_entities,confidences): - arr.append({"e":e,"confidence":c}) - entity_info_dict_entities.append(arr) - debug_str_arr.append(' '.join([str(x) for x in confidences])) - debug_str_arr.append("\n\n") - return ret_entities,confidences,subtypes - - - def sort_subtypes(self,subtypes): - sorted_subtypes = OrderedDict() - for ent in subtypes: - final_sorted_d = OrderedDict(sorted(subtypes[ent].items(), key=lambda kv: kv[1], reverse=True)) - sorted_subtypes[ent] = list(final_sorted_d.keys()) - return sorted_subtypes - - def update_entities_with_subtypes(self,ret_entities,subtypes): - new_entities = [] - - for ent in ret_entities: - #if (len(ret_entities) == 1): - # new_entities.append(ent) #avoid creating a subtype for a single case - # return new_entities - if (ent in subtypes): - new_entities.append(ent + '[' + ','.join(subtypes[ent]) + ']') - else: - new_entities.append(ent) - return 
new_entities - - def skip_untagged(self,term): - if (self.suppress_untagged == True and (term == "OTHER" or term == "UNTAGGED_ENTITY")): - return True - return False - - - def map_entities(self,arr,counts_arr,subtypes_dict): - ret_arr = [] - new_counts_arr = [] - for index,term in enumerate(arr): - if (self.skip_untagged(term)): - continue - ret_arr.append(self.entity_map[term]) - new_counts_arr.append(int(counts_arr[index])) - if (self.entity_map[term] not in subtypes_dict): - subtypes_dict[self.entity_map[term]] = {} - if (term not in subtypes_dict[self.entity_map[term]]): - #subtypes_dict[self.entity_map[i]][i] = 1 - subtypes_dict[self.entity_map[term]][term] = int(counts_arr[index]) - else: - #subtypes_dict[self.entity_map[i]][i] += 1 - subtypes_dict[self.entity_map[term]][term] += int(counts_arr[index]) - return ret_arr,new_counts_arr - - def get_entities_from_batch(self,inp_arr): - entities_arr = [] - for i in range(len(inp_arr)): - entities_arr.append(inp_arr[i]["e"]) - entities_arr.append(inp_arr[i]["e_count"]) - return entities_arr - - - def get_entities_for_masked_position(self,inp_arr,descs,debug_str_arr,entity_info_dict): - entities = self.get_entities_from_batch(inp_arr) - debug_combined_arr =[] - desc_arr =[] - assert(len(descs) %2 == 0) - assert(len(entities) %2 == 0) - index = 0 - for d,e in zip(descs,entities): - p_e = '/'.join(e.split('/')[:5]) - debug_combined_arr.append(d + " " + p_e) - if (index % 2 == 0): - temp_dict = OrderedDict() - temp_dict["d"] = d - temp_dict["e"] = e - else: - temp_dict["mlm"] = d - temp_dict["l_score"] = e - desc_arr.append(temp_dict) - index += 1 - debug_str_arr.append("\n" + ', '.join(debug_combined_arr)) - print(debug_combined_arr) - entity_info_dict["descs"] = desc_arr - #debug_str_arr.append(' '.join(entities)) - assert(len(entities) == len(descs)) - entities,confidences,subtypes = self.aggregate_entities(entities,descs,debug_str_arr,entity_info_dict["entities"]) - return entities,confidences,subtypes - - - #This is again a bad hack for prototyping purposes - extracting fields from a raw text output as opposed to a structured output like json - def extract_descs(self,text): - arr = text.split('\n') - desc_arr = [] - if (len(arr) > 0): - for i,line in enumerate(arr): - if (line.startswith(DESC_HEAD)): - terms = line.split(':') - desc_arr = ' '.join(terms[1:]).strip().split() - break - return desc_arr - - - def generate_masked_sentences(self,terms_arr): - size = len(terms_arr) - sentence_arr = [] - span_arr = [] - i = 0 - while (i < size): - term_info = terms_arr[i] - if (term_info[TAG_POS] in noun_tags): - skip = self.gen_sentence(sentence_arr,terms_arr,i) - i += skip - for j in range(skip): - span_arr.append(1) - else: - i += 1 - span_arr.append(0) - #print(sentence_arr) - return sentence_arr,span_arr - - def gen_sentence(self,sentence_arr,terms_arr,index): - size = len(terms_arr) - new_sent = [] - for prefix,term in enumerate(terms_arr[:index]): - new_sent.append(term[WORD_POS]) - i = index - skip = 0 - while (i < size): - if (terms_arr[i][TAG_POS] in noun_tags): - skip += 1 - i += 1 - else: - break - new_sent.append(MASK_TAG) - i = index + skip - while (i < size): - new_sent.append(terms_arr[i][WORD_POS]) - i += 1 - assert(skip != 0) - sentence_arr.append(new_sent) - return skip - - - - - - - - -def run_test(file_name,obj): - rfp = open("results.txt","w") - dfp = open("debug.txt","w") - with open(file_name) as fp: - count = 1 - for line in fp: - if (len(line) > 1): - print(str(count) + "] ",line,end='') - obj.tag_sentence(line,rfp,dfp) 
- count += 1 - rfp.close() - dfp.close() - - -def tag_single_entity_in_sentence(file_name,obj): - rfp = open("results.txt","w") - dfp = open("debug.txt","w") - sfp = open("se_results.txt","w") - with open(file_name) as fp: - count = 1 - for line in fp: - if (len(line) > 1): - print(str(count) + "] ",line,end='') - #entity_arr,span_arr,terms_arr,ner_str,debug_str = obj.tag_sentence(line,rfp,dfp,False) # False for json output - json_str = obj.tag_sentence(line,rfp,dfp,True) # True for json output - #print("*******************:",terms_arr[span_arr.index(1)][WORD_POS].rstrip(":"),entity_arr[0]) - #sfp.write(terms_arr[span_arr.index(1)][WORD_POS].rstrip(":") + " " + entity_arr[0] + "\n") - count += 1 - sfp.flush() - #pdb.set_trace() - rfp.close() - sfp.close() - dfp.close() - - - - -test_arr = [ -"He felt New:__entity__ York:__entity__ has a chance to win this year's competition", -"Ajit rajasekharan is an engineer at nFerence:__entity__", -"Ajit:__entity__ rajasekharan is an engineer:__entity__ at nFerence:__entity__", -"Mesothelioma:__entity__ is caused by exposure to asbestos:__entity__", -"Fyodor:__entity__ Mikhailovich:__entity__ Dostoevsky:__entity__ was treated for Parkinsons", -"Ajit:__entity__ Rajasekharan:__entity__ is an engineer at nFerence", -"A eGFR:__entity__ below 60 indicates chronic kidney disease", -"A eGFR below 60:__entity__ indicates chronic kidney disease", -"A eGFR:__entity__ below 60:__entity__ indicates chronic:__entity__ kidney:__entity__ disease:__entity__", -"Ajit:__entity__ rajasekharan is an engineer at nFerence", -"Her hypophysitis secondary to ipilimumab was well managed with supplemental hormones", -"In Seattle:__entity__ , Pete Incaviglia 's grand slam with one out in the sixth snapped a tie and lifted the Baltimore Orioles past the Seattle Mariners , 5-2 .", -"engineer", -"Austin:__entity__ called", -"Paul Erdős died at 83", -"Imatinib mesylate is a drug and is used to treat nsclc", -"In Seattle , Pete Incaviglia 's grand slam with one out in the sixth snapped a tie and lifted the Baltimore Orioles past the Seattle Mariners , 5-2 .", -"It was Incaviglia 's sixth grand slam and 200th homer of his career .", -"Add Women 's singles , third round Lisa Raymond ( U.S. ) beat Kimberly Po ( U.S. 
) 6-3 6-2 .", -"1880s marked the beginning of Jazz", -"He flew from New York to SFO", -"Lionel Ritchie was popular in the 1980s", -"Lionel Ritchie was popular in the late eighties", -"John Doe flew from New York to Rio De Janiro via Miami", -"He felt New York has a chance to win this year's competition", -"Bandolier - Budgie ' , a free itunes app for ipad , iphone and ipod touch , released in December 2011 , tells the story of the making of Bandolier in the band 's own words - including an extensive audio interview with Burke Shelley", -"In humans mutations in Foxp2 leads to verbal dyspraxia", -"The recent spread of Corona virus flu from China to Italy,Iran, South Korea and Japan has caused global concern", -"Hotel California topped the singles chart", -"Elon Musk said Telsa will open a manufacturing plant in Europe", -"He flew from New York to SFO", -"After studies at Hofstra University , He worked for New York Telephone before He was elected to the New York State Assembly to represent the 16th District in Northwest Nassau County ", -"Everyday he rode his bicycle from Rajakilpakkam to Tambaram", -"If he loses Saturday , it could devalue his position as one of the world 's great boxers , \" Panamanian Boxing Association President Ramon Manzanares said .", -"West Indian all-rounder Phil Simmons took four for 38 on Friday as Leicestershire beat Somerset by an innings and 39 runs in two days to take over at the head of the county championship .", -"they are his friends ", -"they flew from Boston to Rio De Janiro and had a mocha", -"he flew from Boston to Rio De Janiro and had a mocha", -"X,Y,Z are medicines"] - - -def test_canned_sentences(obj): - rfp = open("results.txt","w") - dfp = open("debug.txt","w") - pdb.set_trace() - for line in test_arr: - ret_val = obj.tag_sentence(line,rfp,dfp,True) - pdb.set_trace() - rfp.close() - dfp.close() - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='main NER for a single model ',formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser.add_argument('-input', action="store", dest="input",default="",help='Input file required for run options batch,single') - parser.add_argument('-config', action="store", dest="config", default=DEFAULT_CONFIG,help='config file path') - parser.add_argument('-option', action="store", dest="option",default="canned",help='Valid options are canned,batch,single. canned - test few canned sentences used in medium artice. batch - tag sentences in input file. Entities to be tagged are determing used POS tagging to find noun phrases. specific - tag specific entities in input file. 
The tagged word or phrases needs to be of the form w1:__entity_ w2:__entity_ Example:Her hypophysitis:__entity__ secondary to ipilimumab was well managed with supplemental:__entity__ hormones:__entity__') - results = parser.parse_args() - - obj = UnsupNER(results.config) - if (results.option == "canned"): - test_canned_sentences(obj) - elif (results.option == "batch"): - if (len(results.input) == 0): - print("Input file needs to be specified") - else: - run_test(results.input,obj) - print("Tags and sentences are written in results.txt and debug.txt") - elif (results.option == "specific"): - if (len(results.input) == 0): - print("Input file needs to be specified") - else: - tag_single_entity_in_sentence(results.input,obj) - print("Tags and sentences are written in results.txt and debug.txt") - else: - print("Invalid argument:\n") - parser.print_help() diff --git a/spaces/akhaliq/AnimeGANv1/app.py b/spaces/akhaliq/AnimeGANv1/app.py deleted file mode 100644 index 31ff107e70b7a544f8aa7381ebebf92f17eef862..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/AnimeGANv1/app.py +++ /dev/null @@ -1,20 +0,0 @@ -from PIL import Image -import torch -import gradio as gr - -model = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", pretrained="face_paint_512_v1") - -face2paint = torch.hub.load( - 'bryandlee/animegan2-pytorch:main', 'face2paint', - size=512, device="cpu" -) -def inference(img): - out = face2paint(model, img) - return out - - -title = "Animeganv1" -description = "Gradio demo for AnimeGanv1 Face Portrait v1. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please use a cropped portrait picture for best results similar to the examples below" -article = "

    Github Repo Pytorch | Github Repo ONNX

    samples from repo: animation animation

    " -examples=[['bill.png']] -gr.Interface(inference, gr.inputs.Image(type="pil"), gr.outputs.Image(type="pil"),title=title,description=description,article=article,enable_queue=True,examples=examples).launch() \ No newline at end of file diff --git a/spaces/akhaliq/Kapao/demos/general.py b/spaces/akhaliq/Kapao/demos/general.py deleted file mode 100644 index 79a0fbceb79d3bf52452398268aa8c76eafa8776..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Kapao/demos/general.py +++ /dev/null @@ -1,159 +0,0 @@ -import sys -from pathlib import Path - -import argparse -from pytube import YouTube -import os.path as osp -from utils.torch_utils import select_device, time_sync -from utils.general import check_img_size -from utils.datasets import LoadImages -from models.experimental import attempt_load -import torch -import cv2 -import numpy as np -import yaml -from tqdm import tqdm -import imageio -from val import run_nms, post_process_batch - - -VIDEO_NAME = 'Crazy Uptown Funk Flashmob in Sydney for sydney domains campaign.mp4' -URL = 'https://youtu.be/1WLMahXDnuI' -COLOR = (255, 0, 255) # purple -ALPHA = 0.5 -SEG_THICK = 3 -FPS_TEXT_SIZE = 2 - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--data', type=str, default='data/coco-kp.yaml') - parser.add_argument('--imgsz', type=int, default=448) - parser.add_argument('--vid', type=str, default='') - parser.add_argument('--weights', default='kapao_s_coco.pt') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or cpu') - parser.add_argument('--half', action='store_true') - parser.add_argument('--conf-thres', type=float, default=0.5, help='confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold') - parser.add_argument('--no-kp-dets', action='store_true', help='do not use keypoint objects') - parser.add_argument('--conf-thres-kp', type=float, default=0.5) - parser.add_argument('--conf-thres-kp-person', type=float, default=0.2) - parser.add_argument('--iou-thres-kp', type=float, default=0.45) - parser.add_argument('--overwrite-tol', type=int, default=50) - parser.add_argument('--scales', type=float, nargs='+', default=[1]) - parser.add_argument('--flips', type=int, nargs='+', default=[-1]) - parser.add_argument('--display', action='store_true', help='display inference results') - parser.add_argument('--fps', action='store_true', help='display fps') - parser.add_argument('--gif', action='store_true', help='create fig') - parser.add_argument('--start', type=int, default=68, help='start time (s)') - parser.add_argument('--end', type=int, default=98, help='end time (s)') - args = parser.parse_args() - - with open(args.data) as f: - data = yaml.safe_load(f) # load data dict - - # add inference settings to data dict - data['imgsz'] = args.imgsz - data['conf_thres'] = args.conf_thres - data['iou_thres'] = args.iou_thres - data['use_kp_dets'] = not args.no_kp_dets - data['conf_thres_kp'] = args.conf_thres_kp - data['iou_thres_kp'] = args.iou_thres_kp - data['conf_thres_kp_person'] = args.conf_thres_kp_person - data['overwrite_tol'] = args.overwrite_tol - data['scales'] = args.scales - data['flips'] = [None if f == -1 else f for f in args.flips] - - - - device = select_device(args.device, batch_size=1) - print('Using device: {}'.format(device)) - - model = attempt_load(args.weights, map_location=device) # load FP32 model - half = args.half & (device.type != 'cpu') - if half: # half precision only supported on CUDA - model.half() - stride = 
int(model.stride.max()) # model stride - - imgsz = check_img_size(args.imgsz, s=stride) # check image size - dataset = LoadImages(args.vid, img_size=imgsz, stride=stride, auto=True) - - if device.type != 'cpu': - model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once - - cap = dataset.cap - cap.set(cv2.CAP_PROP_POS_MSEC, args.start * 1000) - fps = cap.get(cv2.CAP_PROP_FPS) - n = int(fps * (args.end - args.start)) - h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - gif_frames = [] - video_name = 'flash_mob_inference_{}'.format(osp.splitext(args.weights)[0]) - - if not args.display: - writer = cv2.VideoWriter(video_name + '.mp4', - cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) - if not args.fps: # tqdm might slows down inference - dataset = tqdm(dataset, desc='Writing inference video', total=n) - - t0 = time_sync() - for i, (path, img, im0, _) in enumerate(dataset): - img = torch.from_numpy(img).to(device) - img = img.half() if half else img.float() # uint8 to fp16/32 - img = img / 255.0 # 0 - 255 to 0.0 - 1.0 - if len(img.shape) == 3: - img = img[None] # expand for batch dim - - out = model(img, augment=True, kp_flip=data['kp_flip'], scales=data['scales'], flips=data['flips'])[0] - person_dets, kp_dets = run_nms(data, out) - bboxes, poses, _, _, _ = post_process_batch(data, img, [], [[im0.shape[:2]]], person_dets, kp_dets) - - im0_copy = im0.copy() - - # DRAW POSES - for j, (bbox, pose) in enumerate(zip(bboxes, poses)): - x1, y1, x2, y2 = bbox - size = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 - # if size < 450: - cv2.rectangle(im0_copy, (int(x1), int(y1)), (int(x2), int(y2)), COLOR, thickness=2) - for seg in data['segments'].values(): - pt1 = (int(pose[seg[0], 0]), int(pose[seg[0], 1])) - pt2 = (int(pose[seg[1], 0]), int(pose[seg[1], 1])) - cv2.line(im0_copy, pt1, pt2, COLOR, SEG_THICK) - im0 = cv2.addWeighted(im0, ALPHA, im0_copy, 1 - ALPHA, gamma=0) - - if i == 0: - t = time_sync() - t0 - else: - t = time_sync() - t1 - - if args.fps: - s = FPS_TEXT_SIZE - cv2.putText(im0, '{:.1f} FPS'.format(1 / t), (5*s, 25*s), - cv2.FONT_HERSHEY_SIMPLEX, s, (255, 255, 255), thickness=2*s) - - if args.gif: - gif_frames.append(cv2.resize(im0, dsize=None, fx=0.375, fy=0.375)[:, :, [2, 1, 0]]) - elif not args.display: - writer.write(im0) - else: - cv2.imshow('', im0) - cv2.waitKey(1) - - t1 = time_sync() - if i == n - 1: - break - - cv2.destroyAllWindows() - cap.release() - if not args.display: - writer.release() - - if args.gif: - print('Saving GIF...') - with imageio.get_writer(video_name + '.gif', mode="I", fps=fps) as writer: - for idx, frame in tqdm(enumerate(gif_frames)): - writer.append_data(frame) - - - diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/synthesizer_preprocess_audio.py b/spaces/akhaliq/Real-Time-Voice-Cloning/synthesizer_preprocess_audio.py deleted file mode 100644 index fd4d01d476d77391322aef9d9d5a005adb1f5c15..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-Time-Voice-Cloning/synthesizer_preprocess_audio.py +++ /dev/null @@ -1,59 +0,0 @@ -from synthesizer.preprocess import preprocess_dataset -from synthesizer.hparams import hparams -from utils.argutils import print_args -from pathlib import Path -import argparse - - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="Preprocesses audio files from datasets, encodes them as mel spectrograms " - "and writes them to the disk. 
Audio files are also saved, to be used by the " - "vocoder for training.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument("datasets_root", type=Path, help=\ - "Path to the directory containing your LibriSpeech/TTS datasets.") - parser.add_argument("-o", "--out_dir", type=Path, default=argparse.SUPPRESS, help=\ - "Path to the output directory that will contain the mel spectrograms, the audios and the " - "embeds. Defaults to /SV2TTS/synthesizer/") - parser.add_argument("-n", "--n_processes", type=int, default=None, help=\ - "Number of processes in parallel.") - parser.add_argument("-s", "--skip_existing", action="store_true", help=\ - "Whether to overwrite existing files with the same name. Useful if the preprocessing was " - "interrupted.") - parser.add_argument("--hparams", type=str, default="", help=\ - "Hyperparameter overrides as a comma-separated list of name-value pairs") - parser.add_argument("--no_trim", action="store_true", help=\ - "Preprocess audio without trimming silences (not recommended).") - parser.add_argument("--no_alignments", action="store_true", help=\ - "Use this option when dataset does not include alignments\ - (these are used to split long audio files into sub-utterances.)") - parser.add_argument("--datasets_name", type=str, default="LibriSpeech", help=\ - "Name of the dataset directory to process.") - parser.add_argument("--subfolders", type=str, default="train-clean-100, train-clean-360", help=\ - "Comma-separated list of subfolders to process inside your dataset directory") - args = parser.parse_args() - - # Process the arguments - if not hasattr(args, "out_dir"): - args.out_dir = args.datasets_root.joinpath("SV2TTS", "synthesizer") - - # Create directories - assert args.datasets_root.exists() - args.out_dir.mkdir(exist_ok=True, parents=True) - - # Verify webrtcvad is available - if not args.no_trim: - try: - import webrtcvad - except: - raise ModuleNotFoundError("Package 'webrtcvad' not found. This package enables " - "noise removal and is recommended. Please install and try again. If installation fails, " - "use --no_trim to disable this error message.") - del args.no_trim - - # Preprocess the dataset - print_args(args, parser) - args.hparams = hparams.parse(args.hparams) - preprocess_dataset(**vars(args)) diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/vctk/voc1/run.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/vctk/voc1/run.sh deleted file mode 100644 index 5f58c42f36dd7cc8e7ca2917814a72face2222cc..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/vctk/voc1/run.sh +++ /dev/null @@ -1,188 +0,0 @@ -#!/bin/bash - -# Copyright 2020 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -. ./cmd.sh || exit 1; -. ./path.sh || exit 1; - -# basic settings -stage=-1 # stage to start -stop_stage=100 # stage to stop -verbose=1 # verbosity level (lower is less info) -n_gpus=1 # number of gpus in training -n_jobs=16 # number of parallel jobs in feature extraction - -# NOTE(kan-bayashi): renamed to conf to avoid conflict in parse_options.sh -conf=conf/parallel_wavegan.v1.yaml - -# speaker setting -spks="all" # all or you can choose speakers e.g., "p225 p226 p227 ..." - -# directory path setting -download_dir=downloads # directory to save database -dumpdir=dump # directory to dump features - -# training related setting -tag="" # tag for directory to save model -resume="" # checkpoint path to resume training - # (e.g. 
//checkpoint-10000steps.pkl) - -# decoding related setting -checkpoint="" # checkpoint path to be used for decoding - # if not provided, the latest one will be used - # (e.g. //checkpoint-400000steps.pkl) - -# shellcheck disable=SC1091 -. utils/parse_options.sh || exit 1; - -train_set="train_nodev_$(echo "${spks}" | tr " " "_")" # name of training data directory -dev_set="dev_$(echo "${spks}" | tr " " "_")" # name of development data directory -eval_set="eval_$(echo "${spks}" | tr " " "_")" # name of evaluation data directory - -set -euo pipefail - -if [ "${stage}" -le -1 ] && [ "${stop_stage}" -ge -1 ]; then - echo "Stage -1: Data download" - local/data_download.sh "${download_dir}" -fi - -if [ "${stage}" -le 0 ] && [ "${stop_stage}" -ge 0 ]; then - echo "Stage 0: Data preparation" - train_data_dirs="" - dev_data_dirs="" - eval_data_dirs="" - # if set to "all", use all of the speakers in the corpus - if [ "${spks}" = "all" ]; then - # NOTE(kan-bayashi): p315 will not be used since it lacks txt data - spks=$(find "${download_dir}/VCTK-Corpus/wav48" \ - -maxdepth 1 -name "p*" -exec basename {} \; | sort | grep -v p315) - fi - for spk in ${spks}; do - local/data_prep.sh \ - --fs "$(yq ".sampling_rate" "${conf}")" \ - --train_set "train_nodev_${spk}" \ - --dev_set "dev_${spk}" \ - --eval_set "eval_${spk}" \ - "${download_dir}/VCTK-Corpus" "${spk}" data - train_data_dirs+=" data/train_nodev_${spk}" - dev_data_dirs+=" data/dev_${spk}" - eval_data_dirs+=" data/eval_${spk}" - done - # shellcheck disable=SC2086 - utils/combine_data.sh "data/${train_set}" ${train_data_dirs} - # shellcheck disable=SC2086 - utils/combine_data.sh "data/${dev_set}" ${dev_data_dirs} - # shellcheck disable=SC2086 - utils/combine_data.sh "data/${eval_set}" ${eval_data_dirs} -fi - -stats_ext=$(grep -q "hdf5" <(yq ".format" "${conf}") && echo "h5" || echo "npy") -if [ "${stage}" -le 1 ] && [ "${stop_stage}" -ge 1 ]; then - echo "Stage 1: Feature extraction" - # extract raw features - pids=() - for name in "${train_set}" "${dev_set}" "${eval_set}"; do - ( - [ ! -e "${dumpdir}/${name}/raw" ] && mkdir -p "${dumpdir}/${name}/raw" - echo "Feature extraction start. See the progress via ${dumpdir}/${name}/raw/preprocessing.*.log." - utils/make_subset_data.sh "data/${name}" "${n_jobs}" "${dumpdir}/${name}/raw" - ${train_cmd} JOB=1:${n_jobs} "${dumpdir}/${name}/raw/preprocessing.JOB.log" \ - parallel-wavegan-preprocess \ - --config "${conf}" \ - --scp "${dumpdir}/${name}/raw/wav.JOB.scp" \ - --segments "${dumpdir}/${name}/raw/segments.JOB" \ - --dumpdir "${dumpdir}/${name}/raw/dump.JOB" \ - --verbose "${verbose}" - echo "Successfully finished feature extraction of ${name} set." - ) & - pids+=($!) - done - i=0; for pid in "${pids[@]}"; do wait "${pid}" || ((++i)); done - [ "${i}" -gt 0 ] && echo "$0: ${i} background jobs are failed." && exit 1; - echo "Successfully finished feature extraction." - - # calculate statistics for normalization - echo "Statistics computation start. See the progress via ${dumpdir}/${train_set}/compute_statistics.log." - ${train_cmd} "${dumpdir}/${train_set}/compute_statistics.log" \ - parallel-wavegan-compute-statistics \ - --config "${conf}" \ - --rootdir "${dumpdir}/${train_set}/raw" \ - --dumpdir "${dumpdir}/${train_set}" \ - --verbose "${verbose}" - echo "Successfully finished calculation of statistics." - - # normalize and dump them - pids=() - for name in "${train_set}" "${dev_set}" "${eval_set}"; do - ( - [ ! 
-e "${dumpdir}/${name}/norm" ] && mkdir -p "${dumpdir}/${name}/norm" - echo "Nomalization start. See the progress via ${dumpdir}/${name}/norm/normalize.*.log." - ${train_cmd} JOB=1:${n_jobs} "${dumpdir}/${name}/norm/normalize.JOB.log" \ - parallel-wavegan-normalize \ - --config "${conf}" \ - --stats "${dumpdir}/${train_set}/stats.${stats_ext}" \ - --rootdir "${dumpdir}/${name}/raw/dump.JOB" \ - --dumpdir "${dumpdir}/${name}/norm/dump.JOB" \ - --verbose "${verbose}" - echo "Successfully finished normalization of ${name} set." - ) & - pids+=($!) - done - i=0; for pid in "${pids[@]}"; do wait "${pid}" || ((++i)); done - [ "${i}" -gt 0 ] && echo "$0: ${i} background jobs are failed." && exit 1; - echo "Successfully finished normalization." -fi - -if [ -z "${tag}" ]; then - expdir="exp/${train_set}_vctk_$(basename "${conf}" .yaml)" -else - expdir="exp/${train_set}_vctk_${tag}" -fi -if [ "${stage}" -le 2 ] && [ "${stop_stage}" -ge 2 ]; then - echo "Stage 2: Network training" - [ ! -e "${expdir}" ] && mkdir -p "${expdir}" - cp "${dumpdir}/${train_set}/stats.${stats_ext}" "${expdir}" - if [ "${n_gpus}" -gt 1 ]; then - train="python -m parallel_wavegan.distributed.launch --nproc_per_node ${n_gpus} -c parallel-wavegan-train" - else - train="parallel-wavegan-train" - fi - echo "Training start. See the progress via ${expdir}/train.log." - ${cuda_cmd} --gpu "${n_gpus}" "${expdir}/train.log" \ - ${train} \ - --config "${conf}" \ - --train-dumpdir "${dumpdir}/${train_set}/norm" \ - --dev-dumpdir "${dumpdir}/${dev_set}/norm" \ - --outdir "${expdir}" \ - --resume "${resume}" \ - --verbose "${verbose}" - echo "Successfully finished training." -fi - -if [ "${stage}" -le 3 ] && [ "${stop_stage}" -ge 3 ]; then - echo "Stage 3: Network decoding" - # shellcheck disable=SC2012 - [ -z "${checkpoint}" ] && checkpoint="$(ls -dt "${expdir}"/*.pkl | head -1 || true)" - outdir="${expdir}/wav/$(basename "${checkpoint}" .pkl)" - pids=() - for name in "${dev_set}" "${eval_set}"; do - ( - [ ! -e "${outdir}/${name}" ] && mkdir -p "${outdir}/${name}" - [ "${n_gpus}" -gt 1 ] && n_gpus=1 - echo "Decoding start. See the progress via ${outdir}/${name}/decode.log." - ${cuda_cmd} --gpu "${n_gpus}" "${outdir}/${name}/decode.log" \ - parallel-wavegan-decode \ - --dumpdir "${dumpdir}/${name}/norm" \ - --checkpoint "${checkpoint}" \ - --outdir "${outdir}/${name}" \ - --verbose "${verbose}" - echo "Successfully finished decoding of ${name} set." - ) & - pids+=($!) - done - i=0; for pid in "${pids[@]}"; do wait "${pid}" || ((++i)); done - [ "${i}" -gt 0 ] && echo "$0: ${i} background jobs are failed." && exit 1; - echo "Successfully finished decoding." -fi -echo "Finished." diff --git a/spaces/akhaliq/deeplab2/g3doc/setup/getting_started.md b/spaces/akhaliq/deeplab2/g3doc/setup/getting_started.md deleted file mode 100644 index 44bb6fea382cab179757ae027fd85d7fb809dbe2..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/g3doc/setup/getting_started.md +++ /dev/null @@ -1,109 +0,0 @@ -# Using DeepLab2 - -In the following, we provide instructions on how to run DeepLab2. - -## Prerequisites - -We assume DeepLab2 is successfully installed and the necessary datasets are -configured. - -* See [Installation](installation.md). -* See dataset guides: - * [Cityscapes](cityscapes.md). - * [KITTI-STEP](kitti_step.md). - * [and many more](./). - -## Running DeepLab2 - -DeepLab2 contains several implementations of state-of-the-art methods. 
In the -following, we discuss all steps, from choosing a model and setting up the -configuration to training and evaluating it. - -### Choosing a model - -For this tutorial, we use Panoptic-DeepLab; however, running any other model -follows the same steps. For each network architecture, we provide a guide that -contains example configurations and (pretrained) checkpoints. You can find all -guides [here](../projects/). For now, please check out -[Panoptic-DeepLab](../projects/panoptic_deeplab.md). - -We will use the ResNet50 model as an example for this guide. If you just want to -run the network without training, please download the corresponding checkpoint -trained by us. If you would like to train the network, please download the -corresponding ImageNet pretrained checkpoint from -[here](../projects/imagenet_pretrained_checkpoints.md). - -### Defining a configuration - -When you want to train or evaluate a network, DeepLab2 requires a corresponding -configuration. This configuration contains information about the network -architecture as well as all sorts of hyper-parameters. Fortunately, for almost -all settings we provide default values and example configurations. The -configuration of Panoptic-DeepLab with ResNet50 for the Cityscapes dataset can -be found -[here](../../configs/cityscapes/panoptic_deeplab/resnet50_os32_merge_with_pure_tf_func.textproto). - -Using our default parameters, there are only a few things that need to be -defined: - -1. The name of the experiment `experiment_name`. The experiment name is used as - a folder name to store all experiment-related files in. -2. The initial checkpoint `initial_checkpoint`, which can be an empty string - for none or the path to a checkpoint (e.g., pretrained on ImageNet or fully - trained by us). -3. The training dataset `train_dataset_options.file_pattern`, which should - point to the TfRecords of the Cityscapes train set. -4. The evaluation dataset `eval_dataset_options.file_pattern`, which should - point to the TfRecords of the Cityscapes val set. -5. If the custom CUDA kernel is successfully compiled, we recommend setting - `merge_semantic_and_instance_with_tf_op` to true. - -For a detailed explanation of all the parameters, we refer to the documented -definitions of the proto files. A good starting place is the -[config.proto](../../config.proto). The `ExperimentOptions` are a collection of -all necessary configurations ranging from the model architecture to the training -settings. - -### Training and Evaluating - -We currently support four different modes to run DeepLab2: - -* Training: This will only train the network based on the provided - configuration. -* Evaluation: This will only evaluate the network based on the provided - configuration. -* Continuous Evaluation: This mode will constantly monitor a directory for - newly saved checkpoints that will be evaluated until a timeout. This mode is - useful when running separate jobs for training and evaluation (e.g., a multi - GPU job for training, and a single GPU job for evaluating). -* Interleaved Training and Evaluation: In this mode, training and evaluation - will run interleaved. This is not supported for multi GPU jobs. - -### Putting everything together - -To run DeepLab2 on GPUs, the following command should be used: - -```bash -python training/train.py \ - --config_file=${CONFIG_FILE} \ - --mode={train | eval | train_and_eval | continuous_eval} \ - --model_dir=${BASE_MODEL_DIRECTORY} \ - --num_gpus=${NUM_GPUS} -``` - -You can also launch DeepLab2 on TPUs. 
For this, the TPU address needs to be -specified: - -```bash -python training/train.py \ - --config_file=${CONFIG_FILE} \ - --mode={train | eval | train_and_eval | continuous_eval} \ - --model_dir=${BASE_MODEL_DIRECTORY} \ - --master=${TPU_ADDRESS} -``` - -For a detailed explanation of each option run: - -```bash -python training/train.py --help -``` diff --git a/spaces/akhaliq/deeplab2/model/encoder/axial_resnet_test.py b/spaces/akhaliq/deeplab2/model/encoder/axial_resnet_test.py deleted file mode 100644 index c50b66261951164560725bd530288cededfdb8cd..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/model/encoder/axial_resnet_test.py +++ /dev/null @@ -1,46 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tests for axial_resnet.""" - -import numpy as np -import tensorflow as tf - -from deeplab2.model.encoder import axial_resnet - - -class AxialResNetTest(tf.test.TestCase): - - def test_axial_resnet_correct_output_shape(self): - model = axial_resnet.AxialResNet('max_deeplab_s') - endpoints = model(tf.zeros([2, 65, 65, 3]), training=False) - self.assertListEqual(endpoints['backbone_output'].get_shape().as_list(), - [2, 5, 5, 2048]) - self.assertListEqual( - endpoints['transformer_class_feature'].get_shape().as_list(), - [2, 128, 256]) - self.assertListEqual( - endpoints['transformer_mask_feature'].get_shape().as_list(), - [2, 128, 256]) - self.assertListEqual(endpoints['feature_panoptic'].get_shape().as_list(), - [2, 17, 17, 256]) - self.assertListEqual(endpoints['feature_semantic'].get_shape().as_list(), - [2, 5, 5, 2048]) - num_params = np.sum( - [np.prod(v.get_shape().as_list()) for v in model.trainable_weights]) - self.assertEqual(num_params, 61726624) - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/akhaliq/papercutcraft-v1/app.py b/spaces/akhaliq/papercutcraft-v1/app.py deleted file mode 100644 index 576da1e2dd55edd198b4274a7cf8b5ed1728f983..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/papercutcraft-v1/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'OlafII/papercutcraft-v1' -prefix = '' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, 
neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
    -
    -

    Papercutcraft V1

    -
    -

    - Demo for Papercutcraft V1 Stable Diffusion model.
    - {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""} -

    - Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"}

    - Duplicate Space -
    - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically ()", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
    -
    -

    This space was created using SD Space Creator.

    -
    - """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/akshatsanghvi/movie-recommender-system/README.md b/spaces/akshatsanghvi/movie-recommender-system/README.md deleted file mode 100644 index 800692cbd8660a05a55572a582f7269ad826790d..0000000000000000000000000000000000000000 --- a/spaces/akshatsanghvi/movie-recommender-system/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Movie Recommender System -emoji: 🎥 -colorFrom: purple -colorTo: blue -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -license: apache-2.0 ---- - -## Movie Recommender System -A movie recommendation system, or a movie recommender system, is an ML-based approach to filter or predict the users’ film preferences using techinques like : -- popularity-based -- content-based -- collabrative filtering - - -In this, project I have used content-based filtering to fiter movies with similar tags (concatenated strings of story, gernre, and title column) - -## Links : - -- #### Hugging Face Space Link : [Click Me](https://huggingface.co/spaces/akshatsanghvi/movie-recommender-system) -- #### Kaggle Dataset (Bollywood) : [Click Me](https://www.kaggle.com/datasets/pncnmnp/the-indian-movie-database) -- #### Kaggle Dataset (Hollywood) : [Click Me](https://www.kaggle.com/datasets/neha1703/movie-genre-from-its-poster?select=MovieGenre.csv) - -## After you click on the HF link you'll see two buttons, -#### "Recommend" Button : Recommends 5 similar movies from the same country as of movie. -#### "🌍" Button : Recommends 10 similar movies from the world. - -## Step 1: -![ss1](https://user-images.githubusercontent.com/92530735/220027045-6a009d71-d524-4a20-8a38-6ec7a813fd17.png) - -## Step 2: -![ss2](https://user-images.githubusercontent.com/92530735/220027275-d85b1336-2a14-4336-8383-62c0c7e8a6d0.png) - -## Step 3: -![ss3](https://user-images.githubusercontent.com/92530735/220027380-b711ece2-eb2e-44d2-8abc-b83cc116ba9a.png) \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test117/app.py b/spaces/allknowingroger/Image-Models-Test117/app.py deleted file mode 100644 index de8671ee160b3083834732b8aa1bf0bb5b43d99e..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test117/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "ngttt/lora-trained-xl-colab-test", - "thisserand/lora-trained-xl-colab", - "EnD-Diffusers/lineart-model", - "NEXAS/stable_diff_personl", - "Gauri54damle/sd-multi-object-model", - "minhalvp/SDXL-Dreambooth-HRSFC", - "CiroN2022/skull-graphics", - "Navu45/pokemon-model", - "rohiladora/lora-trained-xl-colab", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: 
{to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test43/app.py b/spaces/allknowingroger/Image-Models-Test43/app.py deleted file mode 100644 index 67ff223376c7a19a4dc16178feb0c36d0e7612e8..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test43/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "OPERFIND/step1", - "Yntec/ChildrenStoriesAnime", - "Dinesh-2004/my-pet-dog", - "rafaym/DreamBoothAvatar", - "abwqr/text2img_vision", - "Jinouga/makima-chainsaw-manv1", - "Dinesh-2004/my-pet-dog", - "abwqr/text2img_vision_2.0", - "digiplay/SyncMix_v1.5", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = 
(model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/alphacep/asr/README.md b/spaces/alphacep/asr/README.md deleted file mode 100644 index d11b0de259dcb3c4ee955960b3c65e4a9aeb3ca9..0000000000000000000000000000000000000000 --- a/spaces/alphacep/asr/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Automatic Speech Recognition -emoji: 🌍 -colorFrom: magenta -colorTo: magenta -sdk: gradio -sdk_version: 3.0.26 -app_file: app.py -pinned: true -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/arpagon/whisper-demo-large-v2-es/README.md b/spaces/arpagon/whisper-demo-large-v2-es/README.md deleted file mode 100644 index 
7843cbfd85cf6bb73dfbef2f8863bb01aed27aa2..0000000000000000000000000000000000000000 --- a/spaces/arpagon/whisper-demo-large-v2-es/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Whisper Demo -emoji: 🤫 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -tags: -- whisper-event -duplicated_from: whisper-event/whisper-demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/arsalagrey/speech-recognition-vue/index.html b/spaces/arsalagrey/speech-recognition-vue/index.html deleted file mode 100644 index d07d378647ab6ebf3b557a84a9116f9459e081c8..0000000000000000000000000000000000000000 --- a/spaces/arsalagrey/speech-recognition-vue/index.html +++ /dev/null @@ -1,53 +0,0 @@ - - - - - - Speech Recognition Vue - HuggingFace.js Live Examples - - - - - -
    -

    Speech Recognition

    -
    -
    - - -
    -
    - - -
    -
    -
    -
    - - -
    -
    - - -
    -
    - -

    {{statusMessage}}

    -

    {{recognizedText}}

    -
    - - - diff --git a/spaces/artificialguybr/video-dubbing/TTS/Makefile b/spaces/artificialguybr/video-dubbing/TTS/Makefile deleted file mode 100644 index 54aa6eeb186b69cbc752c7c043114b6873fc4e7d..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/Makefile +++ /dev/null @@ -1,81 +0,0 @@ -.DEFAULT_GOAL := help -.PHONY: test system-deps dev-deps deps style lint install help docs - -help: - @grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' - -target_dirs := tests TTS notebooks recipes - -test_all: ## run tests and don't stop on an error. - nose2 --with-coverage --coverage TTS tests - ./run_bash_tests.sh - -test: ## run tests. - nose2 -F -v -B --with-coverage --coverage TTS tests - -test_vocoder: ## run vocoder tests. - nose2 -F -v -B --with-coverage --coverage TTS tests.vocoder_tests - -test_tts: ## run tts tests. - nose2 -F -v -B --with-coverage --coverage TTS tests.tts_tests - -test_tts2: ## run tts tests. - nose2 -F -v -B --with-coverage --coverage TTS tests.tts_tests2 - -test_xtts: - nose2 -F -v -B --with-coverage --coverage TTS tests.xtts_tests - -test_aux: ## run aux tests. - nose2 -F -v -B --with-coverage --coverage TTS tests.aux_tests - ./run_bash_tests.sh - -test_zoo: ## run zoo tests. - nose2 -F -v -B --with-coverage --coverage TTS tests.zoo_tests - -inference_tests: ## run inference tests. - nose2 -F -v -B --with-coverage --coverage TTS tests.inference_tests - -api_tests: ## run api tests. - nose2 -F -v -B --with-coverage --coverage TTS tests.api_tests - -data_tests: ## run data tests. - nose2 -F -v -B --with-coverage --coverage TTS tests.data_tests - -test_text: ## run text tests. - nose2 -F -v -B --with-coverage --coverage TTS tests.text_tests - -test_failed: ## only run tests failed the last time. - nose2 -F -v -B --with-coverage --coverage TTS tests - -style: ## update code style. - black ${target_dirs} - isort ${target_dirs} - -lint: ## run pylint linter. - pylint ${target_dirs} - black ${target_dirs} --check - isort ${target_dirs} --check-only - -system-deps: ## install linux system deps - sudo apt-get install -y libsndfile1-dev - -dev-deps: ## install development deps - pip install -r requirements.dev.txt - -doc-deps: ## install docs dependencies - pip install -r docs/requirements.txt - -build-docs: ## build the docs - cd docs && make clean && make build - -hub-deps: ## install deps for torch hub use - pip install -r requirements.hub.txt - -deps: ## install 🐸 requirements. - pip install -r requirements.txt - -install: ## install 🐸 TTS for development. 
- pip install -e .[all] - -docs: ## build the docs - $(MAKE) -C docs clean && $(MAKE) -C docs html diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/test_vits.py b/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/test_vits.py deleted file mode 100644 index fca99556199efb79a9c65378c40faebdb2cf51b6..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/test_vits.py +++ /dev/null @@ -1,595 +0,0 @@ -import copy -import os -import unittest - -import torch -from trainer.logging.tensorboard_logger import TensorboardLogger - -from tests import assertHasAttr, assertHasNotAttr, get_tests_data_path, get_tests_input_path, get_tests_output_path -from TTS.config import load_config -from TTS.encoder.utils.generic_utils import setup_encoder_model -from TTS.tts.configs.vits_config import VitsConfig -from TTS.tts.models.vits import ( - Vits, - VitsArgs, - VitsAudioConfig, - amp_to_db, - db_to_amp, - load_audio, - spec_to_mel, - wav_to_mel, - wav_to_spec, -) -from TTS.tts.utils.speakers import SpeakerManager - -LANG_FILE = os.path.join(get_tests_input_path(), "language_ids.json") -SPEAKER_ENCODER_CONFIG = os.path.join(get_tests_input_path(), "test_speaker_encoder_config.json") -WAV_FILE = os.path.join(get_tests_input_path(), "example_1.wav") - - -torch.manual_seed(1) -use_cuda = torch.cuda.is_available() -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - - -# pylint: disable=no-self-use -class TestVits(unittest.TestCase): - def test_load_audio(self): - wav, sr = load_audio(WAV_FILE) - self.assertEqual(wav.shape, (1, 41885)) - self.assertEqual(sr, 22050) - - spec = wav_to_spec(wav, n_fft=1024, hop_length=512, win_length=1024, center=False) - mel = wav_to_mel( - wav, - n_fft=1024, - num_mels=80, - sample_rate=sr, - hop_length=512, - win_length=1024, - fmin=0, - fmax=8000, - center=False, - ) - mel2 = spec_to_mel(spec, n_fft=1024, num_mels=80, sample_rate=sr, fmin=0, fmax=8000) - - self.assertEqual((mel - mel2).abs().max(), 0) - self.assertEqual(spec.shape[0], mel.shape[0]) - self.assertEqual(spec.shape[2], mel.shape[2]) - - spec_db = amp_to_db(spec) - spec_amp = db_to_amp(spec_db) - - self.assertAlmostEqual((spec - spec_amp).abs().max(), 0, delta=1e-4) - - def test_dataset(self): - """TODO:""" - ... 
- - def test_init_multispeaker(self): - num_speakers = 10 - args = VitsArgs(num_speakers=num_speakers, use_speaker_embedding=True) - model = Vits(args) - assertHasAttr(self, model, "emb_g") - - args = VitsArgs(num_speakers=0, use_speaker_embedding=True) - model = Vits(args) - assertHasNotAttr(self, model, "emb_g") - - args = VitsArgs(num_speakers=10, use_speaker_embedding=False) - model = Vits(args) - assertHasNotAttr(self, model, "emb_g") - - args = VitsArgs(d_vector_dim=101, use_d_vector_file=True) - model = Vits(args) - self.assertEqual(model.embedded_speaker_dim, 101) - - def test_init_multilingual(self): - args = VitsArgs(language_ids_file=None, use_language_embedding=False) - model = Vits(args) - self.assertEqual(model.language_manager, None) - self.assertEqual(model.embedded_language_dim, 0) - assertHasNotAttr(self, model, "emb_l") - - args = VitsArgs(language_ids_file=LANG_FILE) - model = Vits(args) - self.assertNotEqual(model.language_manager, None) - self.assertEqual(model.embedded_language_dim, 0) - assertHasNotAttr(self, model, "emb_l") - - args = VitsArgs(language_ids_file=LANG_FILE, use_language_embedding=True) - model = Vits(args) - self.assertNotEqual(model.language_manager, None) - self.assertEqual(model.embedded_language_dim, args.embedded_language_dim) - assertHasAttr(self, model, "emb_l") - - args = VitsArgs(language_ids_file=LANG_FILE, use_language_embedding=True, embedded_language_dim=102) - model = Vits(args) - self.assertNotEqual(model.language_manager, None) - self.assertEqual(model.embedded_language_dim, args.embedded_language_dim) - assertHasAttr(self, model, "emb_l") - - def test_get_aux_input(self): - aux_input = {"speaker_ids": None, "style_wav": None, "d_vectors": None, "language_ids": None} - args = VitsArgs() - model = Vits(args) - aux_out = model.get_aux_input(aux_input) - - speaker_id = torch.randint(10, (1,)) - language_id = torch.randint(10, (1,)) - d_vector = torch.rand(1, 128) - aux_input = {"speaker_ids": speaker_id, "style_wav": None, "d_vectors": d_vector, "language_ids": language_id} - aux_out = model.get_aux_input(aux_input) - self.assertEqual(aux_out["speaker_ids"].shape, speaker_id.shape) - self.assertEqual(aux_out["language_ids"].shape, language_id.shape) - self.assertEqual(aux_out["d_vectors"].shape, d_vector.unsqueeze(0).transpose(2, 1).shape) - - def test_voice_conversion(self): - num_speakers = 10 - spec_len = 101 - spec_effective_len = 50 - - args = VitsArgs(num_speakers=num_speakers, use_speaker_embedding=True) - model = Vits(args) - - ref_inp = torch.randn(1, 513, spec_len) - ref_inp_len = torch.randint(1, spec_effective_len, (1,)) - ref_spk_id = torch.randint(1, num_speakers, (1,)).item() - tgt_spk_id = torch.randint(1, num_speakers, (1,)).item() - o_hat, y_mask, (z, z_p, z_hat) = model.voice_conversion(ref_inp, ref_inp_len, ref_spk_id, tgt_spk_id) - - self.assertEqual(o_hat.shape, (1, 1, spec_len * 256)) - self.assertEqual(y_mask.shape, (1, 1, spec_len)) - self.assertEqual(y_mask.sum(), ref_inp_len[0]) - self.assertEqual(z.shape, (1, args.hidden_channels, spec_len)) - self.assertEqual(z_p.shape, (1, args.hidden_channels, spec_len)) - self.assertEqual(z_hat.shape, (1, args.hidden_channels, spec_len)) - - def _create_inputs(self, config, batch_size=2): - input_dummy = torch.randint(0, 24, (batch_size, 128)).long().to(device) - input_lengths = torch.randint(100, 129, (batch_size,)).long().to(device) - input_lengths[-1] = 128 - spec = torch.rand(batch_size, config.audio["fft_size"] // 2 + 1, 30).to(device) - mel = torch.rand(batch_size, 
config.audio["num_mels"], 30).to(device) - spec_lengths = torch.randint(20, 30, (batch_size,)).long().to(device) - spec_lengths[-1] = spec.size(2) - waveform = torch.rand(batch_size, 1, spec.size(2) * config.audio["hop_length"]).to(device) - return input_dummy, input_lengths, mel, spec, spec_lengths, waveform - - def _check_forward_outputs(self, config, output_dict, encoder_config=None, batch_size=2): - self.assertEqual( - output_dict["model_outputs"].shape[2], config.model_args.spec_segment_size * config.audio["hop_length"] - ) - self.assertEqual(output_dict["alignments"].shape, (batch_size, 128, 30)) - self.assertEqual(output_dict["alignments"].max(), 1) - self.assertEqual(output_dict["alignments"].min(), 0) - self.assertEqual(output_dict["z"].shape, (batch_size, config.model_args.hidden_channels, 30)) - self.assertEqual(output_dict["z_p"].shape, (batch_size, config.model_args.hidden_channels, 30)) - self.assertEqual(output_dict["m_p"].shape, (batch_size, config.model_args.hidden_channels, 30)) - self.assertEqual(output_dict["logs_p"].shape, (batch_size, config.model_args.hidden_channels, 30)) - self.assertEqual(output_dict["m_q"].shape, (batch_size, config.model_args.hidden_channels, 30)) - self.assertEqual(output_dict["logs_q"].shape, (batch_size, config.model_args.hidden_channels, 30)) - self.assertEqual( - output_dict["waveform_seg"].shape[2], config.model_args.spec_segment_size * config.audio["hop_length"] - ) - if encoder_config: - self.assertEqual(output_dict["gt_spk_emb"].shape, (batch_size, encoder_config.model_params["proj_dim"])) - self.assertEqual(output_dict["syn_spk_emb"].shape, (batch_size, encoder_config.model_params["proj_dim"])) - else: - self.assertEqual(output_dict["gt_spk_emb"], None) - self.assertEqual(output_dict["syn_spk_emb"], None) - - def test_forward(self): - num_speakers = 0 - config = VitsConfig(num_speakers=num_speakers, use_speaker_embedding=True) - config.model_args.spec_segment_size = 10 - input_dummy, input_lengths, _, spec, spec_lengths, waveform = self._create_inputs(config) - model = Vits(config).to(device) - output_dict = model.forward(input_dummy, input_lengths, spec, spec_lengths, waveform) - self._check_forward_outputs(config, output_dict) - - def test_multispeaker_forward(self): - num_speakers = 10 - - config = VitsConfig(num_speakers=num_speakers, use_speaker_embedding=True) - config.model_args.spec_segment_size = 10 - - input_dummy, input_lengths, _, spec, spec_lengths, waveform = self._create_inputs(config) - speaker_ids = torch.randint(0, num_speakers, (8,)).long().to(device) - - model = Vits(config).to(device) - output_dict = model.forward( - input_dummy, input_lengths, spec, spec_lengths, waveform, aux_input={"speaker_ids": speaker_ids} - ) - self._check_forward_outputs(config, output_dict) - - def test_d_vector_forward(self): - batch_size = 2 - args = VitsArgs( - spec_segment_size=10, - num_chars=32, - use_d_vector_file=True, - d_vector_dim=256, - d_vector_file=[os.path.join(get_tests_data_path(), "dummy_speakers.json")], - ) - config = VitsConfig(model_args=args) - model = Vits.init_from_config(config, verbose=False).to(device) - model.train() - input_dummy, input_lengths, _, spec, spec_lengths, waveform = self._create_inputs(config, batch_size=batch_size) - d_vectors = torch.randn(batch_size, 256).to(device) - output_dict = model.forward( - input_dummy, input_lengths, spec, spec_lengths, waveform, aux_input={"d_vectors": d_vectors} - ) - self._check_forward_outputs(config, output_dict) - - def test_multilingual_forward(self): - 
num_speakers = 10 - num_langs = 3 - batch_size = 2 - - args = VitsArgs(language_ids_file=LANG_FILE, use_language_embedding=True, spec_segment_size=10) - config = VitsConfig(num_speakers=num_speakers, use_speaker_embedding=True, model_args=args) - - input_dummy, input_lengths, _, spec, spec_lengths, waveform = self._create_inputs(config, batch_size=batch_size) - speaker_ids = torch.randint(0, num_speakers, (batch_size,)).long().to(device) - lang_ids = torch.randint(0, num_langs, (batch_size,)).long().to(device) - - model = Vits(config).to(device) - output_dict = model.forward( - input_dummy, - input_lengths, - spec, - spec_lengths, - waveform, - aux_input={"speaker_ids": speaker_ids, "language_ids": lang_ids}, - ) - self._check_forward_outputs(config, output_dict) - - def test_secl_forward(self): - num_speakers = 10 - num_langs = 3 - batch_size = 2 - - speaker_encoder_config = load_config(SPEAKER_ENCODER_CONFIG) - speaker_encoder_config.model_params["use_torch_spec"] = True - speaker_encoder = setup_encoder_model(speaker_encoder_config).to(device) - speaker_manager = SpeakerManager() - speaker_manager.encoder = speaker_encoder - - args = VitsArgs( - language_ids_file=LANG_FILE, - use_language_embedding=True, - spec_segment_size=10, - use_speaker_encoder_as_loss=True, - ) - config = VitsConfig(num_speakers=num_speakers, use_speaker_embedding=True, model_args=args) - config.audio.sample_rate = 16000 - - input_dummy, input_lengths, _, spec, spec_lengths, waveform = self._create_inputs(config, batch_size=batch_size) - speaker_ids = torch.randint(0, num_speakers, (batch_size,)).long().to(device) - lang_ids = torch.randint(0, num_langs, (batch_size,)).long().to(device) - - model = Vits(config, speaker_manager=speaker_manager).to(device) - output_dict = model.forward( - input_dummy, - input_lengths, - spec, - spec_lengths, - waveform, - aux_input={"speaker_ids": speaker_ids, "language_ids": lang_ids}, - ) - self._check_forward_outputs(config, output_dict, speaker_encoder_config) - - def _check_inference_outputs(self, config, outputs, input_dummy, batch_size=1): - feat_len = outputs["z"].shape[2] - self.assertEqual(outputs["model_outputs"].shape[:2], (batch_size, 1)) # we don't know the channel dimension - self.assertEqual(outputs["alignments"].shape, (batch_size, input_dummy.shape[1], feat_len)) - self.assertEqual(outputs["z"].shape, (batch_size, config.model_args.hidden_channels, feat_len)) - self.assertEqual(outputs["z_p"].shape, (batch_size, config.model_args.hidden_channels, feat_len)) - self.assertEqual(outputs["m_p"].shape, (batch_size, config.model_args.hidden_channels, feat_len)) - self.assertEqual(outputs["logs_p"].shape, (batch_size, config.model_args.hidden_channels, feat_len)) - - def test_inference(self): - num_speakers = 0 - config = VitsConfig(num_speakers=num_speakers, use_speaker_embedding=True) - model = Vits(config).to(device) - - batch_size = 1 - input_dummy, *_ = self._create_inputs(config, batch_size=batch_size) - outputs = model.inference(input_dummy) - self._check_inference_outputs(config, outputs, input_dummy, batch_size=batch_size) - - batch_size = 2 - input_dummy, input_lengths, *_ = self._create_inputs(config, batch_size=batch_size) - outputs = model.inference(input_dummy, aux_input={"x_lengths": input_lengths}) - self._check_inference_outputs(config, outputs, input_dummy, batch_size=batch_size) - - def test_multispeaker_inference(self): - num_speakers = 10 - config = VitsConfig(num_speakers=num_speakers, use_speaker_embedding=True) - model = Vits(config).to(device) - - 
batch_size = 1 - input_dummy, *_ = self._create_inputs(config, batch_size=batch_size) - speaker_ids = torch.randint(0, num_speakers, (batch_size,)).long().to(device) - outputs = model.inference(input_dummy, {"speaker_ids": speaker_ids}) - self._check_inference_outputs(config, outputs, input_dummy, batch_size=batch_size) - - batch_size = 2 - input_dummy, input_lengths, *_ = self._create_inputs(config, batch_size=batch_size) - speaker_ids = torch.randint(0, num_speakers, (batch_size,)).long().to(device) - outputs = model.inference(input_dummy, {"x_lengths": input_lengths, "speaker_ids": speaker_ids}) - self._check_inference_outputs(config, outputs, input_dummy, batch_size=batch_size) - - def test_multilingual_inference(self): - num_speakers = 10 - num_langs = 3 - args = VitsArgs(language_ids_file=LANG_FILE, use_language_embedding=True, spec_segment_size=10) - config = VitsConfig(num_speakers=num_speakers, use_speaker_embedding=True, model_args=args) - model = Vits(config).to(device) - - input_dummy = torch.randint(0, 24, (1, 128)).long().to(device) - speaker_ids = torch.randint(0, num_speakers, (1,)).long().to(device) - lang_ids = torch.randint(0, num_langs, (1,)).long().to(device) - _ = model.inference(input_dummy, {"speaker_ids": speaker_ids, "language_ids": lang_ids}) - - batch_size = 1 - input_dummy, *_ = self._create_inputs(config, batch_size=batch_size) - speaker_ids = torch.randint(0, num_speakers, (batch_size,)).long().to(device) - lang_ids = torch.randint(0, num_langs, (batch_size,)).long().to(device) - outputs = model.inference(input_dummy, {"speaker_ids": speaker_ids, "language_ids": lang_ids}) - self._check_inference_outputs(config, outputs, input_dummy, batch_size=batch_size) - - batch_size = 2 - input_dummy, input_lengths, *_ = self._create_inputs(config, batch_size=batch_size) - speaker_ids = torch.randint(0, num_speakers, (batch_size,)).long().to(device) - lang_ids = torch.randint(0, num_langs, (batch_size,)).long().to(device) - outputs = model.inference( - input_dummy, {"x_lengths": input_lengths, "speaker_ids": speaker_ids, "language_ids": lang_ids} - ) - self._check_inference_outputs(config, outputs, input_dummy, batch_size=batch_size) - - def test_d_vector_inference(self): - args = VitsArgs( - spec_segment_size=10, - num_chars=32, - use_d_vector_file=True, - d_vector_dim=256, - d_vector_file=[os.path.join(get_tests_data_path(), "dummy_speakers.json")], - ) - config = VitsConfig(model_args=args) - model = Vits.init_from_config(config, verbose=False).to(device) - model.eval() - # batch size = 1 - input_dummy = torch.randint(0, 24, (1, 128)).long().to(device) - d_vectors = torch.randn(1, 256).to(device) - outputs = model.inference(input_dummy, aux_input={"d_vectors": d_vectors}) - self._check_inference_outputs(config, outputs, input_dummy) - # batch size = 2 - input_dummy, input_lengths, *_ = self._create_inputs(config) - d_vectors = torch.randn(2, 256).to(device) - outputs = model.inference(input_dummy, aux_input={"x_lengths": input_lengths, "d_vectors": d_vectors}) - self._check_inference_outputs(config, outputs, input_dummy, batch_size=2) - - @staticmethod - def _check_parameter_changes(model, model_ref): - count = 0 - for item1, item2 in zip(model.named_parameters(), model_ref.named_parameters()): - name = item1[0] - param = item1[1] - param_ref = item2[1] - assert (param != param_ref).any(), "param {} with shape {} not updated!! 
\n{}\n{}".format( - name, param.shape, param, param_ref - ) - count = count + 1 - - def _create_batch(self, config, batch_size): - input_dummy, input_lengths, mel, spec, mel_lengths, _ = self._create_inputs(config, batch_size) - batch = {} - batch["tokens"] = input_dummy - batch["token_lens"] = input_lengths - batch["spec_lens"] = mel_lengths - batch["mel_lens"] = mel_lengths - batch["spec"] = spec - batch["mel"] = mel - batch["waveform"] = torch.rand(batch_size, 1, config.audio["sample_rate"] * 10).to(device) - batch["d_vectors"] = None - batch["speaker_ids"] = None - batch["language_ids"] = None - return batch - - def test_train_step(self): - # setup the model - with torch.autograd.set_detect_anomaly(True): - config = VitsConfig(model_args=VitsArgs(num_chars=32, spec_segment_size=10)) - model = Vits(config).to(device) - model.train() - # model to train - optimizers = model.get_optimizer() - criterions = model.get_criterion() - criterions = [criterions[0].to(device), criterions[1].to(device)] - # reference model to compare model weights - model_ref = Vits(config).to(device) - # # pass the state to ref model - model_ref.load_state_dict(copy.deepcopy(model.state_dict())) - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - assert (param - param_ref).sum() == 0, param - count = count + 1 - for _ in range(5): - batch = self._create_batch(config, 2) - for idx in [0, 1]: - outputs, loss_dict = model.train_step(batch, criterions, idx) - self.assertFalse(not outputs) - self.assertFalse(not loss_dict) - loss_dict["loss"].backward() - optimizers[idx].step() - optimizers[idx].zero_grad() - - # check parameter changes - self._check_parameter_changes(model, model_ref) - - def test_train_step_upsampling(self): - """Upsampling by the decoder upsampling layers""" - # setup the model - with torch.autograd.set_detect_anomaly(True): - audio_config = VitsAudioConfig(sample_rate=22050) - model_args = VitsArgs( - num_chars=32, - spec_segment_size=10, - encoder_sample_rate=11025, - interpolate_z=False, - upsample_rates_decoder=[8, 8, 4, 2], - ) - config = VitsConfig(model_args=model_args, audio=audio_config) - model = Vits(config).to(device) - model.train() - # model to train - optimizers = model.get_optimizer() - criterions = model.get_criterion() - criterions = [criterions[0].to(device), criterions[1].to(device)] - # reference model to compare model weights - model_ref = Vits(config).to(device) - # # pass the state to ref model - model_ref.load_state_dict(copy.deepcopy(model.state_dict())) - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - assert (param - param_ref).sum() == 0, param - count = count + 1 - for _ in range(5): - batch = self._create_batch(config, 2) - for idx in [0, 1]: - outputs, loss_dict = model.train_step(batch, criterions, idx) - self.assertFalse(not outputs) - self.assertFalse(not loss_dict) - loss_dict["loss"].backward() - optimizers[idx].step() - optimizers[idx].zero_grad() - - # check parameter changes - self._check_parameter_changes(model, model_ref) - - def test_train_step_upsampling_interpolation(self): - """Upsampling by interpolation""" - # setup the model - with torch.autograd.set_detect_anomaly(True): - audio_config = VitsAudioConfig(sample_rate=22050) - model_args = VitsArgs( - num_chars=32, - spec_segment_size=10, - encoder_sample_rate=11025, - interpolate_z=True, - upsample_rates_decoder=[8, 8, 2, 2], - ) - config = VitsConfig(model_args=model_args, audio=audio_config) - model = Vits(config).to(device) - 
model.train() - # model to train - optimizers = model.get_optimizer() - criterions = model.get_criterion() - criterions = [criterions[0].to(device), criterions[1].to(device)] - # reference model to compare model weights - model_ref = Vits(config).to(device) - # # pass the state to ref model - model_ref.load_state_dict(copy.deepcopy(model.state_dict())) - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - assert (param - param_ref).sum() == 0, param - count = count + 1 - for _ in range(5): - batch = self._create_batch(config, 2) - for idx in [0, 1]: - outputs, loss_dict = model.train_step(batch, criterions, idx) - self.assertFalse(not outputs) - self.assertFalse(not loss_dict) - loss_dict["loss"].backward() - optimizers[idx].step() - optimizers[idx].zero_grad() - - # check parameter changes - self._check_parameter_changes(model, model_ref) - - def test_train_eval_log(self): - batch_size = 2 - config = VitsConfig(model_args=VitsArgs(num_chars=32, spec_segment_size=10)) - model = Vits.init_from_config(config, verbose=False).to(device) - model.run_data_dep_init = False - model.train() - batch = self._create_batch(config, batch_size) - logger = TensorboardLogger( - log_dir=os.path.join(get_tests_output_path(), "dummy_vits_logs"), model_name="vits_test_train_log" - ) - criterion = model.get_criterion() - criterion = [criterion[0].to(device), criterion[1].to(device)] - outputs = [None] * 2 - outputs[0], _ = model.train_step(batch, criterion, 0) - outputs[1], _ = model.train_step(batch, criterion, 1) - model.train_log(batch, outputs, logger, None, 1) - - model.eval_log(batch, outputs, logger, None, 1) - logger.finish() - - def test_test_run(self): - config = VitsConfig(model_args=VitsArgs(num_chars=32)) - model = Vits.init_from_config(config, verbose=False).to(device) - model.run_data_dep_init = False - model.eval() - test_figures, test_audios = model.test_run(None) - self.assertTrue(test_figures is not None) - self.assertTrue(test_audios is not None) - - def test_load_checkpoint(self): - chkp_path = os.path.join(get_tests_output_path(), "dummy_glow_tts_checkpoint.pth") - config = VitsConfig(VitsArgs(num_chars=32)) - model = Vits.init_from_config(config, verbose=False).to(device) - chkp = {} - chkp["model"] = model.state_dict() - torch.save(chkp, chkp_path) - model.load_checkpoint(config, chkp_path) - self.assertTrue(model.training) - model.load_checkpoint(config, chkp_path, eval=True) - self.assertFalse(model.training) - - def test_get_criterion(self): - config = VitsConfig(VitsArgs(num_chars=32)) - model = Vits.init_from_config(config, verbose=False).to(device) - criterion = model.get_criterion() - self.assertTrue(criterion is not None) - - def test_init_from_config(self): - config = VitsConfig(model_args=VitsArgs(num_chars=32)) - model = Vits.init_from_config(config, verbose=False).to(device) - - config = VitsConfig(model_args=VitsArgs(num_chars=32, num_speakers=2)) - model = Vits.init_from_config(config, verbose=False).to(device) - self.assertTrue(not hasattr(model, "emb_g")) - - config = VitsConfig(model_args=VitsArgs(num_chars=32, num_speakers=2, use_speaker_embedding=True)) - model = Vits.init_from_config(config, verbose=False).to(device) - self.assertEqual(model.num_speakers, 2) - self.assertTrue(hasattr(model, "emb_g")) - - config = VitsConfig( - model_args=VitsArgs( - num_chars=32, - num_speakers=2, - use_speaker_embedding=True, - speakers_file=os.path.join(get_tests_data_path(), "ljspeech", "speakers.json"), - ) - ) - model = 
Vits.init_from_config(config, verbose=False).to(device) - self.assertEqual(model.num_speakers, 10) - self.assertTrue(hasattr(model, "emb_g")) - - config = VitsConfig( - model_args=VitsArgs( - num_chars=32, - use_d_vector_file=True, - d_vector_dim=256, - d_vector_file=[os.path.join(get_tests_data_path(), "dummy_speakers.json")], - ) - ) - model = Vits.init_from_config(config, verbose=False).to(device) - self.assertTrue(model.num_speakers == 1) - self.assertTrue(not hasattr(model, "emb_g")) - self.assertTrue(model.embedded_speaker_dim == config.d_vector_dim) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/error/DiagnosticErrorListener.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/error/DiagnosticErrorListener.py deleted file mode 100644 index 32ac14b63579ce7c984c2e34f2b1c80bebe328ed..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/error/DiagnosticErrorListener.py +++ /dev/null @@ -1,107 +0,0 @@ -# -# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved. -# Use of this file is governed by the BSD 3-clause license that -# can be found in the LICENSE.txt file in the project root. -# - - -# -# This implementation of {@link ANTLRErrorListener} can be used to identify -# certain potential correctness and performance problems in grammars. "Reports" -# are made by calling {@link Parser#notifyErrorListeners} with the appropriate -# message. -# -#
-# • Ambiguities: These are cases where more than one path through the
-# grammar can match the input.
-# • Weak context sensitivity: These are cases where full-context
-# prediction resolved an SLL conflict to a unique alternative which equaled the
-# minimum alternative of the SLL conflict.
-# • Strong (forced) context sensitivity: These are cases where the
-# full-context prediction resolved an SLL conflict to a unique alternative,
-# and the minimum alternative of the SLL conflict was found to not be
-# a truly viable alternative. Two-stage parsing cannot be used for inputs where
-# this situation occurs.
    - -from io import StringIO -from antlr4 import Parser, DFA -from antlr4.atn.ATNConfigSet import ATNConfigSet -from antlr4.error.ErrorListener import ErrorListener - -class DiagnosticErrorListener(ErrorListener): - - def __init__(self, exactOnly:bool=True): - # whether all ambiguities or only exact ambiguities are reported. - self.exactOnly = exactOnly - - def reportAmbiguity(self, recognizer:Parser, dfa:DFA, startIndex:int, - stopIndex:int, exact:bool, ambigAlts:set, configs:ATNConfigSet): - if self.exactOnly and not exact: - return - - with StringIO() as buf: - buf.write("reportAmbiguity d=") - buf.write(self.getDecisionDescription(recognizer, dfa)) - buf.write(": ambigAlts=") - buf.write(str(self.getConflictingAlts(ambigAlts, configs))) - buf.write(", input='") - buf.write(recognizer.getTokenStream().getText(startIndex, stopIndex)) - buf.write("'") - recognizer.notifyErrorListeners(buf.getvalue()) - - - def reportAttemptingFullContext(self, recognizer:Parser, dfa:DFA, startIndex:int, - stopIndex:int, conflictingAlts:set, configs:ATNConfigSet): - with StringIO() as buf: - buf.write("reportAttemptingFullContext d=") - buf.write(self.getDecisionDescription(recognizer, dfa)) - buf.write(", input='") - buf.write(recognizer.getTokenStream().getText(startIndex, stopIndex)) - buf.write("'") - recognizer.notifyErrorListeners(buf.getvalue()) - - def reportContextSensitivity(self, recognizer:Parser, dfa:DFA, startIndex:int, - stopIndex:int, prediction:int, configs:ATNConfigSet): - with StringIO() as buf: - buf.write("reportContextSensitivity d=") - buf.write(self.getDecisionDescription(recognizer, dfa)) - buf.write(", input='") - buf.write(recognizer.getTokenStream().getText(startIndex, stopIndex)) - buf.write("'") - recognizer.notifyErrorListeners(buf.getvalue()) - - def getDecisionDescription(self, recognizer:Parser, dfa:DFA): - decision = dfa.decision - ruleIndex = dfa.atnStartState.ruleIndex - - ruleNames = recognizer.ruleNames - if ruleIndex < 0 or ruleIndex >= len(ruleNames): - return str(decision) - - ruleName = ruleNames[ruleIndex] - if ruleName is None or len(ruleName)==0: - return str(decision) - - return str(decision) + " (" + ruleName + ")" - - # - # Computes the set of conflicting or ambiguous alternatives from a - # configuration set, if that information was not already provided by the - # parser. - # - # @param reportedAlts The set of conflicting or ambiguous alternatives, as - # reported by the parser. - # @param configs The conflicting or ambiguous configuration set. - # @return Returns {@code reportedAlts} if it is not {@code null}, otherwise - # returns the set of alternatives represented in {@code configs}. - # - def getConflictingAlts(self, reportedAlts:set, configs:ATNConfigSet): - if reportedAlts is not None: - return reportedAlts - - result = set() - for config in configs: - result.add(config.alt) - - return result diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/sentencepiece_bpe.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/sentencepiece_bpe.py deleted file mode 100644 index 0aa6cd7681d0c3a91a6917640972d008db8faef7..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/sentencepiece_bpe.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass, field -from typing import Optional - -from fairseq import file_utils -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class SentencepieceConfig(FairseqDataclass): - sentencepiece_model: str = field( - default="???", metadata={"help": "path to sentencepiece model"} - ) - sentencepiece_enable_sampling: bool = field( - default=False, metadata={"help": "enable sampling"} - ) - sentencepiece_alpha: Optional[float] = field( - default=None, - metadata={ - "help": "soothing parameter for unigram sampling, " - "and merge probability for BPE-dropout" - }, - ) - - -@register_bpe("sentencepiece", dataclass=SentencepieceConfig) -class SentencepieceBPE(object): - def __init__(self, cfg): - self.enable_sampling = cfg.sentencepiece_enable_sampling - self.alpha = cfg.sentencepiece_alpha - sentencepiece_model = file_utils.cached_path(cfg.sentencepiece_model) - try: - import sentencepiece as spm - - self.sp = spm.SentencePieceProcessor() - self.sp.Load(sentencepiece_model) - except ImportError: - raise ImportError( - "Please install sentencepiece with: pip install sentencepiece" - ) - - def encode(self, x: str) -> str: - return " ".join( - self.sp.Encode( - x, out_type=str, enable_sampling=self.enable_sampling, alpha=self.alpha - ) - ) - - def decode(self, x: str) -> str: - return x.replace(" ", "").replace("\u2581", " ").strip() - - def is_beginning_of_word(self, x: str) -> bool: - if x in ["", "", "", ""]: - # special elements are always considered beginnings - # HACK: this logic is already present in fairseq/tasks/masked_lm.py - # but these special tokens are also contained in the sentencepiece - # vocabulary which causes duplicate special tokens. This hack makes - # sure that they are all taken into account. - return True - return x.startswith("\u2581") diff --git a/spaces/ashhadahsan/ai-book-generator/app.py b/spaces/ashhadahsan/ai-book-generator/app.py deleted file mode 100644 index a974d86d74c2505ffd504aeb7036d294bd2bfa9d..0000000000000000000000000000000000000000 --- a/spaces/ashhadahsan/ai-book-generator/app.py +++ /dev/null @@ -1,111 +0,0 @@ -import streamlit as st -from constants import * -from stqdm import stqdm -from prompts import * -from generator import * -from utils import * - - -st.set_page_config( - layout="wide", - page_title="AI Book Generator", - page_icon=":book:", -) -st.title("AI Book Generator") -st.markdown("

    Select options

    ", unsafe_allow_html=True) -with st.expander("Educational value *"): - age_range = st.select_slider("Age range of the reader", options=AGE_RANGE) - skill_development = st.selectbox("Skill development", options=SKILL_DEVELOPMENT) - learning_obectives = st.selectbox( - "Learning objectives", options=LEARNING_OBJECTIVES - ) -with st.expander("Emotional value *"): - theme = st.selectbox("Theme", options=THEME) - mood = st.selectbox("Moood of story", options=MODD_OF_STORY) - positive_messaging = st.selectbox("Skill development", options=POSITIVE_MESSAGNG) -with st.expander("Personal *"): - theme = st.selectbox("Gender", options=GENDER) - fvrt_book = st.text_input("Favorite book") -with st.expander("Book Details * "): - chapters = st.number_input( - "How many chapters should the book have?", min_value=3, max_value=100, value=5 - ) - - title = st.text_input("Title of the book") - genre = st.selectbox("Genre", options=GENRE) - topic = st.selectbox("Topic ", options=TOPIC) - main_name = st.text_input("Name of main character") - type_of_main_character = st.selectbox( - "Type of main character", TYPE_OF_MAIN_CHARACTER - ) - antagonist_name = st.text_input("Antagonist name") - antagonsit_type = st.selectbox("Antagonist type", options=ANTAGONIST_TYPE) - suuporting_character_name = st.text_input("Supporting character name (if any)") - suporting_character_type = st.selectbox( - "Supporting character type", options=SUPPORTING_CHARACTER_TYPE - ) - settings = st.selectbox("Setting ", options=SETTINGS) - resolution = st.selectbox("Resolution", options=RESOLUTION) - -btn = st.button("Generate Book") -if btn: - content = [] - for x in stqdm(range(chapters), desc="Generating book"): - if x == 0: - prmpt = get_initial_prompts( - genre, - type_of_main_character, - main_name, - skill_development, - learning_obectives, - theme, - topic, - ) - content.append(complete_with_gpt(prmpt, 200, "gpt2", 1500, 0.7, 1.5)) - if x == 1: - prmpt = story_setting_prompt( - genre, - type_of_main_character, - main_name, - skill_development, - learning_obectives, - theme, - mood, - antagonist_name, - antagonsit_type, - ) - previous = " ".join(x for x in content) - prmpt = previous + " " + prmpt - content.append(complete_with_gpt(prmpt, 200, "gpt2", 1500, 0.7, 1.5)) - - if x % 3 == 0: - prmpt = supporting_character_inclusion( - genre, - suuporting_character_name, - suporting_character_type, - positive_messaging, - ) - previous = " ".join(x for x in content) - prmpt = previous + " " + prmpt - content.append(complete_with_gpt(prmpt, 200, "gpt2", 1500, 0.7, 1.5)) - if x == chapters - 1: - prmpt = ending_scene(genre, resolution, main_name, positive_messaging) - previous = " ".join(x for x in content) - prmpt = previous + " " + prmpt - content.append(complete_with_gpt(prmpt, 200, "gpt2", 1500, 0.7, 1.5)) - else: - previous = " ".join(x for x in content) - prmpt = previous - content.append(complete_with_gpt(prmpt, 200, "gpt2", 1500, 0.7, 1.5)) - - st.write(content) - filenamee = to_pdf(convert(create_md(text=content, title=title))) - with open(filenamee, "rb") as pdf_file: - PDFbyte = pdf_file.read() - - st.download_button( - label="Download Book", - data=PDFbyte, - file_name=filenamee, - mime="application/octet-stream", - ) diff --git a/spaces/aurora10/GPT4ALL_CHATBOT/app.py b/spaces/aurora10/GPT4ALL_CHATBOT/app.py deleted file mode 100644 index 7361d946a8ff6c24d5f63b63d748cb4dba27b7ce..0000000000000000000000000000000000000000 --- a/spaces/aurora10/GPT4ALL_CHATBOT/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import os -import gradio as gr 
-from gpt4all import GPT4All -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory -from langchain import PromptTemplate, LLMChain -from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler - - - -template = """You are a helpful assistant to answer all user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -model = GPT4All("ggml-gpt4all-l13b-snoozy.bin") - -from langchain.llms import GPT4All - -# Callbacks support token-wise streaming -callbacks = [StreamingStdOutCallbackHandler()] - -# Verbose is required to pass to the callback manager -llm = GPT4All(model="ggml-gpt4all-l13b-snoozy.bin", callbacks=callbacks, verbose=True) - -# If you want to use a custom model add the backend parameter -# Check https://docs.gpt4all.io/gpt4all_python.html for supported backends -llm = GPT4All(model="ggml-gpt4all-l13b-snoozy.bin", backend="gptj", callbacks=callbacks, verbose=True) - -llm_chain = LLMChain(prompt=prompt, llm=llm, verbose=True, memory=memory,) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/awacke1/AI-Standard-Operating-Procedures/README.md b/spaces/awacke1/AI-Standard-Operating-Procedures/README.md deleted file mode 100644 index 43c70871fb49dcd3d1edfec3a3d6787173115502..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AI-Standard-Operating-Procedures/README.md +++ /dev/null @@ -1,239 +0,0 @@ ---- -title: AI Standard Operating Procedures -emoji: 🌡️📜🎓👀📊🚨📁 -colorFrom: yellow -colorTo: yellow -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: awacke1/AI-ChatGPT-CPT-Body-Map-Cost ---- - -## Standard Operating Procedures -| SOP No. 
| Standard Operating Procedure | Description | Top Ten Keywords | Wikipedia Link | SOP Icon | -|---------|------------------------------|-------------|-----------------|----------------|---------| -| 1 | SOP-01: Risk Assessment | Identifying, evaluating, and prioritizing compliance risks | risk, assessment, evaluate, prioritize, compliance, identify, analysis, management, mitigation, control | https://en.wikipedia.org/wiki/Risk_assessment | 🌡️ | -| 2 | SOP-02: Policy Development | Creating clear and concise compliance policies and procedures | policy, development, create, clear, concise, compliance, procedure, regulation, standard, guideline | https://en.wikipedia.org/wiki/Policy | 📜 | -| 3 | SOP-03: Training | Providing regular compliance training to employees | training, compliance, regular, employee, development, program, education, workshop, seminar, course | https://en.wikipedia.org/wiki/Training | 🎓 | -| 4 | SOP-04: Monitoring | Conducting periodic compliance audits and monitoring activities | monitoring, periodic, compliance, audit, review, assessment, evaluation, inspection, surveillance, oversight | https://en.wikipedia.org/wiki/Monitoring_and_evaluation | 👀 | -| 5 | SOP-05: Reporting | Establishing a process for reporting and addressing compliance issues | reporting, process, establish, compliance, issue, address, record, communication, notification, investigation | https://en.wikipedia.org/wiki/Reporting | 📊 | -| 6 | SOP-06: Incident Management | Handling compliance incidents and implementing corrective actions | incident, management, compliance, handle, implement, corrective, action, investigation, response, resolution | https://en.wikipedia.org/wiki/Incident_management | 🚨 | -| 7 | SOP-07: Recordkeeping | Maintaining accurate and up-to-date compliance records and documentation | recordkeeping, maintain, accurate, up-to-date, compliance, documentation, archive, storage, filing, record | https://en.wikipedia.org/wiki/Record_keeping | 📁 | - -1. What is the purpose of SOP-01: Risk Assessment? -- The purpose of SOP-01: Risk Assessment is to identify, evaluate, and prioritize compliance risks. - -2. What does the term “risk” refer to in the context of risk assessment? -- In the context of risk assessment, the term “risk” refers to the potential for an event or situation to have a negative impact on an organization or project. - -3. What is the process for evaluating risks? -- The process for evaluating risks typically involves identifying the potential risks, analyzing their likelihood and potential impact, and prioritizing them based on their severity. - -4. How do you prioritize risks in a risk assessment? -- Risks can be prioritized in a risk assessment by considering their potential impact, likelihood of occurrence, and the organization’s ability to mitigate or control them. - -5. What is compliance risk? -- Compliance risk refers to the risk associated with non-compliance with laws, regulations, or internal policies and procedures. - -6. What is the role of analysis in risk assessment? -- Analysis plays a crucial role in risk assessment by helping to identify potential risks, evaluate their impact and likelihood, and develop strategies for mitigating or controlling them. - -7. What is risk management? -- Risk management is the process of identifying, assessing, and prioritizing risks, and developing strategies to mitigate or control them. - -8. What is risk mitigation? -- Risk mitigation refers to the process of minimizing or preventing the negative impact of potential risks. 
- -9. What is risk control? -- Risk control refers to the measures taken to manage or reduce the likelihood and severity of potential risks. - -10. Why is risk assessment important? -- Risk assessment is important because it helps organizations to identify and manage potential risks, leading to better decision-making, improved performance, and reduced negative impacts. - - - -1. What is the purpose of SOP-02: Policy Development? -- The purpose of SOP-02: Policy Development is to create clear and concise compliance policies and procedures. - -2. What is a policy? -- A policy is a set of guidelines or principles that are developed to guide decision-making and behavior within an organization. - -3. What is the process for policy development? -- The process for policy development typically involves identifying the need for the policy, researching and gathering information, drafting the policy, obtaining feedback and approval, and implementing the policy. - -4. Why is it important for policies to be clear and concise? -- It is important for policies to be clear and concise so that they can be easily understood and followed by all members of the organization. This helps to ensure that everyone is on the same page and that compliance is maintained. - -5. What is compliance? -- Compliance refers to the act of following laws, regulations, or internal policies and procedures. - -6. What is a procedure? -- A procedure is a set of step-by-step instructions or guidelines for how to perform a specific task or activity. - -7. What is a regulation? -- A regulation is a rule or law that is put in place by a government or regulatory body to ensure compliance and standardization. - -8. What is a standard? -- A standard is a set of guidelines or principles that are developed to ensure consistent and high-quality performance or behavior. - -9. What is a guideline? -- A guideline is a set of recommendations or tips that are developed to assist with decision-making or performance. - -10. Why is policy development important? -- Policy development is important because it helps to ensure that an organization is operating in compliance with regulations and standards, while also promoting consistency and clarity in decision-making and behavior. - -1. What is the purpose of SOP-03: Training? -- The purpose of SOP-03: Training is to provide regular compliance training to employees. - -2. What is training? -- Training is the process of developing skills, knowledge, or behavior through education and instruction. - -3. Why is regular compliance training important? -- Regular compliance training is important to ensure that employees are aware of, and adhere to, laws, regulations, and company policies and procedures. - -4. What is compliance? -- Compliance refers to the act of following laws, regulations, or internal policies and procedures. - -5. Who is responsible for providing compliance training? -- It is typically the responsibility of the employer or organization to provide compliance training to their employees. - -6. What is employee development? -- Employee development refers to the process of improving an employee’s skills, knowledge, and abilities through training and education programs. - -7. What is a training program? -- A training program is a structured approach to employee development that is designed to improve skills, knowledge, and abilities related to a specific job or task. - -8. What is an education workshop? 
-- An education workshop is a training session that is designed to provide participants with information and skills related to a specific topic or field. - -9. What is a seminar? -- A seminar is a training event that typically involves an expert speaker or panel discussing a specific topic or issue. - -10. What is a training course? -- A training course is a structured program of learning that is typically designed to improve skills or knowledge related to a specific job or task. - - -1. What is the purpose of SOP-04: Monitoring? -- The purpose of SOP-04: Monitoring is to conduct periodic compliance audits and monitoring activities. - -2. What is monitoring? -- Monitoring is the process of tracking and observing an activity or process to ensure that it is operating as intended. - -3. What does periodic mean in the context of monitoring? -- In the context of monitoring, periodic refers to activities that are conducted at regular intervals, rather than continuously. - -4. What is compliance? -- Compliance refers to the act of following laws, regulations, or internal policies and procedures. - -5. What is an audit? -- An audit is a systematic examination of an organization or process to evaluate compliance, performance, or financial status. - -6. What is a review? -- A review is an evaluation of an organization or process to assess performance or compliance. - -7. What is an assessment? -- An assessment is a process of evaluating the performance, compliance, or quality of an organization or process. - -8. What is an evaluation? -- An evaluation is a systematic process of collecting and analyzing information to assess the effectiveness, efficiency, or relevance of an organization or process. - -9. What is an inspection? -- An inspection is an examination or review of an organization or process to evaluate compliance, performance, or safety. - -10. What is surveillance? -- Surveillance is the act of closely monitoring an activity or process to ensure compliance, safety, or security. - -1. What is the purpose of SOP-05: Reporting? -- The purpose of SOP-05: Reporting is to establish a process for reporting and addressing compliance issues. - -2. What is reporting? -- Reporting is the process of notifying others about an event or situation, typically for the purpose of documentation or action. - -3. What does the term “process” mean in the context of SOP-05: Reporting? -- In the context of SOP-05: Reporting, “process” refers to the steps and procedures that are established to ensure that compliance issues are identified, reported, and addressed in a timely and effective manner. - -4. What is compliance? -- Compliance refers to the act of following laws, regulations, or internal policies and procedures. - -5. What is a compliance issue? -- A compliance issue is an event or situation that violates laws, regulations, or internal policies and procedures. - -6. What does it mean to address a compliance issue? -- To address a compliance issue means to take appropriate steps to investigate, resolve, and prevent similar issues in the future. - -7. What is a record? -- A record is a document or other form of evidence that is created or maintained for legal, administrative, or business purposes. - -8. What is communication? -- Communication is the exchange of information between individuals or groups, typically through speaking, writing, or other forms of expression. - -9. What is notification? -- Notification is the process of informing individuals or groups about a particular event or situation. - -10. 
What is an investigation? -- An investigation is a process of gathering information and evidence to uncover the facts about a particular event or situation. - - -1. What is the purpose of SOP-06: Incident Management? -- The purpose of SOP-06: Incident Management is to handle compliance incidents and implement corrective actions. - -2. What is an incident? -- An incident is an event or situation that is unexpected or disrupts normal operations. - -3. What is management? -- Management refers to the process of planning, organizing, and controlling resources to achieve organizational goals. - -4. What is compliance? -- Compliance refers to the act of following laws, regulations, or internal policies and procedures. - -5. What does it mean to handle an incident? -- To handle an incident means to respond to and manage the incident in a way that minimizes its impact and prevents a recurrence. - -6. What does it mean to implement corrective actions? -- To implement corrective actions means to take steps to address the root cause of an incident and prevent it from happening again. - -7. What is a corrective action? -- A corrective action is a step or process that is taken to address the root cause of an incident and prevent its recurrence. - -8. What is an investigation? -- An investigation is a process of gathering information and evidence to uncover the facts about a particular event or situation. - -9. What is a response? -- A response is the immediate action taken in response to an incident to prevent further harm or damage. - -10. What is a resolution? -- A resolution is a decision or action taken to resolve an incident or issue and to prevent its recurrence. - -1. What is the purpose of SOP-07: Recordkeeping? -- The purpose of SOP-07: Recordkeeping is to maintain accurate and up-to-date compliance records and documentation. - -2. What is recordkeeping? -- Recordkeeping is the process of creating, managing, and storing information for legal, administrative, or business purposes. - -3. What does it mean to maintain records? -- To maintain records means to keep records accurate, complete, and up-to-date to ensure that they are reliable and useful when needed. - -4. What does it mean for records to be accurate and up-to-date? -- For records to be accurate and up-to-date means that they reflect the current state of affairs and contain the correct information. - -5. What is compliance? -- Compliance refers to the act of following laws, regulations, or internal policies and procedures. - -6. What is documentation? -- Documentation is information that is recorded and stored for legal, administrative, or business purposes. - -7. What is an archive? -- An archive is a collection of historical records or documents that are preserved for research, reference, or legal purposes. - -8. What is storage? -- Storage is the physical or digital location where records or documents are kept for future reference or use. - -9. What is filing? -- Filing is the process of organizing documents or records into a structured system for easy retrieval and access. - -10. Why is recordkeeping important? -- Recordkeeping is important for maintaining compliance, establishing accountability, facilitating business operations, and preserving historical information/documentation. 
- - diff --git a/spaces/awacke1/Hexagon-Dice-Fractal-Math-Game/README.md b/spaces/awacke1/Hexagon-Dice-Fractal-Math-Game/README.md deleted file mode 100644 index aeabcfea652c251b5e0478d3c4071f22020e5ab5..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Hexagon-Dice-Fractal-Math-Game/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ⬡Hexagon⬡ 🎲Dice🎲 Fractal Math Game -emoji: 🎲⬡⬡⬡🎲 -colorFrom: red -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Search_Streamlit/README.md b/spaces/awacke1/Search_Streamlit/README.md deleted file mode 100644 index 81c05e9b1ae3a5683a77cb832ccbbe68c6abb61e..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Search_Streamlit/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 📗NLP Plot Search Memory SL🔍🎥 -emoji: 📗🔍🎥 -colorFrom: gray -colorTo: gray -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/awacke1/StreamlitHeatmapAndCluster/app.py b/spaces/awacke1/StreamlitHeatmapAndCluster/app.py deleted file mode 100644 index 040b6f71a2b254f3826994176b1140b08ce6ef8a..0000000000000000000000000000000000000000 --- a/spaces/awacke1/StreamlitHeatmapAndCluster/app.py +++ /dev/null @@ -1,73 +0,0 @@ -import streamlit as st -import nltk -from transformers import pipeline -from sentence_transformers import SentenceTransformer -from scipy.spatial.distance import cosine -import numpy as np -import seaborn as sns -import matplotlib.pyplot as plt -from sklearn.cluster import KMeans -import tensorflow as tf -import tensorflow_hub as hub - - -def cluster_examples(messages, embed, nc=3): - km = KMeans( - n_clusters=nc, init='random', - n_init=10, max_iter=300, - tol=1e-04, random_state=0 - ) - km = km.fit_predict(embed) - for n in range(nc): - idxs = [i for i in range(len(km)) if km[i] == n] - ms = [messages[i] for i in idxs] - st.markdown ("CLUSTER : %d"%n) - for m in ms: - st.markdown (m) - - -def plot_heatmap(labels, heatmap, rotation=90): - sns.set(font_scale=1.2) - fig, ax = plt.subplots() - g = sns.heatmap( - heatmap, - xticklabels=labels, - yticklabels=labels, - vmin=-1, - vmax=1, - cmap="coolwarm") - g.set_xticklabels(labels, rotation=rotation) - g.set_title("Textual Similarity") - st.pyplot(fig) - -# Streamlit text boxes -text = st.text_area('Enter sentences:', value="Behavior right this is a kind of Heisenberg uncertainty principle situation if I told you, then you behave differently. What would be the impressive thing is you have talked about winning a nobel prize in a system winning a nobel prize. Adjusting it and then making your own. That is when I fell in love with computers. I realized that they were a very magical device. Can go to sleep come back the next day and it is solved. 
You know that feels magical to me.") - -nc = st.slider('Select a number of clusters:', min_value=1, max_value=15, value=3) - -model_type = st.radio("Choose model:", ('Sentence Transformer', 'Universal Sentence Encoder'), index=0) - -# Model setup -if model_type == "Sentence Transformer": - model = SentenceTransformer('paraphrase-distilroberta-base-v1') -elif model_type == "Universal Sentence Encoder": - model_url = "https://tfhub.dev/google/universal-sentence-encoder-large/5" - model = hub.load(model_url) - -nltk.download('punkt') - -# Run model -if text: - sentences = nltk.tokenize.sent_tokenize(text) - if model_type == "Sentence Transformer": - embed = model.encode(sentences) - elif model_type == "Universal Sentence Encoder": - embed = model(sentences).numpy() - sim = np.zeros([len(embed), len(embed)]) - for i,em in enumerate(embed): - for j,ea in enumerate(embed): - sim[i][j] = 1.0-cosine(em,ea) - st.subheader("Similarity Heatmap") - plot_heatmap(sentences, sim) - st.subheader("Results from K-Means Clustering") - cluster_examples(sentences, embed, nc) \ No newline at end of file diff --git a/spaces/awacke1/Twitter-Sentiment-Live-Realtime/app.py b/spaces/awacke1/Twitter-Sentiment-Live-Realtime/app.py deleted file mode 100644 index 7165f7c550e19a19b4068435d84ffc2398027b40..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Twitter-Sentiment-Live-Realtime/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import streamlit as st -import tweepy as tw -import pandas as pd -import matplotlib.pyplot as plt -from transformers import pipeline -import os - -consumer_key = 'OCgWzDW6PaBvBeVimmGBqdAg1' -consumer_secret = 'tBKnmyg5Jfsewkpmw74gxHZbbZkGIH6Ee4rsM0lD1vFL7SrEIM' -access_token = '1449663645412065281-LNjZoEO9lxdtxPcmLtM35BRdIKYHpk' -access_token_secret = 'FL3SGsUWSzPVFnG7bNMnyh4vYK8W1SlABBNtdF7Xcbh7a' -auth = tw.OAuthHandler(consumer_key, consumer_secret) -auth.set_access_token(access_token, access_token_secret) -api = tw.API(auth, wait_on_rate_limit=True) -classifier = pipeline('sentiment-analysis') -FILE_NAME = 'query_history.csv' -HEADERS = ['Search Query', 'Number of Tweets', 'Results', 'Date'] - -if not os.path.isfile(FILE_NAME): - df = pd.DataFrame(columns=HEADERS) - df.to_csv(FILE_NAME, index=False) - -st.set_page_config(page_title='😃 Twitter Sentiment Analysis', layout='wide') - -def display_history(): - df = pd.read_csv(FILE_NAME) - st.dataframe(df.style.highlight_max(axis=0)) - -def run(): - with st.form(key='Enter name'): - search_words = st.text_input('Enter a word or phrase you want to know about') - number_of_tweets = st.number_input('How many tweets do you want to see? 
(maximum 50)', 1, 50, 50) - submit_button = st.form_submit_button(label='Submit') - - if submit_button: - unique_tweets, tweet_list, sentiment_list = set(), [], [] - tweets = tw.Cursor(api.search_tweets, q=search_words, lang="en").items(number_of_tweets) - for tweet in tweets: - if tweet.text not in unique_tweets: - unique_tweets.add(tweet.text) - tweet_list.append(tweet.text) - p = classifier(tweet.text) - sentiment_list.append(p[0]['label']) - - df = pd.DataFrame(list(zip(tweet_list, sentiment_list)), columns=['Tweets', 'Sentiment']) - st.write(df) - - summary = df.groupby('Sentiment').size().reset_index(name='Counts') - fig, ax = plt.subplots() - ax.pie(summary['Counts'], labels=summary['Sentiment'], autopct='%1.1f%%', startangle=90) - ax.axis('equal') - st.pyplot(fig) - - with open(FILE_NAME, mode='a', newline='') as file: - df.to_csv(file, header=False, index=False) - - if st.button('Clear History'): - os.remove(FILE_NAME) - st.write('History has been cleared.') - - if st.button('Display History'): - display_history() - -if __name__=='__main__': - run() diff --git a/spaces/azaninello/gpt2-general-english/app.py b/spaces/azaninello/gpt2-general-english/app.py deleted file mode 100644 index 54c70d8148be45a2d3d61a4885fa28f31a48c748..0000000000000000000000000000000000000000 --- a/spaces/azaninello/gpt2-general-english/app.py +++ /dev/null @@ -1,18 +0,0 @@ -import gradio as gr -import transformers -from transformers import AutoModelForCausalLM, AutoModelWithLMHead, AutoTokenizer, pipeline -from transformers import GPT2Tokenizer, GPT2Model - - -general_model = AutoModelForCausalLM.from_pretrained('gpt2') -general_generator = pipeline("text-generation", model=general_model, tokenizer="gpt2") -general_result = general_generator("Today is ", max_length=200) -general_result[0]["generated_text"] - - -def generator(start_your_text = ''): - result = general_generator(start_your_text) - return result[0]["generated_text"] - -iface = gr.Interface(fn=generator, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/azizalto/vanilla-ml-algorithms/page_config.py b/spaces/azizalto/vanilla-ml-algorithms/page_config.py deleted file mode 100644 index b0c39c70dcab2fb04e67bd9c48cd54a40149b3fc..0000000000000000000000000000000000000000 --- a/spaces/azizalto/vanilla-ml-algorithms/page_config.py +++ /dev/null @@ -1,27 +0,0 @@ -from datetime import date - -import streamlit as st - - -def APP_PAGE_HEADER(): - st.set_page_config( - page_title="ML Algorithms", - page_icon=":camel:", - layout="wide", - initial_sidebar_state="collapsed", - ) - - hide_style = """ - - """ - st.markdown(hide_style, unsafe_allow_html=True) - HEADER() - - -def HEADER(): - today = date.today() - st.header("_Simple ML Algorithms explained in Math & Code_") - st.write(str(today)) diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/OutlinePass.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/OutlinePass.js deleted file mode 100644 index 56dcba11c83b0fe0241bf42695bbc8706dae5f68..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/OutlinePass.js +++ /dev/null @@ -1,584 +0,0 @@ -/** - * @author spidersharma / http://eduperiment.com/ - */ - -THREE.OutlinePass = function ( resolution, scene, camera, selectedObjects ) { - - this.renderScene = scene; - this.renderCamera = camera; - this.selectedObjects = selectedObjects !== undefined ? 
selectedObjects : []; - this.visibleEdgeColor = new THREE.Color( 1, 1, 1 ); - this.hiddenEdgeColor = new THREE.Color( 0.1, 0.04, 0.02 ); - this.edgeGlow = 0.0; - this.usePatternTexture = false; - this.edgeThickness = 1.0; - this.edgeStrength = 3.0; - this.downSampleRatio = 2; - this.pulsePeriod = 0; - - THREE.Pass.call( this ); - - this.resolution = ( resolution !== undefined ) ? new THREE.Vector2( resolution.x, resolution.y ) : new THREE.Vector2( 256, 256 ); - - var pars = { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBAFormat }; - - var resx = Math.round( this.resolution.x / this.downSampleRatio ); - var resy = Math.round( this.resolution.y / this.downSampleRatio ); - - this.maskBufferMaterial = new THREE.MeshBasicMaterial( { color: 0xffffff } ); - this.maskBufferMaterial.side = THREE.DoubleSide; - this.renderTargetMaskBuffer = new THREE.WebGLRenderTarget( this.resolution.x, this.resolution.y, pars ); - this.renderTargetMaskBuffer.texture.name = "OutlinePass.mask"; - this.renderTargetMaskBuffer.texture.generateMipmaps = false; - - this.depthMaterial = new THREE.MeshDepthMaterial(); - this.depthMaterial.side = THREE.DoubleSide; - this.depthMaterial.depthPacking = THREE.RGBADepthPacking; - this.depthMaterial.blending = THREE.NoBlending; - - this.prepareMaskMaterial = this.getPrepareMaskMaterial(); - this.prepareMaskMaterial.side = THREE.DoubleSide; - this.prepareMaskMaterial.fragmentShader = replaceDepthToViewZ( this.prepareMaskMaterial.fragmentShader, this.renderCamera ); - - this.renderTargetDepthBuffer = new THREE.WebGLRenderTarget( this.resolution.x, this.resolution.y, pars ); - this.renderTargetDepthBuffer.texture.name = "OutlinePass.depth"; - this.renderTargetDepthBuffer.texture.generateMipmaps = false; - - this.renderTargetMaskDownSampleBuffer = new THREE.WebGLRenderTarget( resx, resy, pars ); - this.renderTargetMaskDownSampleBuffer.texture.name = "OutlinePass.depthDownSample"; - this.renderTargetMaskDownSampleBuffer.texture.generateMipmaps = false; - - this.renderTargetBlurBuffer1 = new THREE.WebGLRenderTarget( resx, resy, pars ); - this.renderTargetBlurBuffer1.texture.name = "OutlinePass.blur1"; - this.renderTargetBlurBuffer1.texture.generateMipmaps = false; - this.renderTargetBlurBuffer2 = new THREE.WebGLRenderTarget( Math.round( resx / 2 ), Math.round( resy / 2 ), pars ); - this.renderTargetBlurBuffer2.texture.name = "OutlinePass.blur2"; - this.renderTargetBlurBuffer2.texture.generateMipmaps = false; - - this.edgeDetectionMaterial = this.getEdgeDetectionMaterial(); - this.renderTargetEdgeBuffer1 = new THREE.WebGLRenderTarget( resx, resy, pars ); - this.renderTargetEdgeBuffer1.texture.name = "OutlinePass.edge1"; - this.renderTargetEdgeBuffer1.texture.generateMipmaps = false; - this.renderTargetEdgeBuffer2 = new THREE.WebGLRenderTarget( Math.round( resx / 2 ), Math.round( resy / 2 ), pars ); - this.renderTargetEdgeBuffer2.texture.name = "OutlinePass.edge2"; - this.renderTargetEdgeBuffer2.texture.generateMipmaps = false; - - var MAX_EDGE_THICKNESS = 4; - var MAX_EDGE_GLOW = 4; - - this.separableBlurMaterial1 = this.getSeperableBlurMaterial( MAX_EDGE_THICKNESS ); - this.separableBlurMaterial1.uniforms[ "texSize" ].value = new THREE.Vector2( resx, resy ); - this.separableBlurMaterial1.uniforms[ "kernelRadius" ].value = 1; - this.separableBlurMaterial2 = this.getSeperableBlurMaterial( MAX_EDGE_GLOW ); - this.separableBlurMaterial2.uniforms[ "texSize" ].value = new THREE.Vector2( Math.round( resx / 2 ), Math.round( resy / 2 ) ); - 
this.separableBlurMaterial2.uniforms[ "kernelRadius" ].value = MAX_EDGE_GLOW; - - // Overlay material - this.overlayMaterial = this.getOverlayMaterial(); - - // copy material - if ( THREE.CopyShader === undefined ) - console.error( "THREE.OutlinePass relies on THREE.CopyShader" ); - - var copyShader = THREE.CopyShader; - - this.copyUniforms = THREE.UniformsUtils.clone( copyShader.uniforms ); - this.copyUniforms[ "opacity" ].value = 1.0; - - this.materialCopy = new THREE.ShaderMaterial( { - uniforms: this.copyUniforms, - vertexShader: copyShader.vertexShader, - fragmentShader: copyShader.fragmentShader, - blending: THREE.NoBlending, - depthTest: false, - depthWrite: false, - transparent: true - } ); - - this.enabled = true; - this.needsSwap = false; - - this.oldClearColor = new THREE.Color(); - this.oldClearAlpha = 1; - - this.fsQuad = new THREE.Pass.FullScreenQuad( null ); - - this.tempPulseColor1 = new THREE.Color(); - this.tempPulseColor2 = new THREE.Color(); - this.textureMatrix = new THREE.Matrix4(); - - function replaceDepthToViewZ( string, camera ) { - - var type = camera.isPerspectiveCamera ? 'perspective' : 'orthographic'; - - return string.replace( /DEPTH_TO_VIEW_Z/g, type + 'DepthToViewZ' ); - - } - -}; - -THREE.OutlinePass.prototype = Object.assign( Object.create( THREE.Pass.prototype ), { - - constructor: THREE.OutlinePass, - - dispose: function () { - - this.renderTargetMaskBuffer.dispose(); - this.renderTargetDepthBuffer.dispose(); - this.renderTargetMaskDownSampleBuffer.dispose(); - this.renderTargetBlurBuffer1.dispose(); - this.renderTargetBlurBuffer2.dispose(); - this.renderTargetEdgeBuffer1.dispose(); - this.renderTargetEdgeBuffer2.dispose(); - - }, - - setSize: function ( width, height ) { - - this.renderTargetMaskBuffer.setSize( width, height ); - - var resx = Math.round( width / this.downSampleRatio ); - var resy = Math.round( height / this.downSampleRatio ); - this.renderTargetMaskDownSampleBuffer.setSize( resx, resy ); - this.renderTargetBlurBuffer1.setSize( resx, resy ); - this.renderTargetEdgeBuffer1.setSize( resx, resy ); - this.separableBlurMaterial1.uniforms[ "texSize" ].value = new THREE.Vector2( resx, resy ); - - resx = Math.round( resx / 2 ); - resy = Math.round( resy / 2 ); - - this.renderTargetBlurBuffer2.setSize( resx, resy ); - this.renderTargetEdgeBuffer2.setSize( resx, resy ); - - this.separableBlurMaterial2.uniforms[ "texSize" ].value = new THREE.Vector2( resx, resy ); - - }, - - changeVisibilityOfSelectedObjects: function ( bVisible ) { - - function gatherSelectedMeshesCallBack( object ) { - - if ( object.isMesh ) { - - if ( bVisible ) { - - object.visible = object.userData.oldVisible; - delete object.userData.oldVisible; - - } else { - - object.userData.oldVisible = object.visible; - object.visible = bVisible; - - } - - } - - } - - for ( var i = 0; i < this.selectedObjects.length; i ++ ) { - - var selectedObject = this.selectedObjects[ i ]; - selectedObject.traverse( gatherSelectedMeshesCallBack ); - - } - - }, - - changeVisibilityOfNonSelectedObjects: function ( bVisible ) { - - var selectedMeshes = []; - - function gatherSelectedMeshesCallBack( object ) { - - if ( object.isMesh ) selectedMeshes.push( object ); - - } - - for ( var i = 0; i < this.selectedObjects.length; i ++ ) { - - var selectedObject = this.selectedObjects[ i ]; - selectedObject.traverse( gatherSelectedMeshesCallBack ); - - } - - function VisibilityChangeCallBack( object ) { - - if ( object.isMesh || object.isLine || object.isSprite ) { - - var bFound = false; - - for ( var i = 0; 
i < selectedMeshes.length; i ++ ) { - - var selectedObjectId = selectedMeshes[ i ].id; - - if ( selectedObjectId === object.id ) { - - bFound = true; - break; - - } - - } - - if ( ! bFound ) { - - var visibility = object.visible; - - if ( ! bVisible || object.bVisible ) object.visible = bVisible; - - object.bVisible = visibility; - - } - - } - - } - - this.renderScene.traverse( VisibilityChangeCallBack ); - - }, - - updateTextureMatrix: function () { - - this.textureMatrix.set( 0.5, 0.0, 0.0, 0.5, - 0.0, 0.5, 0.0, 0.5, - 0.0, 0.0, 0.5, 0.5, - 0.0, 0.0, 0.0, 1.0 ); - this.textureMatrix.multiply( this.renderCamera.projectionMatrix ); - this.textureMatrix.multiply( this.renderCamera.matrixWorldInverse ); - - }, - - render: function ( renderer, writeBuffer, readBuffer, deltaTime, maskActive ) { - - if ( this.selectedObjects.length > 0 ) { - - this.oldClearColor.copy( renderer.getClearColor() ); - this.oldClearAlpha = renderer.getClearAlpha(); - var oldAutoClear = renderer.autoClear; - - renderer.autoClear = false; - - if ( maskActive ) renderer.context.disable( renderer.context.STENCIL_TEST ); - - renderer.setClearColor( 0xffffff, 1 ); - - // Make selected objects invisible - this.changeVisibilityOfSelectedObjects( false ); - - var currentBackground = this.renderScene.background; - this.renderScene.background = null; - - // 1. Draw Non Selected objects in the depth buffer - this.renderScene.overrideMaterial = this.depthMaterial; - renderer.setRenderTarget( this.renderTargetDepthBuffer ); - renderer.clear(); - renderer.render( this.renderScene, this.renderCamera ); - - // Make selected objects visible - this.changeVisibilityOfSelectedObjects( true ); - - // Update Texture Matrix for Depth compare - this.updateTextureMatrix(); - - // Make non selected objects invisible, and draw only the selected objects, by comparing the depth buffer of non selected objects - this.changeVisibilityOfNonSelectedObjects( false ); - this.renderScene.overrideMaterial = this.prepareMaskMaterial; - this.prepareMaskMaterial.uniforms[ "cameraNearFar" ].value = new THREE.Vector2( this.renderCamera.near, this.renderCamera.far ); - this.prepareMaskMaterial.uniforms[ "depthTexture" ].value = this.renderTargetDepthBuffer.texture; - this.prepareMaskMaterial.uniforms[ "textureMatrix" ].value = this.textureMatrix; - renderer.setRenderTarget( this.renderTargetMaskBuffer ); - renderer.clear(); - renderer.render( this.renderScene, this.renderCamera ); - this.renderScene.overrideMaterial = null; - this.changeVisibilityOfNonSelectedObjects( true ); - - this.renderScene.background = currentBackground; - - // 2. Downsample to Half resolution - this.fsQuad.material = this.materialCopy; - this.copyUniforms[ "tDiffuse" ].value = this.renderTargetMaskBuffer.texture; - renderer.setRenderTarget( this.renderTargetMaskDownSampleBuffer ); - renderer.clear(); - this.fsQuad.render( renderer ); - - this.tempPulseColor1.copy( this.visibleEdgeColor ); - this.tempPulseColor2.copy( this.hiddenEdgeColor ); - - if ( this.pulsePeriod > 0 ) { - - var scalar = ( 1 + 0.25 ) / 2 + Math.cos( performance.now() * 0.01 / this.pulsePeriod ) * ( 1.0 - 0.25 ) / 2; - this.tempPulseColor1.multiplyScalar( scalar ); - this.tempPulseColor2.multiplyScalar( scalar ); - - } - - // 3. 
Apply Edge Detection Pass - this.fsQuad.material = this.edgeDetectionMaterial; - this.edgeDetectionMaterial.uniforms[ "maskTexture" ].value = this.renderTargetMaskDownSampleBuffer.texture; - this.edgeDetectionMaterial.uniforms[ "texSize" ].value = new THREE.Vector2( this.renderTargetMaskDownSampleBuffer.width, this.renderTargetMaskDownSampleBuffer.height ); - this.edgeDetectionMaterial.uniforms[ "visibleEdgeColor" ].value = this.tempPulseColor1; - this.edgeDetectionMaterial.uniforms[ "hiddenEdgeColor" ].value = this.tempPulseColor2; - renderer.setRenderTarget( this.renderTargetEdgeBuffer1 ); - renderer.clear(); - this.fsQuad.render( renderer ); - - // 4. Apply Blur on Half res - this.fsQuad.material = this.separableBlurMaterial1; - this.separableBlurMaterial1.uniforms[ "colorTexture" ].value = this.renderTargetEdgeBuffer1.texture; - this.separableBlurMaterial1.uniforms[ "direction" ].value = THREE.OutlinePass.BlurDirectionX; - this.separableBlurMaterial1.uniforms[ "kernelRadius" ].value = this.edgeThickness; - renderer.setRenderTarget( this.renderTargetBlurBuffer1 ); - renderer.clear(); - this.fsQuad.render( renderer ); - this.separableBlurMaterial1.uniforms[ "colorTexture" ].value = this.renderTargetBlurBuffer1.texture; - this.separableBlurMaterial1.uniforms[ "direction" ].value = THREE.OutlinePass.BlurDirectionY; - renderer.setRenderTarget( this.renderTargetEdgeBuffer1 ); - renderer.clear(); - this.fsQuad.render( renderer ); - - // Apply Blur on quarter res - this.fsQuad.material = this.separableBlurMaterial2; - this.separableBlurMaterial2.uniforms[ "colorTexture" ].value = this.renderTargetEdgeBuffer1.texture; - this.separableBlurMaterial2.uniforms[ "direction" ].value = THREE.OutlinePass.BlurDirectionX; - renderer.setRenderTarget( this.renderTargetBlurBuffer2 ); - renderer.clear(); - this.fsQuad.render( renderer ); - this.separableBlurMaterial2.uniforms[ "colorTexture" ].value = this.renderTargetBlurBuffer2.texture; - this.separableBlurMaterial2.uniforms[ "direction" ].value = THREE.OutlinePass.BlurDirectionY; - renderer.setRenderTarget( this.renderTargetEdgeBuffer2 ); - renderer.clear(); - this.fsQuad.render( renderer ); - - // Blend it additively over the input texture - this.fsQuad.material = this.overlayMaterial; - this.overlayMaterial.uniforms[ "maskTexture" ].value = this.renderTargetMaskBuffer.texture; - this.overlayMaterial.uniforms[ "edgeTexture1" ].value = this.renderTargetEdgeBuffer1.texture; - this.overlayMaterial.uniforms[ "edgeTexture2" ].value = this.renderTargetEdgeBuffer2.texture; - this.overlayMaterial.uniforms[ "patternTexture" ].value = this.patternTexture; - this.overlayMaterial.uniforms[ "edgeStrength" ].value = this.edgeStrength; - this.overlayMaterial.uniforms[ "edgeGlow" ].value = this.edgeGlow; - this.overlayMaterial.uniforms[ "usePatternTexture" ].value = this.usePatternTexture; - - - if ( maskActive ) renderer.context.enable( renderer.context.STENCIL_TEST ); - - renderer.setRenderTarget( readBuffer ); - this.fsQuad.render( renderer ); - - renderer.setClearColor( this.oldClearColor, this.oldClearAlpha ); - renderer.autoClear = oldAutoClear; - - } - - if ( this.renderToScreen ) { - - this.fsQuad.material = this.materialCopy; - this.copyUniforms[ "tDiffuse" ].value = readBuffer.texture; - renderer.setRenderTarget( null ); - this.fsQuad.render( renderer ); - - } - - }, - - getPrepareMaskMaterial: function () { - - return new THREE.ShaderMaterial( { - - uniforms: { - "depthTexture": { value: null }, - "cameraNearFar": { value: new THREE.Vector2( 0.5, 0.5 ) }, - 
"textureMatrix": { value: new THREE.Matrix4() } - }, - - vertexShader: [ - 'varying vec4 projTexCoord;', - 'varying vec4 vPosition;', - 'uniform mat4 textureMatrix;', - - 'void main() {', - - ' vPosition = modelViewMatrix * vec4( position, 1.0 );', - ' vec4 worldPosition = modelMatrix * vec4( position, 1.0 );', - ' projTexCoord = textureMatrix * worldPosition;', - ' gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );', - - '}' - ].join( '\n' ), - - fragmentShader: [ - '#include ', - 'varying vec4 vPosition;', - 'varying vec4 projTexCoord;', - 'uniform sampler2D depthTexture;', - 'uniform vec2 cameraNearFar;', - - 'void main() {', - - ' float depth = unpackRGBAToDepth(texture2DProj( depthTexture, projTexCoord ));', - ' float viewZ = - DEPTH_TO_VIEW_Z( depth, cameraNearFar.x, cameraNearFar.y );', - ' float depthTest = (-vPosition.z > viewZ) ? 1.0 : 0.0;', - ' gl_FragColor = vec4(0.0, depthTest, 1.0, 1.0);', - - '}' - ].join( '\n' ) - - } ); - - }, - - getEdgeDetectionMaterial: function () { - - return new THREE.ShaderMaterial( { - - uniforms: { - "maskTexture": { value: null }, - "texSize": { value: new THREE.Vector2( 0.5, 0.5 ) }, - "visibleEdgeColor": { value: new THREE.Vector3( 1.0, 1.0, 1.0 ) }, - "hiddenEdgeColor": { value: new THREE.Vector3( 1.0, 1.0, 1.0 ) }, - }, - - vertexShader: - "varying vec2 vUv;\n\ - void main() {\n\ - vUv = uv;\n\ - gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );\n\ - }", - - fragmentShader: - "varying vec2 vUv;\ - uniform sampler2D maskTexture;\ - uniform vec2 texSize;\ - uniform vec3 visibleEdgeColor;\ - uniform vec3 hiddenEdgeColor;\ - \ - void main() {\n\ - vec2 invSize = 1.0 / texSize;\ - vec4 uvOffset = vec4(1.0, 0.0, 0.0, 1.0) * vec4(invSize, invSize);\ - vec4 c1 = texture2D( maskTexture, vUv + uvOffset.xy);\ - vec4 c2 = texture2D( maskTexture, vUv - uvOffset.xy);\ - vec4 c3 = texture2D( maskTexture, vUv + uvOffset.yw);\ - vec4 c4 = texture2D( maskTexture, vUv - uvOffset.yw);\ - float diff1 = (c1.r - c2.r)*0.5;\ - float diff2 = (c3.r - c4.r)*0.5;\ - float d = length( vec2(diff1, diff2) );\ - float a1 = min(c1.g, c2.g);\ - float a2 = min(c3.g, c4.g);\ - float visibilityFactor = min(a1, a2);\ - vec3 edgeColor = 1.0 - visibilityFactor > 0.001 ? 
visibleEdgeColor : hiddenEdgeColor;\ - gl_FragColor = vec4(edgeColor, 1.0) * vec4(d);\ - }" - } ); - - }, - - getSeperableBlurMaterial: function ( maxRadius ) { - - return new THREE.ShaderMaterial( { - - defines: { - "MAX_RADIUS": maxRadius, - }, - - uniforms: { - "colorTexture": { value: null }, - "texSize": { value: new THREE.Vector2( 0.5, 0.5 ) }, - "direction": { value: new THREE.Vector2( 0.5, 0.5 ) }, - "kernelRadius": { value: 1.0 } - }, - - vertexShader: - "varying vec2 vUv;\n\ - void main() {\n\ - vUv = uv;\n\ - gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );\n\ - }", - - fragmentShader: - "#include \ - varying vec2 vUv;\ - uniform sampler2D colorTexture;\ - uniform vec2 texSize;\ - uniform vec2 direction;\ - uniform float kernelRadius;\ - \ - float gaussianPdf(in float x, in float sigma) {\ - return 0.39894 * exp( -0.5 * x * x/( sigma * sigma))/sigma;\ - }\ - void main() {\ - vec2 invSize = 1.0 / texSize;\ - float weightSum = gaussianPdf(0.0, kernelRadius);\ - vec3 diffuseSum = texture2D( colorTexture, vUv).rgb * weightSum;\ - vec2 delta = direction * invSize * kernelRadius/float(MAX_RADIUS);\ - vec2 uvOffset = delta;\ - for( int i = 1; i <= MAX_RADIUS; i ++ ) {\ - float w = gaussianPdf(uvOffset.x, kernelRadius);\ - vec3 sample1 = texture2D( colorTexture, vUv + uvOffset).rgb;\ - vec3 sample2 = texture2D( colorTexture, vUv - uvOffset).rgb;\ - diffuseSum += ((sample1 + sample2) * w);\ - weightSum += (2.0 * w);\ - uvOffset += delta;\ - }\ - gl_FragColor = vec4(diffuseSum/weightSum, 1.0);\ - }" - } ); - - }, - - getOverlayMaterial: function () { - - return new THREE.ShaderMaterial( { - - uniforms: { - "maskTexture": { value: null }, - "edgeTexture1": { value: null }, - "edgeTexture2": { value: null }, - "patternTexture": { value: null }, - "edgeStrength": { value: 1.0 }, - "edgeGlow": { value: 1.0 }, - "usePatternTexture": { value: 0.0 } - }, - - vertexShader: - "varying vec2 vUv;\n\ - void main() {\n\ - vUv = uv;\n\ - gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );\n\ - }", - - fragmentShader: - "varying vec2 vUv;\ - uniform sampler2D maskTexture;\ - uniform sampler2D edgeTexture1;\ - uniform sampler2D edgeTexture2;\ - uniform sampler2D patternTexture;\ - uniform float edgeStrength;\ - uniform float edgeGlow;\ - uniform bool usePatternTexture;\ - \ - void main() {\ - vec4 edgeValue1 = texture2D(edgeTexture1, vUv);\ - vec4 edgeValue2 = texture2D(edgeTexture2, vUv);\ - vec4 maskColor = texture2D(maskTexture, vUv);\ - vec4 patternColor = texture2D(patternTexture, 6.0 * vUv);\ - float visibilityFactor = 1.0 - maskColor.g > 0.0 ? 
1.0 : 0.5;\ - vec4 edgeValue = edgeValue1 + edgeValue2 * edgeGlow;\ - vec4 finalColor = edgeStrength * maskColor.r * edgeValue;\ - if(usePatternTexture)\ - finalColor += + visibilityFactor * (1.0 - maskColor.r) * (1.0 - patternColor.r);\ - gl_FragColor = finalColor;\ - }", - blending: THREE.AdditiveBlending, - depthTest: false, - depthWrite: false, - transparent: true - } ); - - } - -} ); - -THREE.OutlinePass.BlurDirectionX = new THREE.Vector2( 1.0, 0.0 ); -THREE.OutlinePass.BlurDirectionY = new THREE.Vector2( 0.0, 1.0 ); diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/sprite_frag.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/sprite_frag.glsl.js deleted file mode 100644 index f215219855de41792335b3af2d6e1a123f5a50b5..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/sprite_frag.glsl.js +++ /dev/null @@ -1,32 +0,0 @@ -export default /* glsl */` -uniform vec3 diffuse; -uniform float opacity; - -#include -#include -#include -#include -#include -#include - -void main() { - - #include - - vec3 outgoingLight = vec3( 0.0 ); - vec4 diffuseColor = vec4( diffuse, opacity ); - - #include - #include - #include - - outgoingLight = diffuseColor.rgb; - - gl_FragColor = vec4( outgoingLight, diffuseColor.a ); - - #include - #include - #include - -} -`; diff --git a/spaces/barabum/image-duplicate-finder/app.py b/spaces/barabum/image-duplicate-finder/app.py deleted file mode 100644 index aa8c396a8874ef24f3635148895834654e685dcf..0000000000000000000000000000000000000000 --- a/spaces/barabum/image-duplicate-finder/app.py +++ /dev/null @@ -1,26 +0,0 @@ -import numpy -from sentence_transformers import SentenceTransformer, util -from PIL import Image -import gradio as gr - -model = SentenceTransformer('clip-ViT-B-32') - - -def image_classifier(im1: numpy.ndarray, im2: numpy.ndarray): - encoded_image = model.encode([Image.fromarray(im1), Image.fromarray(im2)], batch_size=128, - convert_to_tensor=True, show_progress_bar=True) - processed_images = util.paraphrase_mining_embeddings(encoded_image) - return {"Схожи на": round(processed_images[0][0], 2)} - - -with gr.Blocks() as b: - with gr.Row(): - with gr.Column(): - image1 = gr.Image(label="image 1") - image2 = gr.Image(label="image 2") - with gr.Row(): - compare = gr.Button("Compare") - output = gr.Label(label="output") - compare.click(fn=image_classifier, inputs=[image1, image2], outputs=output) - -b.launch() diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326230358.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326230358.py deleted file mode 100644 index 53e43ff27bdac92263729cc4b6cd03693a1ae8df..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326230358.py +++ /dev/null @@ -1,68 +0,0 @@ -import os -os.system("pip install gfpgan") - -os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# 
torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - - - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_faces[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

    Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo

    visitor badge
    " -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/bguberfain/Detic/tools/dump_clip_features.py b/spaces/bguberfain/Detic/tools/dump_clip_features.py deleted file mode 100644 index 127f8c2a86c2425611c8ec075006664f5e07df45..0000000000000000000000000000000000000000 --- a/spaces/bguberfain/Detic/tools/dump_clip_features.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import argparse -import json -import torch -import numpy as np -import itertools -from nltk.corpus import wordnet -import sys - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--ann', default='datasets/lvis/lvis_v1_val.json') - parser.add_argument('--out_path', default='') - parser.add_argument('--prompt', default='a') - parser.add_argument('--model', default='clip') - parser.add_argument('--clip_model', default="ViT-B/32") - parser.add_argument('--fix_space', action='store_true') - parser.add_argument('--use_underscore', action='store_true') - parser.add_argument('--avg_synonyms', action='store_true') - parser.add_argument('--use_wn_name', action='store_true') - args = parser.parse_args() - - print('Loading', args.ann) - data = json.load(open(args.ann, 'r')) - cat_names = [x['name'] for x in \ - sorted(data['categories'], key=lambda x: x['id'])] - if 'synonyms' in data['categories'][0]: - if args.use_wn_name: - synonyms = [ - [xx.name() for xx in wordnet.synset(x['synset']).lemmas()] \ - if x['synset'] != 'stop_sign.n.01' else ['stop_sign'] \ - for x in sorted(data['categories'], key=lambda x: x['id'])] - else: - synonyms = [x['synonyms'] for x in \ - sorted(data['categories'], key=lambda x: x['id'])] - else: - synonyms = [] - if args.fix_space: - cat_names = [x.replace('_', ' ') for x in cat_names] - if args.use_underscore: - cat_names = [x.strip().replace('/ ', '/').replace(' ', '_') for x in cat_names] - print('cat_names', cat_names) - device = "cuda" if torch.cuda.is_available() else "cpu" - - if args.prompt == 'a': - sentences = ['a ' + x for x in cat_names] - sentences_synonyms = [['a ' + xx for xx in x] for x in synonyms] - if args.prompt == 'none': - sentences = [x for x in cat_names] - sentences_synonyms = [[xx for xx in x] for x in synonyms] - elif args.prompt == 'photo': - sentences = ['a photo of a {}'.format(x) for x in cat_names] - sentences_synonyms = [['a photo of a {}'.format(xx) for xx in x] \ - for x in synonyms] - elif args.prompt == 'scene': - sentences = ['a photo of a {} in the scene'.format(x) for x in cat_names] - sentences_synonyms = [['a photo of a {} in the scene'.format(xx) for xx in x] \ - for x in synonyms] - - print('sentences_synonyms', len(sentences_synonyms), \ - sum(len(x) for x in sentences_synonyms)) - if args.model == 'clip': - import clip - print('Loading CLIP') - model, preprocess = clip.load(args.clip_model, device=device) - if args.avg_synonyms: - sentences = list(itertools.chain.from_iterable(sentences_synonyms)) - print('flattened_sentences', len(sentences)) - text = clip.tokenize(sentences).to(device) - with torch.no_grad(): - if len(text) > 10000: - text_features = torch.cat([ - model.encode_text(text[:len(text) // 2]), - model.encode_text(text[len(text) // 
2:])], - dim=0) - else: - text_features = model.encode_text(text) - print('text_features.shape', text_features.shape) - if args.avg_synonyms: - synonyms_per_cat = [len(x) for x in sentences_synonyms] - text_features = text_features.split(synonyms_per_cat, dim=0) - text_features = [x.mean(dim=0) for x in text_features] - text_features = torch.stack(text_features, dim=0) - print('after stack', text_features.shape) - text_features = text_features.cpu().numpy() - elif args.model in ['bert', 'roberta']: - from transformers import AutoTokenizer, AutoModel - if args.model == 'bert': - model_name = 'bert-large-uncased' - if args.model == 'roberta': - model_name = 'roberta-large' - tokenizer = AutoTokenizer.from_pretrained(model_name) - model = AutoModel.from_pretrained(model_name) - model.eval() - if args.avg_synonyms: - sentences = list(itertools.chain.from_iterable(sentences_synonyms)) - print('flattened_sentences', len(sentences)) - inputs = tokenizer(sentences, padding=True, return_tensors="pt") - with torch.no_grad(): - model_outputs = model(**inputs) - outputs = model_outputs.pooler_output - text_features = outputs.detach().cpu() - if args.avg_synonyms: - synonyms_per_cat = [len(x) for x in sentences_synonyms] - text_features = text_features.split(synonyms_per_cat, dim=0) - text_features = [x.mean(dim=0) for x in text_features] - text_features = torch.stack(text_features, dim=0) - print('after stack', text_features.shape) - text_features = text_features.numpy() - print('text_features.shape', text_features.shape) - else: - assert 0, args.model - if args.out_path != '': - print('saveing to', args.out_path) - np.save(open(args.out_path, 'wb'), text_features) - import pdb; pdb.set_trace() diff --git a/spaces/bhaskartripathi/pdfChatter/README.md b/spaces/bhaskartripathi/pdfChatter/README.md deleted file mode 100644 index 1936b4b9b8694a0885bddc59936619b5917e1358..0000000000000000000000000000000000000000 --- a/spaces/bhaskartripathi/pdfChatter/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PdfChatter -emoji: 🏢 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bioriAsaeru/text-to-voice/Download the Sinhala Film Motor Bicycle and Join the Journey of Three Friends.md b/spaces/bioriAsaeru/text-to-voice/Download the Sinhala Film Motor Bicycle and Join the Journey of Three Friends.md deleted file mode 100644 index 64b0bccab65f140a7794e2f46214817f3a4b5f5e..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Download the Sinhala Film Motor Bicycle and Join the Journey of Three Friends.md +++ /dev/null @@ -1,6 +0,0 @@ -

    motorbicyclesinhalafilmdownload


    Download Zip https://urloso.com/2uyRiA



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Led Fan Editor Software Download 2021.md b/spaces/bioriAsaeru/text-to-voice/Led Fan Editor Software Download 2021.md deleted file mode 100644 index 7bbbe5568d051a18b2caa08b392a8c83d9e35e4b..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Led Fan Editor Software Download 2021.md +++ /dev/null @@ -1,106 +0,0 @@ - -

    LED Fan Editor Software Download: Everything You Need to Know

    - -

    If you have an LED fan in your computer case, you might want to customize its lighting effects and messages. But how can you do that? You need LED fan editor software that can communicate with your fan and lets you control its RGB settings. In this article, we will show you how to download and use the best LED fan editor software available.
    

    - -

    What is LED Fan Editor Software?

    - -

    LED fan editor software is a program that allows you to create and edit animations and text messages for your LED fan. You can choose from different colors, patterns, speeds, and modes to make your fan look amazing. You can also save and load your creations and share them with other users.
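    As a rough illustration of what such a saved creation might look like in code, here is a minimal Python sketch of a message object that can be written to disk and reloaded or shared. The field names (text, color, speed, mode) and the JSON file format are assumptions for illustration only; every editor defines its own format.

    ```python
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class FanMessage:
        # Hypothetical fields; real editor software defines its own format.
        text: str             # message shown on the spinning blades
        color: str = "red"    # display color, if the fan supports RGB
        speed: int = 5        # scroll speed on an arbitrary 1-10 scale
        mode: str = "scroll"  # e.g. "scroll", "flash", "static"

    # Saving a creation to disk so it can be reloaded or shared later.
    msg = FanMessage(text="HELLO", color="blue", speed=7)
    with open("my_fan_message.json", "w") as f:
        json.dump(asdict(msg), f, indent=2)
    ```
    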

    -

    led fan editor software download


    Download https://urloso.com/2uyRNo
    



    - -

    LED fan editor software works by sending signals to your fan through a USB cable or a wireless connection. The fan has a built-in memory that stores the data and displays it on the blades as they spin. Some fans also have sensors that can adjust the brightness and speed according to the temperature and noise level of your system.
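    To make the upload step more concrete, here is a minimal Python sketch that sends a text message to a fan over a USB serial connection. It is only a sketch under assumptions: the port name, baud rate, and the "MSG:" framing are invented for illustration, since each manufacturer uses its own protocol, and it relies on the pyserial package being installed.

    ```python
    import serial  # pyserial, assumed installed: pip install pyserial

    PORT = "COM3"  # assumption: adjust to your fan's port (e.g. /dev/ttyUSB0 on Linux)
    BAUD = 9600    # assumption: the real rate depends on the fan

    def upload_message(text: str) -> None:
        """Send one text message to the fan's built-in memory (hypothetical framing)."""
        frame = f"MSG:{text}\n".encode("ascii")  # invented 'MSG:' framing for illustration
        with serial.Serial(PORT, BAUD, timeout=2) as conn:
            conn.write(frame)
            conn.flush()

    upload_message("HELLO WORLD")
    ```

    Wireless fans work the same way in principle, only over a Bluetooth or RF link instead of a cable.
    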

    - -

    How to Download LED Fan Editor Software?

    - -

    There are many LED fan editor software options available online, but not all of them are compatible with every fan model. You need to check the specifications of your fan and see what software it supports. Some fans come with their own software, while others can work with generic or third-party programs.

    - -

    Here are some of the most popular LED fan editor software that you can download for free:

    - -
      -
    • USB LED Fan Software: This is a simple, easy-to-use program that works with most USB LED fans. You can download it from here. It lets you create up to eight messages of up to 16 characters each. You can also adjust the brightness, speed, direction, and mode of your fan.
    
    • -
    • OpenRGB: This is an open source RGB lighting control software that works with many RGB devices, including LED fans. You can download it from here. It has a lightweight user interface that lets you control all of your RGB devices from a single app. You can also synchronize lighting across multiple brands of devices and integrate RGB into your games, music, and more.
    • -
    • PowerTRC LED Fan Software: This is a software that works with PowerTRC LED fans. You can download it from here. It lets you create custom animations and text messages for your fan. You can also choose from different fonts, colors, effects, and modes.
    • -
    - -

    How to Use LED Fan Editor Software?

    - -

    Once you have downloaded and installed the LED fan editor software of your choice, you need to connect your fan to your computer using a USB cable or a wireless adapter. Then, you need to launch the software and follow these steps:

    - -
      -
    1. Select your fan from the list of devices.
    2. -
    3. Create or edit an animation or a message using the tools provided by the software.
    4. -
    5. Preview your creation on the software screen or on your fan.
    6. -
    7. Save and upload your creation to your fan.
    8. -
    9. Enjoy your customized LED fan!
    10. -
    - -

    You can also switch between different animations and messages using the buttons on your fan or on the software.

    - -

    Conclusion

    - -

    LED fan editor software is a great way to personalize your computer case and make it stand out. You can download and use various software options depending on your fan model and preferences. You can also create and share your own animations and messages with other users. With LED fan editor software, you can turn your fan into a cool display of your creativity and style.

    -

    -

    What are the Benefits of LED Fan Editor Software?

    - -

    Using LED fan editor software can have many benefits for your computer and yourself. Here are some of them:

    - -
      -
    • Improve your cooling performance: By customizing your fan speed and brightness, you can reduce the noise and heat of your system. You can also use sensors to adjust your fan settings automatically according to the temperature and noise level.
    
    • -
    • Enhance your aesthetics: By creating and editing your own animations and messages, you can make your computer case look more attractive and unique. You can also match your fan lighting with other RGB devices and create a harmonious color scheme.
    • -
    • Express your personality: By displaying your own animations and messages on your fan, you can show your personality and mood to others. You can also use your fan as a way to communicate with other users or display useful information such as time, date, weather, etc.
    • -
    - -

    What are the Drawbacks of LED Fan Editor Software?

    - -

    While LED fan editor software can have many advantages, it can also have some drawbacks that you should be aware of. Here are some of them:

    - -
      -
    • Compatibility issues: Not all LED fans are compatible with all LED fan editor software. You need to check the specifications of your fan and the software before downloading and installing them. Some software may not work with certain fan models or may require additional hardware or drivers.
    • -
    • Security risks: Some LED fan editor software may contain malware or spyware that can harm your computer or steal your personal information. You need to be careful when downloading and installing software from unknown sources or websites. You should also scan your computer regularly with antivirus software and update your software when needed.
    • -
    • Distraction problems: Having a LED fan with colorful animations and messages can be fun and cool, but it can also be distracting and annoying at times. You may find yourself paying more attention to your fan than to your work or gaming. You may also disturb others with your fan noise or lighting effects. You should use your LED fan editor software responsibly and adjust your fan settings according to the situation.
    • -
    -

    What are the Features of LED Fan Editor Software?

    - -

    LED fan editor software can offer many features that can enhance your LED fan experience. Here are some of them:

    - -
      -
    • Multiple languages support: Some LED fan editor software can support multiple languages, such as English, Chinese, Japanese, Korean, etc. You can choose your preferred language and display it on your fan.
    • -
    • Image and video support: Some LED fan editor software can support image and video formats, such as BMP, JPG, GIF, MP4, etc. You can import your own images and videos and display them on your fan.
    • -
    • Sound and music support: Some LED fan editor software can support sound and music formats, such as WAV, MP3, etc. You can import your own sound and music files and play them along with your fan animations and messages.
    • -
    • Online library and community: Some LED fan editor software can connect to an online library and community where you can download and upload your creations and share them with other users. You can also rate and comment on other users' creations and get feedback on yours.
    • -
    - -

    What are the Tips for Using LED Fan Editor Software?

    - -

    Using LED fan editor software can be fun and easy, but there are some tips that you should follow to get the best results. Here are some of them:

    - -
      -
    • Choose the right software for your fan: As mentioned before, not all LED fan editor software are compatible with all LED fan models. You need to check the specifications of your fan and the software before downloading and installing them. You also need to make sure that your software is updated to the latest version.
    • -
    • Use high-quality images and videos: If you want to display images and videos on your fan, you need to use high-quality files that have a good resolution and frame rate. Low-quality files may look blurry or choppy on your fan.
    • -
    • Use clear and simple messages: If you want to display text messages on your fan, you need to use clear and simple words that are easy to read and understand. Avoid using long or complex sentences that may confuse or bore your viewers.
    • -
    • Use appropriate colors and effects: If you want to create animations for your fan, use colors and effects that match your theme and mood. Avoid using too many colors, which can look chaotic, or too few, which can look dull.
    
    • -
    -

    How to Choose the Best LED Fan Editor Software?

    - -

    With so many LED fan editor software options available, how can you choose the best one for your needs? Here are some factors that you should consider:

    - -
      -
    • Compatibility: The most important factor is compatibility. You need to make sure that the software you choose is compatible with your fan model and your operating system. You also need to check if the software requires any additional hardware or drivers to work properly.
    • -
    • Features: The next factor is features. You need to compare the features of different software and see what they can offer. Some software may have more features than others, such as image and video support, sound and music support, online library and community, etc. You need to decide what features are important for you and what features are not.
    • -
    • Usability: The last factor is usability. You need to test the software and see how easy it is to use. Some software may have a user-friendly interface that makes it easy to create and edit animations and messages, while others may have a complex or confusing interface that makes it hard to use. You need to choose a software that suits your skill level and preferences.
    • -
    - -

    What are the Alternatives to LED Fan Editor Software?

    - -

    If you don't want to use LED fan editor software, or if you can't find a suitable software for your fan, you can still customize your fan in other ways. Here are some alternatives:

    - -
      -
    • Manual control: Some fans have manual control buttons that let you switch between different animations and messages that are pre-programmed in the fan. You can also adjust the brightness, speed, direction, and mode of your fan using these buttons.
    • -
    • Remote control: Some fans have remote control devices that let you control your fan wirelessly. You can use the remote control to change the animations and messages, as well as the brightness, speed, direction, and mode of your fan.
    • -
    • DIY: If you are feeling adventurous and creative, you can try to make your own LED fan editor software or hardware. You can use Arduino or Raspberry Pi boards to connect your fan to your computer and program your own animations and messages. You can also modify your fan hardware and add more LEDs or sensors to it.
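    To give a flavour of the DIY route, here is a heavily simplified persistence-of-vision loop as it might run on a Raspberry Pi driving a single column of LEDs through GPIO. Everything in it is an assumption for illustration — the pin numbers, the 5x5 letter pattern, and the fixed rotation period would all depend on your own build — and real designs normally use a hall-effect sensor to sync with each revolution rather than a fixed delay.

    ```python
    import time
    import RPi.GPIO as GPIO  # assumes a Raspberry Pi; the pin choices below are arbitrary

    LED_PINS = [17, 27, 22, 23, 24]  # one GPIO pin per LED in the spinning column (assumption)
    REVOLUTION_S = 0.05              # assumed rotation period; real builds measure it with a sensor
    COLUMNS = [0b01110, 0b10001, 0b10001, 0b10001, 0b01110]  # 5x5 bitmap of the letter "O", column by column

    GPIO.setmode(GPIO.BCM)
    for pin in LED_PINS:
        GPIO.setup(pin, GPIO.OUT)

    try:
        while True:
            # Flash each column in turn; as the blade sweeps, the eye merges them into a picture.
            for column in COLUMNS:
                for i, pin in enumerate(LED_PINS):
                    GPIO.output(pin, bool(column & (1 << i)))
                time.sleep(REVOLUTION_S / len(COLUMNS))
    except KeyboardInterrupt:
        GPIO.cleanup()
    ```
    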
    • -
    -

    Conclusion

    - -

    LED fan editor software is a great tool that can help you customize your LED fan and make it more fun and cool. You can download and use various software options depending on your fan model and preferences. You can also create and share your own animations and messages with other users. However, you should also be aware of the drawbacks and risks of using LED fan editor software, such as compatibility issues, security risks, and distraction problems. You should also consider the alternatives to LED fan editor software, such as manual control, remote control, and DIY. With LED fan editor software, you can unleash your creativity and style on your fan.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/dnnlib/__init__.py b/spaces/birsardar/stable-diffusion-mat-outpainting-primer/dnnlib/__init__.py deleted file mode 100644 index 2f08cf36f11f9b0fd94c1b7caeadf69b98375b04..0000000000000000000000000000000000000000 --- a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/dnnlib/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -from .util import EasyDict, make_cache_dir_path diff --git a/spaces/bobrooos/test/README.md b/spaces/bobrooos/test/README.md deleted file mode 100644 index 48bc2b9807f508beb93dfb3386b9ec696935ad96..0000000000000000000000000000000000000000 --- a/spaces/bobrooos/test/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Test -emoji: 💻 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bradley6597/Spell-Bee-Solver/app.py b/spaces/bradley6597/Spell-Bee-Solver/app.py deleted file mode 100644 index 6e80e8da811b073573744db34920bd03a8b581f8..0000000000000000000000000000000000000000 --- a/spaces/bradley6597/Spell-Bee-Solver/app.py +++ /dev/null @@ -1,60 +0,0 @@ -import gradio as gr -import pandas as pd -import requests -import re -import json -from datetime import date - -english_dict = pd.read_csv("dictionary.txt", - header = None, - sep = ' ', - names = ['word']) -english_dict = english_dict.reset_index(drop = True) -english_dict = english_dict.dropna() - -url = 'https://spellbee.org' -def spell_bee_solver(no_centre, centre): - full_set = set(no_centre.lower() + centre.lower()) - spell_bee_solver = english_dict[english_dict['word'].str.contains(str(centre.lower()), regex = False)] - final_words = list() - for i in range(0, spell_bee_solver.shape[0]): - words = spell_bee_solver['word'].iloc[i] - words_set = set(words) - if len(words_set - full_set) == 0: - final_words.append(words) - - final_word_df = pd.DataFrame(final_words) - final_word_df.columns = ['word'] - final_word_df['word_length'] = final_word_df['word'].str.len() - final_word_df = final_word_df[final_word_df['word_length'] > 3] - final_word_df = final_word_df.sort_values('word_length', ascending = False) - return(final_word_df) - -def get_spellbee_answers(x): - today = date.today().strftime("%Y-%m-%d") - - content = requests.get(url)._content - content = re.sub(".*window.games = ", "", str(content)) - content = re.sub("(.*?)\\;.*", "\\1", content) - content = json.loads(content) - - valid_words = content[today]['data']['dictionary'] - final_word_df = pd.DataFrame(valid_words, columns = ['word']) - final_word_df['word_length'] = final_word_df['word'].str.len() - final_word_df = final_word_df[final_word_df['word_length'] > 3] - final_word_df = final_word_df.sort_values('word_length', ascending = False) - return(final_word_df) - -with gr.Blocks() as app: - with gr.Row(): - no_centre = gr.Textbox(label = 'Letters Outside of Centre') - centre = gr.Textbox(label = 'Centre Letter') - with gr.Row(): - solve_button = 
gr.Button(value = 'Solve') - get_today_answers = gr.Button(value = "Get Today's answers") - with gr.Row(): - output_df = gr.DataFrame(headers = ['word', 'word_length']) - solve_button.click(spell_bee_solver, inputs = [no_centre, centre], outputs = [output_df]) - get_today_answers.click(get_spellbee_answers, inputs = [no_centre], outputs = [output_df]) - -app.launch(debug = True, share = False) \ No newline at end of file diff --git a/spaces/brjathu/HMR2.0/README.md b/spaces/brjathu/HMR2.0/README.md deleted file mode 100644 index 60bbdfb968ba4cf7734bbf1d6582b562cc604f6f..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: HMR2.0 -emoji: 🔥 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.26.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointSup/point_sup/point_utils.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointSup/point_sup/point_utils.py deleted file mode 100644 index eed876ea9e0127c584c008bd5aab3e16e2c8c66a..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointSup/point_sup/point_utils.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import torch - -from detectron2.layers import cat - - -def get_point_coords_from_point_annotation(instances): - """ - Load point coords and their corresponding labels from point annotation. - - Args: - instances (list[Instances]): A list of N Instances, where N is the number of images - in the batch. These instances are in 1:1 - correspondence with the pred_mask_logits. The ground-truth labels (class, box, mask, - ...) associated with each instance are stored in fields. - Returns: - point_coords (Tensor): A tensor of shape (N, P, 2) that contains the coordinates of P - sampled points. - point_labels (Tensor): A tensor of shape (N, P) that contains the labels of P - sampled points. `point_labels` takes 3 possible values: - - 0: the point belongs to background - - 1: the point belongs to the object - - -1: the point is ignored during training - """ - point_coords_list = [] - point_labels_list = [] - for instances_per_image in instances: - if len(instances_per_image) == 0: - continue - point_coords = instances_per_image.gt_point_coords.to(torch.float32) - point_labels = instances_per_image.gt_point_labels.to(torch.float32).clone() - proposal_boxes_per_image = instances_per_image.proposal_boxes.tensor - - # Convert point coordinate system, ground truth points are in image coord. - point_coords_wrt_box = get_point_coords_wrt_box(proposal_boxes_per_image, point_coords) - - # Ignore points that are outside predicted boxes. - point_ignores = ( - (point_coords_wrt_box[:, :, 0] < 0) - | (point_coords_wrt_box[:, :, 0] > 1) - | (point_coords_wrt_box[:, :, 1] < 0) - | (point_coords_wrt_box[:, :, 1] > 1) - ) - point_labels[point_ignores] = -1 - - point_coords_list.append(point_coords_wrt_box) - point_labels_list.append(point_labels) - - return ( - cat(point_coords_list, dim=0), - cat(point_labels_list, dim=0), - ) - - -def get_point_coords_wrt_box(boxes_coords, point_coords): - """ - Convert image-level absolute coordinates to box-normalized [0, 1] x [0, 1] point cooordinates. - Args: - boxes_coords (Tensor): A tensor of shape (R, 4) that contains bounding boxes. - coordinates. 
- point_coords (Tensor): A tensor of shape (R, P, 2) that contains - image-normalized coordinates of P sampled points. - Returns: - point_coords_wrt_box (Tensor): A tensor of shape (R, P, 2) that contains - [0, 1] x [0, 1] box-normalized coordinates of the P sampled points. - """ - with torch.no_grad(): - point_coords_wrt_box = point_coords.clone() - point_coords_wrt_box[:, :, 0] -= boxes_coords[:, None, 0] - point_coords_wrt_box[:, :, 1] -= boxes_coords[:, None, 1] - point_coords_wrt_box[:, :, 0] = point_coords_wrt_box[:, :, 0] / ( - boxes_coords[:, None, 2] - boxes_coords[:, None, 0] - ) - point_coords_wrt_box[:, :, 1] = point_coords_wrt_box[:, :, 1] / ( - boxes_coords[:, None, 3] - boxes_coords[:, None, 1] - ) - return point_coords_wrt_box diff --git a/spaces/bryanmildort/stockpricepredict/README.md b/spaces/bryanmildort/stockpricepredict/README.md deleted file mode 100644 index f53d7651101525d39c5d591296c7d84e7f41412b..0000000000000000000000000000000000000000 --- a/spaces/bryanmildort/stockpricepredict/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Stockpricepredict -emoji: 🦀 -colorFrom: purple -colorTo: indigo -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/IcnsImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/IcnsImagePlugin.py deleted file mode 100644 index 27cb89f735e2a1883b2b52ee42fd9ba34c5805fb..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/IcnsImagePlugin.py +++ /dev/null @@ -1,399 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# macOS icns file decoder, based on icns.py by Bob Ippolito. -# -# history: -# 2004-10-09 fl Turned into a PIL plugin; removed 2.3 dependencies. -# 2020-04-04 Allow saving on all operating systems. -# -# Copyright (c) 2004 by Bob Ippolito. -# Copyright (c) 2004 by Secret Labs. -# Copyright (c) 2004 by Fredrik Lundh. -# Copyright (c) 2014 by Alastair Houghton. -# Copyright (c) 2020 by Pan Jing. -# -# See the README file for information on usage and redistribution. -# - -import io -import os -import struct -import sys - -from . import Image, ImageFile, PngImagePlugin, features - -enable_jpeg2k = features.check_codec("jpg_2000") -if enable_jpeg2k: - from . import Jpeg2KImagePlugin - -MAGIC = b"icns" -HEADERSIZE = 8 - - -def nextheader(fobj): - return struct.unpack(">4sI", fobj.read(HEADERSIZE)) - - -def read_32t(fobj, start_length, size): - # The 128x128 icon seems to have an extra header for some reason. - (start, length) = start_length - fobj.seek(start) - sig = fobj.read(4) - if sig != b"\x00\x00\x00\x00": - msg = "Unknown signature, expecting 0x00000000" - raise SyntaxError(msg) - return read_32(fobj, (start + 4, length - 4), size) - - -def read_32(fobj, start_length, size): - """ - Read a 32bit RGB icon resource. Seems to be either uncompressed or - an RLE packbits-like scheme. 
- """ - (start, length) = start_length - fobj.seek(start) - pixel_size = (size[0] * size[2], size[1] * size[2]) - sizesq = pixel_size[0] * pixel_size[1] - if length == sizesq * 3: - # uncompressed ("RGBRGBGB") - indata = fobj.read(length) - im = Image.frombuffer("RGB", pixel_size, indata, "raw", "RGB", 0, 1) - else: - # decode image - im = Image.new("RGB", pixel_size, None) - for band_ix in range(3): - data = [] - bytesleft = sizesq - while bytesleft > 0: - byte = fobj.read(1) - if not byte: - break - byte = byte[0] - if byte & 0x80: - blocksize = byte - 125 - byte = fobj.read(1) - for i in range(blocksize): - data.append(byte) - else: - blocksize = byte + 1 - data.append(fobj.read(blocksize)) - bytesleft -= blocksize - if bytesleft <= 0: - break - if bytesleft != 0: - msg = f"Error reading channel [{repr(bytesleft)} left]" - raise SyntaxError(msg) - band = Image.frombuffer("L", pixel_size, b"".join(data), "raw", "L", 0, 1) - im.im.putband(band.im, band_ix) - return {"RGB": im} - - -def read_mk(fobj, start_length, size): - # Alpha masks seem to be uncompressed - start = start_length[0] - fobj.seek(start) - pixel_size = (size[0] * size[2], size[1] * size[2]) - sizesq = pixel_size[0] * pixel_size[1] - band = Image.frombuffer("L", pixel_size, fobj.read(sizesq), "raw", "L", 0, 1) - return {"A": band} - - -def read_png_or_jpeg2000(fobj, start_length, size): - (start, length) = start_length - fobj.seek(start) - sig = fobj.read(12) - if sig[:8] == b"\x89PNG\x0d\x0a\x1a\x0a": - fobj.seek(start) - im = PngImagePlugin.PngImageFile(fobj) - Image._decompression_bomb_check(im.size) - return {"RGBA": im} - elif ( - sig[:4] == b"\xff\x4f\xff\x51" - or sig[:4] == b"\x0d\x0a\x87\x0a" - or sig == b"\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a" - ): - if not enable_jpeg2k: - msg = ( - "Unsupported icon subimage format (rebuild PIL " - "with JPEG 2000 support to fix this)" - ) - raise ValueError(msg) - # j2k, jpc or j2c - fobj.seek(start) - jp2kstream = fobj.read(length) - f = io.BytesIO(jp2kstream) - im = Jpeg2KImagePlugin.Jpeg2KImageFile(f) - Image._decompression_bomb_check(im.size) - if im.mode != "RGBA": - im = im.convert("RGBA") - return {"RGBA": im} - else: - msg = "Unsupported icon subimage format" - raise ValueError(msg) - - -class IcnsFile: - SIZES = { - (512, 512, 2): [(b"ic10", read_png_or_jpeg2000)], - (512, 512, 1): [(b"ic09", read_png_or_jpeg2000)], - (256, 256, 2): [(b"ic14", read_png_or_jpeg2000)], - (256, 256, 1): [(b"ic08", read_png_or_jpeg2000)], - (128, 128, 2): [(b"ic13", read_png_or_jpeg2000)], - (128, 128, 1): [ - (b"ic07", read_png_or_jpeg2000), - (b"it32", read_32t), - (b"t8mk", read_mk), - ], - (64, 64, 1): [(b"icp6", read_png_or_jpeg2000)], - (32, 32, 2): [(b"ic12", read_png_or_jpeg2000)], - (48, 48, 1): [(b"ih32", read_32), (b"h8mk", read_mk)], - (32, 32, 1): [ - (b"icp5", read_png_or_jpeg2000), - (b"il32", read_32), - (b"l8mk", read_mk), - ], - (16, 16, 2): [(b"ic11", read_png_or_jpeg2000)], - (16, 16, 1): [ - (b"icp4", read_png_or_jpeg2000), - (b"is32", read_32), - (b"s8mk", read_mk), - ], - } - - def __init__(self, fobj): - """ - fobj is a file-like object as an icns resource - """ - # signature : (start, length) - self.dct = dct = {} - self.fobj = fobj - sig, filesize = nextheader(fobj) - if not _accept(sig): - msg = "not an icns file" - raise SyntaxError(msg) - i = HEADERSIZE - while i < filesize: - sig, blocksize = nextheader(fobj) - if blocksize <= 0: - msg = "invalid block header" - raise SyntaxError(msg) - i += HEADERSIZE - blocksize -= HEADERSIZE - dct[sig] = (i, blocksize) - 
fobj.seek(blocksize, io.SEEK_CUR) - i += blocksize - - def itersizes(self): - sizes = [] - for size, fmts in self.SIZES.items(): - for fmt, reader in fmts: - if fmt in self.dct: - sizes.append(size) - break - return sizes - - def bestsize(self): - sizes = self.itersizes() - if not sizes: - msg = "No 32bit icon resources found" - raise SyntaxError(msg) - return max(sizes) - - def dataforsize(self, size): - """ - Get an icon resource as {channel: array}. Note that - the arrays are bottom-up like windows bitmaps and will likely - need to be flipped or transposed in some way. - """ - dct = {} - for code, reader in self.SIZES[size]: - desc = self.dct.get(code) - if desc is not None: - dct.update(reader(self.fobj, desc, size)) - return dct - - def getimage(self, size=None): - if size is None: - size = self.bestsize() - if len(size) == 2: - size = (size[0], size[1], 1) - channels = self.dataforsize(size) - - im = channels.get("RGBA", None) - if im: - return im - - im = channels.get("RGB").copy() - try: - im.putalpha(channels["A"]) - except KeyError: - pass - return im - - -## -# Image plugin for Mac OS icons. - - -class IcnsImageFile(ImageFile.ImageFile): - """ - PIL image support for Mac OS .icns files. - Chooses the best resolution, but will possibly load - a different size image if you mutate the size attribute - before calling 'load'. - - The info dictionary has a key 'sizes' that is a list - of sizes that the icns file has. - """ - - format = "ICNS" - format_description = "Mac OS icns resource" - - def _open(self): - self.icns = IcnsFile(self.fp) - self.mode = "RGBA" - self.info["sizes"] = self.icns.itersizes() - self.best_size = self.icns.bestsize() - self.size = ( - self.best_size[0] * self.best_size[2], - self.best_size[1] * self.best_size[2], - ) - - @property - def size(self): - return self._size - - @size.setter - def size(self, value): - info_size = value - if info_size not in self.info["sizes"] and len(info_size) == 2: - info_size = (info_size[0], info_size[1], 1) - if ( - info_size not in self.info["sizes"] - and len(info_size) == 3 - and info_size[2] == 1 - ): - simple_sizes = [ - (size[0] * size[2], size[1] * size[2]) for size in self.info["sizes"] - ] - if value in simple_sizes: - info_size = self.info["sizes"][simple_sizes.index(value)] - if info_size not in self.info["sizes"]: - msg = "This is not one of the allowed sizes of this image" - raise ValueError(msg) - self._size = value - - def load(self): - if len(self.size) == 3: - self.best_size = self.size - self.size = ( - self.best_size[0] * self.best_size[2], - self.best_size[1] * self.best_size[2], - ) - - px = Image.Image.load(self) - if self.im is not None and self.im.size == self.size: - # Already loaded - return px - self.load_prepare() - # This is likely NOT the best way to do it, but whatever. - im = self.icns.getimage(self.best_size) - - # If this is a PNG or JPEG 2000, it won't be loaded yet - px = im.load() - - self.im = im.im - self.mode = im.mode - self.size = im.size - - return px - - -def _save(im, fp, filename): - """ - Saves the image as a series of PNG files, - that are then combined into a .icns file. 
- """ - if hasattr(fp, "flush"): - fp.flush() - - sizes = { - b"ic07": 128, - b"ic08": 256, - b"ic09": 512, - b"ic10": 1024, - b"ic11": 32, - b"ic12": 64, - b"ic13": 256, - b"ic14": 512, - } - provided_images = {im.width: im for im in im.encoderinfo.get("append_images", [])} - size_streams = {} - for size in set(sizes.values()): - image = ( - provided_images[size] - if size in provided_images - else im.resize((size, size)) - ) - - temp = io.BytesIO() - image.save(temp, "png") - size_streams[size] = temp.getvalue() - - entries = [] - for type, size in sizes.items(): - stream = size_streams[size] - entries.append( - {"type": type, "size": HEADERSIZE + len(stream), "stream": stream} - ) - - # Header - fp.write(MAGIC) - file_length = HEADERSIZE # Header - file_length += HEADERSIZE + 8 * len(entries) # TOC - file_length += sum(entry["size"] for entry in entries) - fp.write(struct.pack(">i", file_length)) - - # TOC - fp.write(b"TOC ") - fp.write(struct.pack(">i", HEADERSIZE + len(entries) * HEADERSIZE)) - for entry in entries: - fp.write(entry["type"]) - fp.write(struct.pack(">i", entry["size"])) - - # Data - for entry in entries: - fp.write(entry["type"]) - fp.write(struct.pack(">i", entry["size"])) - fp.write(entry["stream"]) - - if hasattr(fp, "flush"): - fp.flush() - - -def _accept(prefix): - return prefix[:4] == MAGIC - - -Image.register_open(IcnsImageFile.format, IcnsImageFile, _accept) -Image.register_extension(IcnsImageFile.format, ".icns") - -Image.register_save(IcnsImageFile.format, _save) -Image.register_mime(IcnsImageFile.format, "image/icns") - -if __name__ == "__main__": - if len(sys.argv) < 2: - print("Syntax: python3 IcnsImagePlugin.py [file]") - sys.exit() - - with open(sys.argv[1], "rb") as fp: - imf = IcnsImageFile(fp) - for size in imf.info["sizes"]: - imf.size = size - imf.save("out-%s-%s-%s.png" % size) - with Image.open(sys.argv[1]) as im: - im.save("out.png") - if sys.platform == "windows": - os.startfile("out.png") diff --git a/spaces/captainChan/CaptainChan/transforms.py b/spaces/captainChan/CaptainChan/transforms.py deleted file mode 100644 index 5a7042f3368bc832566d5c22d1e18abe5d8547f5..0000000000000000000000000000000000000000 --- a/spaces/captainChan/CaptainChan/transforms.py +++ /dev/null @@ -1,329 +0,0 @@ -import math -import numbers -import random - -import cv2 -import numpy as np -from PIL import Image -from torchvision import transforms -from torchvision.transforms import Compose - - -def sample_asym(magnitude, size=None): - return np.random.beta(1, 4, size) * magnitude - -def sample_sym(magnitude, size=None): - return (np.random.beta(4, 4, size=size) - 0.5) * 2 * magnitude - -def sample_uniform(low, high, size=None): - return np.random.uniform(low, high, size=size) - -def get_interpolation(type='random'): - if type == 'random': - choice = [cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA] - interpolation = choice[random.randint(0, len(choice)-1)] - elif type == 'nearest': interpolation = cv2.INTER_NEAREST - elif type == 'linear': interpolation = cv2.INTER_LINEAR - elif type == 'cubic': interpolation = cv2.INTER_CUBIC - elif type == 'area': interpolation = cv2.INTER_AREA - else: raise TypeError('Interpolation types only nearest, linear, cubic, area are supported!') - return interpolation - -class CVRandomRotation(object): - def __init__(self, degrees=15): - assert isinstance(degrees, numbers.Number), "degree should be a single number." - assert degrees >= 0, "degree must be positive." 
- self.degrees = degrees - - @staticmethod - def get_params(degrees): - return sample_sym(degrees) - - def __call__(self, img): - angle = self.get_params(self.degrees) - src_h, src_w = img.shape[:2] - M = cv2.getRotationMatrix2D(center=(src_w/2, src_h/2), angle=angle, scale=1.0) - abs_cos, abs_sin = abs(M[0,0]), abs(M[0,1]) - dst_w = int(src_h * abs_sin + src_w * abs_cos) - dst_h = int(src_h * abs_cos + src_w * abs_sin) - M[0, 2] += (dst_w - src_w)/2 - M[1, 2] += (dst_h - src_h)/2 - - flags = get_interpolation() - return cv2.warpAffine(img, M, (dst_w, dst_h), flags=flags, borderMode=cv2.BORDER_REPLICATE) - -class CVRandomAffine(object): - def __init__(self, degrees, translate=None, scale=None, shear=None): - assert isinstance(degrees, numbers.Number), "degree should be a single number." - assert degrees >= 0, "degree must be positive." - self.degrees = degrees - - if translate is not None: - assert isinstance(translate, (tuple, list)) and len(translate) == 2, \ - "translate should be a list or tuple and it must be of length 2." - for t in translate: - if not (0.0 <= t <= 1.0): - raise ValueError("translation values should be between 0 and 1") - self.translate = translate - - if scale is not None: - assert isinstance(scale, (tuple, list)) and len(scale) == 2, \ - "scale should be a list or tuple and it must be of length 2." - for s in scale: - if s <= 0: - raise ValueError("scale values should be positive") - self.scale = scale - - if shear is not None: - if isinstance(shear, numbers.Number): - if shear < 0: - raise ValueError("If shear is a single number, it must be positive.") - self.shear = [shear] - else: - assert isinstance(shear, (tuple, list)) and (len(shear) == 2), \ - "shear should be a list or tuple and it must be of length 2." - self.shear = shear - else: - self.shear = shear - - def _get_inverse_affine_matrix(self, center, angle, translate, scale, shear): - # https://github.com/pytorch/vision/blob/v0.4.0/torchvision/transforms/functional.py#L717 - from numpy import sin, cos, tan - - if isinstance(shear, numbers.Number): - shear = [shear, 0] - - if not isinstance(shear, (tuple, list)) and len(shear) == 2: - raise ValueError( - "Shear should be a single value or a tuple/list containing " + - "two values. 
Got {}".format(shear)) - - rot = math.radians(angle) - sx, sy = [math.radians(s) for s in shear] - - cx, cy = center - tx, ty = translate - - # RSS without scaling - a = cos(rot - sy) / cos(sy) - b = -cos(rot - sy) * tan(sx) / cos(sy) - sin(rot) - c = sin(rot - sy) / cos(sy) - d = -sin(rot - sy) * tan(sx) / cos(sy) + cos(rot) - - # Inverted rotation matrix with scale and shear - # det([[a, b], [c, d]]) == 1, since det(rotation) = 1 and det(shear) = 1 - M = [d, -b, 0, - -c, a, 0] - M = [x / scale for x in M] - - # Apply inverse of translation and of center translation: RSS^-1 * C^-1 * T^-1 - M[2] += M[0] * (-cx - tx) + M[1] * (-cy - ty) - M[5] += M[3] * (-cx - tx) + M[4] * (-cy - ty) - - # Apply center translation: C * RSS^-1 * C^-1 * T^-1 - M[2] += cx - M[5] += cy - return M - - @staticmethod - def get_params(degrees, translate, scale_ranges, shears, height): - angle = sample_sym(degrees) - if translate is not None: - max_dx = translate[0] * height - max_dy = translate[1] * height - translations = (np.round(sample_sym(max_dx)), np.round(sample_sym(max_dy))) - else: - translations = (0, 0) - - if scale_ranges is not None: - scale = sample_uniform(scale_ranges[0], scale_ranges[1]) - else: - scale = 1.0 - - if shears is not None: - if len(shears) == 1: - shear = [sample_sym(shears[0]), 0.] - elif len(shears) == 2: - shear = [sample_sym(shears[0]), sample_sym(shears[1])] - else: - shear = 0.0 - - return angle, translations, scale, shear - - - def __call__(self, img): - src_h, src_w = img.shape[:2] - angle, translate, scale, shear = self.get_params( - self.degrees, self.translate, self.scale, self.shear, src_h) - - M = self._get_inverse_affine_matrix((src_w/2, src_h/2), angle, (0, 0), scale, shear) - M = np.array(M).reshape(2,3) - - startpoints = [(0, 0), (src_w - 1, 0), (src_w - 1, src_h - 1), (0, src_h - 1)] - project = lambda x, y, a, b, c: int(a*x + b*y + c) - endpoints = [(project(x, y, *M[0]), project(x, y, *M[1])) for x, y in startpoints] - - rect = cv2.minAreaRect(np.array(endpoints)) - bbox = cv2.boxPoints(rect).astype(dtype=np.int) - max_x, max_y = bbox[:, 0].max(), bbox[:, 1].max() - min_x, min_y = bbox[:, 0].min(), bbox[:, 1].min() - - dst_w = int(max_x - min_x) - dst_h = int(max_y - min_y) - M[0, 2] += (dst_w - src_w) / 2 - M[1, 2] += (dst_h - src_h) / 2 - - # add translate - dst_w += int(abs(translate[0])) - dst_h += int(abs(translate[1])) - if translate[0] < 0: M[0, 2] += abs(translate[0]) - if translate[1] < 0: M[1, 2] += abs(translate[1]) - - flags = get_interpolation() - return cv2.warpAffine(img, M, (dst_w , dst_h), flags=flags, borderMode=cv2.BORDER_REPLICATE) - -class CVRandomPerspective(object): - def __init__(self, distortion=0.5): - self.distortion = distortion - - def get_params(self, width, height, distortion): - offset_h = sample_asym(distortion * height / 2, size=4).astype(dtype=np.int) - offset_w = sample_asym(distortion * width / 2, size=4).astype(dtype=np.int) - topleft = ( offset_w[0], offset_h[0]) - topright = (width - 1 - offset_w[1], offset_h[1]) - botright = (width - 1 - offset_w[2], height - 1 - offset_h[2]) - botleft = ( offset_w[3], height - 1 - offset_h[3]) - - startpoints = [(0, 0), (width - 1, 0), (width - 1, height - 1), (0, height - 1)] - endpoints = [topleft, topright, botright, botleft] - return np.array(startpoints, dtype=np.float32), np.array(endpoints, dtype=np.float32) - - def __call__(self, img): - height, width = img.shape[:2] - startpoints, endpoints = self.get_params(width, height, self.distortion) - M = 
cv2.getPerspectiveTransform(startpoints, endpoints) - - # TODO: more robust way to crop image - rect = cv2.minAreaRect(endpoints) - bbox = cv2.boxPoints(rect).astype(dtype=np.int) - max_x, max_y = bbox[:, 0].max(), bbox[:, 1].max() - min_x, min_y = bbox[:, 0].min(), bbox[:, 1].min() - min_x, min_y = max(min_x, 0), max(min_y, 0) - - flags = get_interpolation() - img = cv2.warpPerspective(img, M, (max_x, max_y), flags=flags, borderMode=cv2.BORDER_REPLICATE) - img = img[min_y:, min_x:] - return img - -class CVRescale(object): - - def __init__(self, factor=4, base_size=(128, 512)): - """ Define image scales using gaussian pyramid and rescale image to target scale. - - Args: - factor: the decayed factor from base size, factor=4 keeps target scale by default. - base_size: base size the build the bottom layer of pyramid - """ - if isinstance(factor, numbers.Number): - self.factor = round(sample_uniform(0, factor)) - elif isinstance(factor, (tuple, list)) and len(factor) == 2: - self.factor = round(sample_uniform(factor[0], factor[1])) - else: - raise Exception('factor must be number or list with length 2') - # assert factor is valid - self.base_h, self.base_w = base_size[:2] - - def __call__(self, img): - if self.factor == 0: return img - src_h, src_w = img.shape[:2] - cur_w, cur_h = self.base_w, self.base_h - scale_img = cv2.resize(img, (cur_w, cur_h), interpolation=get_interpolation()) - for _ in range(self.factor): - scale_img = cv2.pyrDown(scale_img) - scale_img = cv2.resize(scale_img, (src_w, src_h), interpolation=get_interpolation()) - return scale_img - -class CVGaussianNoise(object): - def __init__(self, mean=0, var=20): - self.mean = mean - if isinstance(var, numbers.Number): - self.var = max(int(sample_asym(var)), 1) - elif isinstance(var, (tuple, list)) and len(var) == 2: - self.var = int(sample_uniform(var[0], var[1])) - else: - raise Exception('degree must be number or list with length 2') - - def __call__(self, img): - noise = np.random.normal(self.mean, self.var**0.5, img.shape) - img = np.clip(img + noise, 0, 255).astype(np.uint8) - return img - -class CVMotionBlur(object): - def __init__(self, degrees=12, angle=90): - if isinstance(degrees, numbers.Number): - self.degree = max(int(sample_asym(degrees)), 1) - elif isinstance(degrees, (tuple, list)) and len(degrees) == 2: - self.degree = int(sample_uniform(degrees[0], degrees[1])) - else: - raise Exception('degree must be number or list with length 2') - self.angle = sample_uniform(-angle, angle) - - def __call__(self, img): - M = cv2.getRotationMatrix2D((self.degree // 2, self.degree // 2), self.angle, 1) - motion_blur_kernel = np.zeros((self.degree, self.degree)) - motion_blur_kernel[self.degree // 2, :] = 1 - motion_blur_kernel = cv2.warpAffine(motion_blur_kernel, M, (self.degree, self.degree)) - motion_blur_kernel = motion_blur_kernel / self.degree - img = cv2.filter2D(img, -1, motion_blur_kernel) - img = np.clip(img, 0, 255).astype(np.uint8) - return img - -class CVGeometry(object): - def __init__(self, degrees=15, translate=(0.3, 0.3), scale=(0.5, 2.), - shear=(45, 15), distortion=0.5, p=0.5): - self.p = p - type_p = random.random() - if type_p < 0.33: - self.transforms = CVRandomRotation(degrees=degrees) - elif type_p < 0.66: - self.transforms = CVRandomAffine(degrees=degrees, translate=translate, scale=scale, shear=shear) - else: - self.transforms = CVRandomPerspective(distortion=distortion) - - def __call__(self, img): - if random.random() < self.p: - img = np.array(img) - return Image.fromarray(self.transforms(img)) - 
else: return img - -class CVDeterioration(object): - def __init__(self, var, degrees, factor, p=0.5): - self.p = p - transforms = [] - if var is not None: - transforms.append(CVGaussianNoise(var=var)) - if degrees is not None: - transforms.append(CVMotionBlur(degrees=degrees)) - if factor is not None: - transforms.append(CVRescale(factor=factor)) - - random.shuffle(transforms) - transforms = Compose(transforms) - self.transforms = transforms - - def __call__(self, img): - if random.random() < self.p: - img = np.array(img) - return Image.fromarray(self.transforms(img)) - else: return img - - -class CVColorJitter(object): - def __init__(self, brightness=0.5, contrast=0.5, saturation=0.5, hue=0.1, p=0.5): - self.p = p - self.transforms = transforms.ColorJitter(brightness=brightness, contrast=contrast, - saturation=saturation, hue=hue) - - def __call__(self, img): - if random.random() < self.p: return self.transforms(img) - else: return img diff --git a/spaces/captchaboy/sendmespecs/README.md b/spaces/captchaboy/sendmespecs/README.md deleted file mode 100644 index c3c307c8ac36ded9a6b1b67b332cd588e1afa997..0000000000000000000000000000000000000000 --- a/spaces/captchaboy/sendmespecs/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sendmespecs -emoji: 🌖 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.3 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/config/test_yacs_config.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/config/test_yacs_config.py deleted file mode 100644 index 01dd6955f78e2700ffc10ed723ab1c95df0e5a18..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/config/test_yacs_config.py +++ /dev/null @@ -1,270 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. 
- - -import os -import tempfile -import unittest -import torch -from omegaconf import OmegaConf - -from detectron2 import model_zoo -from detectron2.config import configurable, downgrade_config, get_cfg, upgrade_config -from detectron2.layers import ShapeSpec -from detectron2.modeling import build_model - -_V0_CFG = """ -MODEL: - RPN_HEAD: - NAME: "TEST" -VERSION: 0 -""" - -_V1_CFG = """ -MODEL: - WEIGHT: "/path/to/weight" -""" - - -class TestConfigVersioning(unittest.TestCase): - def test_upgrade_downgrade_consistency(self): - cfg = get_cfg() - # check that custom is preserved - cfg.USER_CUSTOM = 1 - - down = downgrade_config(cfg, to_version=0) - up = upgrade_config(down) - self.assertTrue(up == cfg) - - def _merge_cfg_str(self, cfg, merge_str): - f = tempfile.NamedTemporaryFile(mode="w", suffix=".yaml", delete=False) - try: - f.write(merge_str) - f.close() - cfg.merge_from_file(f.name) - finally: - os.remove(f.name) - return cfg - - def test_auto_upgrade(self): - cfg = get_cfg() - latest_ver = cfg.VERSION - cfg.USER_CUSTOM = 1 - - self._merge_cfg_str(cfg, _V0_CFG) - - self.assertEqual(cfg.MODEL.RPN.HEAD_NAME, "TEST") - self.assertEqual(cfg.VERSION, latest_ver) - - def test_guess_v1(self): - cfg = get_cfg() - latest_ver = cfg.VERSION - self._merge_cfg_str(cfg, _V1_CFG) - self.assertEqual(cfg.VERSION, latest_ver) - - -class _TestClassA(torch.nn.Module): - @configurable - def __init__(self, arg1, arg2, arg3=3): - super().__init__() - self.arg1 = arg1 - self.arg2 = arg2 - self.arg3 = arg3 - assert arg1 == 1 - assert arg2 == 2 - assert arg3 == 3 - - @classmethod - def from_config(cls, cfg): - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - return args - - -class _TestClassB(_TestClassA): - @configurable - def __init__(self, input_shape, arg1, arg2, arg3=3): - """ - Doc of _TestClassB - """ - assert input_shape == "shape" - super().__init__(arg1, arg2, arg3) - - @classmethod - def from_config(cls, cfg, input_shape): # test extra positional arg in from_config - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - args["input_shape"] = input_shape - return args - - -class _LegacySubClass(_TestClassB): - # an old subclass written in cfg style - def __init__(self, cfg, input_shape, arg4=4): - super().__init__(cfg, input_shape) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _NewSubClassNewInit(_TestClassB): - # test new subclass with a new __init__ - @configurable - def __init__(self, input_shape, arg4=4, **kwargs): - super().__init__(input_shape, **kwargs) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _LegacySubClassNotCfg(_TestClassB): - # an old subclass written in cfg style, but argument is not called "cfg" - def __init__(self, config, input_shape): - super().__init__(config, input_shape) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _TestClassC(_TestClassB): - @classmethod - def from_config(cls, cfg, input_shape, **kwargs): # test extra kwarg overwrite - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - args["input_shape"] = input_shape - args.update(kwargs) - return args - - -class _TestClassD(_TestClassA): - @configurable - def __init__(self, input_shape: ShapeSpec, arg1: int, arg2, arg3=3): - assert input_shape == "shape" - super().__init__(arg1, arg2, arg3) - - # _TestClassA.from_config does not have input_shape args. 
- # Test whether input_shape will be forwarded to __init__ - - -@configurable(from_config=lambda cfg, arg2: {"arg1": cfg.ARG1, "arg2": arg2, "arg3": cfg.ARG3}) -def _test_func(arg1, arg2=2, arg3=3, arg4=4): - return arg1, arg2, arg3, arg4 - - -class TestConfigurable(unittest.TestCase): - def testInitWithArgs(self): - _ = _TestClassA(arg1=1, arg2=2, arg3=3) - _ = _TestClassB("shape", arg1=1, arg2=2) - _ = _TestClassC("shape", arg1=1, arg2=2) - _ = _TestClassD("shape", arg1=1, arg2=2, arg3=3) - - def testPatchedAttr(self): - self.assertTrue("Doc" in _TestClassB.__init__.__doc__) - self.assertEqual(_TestClassD.__init__.__annotations__["arg1"], int) - - def testInitWithCfg(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 2 - cfg.ARG3 = 3 - _ = _TestClassA(cfg) - _ = _TestClassB(cfg, input_shape="shape") - _ = _TestClassC(cfg, input_shape="shape") - _ = _TestClassD(cfg, input_shape="shape") - _ = _LegacySubClass(cfg, input_shape="shape") - _ = _NewSubClassNewInit(cfg, input_shape="shape") - _ = _LegacySubClassNotCfg(cfg, input_shape="shape") - with self.assertRaises(TypeError): - # disallow forwarding positional args to __init__ since it's prone to errors - _ = _TestClassD(cfg, "shape") - - # call with kwargs instead - _ = _TestClassA(cfg=cfg) - _ = _TestClassB(cfg=cfg, input_shape="shape") - _ = _TestClassC(cfg=cfg, input_shape="shape") - _ = _TestClassD(cfg=cfg, input_shape="shape") - _ = _LegacySubClass(cfg=cfg, input_shape="shape") - _ = _NewSubClassNewInit(cfg=cfg, input_shape="shape") - _ = _LegacySubClassNotCfg(config=cfg, input_shape="shape") - - def testInitWithCfgOverwrite(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 999 # wrong config - with self.assertRaises(AssertionError): - _ = _TestClassA(cfg, arg3=3) - - # overwrite arg2 with correct config later: - _ = _TestClassA(cfg, arg2=2, arg3=3) - _ = _TestClassB(cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassC(cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassD(cfg, input_shape="shape", arg2=2, arg3=3) - - # call with kwargs cfg=cfg instead - _ = _TestClassA(cfg=cfg, arg2=2, arg3=3) - _ = _TestClassB(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassC(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassD(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - - def testInitWithCfgWrongArgs(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 2 - with self.assertRaises(TypeError): - _ = _TestClassB(cfg, "shape", not_exist=1) - with self.assertRaises(TypeError): - _ = _TestClassC(cfg, "shape", not_exist=1) - with self.assertRaises(TypeError): - _ = _TestClassD(cfg, "shape", not_exist=1) - - def testBadClass(self): - class _BadClass1: - @configurable - def __init__(self, a=1, b=2): - pass - - class _BadClass2: - @configurable - def __init__(self, a=1, b=2): - pass - - def from_config(self, cfg): # noqa - pass - - class _BadClass3: - @configurable - def __init__(self, a=1, b=2): - pass - - # bad name: must be cfg - @classmethod - def from_config(cls, config): # noqa - pass - - with self.assertRaises(AttributeError): - _ = _BadClass1(a=1) - - with self.assertRaises(TypeError): - _ = _BadClass2(a=1) - - with self.assertRaises(TypeError): - _ = _BadClass3(get_cfg()) - - def testFuncWithCfg(self): - cfg = get_cfg() - cfg.ARG1 = 10 - cfg.ARG3 = 30 - - self.assertEqual(_test_func(1), (1, 2, 3, 4)) - with self.assertRaises(TypeError): - _test_func(cfg) - self.assertEqual(_test_func(cfg, arg2=2), (10, 2, 30, 4)) - self.assertEqual(_test_func(cfg, arg1=100, arg2=20), (100, 20, 30, 4)) - 
self.assertEqual(_test_func(cfg, arg1=100, arg2=20, arg4=40), (100, 20, 30, 40)) - - self.assertTrue(callable(_test_func.from_config)) - - def testOmegaConf(self): - cfg = model_zoo.get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml") - cfg = OmegaConf.create(cfg.dump()) - if not torch.cuda.is_available(): - cfg.MODEL.DEVICE = "cpu" - # test that a model can be built with omegaconf config as well - build_model(cfg) diff --git a/spaces/chansung/LLM-As-Chatbot/models/kullm.py b/spaces/chansung/LLM-As-Chatbot/models/kullm.py deleted file mode 100644 index 5152b2d7627b6ae4e5fd2bfd680af23d48e18c8c..0000000000000000000000000000000000000000 --- a/spaces/chansung/LLM-As-Chatbot/models/kullm.py +++ /dev/null @@ -1,51 +0,0 @@ -import torch - -from transformers import AutoModelForCausalLM, AutoTokenizer -from optimum.bettertransformer import BetterTransformer - -def load_model( - base, - finetuned, - mode_cpu, - mode_mps, - mode_full_gpu, - mode_8bit, - mode_4bit, - force_download_ckpt -): - tokenizer = AutoTokenizer.from_pretrained(base) - - if mode_cpu: - print("cpu mode") - model = AutoModelForCausalLM.from_pretrained( - base, - device_map={"": "cpu"}, - use_safetensors=False - ) - - elif mode_mps: - print("mps mode") - model = AutoModelForCausalLM.from_pretrained( - base, - device_map={"": "mps"}, - torch_dtype=torch.float16, - use_safetensors=False - ) - - else: - print("gpu mode") - print(f"8bit = {mode_8bit}, 4bit = {mode_4bit}") - model = AutoModelForCausalLM.from_pretrained( - base, - load_in_8bit=mode_8bit, - load_in_4bit=mode_4bit, - torch_dtype=torch.float16, - device_map="auto", - use_safetensors=False - ) - - if not mode_8bit and not mode_4bit: - model.half() - - # model = BetterTransformer.transform(model) - return model, tokenizer \ No newline at end of file diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/exps/default/yolov3.py b/spaces/chendl/compositional_test/multimodal/YOLOX/exps/default/yolov3.py deleted file mode 100644 index c747f8ae9f42549a1dbd7f03d8ee80e235d6467a..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/exps/default/yolov3.py +++ /dev/null @@ -1,33 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -# Copyright (c) Megvii, Inc. and its affiliates. 
- -import os - -import torch.nn as nn - -from yolox.exp import Exp as MyExp - - -class Exp(MyExp): - def __init__(self): - super(Exp, self).__init__() - self.depth = 1.0 - self.width = 1.0 - self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] - - def get_model(self, sublinear=False): - def init_yolo(M): - for m in M.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eps = 1e-3 - m.momentum = 0.03 - if "model" not in self.__dict__: - from yolox.models import YOLOX, YOLOFPN, YOLOXHead - backbone = YOLOFPN() - head = YOLOXHead(self.num_classes, self.width, in_channels=[128, 256, 512], act="lrelu") - self.model = YOLOX(backbone, head) - self.model.apply(init_yolo) - self.model.head.initialize_biases(1e-2) - - return self.model diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/README.md b/spaces/chendl/compositional_test/transformers/examples/legacy/README.md deleted file mode 100644 index eaf64f624637778d9b07fe3e034c30ca0acb70e9..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/legacy/README.md +++ /dev/null @@ -1,21 +0,0 @@ - - -# Legacy examples - -This folder contains examples which are not actively maintained (mostly contributed by the community). - -Using these examples together with a recent version of the library usually requires to make small (sometimes big) adaptations to get the scripts working. diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/movement-pruning/emmental/__init__.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/movement-pruning/emmental/__init__.py deleted file mode 100644 index 6646667ea883781c3bd6b9cff0267b68ee1478e4..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/movement-pruning/emmental/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .configuration_bert_masked import MaskedBertConfig -from .modeling_bert_masked import ( - MaskedBertForMultipleChoice, - MaskedBertForQuestionAnswering, - MaskedBertForSequenceClassification, - MaskedBertForTokenClassification, - MaskedBertModel, -) -from .modules import * diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/dictTools.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/dictTools.py deleted file mode 100644 index 259613b27048c458980986167d429847d270691f..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/dictTools.py +++ /dev/null @@ -1,83 +0,0 @@ -"""Misc dict tools.""" - - -__all__ = ["hashdict"] - -# https://stackoverflow.com/questions/1151658/python-hashable-dicts -class hashdict(dict): - """ - hashable dict implementation, suitable for use as a key into - other dicts. - - >>> h1 = hashdict({"apples": 1, "bananas":2}) - >>> h2 = hashdict({"bananas": 3, "mangoes": 5}) - >>> h1+h2 - hashdict(apples=1, bananas=3, mangoes=5) - >>> d1 = {} - >>> d1[h1] = "salad" - >>> d1[h1] - 'salad' - >>> d1[h2] - Traceback (most recent call last): - ... 
- KeyError: hashdict(bananas=3, mangoes=5) - - based on answers from - http://stackoverflow.com/questions/1151658/python-hashable-dicts - - """ - - def __key(self): - return tuple(sorted(self.items())) - - def __repr__(self): - return "{0}({1})".format( - self.__class__.__name__, - ", ".join("{0}={1}".format(str(i[0]), repr(i[1])) for i in self.__key()), - ) - - def __hash__(self): - return hash(self.__key()) - - def __setitem__(self, key, value): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - def __delitem__(self, key): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - def clear(self): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - def pop(self, *args, **kwargs): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - def popitem(self, *args, **kwargs): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - def setdefault(self, *args, **kwargs): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - def update(self, *args, **kwargs): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - # update is not ok because it mutates the object - # __add__ is ok because it creates a new object - # while the new object is under construction, it's ok to mutate it - def __add__(self, right): - result = hashdict(self) - dict.update(result, right) - return result diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/pens/t2CharStringPen.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/pens/t2CharStringPen.py deleted file mode 100644 index 41ab0f92f2b683ac2dc87ca1b16f54047d0fef81..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/pens/t2CharStringPen.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) 2009 Type Supply LLC -# Author: Tal Leming - -from fontTools.misc.roundTools import otRound, roundFunc -from fontTools.misc.psCharStrings import T2CharString -from fontTools.pens.basePen import BasePen -from fontTools.cffLib.specializer import specializeCommands, commandsToProgram - - -class T2CharStringPen(BasePen): - """Pen to draw Type 2 CharStrings. - - The 'roundTolerance' argument controls the rounding of point coordinates. - It is defined as the maximum absolute difference between the original - float and the rounded integer value. - The default tolerance of 0.5 means that all floats are rounded to integer; - a value of 0 disables rounding; values in between will only round floats - which are close to their integral part within the tolerated range. 
- """ - - def __init__(self, width, glyphSet, roundTolerance=0.5, CFF2=False): - super(T2CharStringPen, self).__init__(glyphSet) - self.round = roundFunc(roundTolerance) - self._CFF2 = CFF2 - self._width = width - self._commands = [] - self._p0 = (0, 0) - - def _p(self, pt): - p0 = self._p0 - pt = self._p0 = (self.round(pt[0]), self.round(pt[1])) - return [pt[0] - p0[0], pt[1] - p0[1]] - - def _moveTo(self, pt): - self._commands.append(("rmoveto", self._p(pt))) - - def _lineTo(self, pt): - self._commands.append(("rlineto", self._p(pt))) - - def _curveToOne(self, pt1, pt2, pt3): - _p = self._p - self._commands.append(("rrcurveto", _p(pt1) + _p(pt2) + _p(pt3))) - - def _closePath(self): - pass - - def _endPath(self): - pass - - def getCharString(self, private=None, globalSubrs=None, optimize=True): - commands = self._commands - if optimize: - maxstack = 48 if not self._CFF2 else 513 - commands = specializeCommands( - commands, generalizeFirst=False, maxstack=maxstack - ) - program = commandsToProgram(commands) - if self._width is not None: - assert ( - not self._CFF2 - ), "CFF2 does not allow encoding glyph width in CharString." - program.insert(0, otRound(self._width)) - if not self._CFF2: - program.append("endchar") - charString = T2CharString( - program=program, private=private, globalSubrs=globalSubrs - ) - return charString diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/varLib/builder.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/varLib/builder.py deleted file mode 100644 index 94cc5bf063b1dc67ff58bdb7f2bd3d642bee4ce4..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/varLib/builder.py +++ /dev/null @@ -1,157 +0,0 @@ -from fontTools import ttLib -from fontTools.ttLib.tables import otTables as ot - -# VariationStore - - -def buildVarRegionAxis(axisSupport): - self = ot.VarRegionAxis() - self.StartCoord, self.PeakCoord, self.EndCoord = [float(v) for v in axisSupport] - return self - - -def buildVarRegion(support, axisTags): - assert all(tag in axisTags for tag in support.keys()), ( - "Unknown axis tag found.", - support, - axisTags, - ) - self = ot.VarRegion() - self.VarRegionAxis = [] - for tag in axisTags: - self.VarRegionAxis.append(buildVarRegionAxis(support.get(tag, (0, 0, 0)))) - return self - - -def buildVarRegionList(supports, axisTags): - self = ot.VarRegionList() - self.RegionAxisCount = len(axisTags) - self.Region = [] - for support in supports: - self.Region.append(buildVarRegion(support, axisTags)) - self.RegionCount = len(self.Region) - return self - - -def _reorderItem(lst, mapping): - return [lst[i] for i in mapping] - - -def VarData_calculateNumShorts(self, optimize=False): - count = self.VarRegionCount - items = self.Item - bit_lengths = [0] * count - for item in items: - # The "+ (i < -1)" magic is to handle two's-compliment. - # That is, we want to get back 7 for -128, whereas - # bit_length() returns 8. Similarly for -65536. - # The reason "i < -1" is used instead of "i < 0" is that - # the latter would make it return 0 for "-1" instead of 1. - bl = [(i + (i < -1)).bit_length() for i in item] - bit_lengths = [max(*pair) for pair in zip(bl, bit_lengths)] - # The addition of 8, instead of seven, is to account for the sign bit. 
- # This "((b + 8) >> 3) if b else 0" when combined with the above - # "(i + (i < -1)).bit_length()" is a faster way to compute byte-lengths - # conforming to: - # - # byte_length = (0 if i == 0 else - # 1 if -128 <= i < 128 else - # 2 if -65536 <= i < 65536 else - # ...) - byte_lengths = [((b + 8) >> 3) if b else 0 for b in bit_lengths] - - # https://github.com/fonttools/fonttools/issues/2279 - longWords = any(b > 2 for b in byte_lengths) - - if optimize: - # Reorder columns such that wider columns come before narrower columns - mapping = [] - mapping.extend(i for i, b in enumerate(byte_lengths) if b > 2) - mapping.extend(i for i, b in enumerate(byte_lengths) if b == 2) - mapping.extend(i for i, b in enumerate(byte_lengths) if b == 1) - - byte_lengths = _reorderItem(byte_lengths, mapping) - self.VarRegionIndex = _reorderItem(self.VarRegionIndex, mapping) - self.VarRegionCount = len(self.VarRegionIndex) - for i in range(len(items)): - items[i] = _reorderItem(items[i], mapping) - - if longWords: - self.NumShorts = ( - max((i for i, b in enumerate(byte_lengths) if b > 2), default=-1) + 1 - ) - self.NumShorts |= 0x8000 - else: - self.NumShorts = ( - max((i for i, b in enumerate(byte_lengths) if b > 1), default=-1) + 1 - ) - - self.VarRegionCount = len(self.VarRegionIndex) - return self - - -ot.VarData.calculateNumShorts = VarData_calculateNumShorts - - -def VarData_CalculateNumShorts(self, optimize=True): - """Deprecated name for VarData_calculateNumShorts() which - defaults to optimize=True. Use varData.calculateNumShorts() - or varData.optimize().""" - return VarData_calculateNumShorts(self, optimize=optimize) - - -def VarData_optimize(self): - return VarData_calculateNumShorts(self, optimize=True) - - -ot.VarData.optimize = VarData_optimize - - -def buildVarData(varRegionIndices, items, optimize=True): - self = ot.VarData() - self.VarRegionIndex = list(varRegionIndices) - regionCount = self.VarRegionCount = len(self.VarRegionIndex) - records = self.Item = [] - if items: - for item in items: - assert len(item) == regionCount - records.append(list(item)) - self.ItemCount = len(self.Item) - self.calculateNumShorts(optimize=optimize) - return self - - -def buildVarStore(varRegionList, varDataList): - self = ot.VarStore() - self.Format = 1 - self.VarRegionList = varRegionList - self.VarData = list(varDataList) - self.VarDataCount = len(self.VarData) - return self - - -# Variation helpers - - -def buildVarIdxMap(varIdxes, glyphOrder): - self = ot.VarIdxMap() - self.mapping = {g: v for g, v in zip(glyphOrder, varIdxes)} - return self - - -def buildDeltaSetIndexMap(varIdxes): - mapping = list(varIdxes) - if all(i == v for i, v in enumerate(mapping)): - return None - self = ot.DeltaSetIndexMap() - self.mapping = mapping - self.Format = 1 if len(mapping) > 0xFFFF else 0 - return self - - -def buildVarDevTable(varIdx): - self = ot.Device() - self.DeltaFormat = 0x8000 - self.StartSize = varIdx >> 16 - self.EndSize = varIdx & 0xFFFF - return self diff --git a/spaces/cihyFjudo/fairness-paper-search/AutoData 3.40 crack and full version download - Pastebin.com[3].md b/spaces/cihyFjudo/fairness-paper-search/AutoData 3.40 crack and full version download - Pastebin.com[3].md deleted file mode 100644 index 42663822e47657ce7d4ad49cae3a730c06f5ebd1..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/AutoData 3.40 crack and full version download - Pastebin.com[3].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Autodata 3.40 ita download gratis 1184


    DOWNLOAD >>> https://tinurli.com/2uwkMd



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Explore Hindi Superhit Movies Online Free A Treasure Trove of Quality Entertainment.md b/spaces/cihyFjudo/fairness-paper-search/Explore Hindi Superhit Movies Online Free A Treasure Trove of Quality Entertainment.md deleted file mode 100644 index 17545ab126ca251e061d64d83c429d2a4991628c..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Explore Hindi Superhit Movies Online Free A Treasure Trove of Quality Entertainment.md +++ /dev/null @@ -1,32 +0,0 @@ -
    -

    Also known as Hindi cinema, Bollywood is one of the biggest film industries on earth. Combining genres such as action, comedy, romance, drama, and melodrama along with musicals, Bollywood movies have won loyal fans with their unique charm. Want to watch Bollywood movies online free? Just keep reading to find the best sites that let you watch Bollywood movies online without cost.

    -

    HindiMoviesTV is one of the best free movie websites to watch Bollywood movies online. The site has a clean interface, and the home page is loaded with a wide selection of Hindi movies. Want to watch new Bollywood movies online? Just go to the "Latest" menu. You can also locate titles easily through the "Genre" menu. Most content can be streamed in HD quality and the streaming speed is super fast.

    -

    hindi superhit movies online free


    Download >>>>> https://tinurli.com/2uwi5R



    -

    With a sleek design, Moovana is a great choice to watch Bollywood movies online free. Do you want to watch the best new movies on Hotstar without a subscription? Moovana has got you covered. You can find not only the latest Hotstar movies but also trending TV shows here in HD quality. To watch new Bollywood movies online with Moovana, just check out the "Upcoming movies" section. One outstanding feature is that the streaming speed is way faster than most competitors, which guarantees a pleasant viewing experience.

    -

    Yomovies is one of the best sites to watch Bollywood movies online free. You can find the must-watch Bollywood movies and the latest Hindi movies in superb video quality. Yomovies provides a good variety of movies such as Hollywood movies, Hollywood Hindi-dubbed movies, South Indian Hindi-dubbed movies, Punjabi movies, Telugu movies, Tamil movies, and 18+ movies. Each title comes with 2 servers, so you can watch online Bollywood movies with guaranteed streaming links.

    -

    Judging by its name, HindiLinks4u is surely one of the best sites to watch Bollywood movies online free. The site provides constant updates with the latest Hindi movies and Hollywood movies in Hindi dubbed. What's more, each title comes with 3 or 4 streaming servers, so you can just get the one that works the best. In addition, HindiLinks4u is one of the Bollywood movies download sites and you can download all the content for free.

    -

    To find the best sites to watch Bollywood movies online free, you can also check out general movie streaming sites, for instance, Bmovies. The site not only provides tons of Bollywood movies to watch but also a large number of movies from different countries for free. It's one of the best free online movie streaming sites without registration. One advantage is that there are at least 3 servers for every title to make sure you can stream a movie without a problem.

    -

    Zee5 offers a wide selection of titles with great streaming quality, making it the best site to watch Bollywood movies online free. You can also watch online Bollywood movies for free as the site provides tons of free titles to regular users. In addition, Zee5 has 90+ live TV channels available for news, movies, and entertainment in Indian languages like Telugu, Kannada, Tamil, Marathi, Bangla, and more.

    -

    There is no doubt that Gofilms4U is the best site to watch Bollywood movies online free if you just take a look at its homepage. The site offers hundreds of thousands of must-watch Bollywood movies, Hollywood, and trend TV series in HD quality. What's more, Gofilms4U is one of the best Bollywood movies download sites that allow you to download movies without cost. With a fast loading speed, Gofilms4U is an excellent choice to watch Bollywood movies online free.

    -

    With 22.5 million pieces of media content, Hungama is not only India's most popular digital entertainment company but also one of the best sites to watch Bollywood movies online. You can watch not only Bollywood movies but also titles in regional languages like Punjabi, Telugu, Tamil, Kannada, Malayalam, and more, as well as English. Hungama has apps available on mobiles, tablets, PCs, and smart TVs, so you can watch Bollywood movies online free anywhere and anytime.

    -

    -

    Surprisingly, YouTube is one of the best sites to watch Bollywood movies online free if you like classic titles. You can't watch new Bollywood movies online with YouTube due to copyright concerns, but there are two official channels that provide Bollywood movies legally. Just look for the "Shemaroo Movies" and "Rajshri" channels and you can find a large number of classic titles and watch Bollywood movies online free. What's more, the content on these channels can be streamed in up to HD 1080p video quality, which is not bad.

    -

    The general movie site, Fmovies is another place to watch Bollywood movies online free. Just head to the "Country" menu and look for India. There is no need to sign up for anything and the streaming speed is faster than most free streaming sites. You can not only find the must-watch Bollywood movies but also watch new Bollywood movies online here. With superb video quality, Fmovies is a solid choice not only for Bollywood movies but movies of all sorts.

    -

    The previous part has introduced the 10 best sites that let you watch Bollywood movies online free. However, most of the free streaming sites introduced above include ads to support themselves, and these ads can be super annoying when you watch Bollywood movies online free. Add in the buffering that comes with 4K/HD streaming, and your viewing experience is compromised as a whole.

    -

    Hence you might want to watch Bollywood movies offline instead. Here you can try CleverGet Video Downloader, the best tool to save your favorite Bollywood movies for offline access. With CleverGet Video Downloader, there is no need to watch Bollywood movies online with streaming issues, because you can save movies and watch them offline with the best video quality possible.

    -

    CleverGet Video Downloader allows you to save Bollywood movies in MP4/MKV from the Bollywood movies download sites above with resolutions ranging from 480p, 720p, 1080p, 4K, and up to 8K UHD with 320 Kbps audio quality. You can download up to 5 Bollywood movies at the same time. All the metadata like titles and formats would be saved as well. Aside from the sites that let you watch Bollywood movies online free, CleverGet Video Downloader supports a wide range of websites such as YouTube, Instagram, Vimeo, and more, making it a perfect choice to download online videos of all sorts.

    -

    Now you know where to watch Bollywood movies online free and what Bollywood movies to watch. Just check them out! Meanwhile, don't forget to download your favorite movies with CleverGet Video Downloader, so you can watch them with the best viewing experience offline!

    -

    Before moving ahead and taking a look at these services in detail, do take a look at our other lists where you can watch some more awesome movies, free TV series, and songs for free to get your daily dose of entertainment.

    -

    Zee5 offers a collection of both old and new superhit movies like Tanu Weds Manu, Omkaara, Golmaal, etc. Apart from Hindi, the site offers movies in other regional languages too. You can also watch TV shows, news, and other short videos on the website. The site is pretty neat and offers good streaming speed even on slow internet connections.

    -

    While there are many channels on YouTube that let you watch Hindi movies online for free, a majority of them are doing it illegally. There are only a couple of YouTube channels that stream their copyrighted Bollywood movies for free and legally:

    -

    Rajshri Production Films is a well-known name in Bollywood that brought superhit movies like Hum Saath Saath Hain. The channel mostly offers the older Hindi films made by the studio, but they are still worth watching. It also features clippings of the best Bollywood movie scenes and music videos online.

    -

    Shemaroo Movies is another YouTube channel to watch Hindi movies online for free and legally in 2022. Just like Rajshri Productions, this channel lets you stream their copyrighted movies free of cost. You can watch full-length Bollywood movies online like Amar Akbar Anthony and Bhagam Bhaag, to name a few.

    -

    This site to watch Bollywood movies online is well designed, but I found it hard to find all the Hindi movies listed in one place. However, the movie collection is apparently better compared to Xstream. You get to watch films like Andhadhun, Drishyam, Stree, Pyar Ka Panchnama, Singham, Luka Chhuppi, etc., for free.

    -

    Spuul is another good site to watch Hindi movies online free in 2022. The website has a clean interface with a dark mode, which is visually appealing. Many of the movies fall under the premium segment, for which you can choose to pay Rs. 99/mo to watch new Bollywood movies online.

    -

    You can watch the first 10 minutes of a film for free at Hungama Movies, after which it offers you an option to subscribe and play the entire movie. It also offers a 30-day trial period to users, but currently, this option is available to app users only. During this period, you can watch Hindi movies online free or download them for offline viewing.

    -

    The collection of Bollywood movies online on Hungama is quite rich, ranging from classics to recently released movies. Besides movies, the site also hosts 3.5+ million songs that can be streamed at HD quality. For an ad-free unlimited experience, you can opt for their paid plans.

    -

    YuppTV has a really good collection of online Hindi movies as well as English, Telugu, Tamil, and Kannada films. YuppTV also provides live TV channel services in India and abroad. There is also a mobile app for Android and iOS where you can watch Indian movies for free in 2022.

    -

    It offers a 30-day free trial option where you can watch unlimited Bollywood movies for free. After that, you can subscribe to Amazon Prime Video at Rs. 179/mo, Rs. 459/qtr, or Rs. 1,499/yr.

    -

    Hulu is one of the biggest streaming services in the United States. It has a plethora of movies and TV shows spanning various languages. Now, you can also watch Hindi movies online on it, thanks to a new feature introduced by the platform.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Mano Solo-La Marmaille Nue Full Album Zip VERIFIED.md b/spaces/cihyFjudo/fairness-paper-search/Mano Solo-La Marmaille Nue Full Album Zip VERIFIED.md deleted file mode 100644 index 127db860fe9f0310256a0b2aec4992d43c382536..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Mano Solo-La Marmaille Nue Full Album Zip VERIFIED.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Mano Solo-La Marmaille Nue full album zip


    Download Filehttps://tinurli.com/2uwjKI



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Structural Geology by Haakon Fossen pdf Download Free PDF of the 2nd Edition with E-learning Modules and Exercises.md b/spaces/cihyFjudo/fairness-paper-search/Structural Geology by Haakon Fossen pdf Download Free PDF of the 2nd Edition with E-learning Modules and Exercises.md deleted file mode 100644 index c629d9dbf3aaaf0862fcfc6da794d0152d8fb337..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Structural Geology by Haakon Fossen pdf Download Free PDF of the 2nd Edition with E-learning Modules and Exercises.md +++ /dev/null @@ -1,6 +0,0 @@ -

    StructuralGeologyByHaakonFossenPdf


    Download File ✦✦✦ https://tinurli.com/2uwjxO



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/BmpImagePlugin.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/BmpImagePlugin.py deleted file mode 100644 index 5bda0a5b05d8b6a6a0ccaa91da3475e34c9b1cf3..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/BmpImagePlugin.py +++ /dev/null @@ -1,471 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# BMP file handler -# -# Windows (and OS/2) native bitmap storage format. -# -# history: -# 1995-09-01 fl Created -# 1996-04-30 fl Added save -# 1997-08-27 fl Fixed save of 1-bit images -# 1998-03-06 fl Load P images as L where possible -# 1998-07-03 fl Load P images as 1 where possible -# 1998-12-29 fl Handle small palettes -# 2002-12-30 fl Fixed load of 1-bit palette images -# 2003-04-21 fl Fixed load of 1-bit monochrome images -# 2003-04-23 fl Added limited support for BI_BITFIELDS compression -# -# Copyright (c) 1997-2003 by Secret Labs AB -# Copyright (c) 1995-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - - -import os - -from . import Image, ImageFile, ImagePalette -from ._binary import i16le as i16 -from ._binary import i32le as i32 -from ._binary import o8 -from ._binary import o16le as o16 -from ._binary import o32le as o32 - -# -# -------------------------------------------------------------------- -# Read BMP file - -BIT2MODE = { - # bits => mode, rawmode - 1: ("P", "P;1"), - 4: ("P", "P;4"), - 8: ("P", "P"), - 16: ("RGB", "BGR;15"), - 24: ("RGB", "BGR"), - 32: ("RGB", "BGRX"), -} - - -def _accept(prefix): - return prefix[:2] == b"BM" - - -def _dib_accept(prefix): - return i32(prefix) in [12, 40, 64, 108, 124] - - -# ============================================================================= -# Image plugin for the Windows BMP format. 
-# ============================================================================= -class BmpImageFile(ImageFile.ImageFile): - """Image plugin for the Windows Bitmap format (BMP)""" - - # ------------------------------------------------------------- Description - format_description = "Windows Bitmap" - format = "BMP" - - # -------------------------------------------------- BMP Compression values - COMPRESSIONS = {"RAW": 0, "RLE8": 1, "RLE4": 2, "BITFIELDS": 3, "JPEG": 4, "PNG": 5} - for k, v in COMPRESSIONS.items(): - vars()[k] = v - - def _bitmap(self, header=0, offset=0): - """Read relevant info about the BMP""" - read, seek = self.fp.read, self.fp.seek - if header: - seek(header) - # read bmp header size @offset 14 (this is part of the header size) - file_info = {"header_size": i32(read(4)), "direction": -1} - - # -------------------- If requested, read header at a specific position - # read the rest of the bmp header, without its size - header_data = ImageFile._safe_read(self.fp, file_info["header_size"] - 4) - - # -------------------------------------------------- IBM OS/2 Bitmap v1 - # ----- This format has different offsets because of width/height types - if file_info["header_size"] == 12: - file_info["width"] = i16(header_data, 0) - file_info["height"] = i16(header_data, 2) - file_info["planes"] = i16(header_data, 4) - file_info["bits"] = i16(header_data, 6) - file_info["compression"] = self.RAW - file_info["palette_padding"] = 3 - - # --------------------------------------------- Windows Bitmap v2 to v5 - # v3, OS/2 v2, v4, v5 - elif file_info["header_size"] in (40, 64, 108, 124): - file_info["y_flip"] = header_data[7] == 0xFF - file_info["direction"] = 1 if file_info["y_flip"] else -1 - file_info["width"] = i32(header_data, 0) - file_info["height"] = ( - i32(header_data, 4) - if not file_info["y_flip"] - else 2**32 - i32(header_data, 4) - ) - file_info["planes"] = i16(header_data, 8) - file_info["bits"] = i16(header_data, 10) - file_info["compression"] = i32(header_data, 12) - # byte size of pixel data - file_info["data_size"] = i32(header_data, 16) - file_info["pixels_per_meter"] = ( - i32(header_data, 20), - i32(header_data, 24), - ) - file_info["colors"] = i32(header_data, 28) - file_info["palette_padding"] = 4 - self.info["dpi"] = tuple(x / 39.3701 for x in file_info["pixels_per_meter"]) - if file_info["compression"] == self.BITFIELDS: - if len(header_data) >= 52: - for idx, mask in enumerate( - ["r_mask", "g_mask", "b_mask", "a_mask"] - ): - file_info[mask] = i32(header_data, 36 + idx * 4) - else: - # 40 byte headers only have the three components in the - # bitfields masks, ref: - # https://msdn.microsoft.com/en-us/library/windows/desktop/dd183376(v=vs.85).aspx - # See also - # https://github.com/python-pillow/Pillow/issues/1293 - # There is a 4th component in the RGBQuad, in the alpha - # location, but it is listed as a reserved component, - # and it is not generally an alpha channel - file_info["a_mask"] = 0x0 - for mask in ["r_mask", "g_mask", "b_mask"]: - file_info[mask] = i32(read(4)) - file_info["rgb_mask"] = ( - file_info["r_mask"], - file_info["g_mask"], - file_info["b_mask"], - ) - file_info["rgba_mask"] = ( - file_info["r_mask"], - file_info["g_mask"], - file_info["b_mask"], - file_info["a_mask"], - ) - else: - msg = f"Unsupported BMP header type ({file_info['header_size']})" - raise OSError(msg) - - # ------------------ Special case : header is reported 40, which - # ---------------------- is shorter than real size for bpp >= 16 - self._size = file_info["width"], 
file_info["height"] - - # ------- If color count was not found in the header, compute from bits - file_info["colors"] = ( - file_info["colors"] - if file_info.get("colors", 0) - else (1 << file_info["bits"]) - ) - if offset == 14 + file_info["header_size"] and file_info["bits"] <= 8: - offset += 4 * file_info["colors"] - - # ---------------------- Check bit depth for unusual unsupported values - self.mode, raw_mode = BIT2MODE.get(file_info["bits"], (None, None)) - if self.mode is None: - msg = f"Unsupported BMP pixel depth ({file_info['bits']})" - raise OSError(msg) - - # ---------------- Process BMP with Bitfields compression (not palette) - decoder_name = "raw" - if file_info["compression"] == self.BITFIELDS: - SUPPORTED = { - 32: [ - (0xFF0000, 0xFF00, 0xFF, 0x0), - (0xFF000000, 0xFF0000, 0xFF00, 0x0), - (0xFF000000, 0xFF0000, 0xFF00, 0xFF), - (0xFF, 0xFF00, 0xFF0000, 0xFF000000), - (0xFF0000, 0xFF00, 0xFF, 0xFF000000), - (0x0, 0x0, 0x0, 0x0), - ], - 24: [(0xFF0000, 0xFF00, 0xFF)], - 16: [(0xF800, 0x7E0, 0x1F), (0x7C00, 0x3E0, 0x1F)], - } - MASK_MODES = { - (32, (0xFF0000, 0xFF00, 0xFF, 0x0)): "BGRX", - (32, (0xFF000000, 0xFF0000, 0xFF00, 0x0)): "XBGR", - (32, (0xFF000000, 0xFF0000, 0xFF00, 0xFF)): "ABGR", - (32, (0xFF, 0xFF00, 0xFF0000, 0xFF000000)): "RGBA", - (32, (0xFF0000, 0xFF00, 0xFF, 0xFF000000)): "BGRA", - (32, (0x0, 0x0, 0x0, 0x0)): "BGRA", - (24, (0xFF0000, 0xFF00, 0xFF)): "BGR", - (16, (0xF800, 0x7E0, 0x1F)): "BGR;16", - (16, (0x7C00, 0x3E0, 0x1F)): "BGR;15", - } - if file_info["bits"] in SUPPORTED: - if ( - file_info["bits"] == 32 - and file_info["rgba_mask"] in SUPPORTED[file_info["bits"]] - ): - raw_mode = MASK_MODES[(file_info["bits"], file_info["rgba_mask"])] - self.mode = "RGBA" if "A" in raw_mode else self.mode - elif ( - file_info["bits"] in (24, 16) - and file_info["rgb_mask"] in SUPPORTED[file_info["bits"]] - ): - raw_mode = MASK_MODES[(file_info["bits"], file_info["rgb_mask"])] - else: - msg = "Unsupported BMP bitfields layout" - raise OSError(msg) - else: - msg = "Unsupported BMP bitfields layout" - raise OSError(msg) - elif file_info["compression"] == self.RAW: - if file_info["bits"] == 32 and header == 22: # 32-bit .cur offset - raw_mode, self.mode = "BGRA", "RGBA" - elif file_info["compression"] in (self.RLE8, self.RLE4): - decoder_name = "bmp_rle" - else: - msg = f"Unsupported BMP compression ({file_info['compression']})" - raise OSError(msg) - - # --------------- Once the header is processed, process the palette/LUT - if self.mode == "P": # Paletted for 1, 4 and 8 bit images - # ---------------------------------------------------- 1-bit images - if not (0 < file_info["colors"] <= 65536): - msg = f"Unsupported BMP Palette size ({file_info['colors']})" - raise OSError(msg) - else: - padding = file_info["palette_padding"] - palette = read(padding * file_info["colors"]) - greyscale = True - indices = ( - (0, 255) - if file_info["colors"] == 2 - else list(range(file_info["colors"])) - ) - - # ----------------- Check if greyscale and ignore palette if so - for ind, val in enumerate(indices): - rgb = palette[ind * padding : ind * padding + 3] - if rgb != o8(val) * 3: - greyscale = False - - # ------- If all colors are grey, white or black, ditch palette - if greyscale: - self.mode = "1" if file_info["colors"] == 2 else "L" - raw_mode = self.mode - else: - self.mode = "P" - self.palette = ImagePalette.raw( - "BGRX" if padding == 4 else "BGR", palette - ) - - # ---------------------------- Finally set the tile data for the plugin - self.info["compression"] = 
file_info["compression"] - args = [raw_mode] - if decoder_name == "bmp_rle": - args.append(file_info["compression"] == self.RLE4) - else: - args.append(((file_info["width"] * file_info["bits"] + 31) >> 3) & (~3)) - args.append(file_info["direction"]) - self.tile = [ - ( - decoder_name, - (0, 0, file_info["width"], file_info["height"]), - offset or self.fp.tell(), - tuple(args), - ) - ] - - def _open(self): - """Open file, check magic number and read header""" - # read 14 bytes: magic number, filesize, reserved, header final offset - head_data = self.fp.read(14) - # choke if the file does not have the required magic bytes - if not _accept(head_data): - msg = "Not a BMP file" - raise SyntaxError(msg) - # read the start position of the BMP image data (u32) - offset = i32(head_data, 10) - # load bitmap information (offset=raster info) - self._bitmap(offset=offset) - - -class BmpRleDecoder(ImageFile.PyDecoder): - _pulls_fd = True - - def decode(self, buffer): - rle4 = self.args[1] - data = bytearray() - x = 0 - while len(data) < self.state.xsize * self.state.ysize: - pixels = self.fd.read(1) - byte = self.fd.read(1) - if not pixels or not byte: - break - num_pixels = pixels[0] - if num_pixels: - # encoded mode - if x + num_pixels > self.state.xsize: - # Too much data for row - num_pixels = max(0, self.state.xsize - x) - if rle4: - first_pixel = o8(byte[0] >> 4) - second_pixel = o8(byte[0] & 0x0F) - for index in range(num_pixels): - if index % 2 == 0: - data += first_pixel - else: - data += second_pixel - else: - data += byte * num_pixels - x += num_pixels - else: - if byte[0] == 0: - # end of line - while len(data) % self.state.xsize != 0: - data += b"\x00" - x = 0 - elif byte[0] == 1: - # end of bitmap - break - elif byte[0] == 2: - # delta - bytes_read = self.fd.read(2) - if len(bytes_read) < 2: - break - right, up = self.fd.read(2) - data += b"\x00" * (right + up * self.state.xsize) - x = len(data) % self.state.xsize - else: - # absolute mode - if rle4: - # 2 pixels per byte - byte_count = byte[0] // 2 - bytes_read = self.fd.read(byte_count) - for byte_read in bytes_read: - data += o8(byte_read >> 4) - data += o8(byte_read & 0x0F) - else: - byte_count = byte[0] - bytes_read = self.fd.read(byte_count) - data += bytes_read - if len(bytes_read) < byte_count: - break - x += byte[0] - - # align to 16-bit word boundary - if self.fd.tell() % 2 != 0: - self.fd.seek(1, os.SEEK_CUR) - rawmode = "L" if self.mode == "L" else "P" - self.set_as_raw(bytes(data), (rawmode, 0, self.args[-1])) - return -1, 0 - - -# ============================================================================= -# Image plugin for the DIB format (BMP alias) -# ============================================================================= -class DibImageFile(BmpImageFile): - format = "DIB" - format_description = "Windows Bitmap" - - def _open(self): - self._bitmap() - - -# -# -------------------------------------------------------------------- -# Write BMP file - - -SAVE = { - "1": ("1", 1, 2), - "L": ("L", 8, 256), - "P": ("P", 8, 256), - "RGB": ("BGR", 24, 0), - "RGBA": ("BGRA", 32, 0), -} - - -def _dib_save(im, fp, filename): - _save(im, fp, filename, False) - - -def _save(im, fp, filename, bitmap_header=True): - try: - rawmode, bits, colors = SAVE[im.mode] - except KeyError as e: - msg = f"cannot write mode {im.mode} as BMP" - raise OSError(msg) from e - - info = im.encoderinfo - - dpi = info.get("dpi", (96, 96)) - - # 1 meter == 39.3701 inches - ppm = tuple(map(lambda x: int(x * 39.3701 + 0.5), dpi)) - - stride = ((im.size[0] 
* bits + 7) // 8 + 3) & (~3) - header = 40 # or 64 for OS/2 version 2 - image = stride * im.size[1] - - if im.mode == "1": - palette = b"".join(o8(i) * 4 for i in (0, 255)) - elif im.mode == "L": - palette = b"".join(o8(i) * 4 for i in range(256)) - elif im.mode == "P": - palette = im.im.getpalette("RGB", "BGRX") - colors = len(palette) // 4 - else: - palette = None - - # bitmap header - if bitmap_header: - offset = 14 + header + colors * 4 - file_size = offset + image - if file_size > 2**32 - 1: - msg = "File size is too large for the BMP format" - raise ValueError(msg) - fp.write( - b"BM" # file type (magic) - + o32(file_size) # file size - + o32(0) # reserved - + o32(offset) # image data offset - ) - - # bitmap info header - fp.write( - o32(header) # info header size - + o32(im.size[0]) # width - + o32(im.size[1]) # height - + o16(1) # planes - + o16(bits) # depth - + o32(0) # compression (0=uncompressed) - + o32(image) # size of bitmap - + o32(ppm[0]) # resolution - + o32(ppm[1]) # resolution - + o32(colors) # colors used - + o32(colors) # colors important - ) - - fp.write(b"\0" * (header - 40)) # padding (for OS/2 format) - - if palette: - fp.write(palette) - - ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, stride, -1))]) - - -# -# -------------------------------------------------------------------- -# Registry - - -Image.register_open(BmpImageFile.format, BmpImageFile, _accept) -Image.register_save(BmpImageFile.format, _save) - -Image.register_extension(BmpImageFile.format, ".bmp") - -Image.register_mime(BmpImageFile.format, "image/bmp") - -Image.register_decoder("bmp_rle", BmpRleDecoder) - -Image.register_open(DibImageFile.format, DibImageFile, _dib_accept) -Image.register_save(DibImageFile.format, _dib_save) - -Image.register_extension(DibImageFile.format, ".dib") - -Image.register_mime(DibImageFile.format, "image/bmp") diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/verifier.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/verifier.py deleted file mode 100644 index a500c7814adf8ce52e911e0679d0b98335ae6597..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/verifier.py +++ /dev/null @@ -1,307 +0,0 @@ -# -# DEPRECATED: implementation for ffi.verify() -# -import sys, os, binascii, shutil, io -from . import __version_verifier_modules__ -from . 
import ffiplatform -from .error import VerificationError - -if sys.version_info >= (3, 3): - import importlib.machinery - def _extension_suffixes(): - return importlib.machinery.EXTENSION_SUFFIXES[:] -else: - import imp - def _extension_suffixes(): - return [suffix for suffix, _, type in imp.get_suffixes() - if type == imp.C_EXTENSION] - - -if sys.version_info >= (3,): - NativeIO = io.StringIO -else: - class NativeIO(io.BytesIO): - def write(self, s): - if isinstance(s, unicode): - s = s.encode('ascii') - super(NativeIO, self).write(s) - - -class Verifier(object): - - def __init__(self, ffi, preamble, tmpdir=None, modulename=None, - ext_package=None, tag='', force_generic_engine=False, - source_extension='.c', flags=None, relative_to=None, **kwds): - if ffi._parser._uses_new_feature: - raise VerificationError( - "feature not supported with ffi.verify(), but only " - "with ffi.set_source(): %s" % (ffi._parser._uses_new_feature,)) - self.ffi = ffi - self.preamble = preamble - if not modulename: - flattened_kwds = ffiplatform.flatten(kwds) - vengine_class = _locate_engine_class(ffi, force_generic_engine) - self._vengine = vengine_class(self) - self._vengine.patch_extension_kwds(kwds) - self.flags = flags - self.kwds = self.make_relative_to(kwds, relative_to) - # - if modulename: - if tag: - raise TypeError("can't specify both 'modulename' and 'tag'") - else: - key = '\x00'.join(['%d.%d' % sys.version_info[:2], - __version_verifier_modules__, - preamble, flattened_kwds] + - ffi._cdefsources) - if sys.version_info >= (3,): - key = key.encode('utf-8') - k1 = hex(binascii.crc32(key[0::2]) & 0xffffffff) - k1 = k1.lstrip('0x').rstrip('L') - k2 = hex(binascii.crc32(key[1::2]) & 0xffffffff) - k2 = k2.lstrip('0').rstrip('L') - modulename = '_cffi_%s_%s%s%s' % (tag, self._vengine._class_key, - k1, k2) - suffix = _get_so_suffixes()[0] - self.tmpdir = tmpdir or _caller_dir_pycache() - self.sourcefilename = os.path.join(self.tmpdir, modulename + source_extension) - self.modulefilename = os.path.join(self.tmpdir, modulename + suffix) - self.ext_package = ext_package - self._has_source = False - self._has_module = False - - def write_source(self, file=None): - """Write the C source code. It is produced in 'self.sourcefilename', - which can be tweaked beforehand.""" - with self.ffi._lock: - if self._has_source and file is None: - raise VerificationError( - "source code already written") - self._write_source(file) - - def compile_module(self): - """Write the C source code (if not done already) and compile it. - This produces a dynamic link library in 'self.modulefilename'.""" - with self.ffi._lock: - if self._has_module: - raise VerificationError("module already compiled") - if not self._has_source: - self._write_source() - self._compile_module() - - def load_library(self): - """Get a C module from this Verifier instance. - Returns an instance of a FFILibrary class that behaves like the - objects returned by ffi.dlopen(), but that delegates all - operations to the C module. If necessary, the C code is written - and compiled first. 
- """ - with self.ffi._lock: - if not self._has_module: - self._locate_module() - if not self._has_module: - if not self._has_source: - self._write_source() - self._compile_module() - return self._load_library() - - def get_module_name(self): - basename = os.path.basename(self.modulefilename) - # kill both the .so extension and the other .'s, as introduced - # by Python 3: 'basename.cpython-33m.so' - basename = basename.split('.', 1)[0] - # and the _d added in Python 2 debug builds --- but try to be - # conservative and not kill a legitimate _d - if basename.endswith('_d') and hasattr(sys, 'gettotalrefcount'): - basename = basename[:-2] - return basename - - def get_extension(self): - ffiplatform._hack_at_distutils() # backward compatibility hack - if not self._has_source: - with self.ffi._lock: - if not self._has_source: - self._write_source() - sourcename = ffiplatform.maybe_relative_path(self.sourcefilename) - modname = self.get_module_name() - return ffiplatform.get_extension(sourcename, modname, **self.kwds) - - def generates_python_module(self): - return self._vengine._gen_python_module - - def make_relative_to(self, kwds, relative_to): - if relative_to and os.path.dirname(relative_to): - dirname = os.path.dirname(relative_to) - kwds = kwds.copy() - for key in ffiplatform.LIST_OF_FILE_NAMES: - if key in kwds: - lst = kwds[key] - if not isinstance(lst, (list, tuple)): - raise TypeError("keyword '%s' should be a list or tuple" - % (key,)) - lst = [os.path.join(dirname, fn) for fn in lst] - kwds[key] = lst - return kwds - - # ---------- - - def _locate_module(self): - if not os.path.isfile(self.modulefilename): - if self.ext_package: - try: - pkg = __import__(self.ext_package, None, None, ['__doc__']) - except ImportError: - return # cannot import the package itself, give up - # (e.g. it might be called differently before installation) - path = pkg.__path__ - else: - path = None - filename = self._vengine.find_module(self.get_module_name(), path, - _get_so_suffixes()) - if filename is None: - return - self.modulefilename = filename - self._vengine.collect_types() - self._has_module = True - - def _write_source_to(self, file): - self._vengine._f = file - try: - self._vengine.write_source_to_f() - finally: - del self._vengine._f - - def _write_source(self, file=None): - if file is not None: - self._write_source_to(file) - else: - # Write our source file to an in memory file. 
- f = NativeIO() - self._write_source_to(f) - source_data = f.getvalue() - - # Determine if this matches the current file - if os.path.exists(self.sourcefilename): - with open(self.sourcefilename, "r") as fp: - needs_written = not (fp.read() == source_data) - else: - needs_written = True - - # Actually write the file out if it doesn't match - if needs_written: - _ensure_dir(self.sourcefilename) - with open(self.sourcefilename, "w") as fp: - fp.write(source_data) - - # Set this flag - self._has_source = True - - def _compile_module(self): - # compile this C source - tmpdir = os.path.dirname(self.sourcefilename) - outputfilename = ffiplatform.compile(tmpdir, self.get_extension()) - try: - same = ffiplatform.samefile(outputfilename, self.modulefilename) - except OSError: - same = False - if not same: - _ensure_dir(self.modulefilename) - shutil.move(outputfilename, self.modulefilename) - self._has_module = True - - def _load_library(self): - assert self._has_module - if self.flags is not None: - return self._vengine.load_library(self.flags) - else: - return self._vengine.load_library() - -# ____________________________________________________________ - -_FORCE_GENERIC_ENGINE = False # for tests - -def _locate_engine_class(ffi, force_generic_engine): - if _FORCE_GENERIC_ENGINE: - force_generic_engine = True - if not force_generic_engine: - if '__pypy__' in sys.builtin_module_names: - force_generic_engine = True - else: - try: - import _cffi_backend - except ImportError: - _cffi_backend = '?' - if ffi._backend is not _cffi_backend: - force_generic_engine = True - if force_generic_engine: - from . import vengine_gen - return vengine_gen.VGenericEngine - else: - from . import vengine_cpy - return vengine_cpy.VCPythonEngine - -# ____________________________________________________________ - -_TMPDIR = None - -def _caller_dir_pycache(): - if _TMPDIR: - return _TMPDIR - result = os.environ.get('CFFI_TMPDIR') - if result: - return result - filename = sys._getframe(2).f_code.co_filename - return os.path.abspath(os.path.join(os.path.dirname(filename), - '__pycache__')) - -def set_tmpdir(dirname): - """Set the temporary directory to use instead of __pycache__.""" - global _TMPDIR - _TMPDIR = dirname - -def cleanup_tmpdir(tmpdir=None, keep_so=False): - """Clean up the temporary directory by removing all files in it - called `_cffi_*.{c,so}` as well as the `build` subdirectory.""" - tmpdir = tmpdir or _caller_dir_pycache() - try: - filelist = os.listdir(tmpdir) - except OSError: - return - if keep_so: - suffix = '.c' # only remove .c files - else: - suffix = _get_so_suffixes()[0].lower() - for fn in filelist: - if fn.lower().startswith('_cffi_') and ( - fn.lower().endswith(suffix) or fn.lower().endswith('.c')): - try: - os.unlink(os.path.join(tmpdir, fn)) - except OSError: - pass - clean_dir = [os.path.join(tmpdir, 'build')] - for dir in clean_dir: - try: - for fn in os.listdir(dir): - fn = os.path.join(dir, fn) - if os.path.isdir(fn): - clean_dir.append(fn) - else: - os.unlink(fn) - except OSError: - pass - -def _get_so_suffixes(): - suffixes = _extension_suffixes() - if not suffixes: - # bah, no C_EXTENSION available. 
Occurs on pypy without cpyext - if sys.platform == 'win32': - suffixes = [".pyd"] - else: - suffixes = [".so"] - - return suffixes - -def _ensure_dir(filename): - dirname = os.path.dirname(filename) - if dirname and not os.path.isdir(dirname): - os.makedirs(dirname) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/dateutil/zoneinfo/rebuild.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/dateutil/zoneinfo/rebuild.py deleted file mode 100644 index 684c6586f091350c347f2b6150935f5214ffec27..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/dateutil/zoneinfo/rebuild.py +++ /dev/null @@ -1,75 +0,0 @@ -import logging -import os -import tempfile -import shutil -import json -from subprocess import check_call, check_output -from tarfile import TarFile - -from dateutil.zoneinfo import METADATA_FN, ZONEFILENAME - - -def rebuild(filename, tag=None, format="gz", zonegroups=[], metadata=None): - """Rebuild the internal timezone info in dateutil/zoneinfo/zoneinfo*tar* - - filename is the timezone tarball from ``ftp.iana.org/tz``. - - """ - tmpdir = tempfile.mkdtemp() - zonedir = os.path.join(tmpdir, "zoneinfo") - moduledir = os.path.dirname(__file__) - try: - with TarFile.open(filename) as tf: - for name in zonegroups: - tf.extract(name, tmpdir) - filepaths = [os.path.join(tmpdir, n) for n in zonegroups] - - _run_zic(zonedir, filepaths) - - # write metadata file - with open(os.path.join(zonedir, METADATA_FN), 'w') as f: - json.dump(metadata, f, indent=4, sort_keys=True) - target = os.path.join(moduledir, ZONEFILENAME) - with TarFile.open(target, "w:%s" % format) as tf: - for entry in os.listdir(zonedir): - entrypath = os.path.join(zonedir, entry) - tf.add(entrypath, entry) - finally: - shutil.rmtree(tmpdir) - - -def _run_zic(zonedir, filepaths): - """Calls the ``zic`` compiler in a compatible way to get a "fat" binary. - - Recent versions of ``zic`` default to ``-b slim``, while older versions - don't even have the ``-b`` option (but default to "fat" binaries). The - current version of dateutil does not support Version 2+ TZif files, which - causes problems when used in conjunction with "slim" binaries, so this - function is used to ensure that we always get a "fat" binary. - """ - - try: - help_text = check_output(["zic", "--help"]) - except OSError as e: - _print_on_nosuchfile(e) - raise - - if b"-b " in help_text: - bloat_args = ["-b", "fat"] - else: - bloat_args = [] - - check_call(["zic"] + bloat_args + ["-d", zonedir] + filepaths) - - -def _print_on_nosuchfile(e): - """Print helpful troubleshooting message - - e is an exception raised by subprocess.check_call() - - """ - if e.errno == 2: - logging.error( - "Could not find zic. Perhaps you need to install " - "libc-bin or some other package that provides it, " - "or it's not in your PATH?") diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/frwu.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/frwu.c deleted file mode 100644 index cf183f84107d6140de54e608a9677afb4d82e7af..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/frwu.c +++ /dev/null @@ -1,128 +0,0 @@ -/* - * Forward Uncompressed - * - * Copyright (c) 2009 Reimar Döffinger - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "avcodec.h" -#include "bytestream.h" -#include "codec_internal.h" -#include "decode.h" -#include "libavutil/opt.h" - -typedef struct { - AVClass *av_class; - int change_field_order; -} FRWUContext; - -static av_cold int decode_init(AVCodecContext *avctx) -{ - if (avctx->width & 1) { - av_log(avctx, AV_LOG_ERROR, "frwu needs even width\n"); - return AVERROR(EINVAL); - } - avctx->pix_fmt = AV_PIX_FMT_UYVY422; - - return 0; -} - -static int decode_frame(AVCodecContext *avctx, AVFrame *pic, - int *got_frame, AVPacket *avpkt) -{ - FRWUContext *s = avctx->priv_data; - int field, ret; - const uint8_t *buf = avpkt->data; - const uint8_t *buf_end = buf + avpkt->size; - - if (avpkt->size < avctx->width * 2 * avctx->height + 4 + 2*8) { - av_log(avctx, AV_LOG_ERROR, "Packet is too small.\n"); - return AVERROR_INVALIDDATA; - } - if (bytestream_get_le32(&buf) != MKTAG('F', 'R', 'W', '1')) { - av_log(avctx, AV_LOG_ERROR, "incorrect marker\n"); - return AVERROR_INVALIDDATA; - } - - if ((ret = ff_get_buffer(avctx, pic, 0)) < 0) - return ret; - - pic->pict_type = AV_PICTURE_TYPE_I; - pic->key_frame = 1; - - for (field = 0; field < 2; field++) { - int i; - int field_h = (avctx->height + !field) >> 1; - int field_size, min_field_size = avctx->width * 2 * field_h; - uint8_t *dst = pic->data[0]; - if (buf_end - buf < 8) - return AVERROR_INVALIDDATA; - buf += 4; // flags? 0x80 == bottom field maybe? 
- field_size = bytestream_get_le32(&buf); - if (field_size < min_field_size) { - av_log(avctx, AV_LOG_ERROR, "Field size %i is too small (required %i)\n", field_size, min_field_size); - return AVERROR_INVALIDDATA; - } - if (buf_end - buf < field_size) { - av_log(avctx, AV_LOG_ERROR, "Packet is too small, need %i, have %i\n", field_size, (int)(buf_end - buf)); - return AVERROR_INVALIDDATA; - } - if (field ^ s->change_field_order) { - dst += pic->linesize[0]; - } else if (s->change_field_order) { - dst += 2 * pic->linesize[0]; - } - for (i = 0; i < field_h; i++) { - if (s->change_field_order && field && i == field_h - 1) - dst = pic->data[0]; - memcpy(dst, buf, avctx->width * 2); - buf += avctx->width * 2; - dst += pic->linesize[0] << 1; - } - buf += field_size - min_field_size; - } - - *got_frame = 1; - - return avpkt->size; -} - -static const AVOption frwu_options[] = { - {"change_field_order", "Change field order", offsetof(FRWUContext, change_field_order), AV_OPT_TYPE_BOOL, - {.i64 = 0}, 0, 1, AV_OPT_FLAG_DECODING_PARAM | AV_OPT_FLAG_VIDEO_PARAM}, - {NULL} -}; - -static const AVClass frwu_class = { - .class_name = "frwu Decoder", - .item_name = av_default_item_name, - .option = frwu_options, - .version = LIBAVUTIL_VERSION_INT, -}; - -const FFCodec ff_frwu_decoder = { - .p.name = "frwu", - CODEC_LONG_NAME("Forward Uncompressed"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_FRWU, - .priv_data_size = sizeof(FRWUContext), - .init = decode_init, - FF_CODEC_DECODE_CB(decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, - .p.priv_class = &frwu_class, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/huffyuvencdsp.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/huffyuvencdsp.c deleted file mode 100644 index 36e8f6130b9dfe1f1288e3fee35b5e8f89521ec7..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/huffyuvencdsp.c +++ /dev/null @@ -1,79 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "config.h" -#include "libavutil/attributes.h" -#include "huffyuvencdsp.h" -#include "mathops.h" - -// 0x00010001 or 0x0001000100010001 or whatever, depending on the cpu's native arithmetic size -#define pw_1 (ULONG_MAX / UINT16_MAX) - -static void diff_int16_c(uint16_t *dst, const uint16_t *src1, const uint16_t *src2, unsigned mask, int w){ - long i; -#if !HAVE_FAST_UNALIGNED - if((long)src2 & (sizeof(long)-1)){ - for(i=0; i+3> 1) * pw_1; - unsigned long pw_msb = pw_lsb + pw_1; - - for (i = 0; i <= w - (int)sizeof(long)/2; i += sizeof(long)/2) { - long a = *(long*)(src1+i); - long b = *(long*)(src2+i); - *(long*)(dst+i) = ((a|pw_msb) - (b&pw_lsb)) ^ ((a^b^pw_msb)&pw_msb); - } - } - for (; idiff_int16 = diff_int16_c; - c->sub_hfyu_median_pred_int16 = sub_hfyu_median_pred_int16_c; - -#if ARCH_X86 - ff_huffyuvencdsp_init_x86(c, pix_fmt); -#endif -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Become the Supreme Leader with Dummynation Mod APK - Unlimited Money and Military Options.md b/spaces/congsaPfin/Manga-OCR/logs/Become the Supreme Leader with Dummynation Mod APK - Unlimited Money and Military Options.md deleted file mode 100644 index ce425f56916088ba210cbcd05ad5cccf359e3743..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Become the Supreme Leader with Dummynation Mod APK - Unlimited Money and Military Options.md +++ /dev/null @@ -1,229 +0,0 @@ -
    - - -Dummynation APK Mod Unlimited Money: How to Download and Play - - -

    Dummynation APK Mod Unlimited Money: How to Download and Play

    -

    Do you love playing simulation games where you can control a country and its destiny? Do you want to have unlimited power and resources to achieve world domination? If yes, then you might want to try Dummynation, a popular game that lets you do all that and more. But wait, there's more! You can also download Dummynation APK Mod Unlimited Money, a modified version of the game that gives you access to unlimited money and other features that can make your gameplay more fun and exciting. In this article, we will tell you everything you need to know about Dummynation APK Mod Unlimited Money, including what it is, how to download and install it, how to play it, and how it compares to the original game.

    -

    dummynation apk mod unlimited money


    Downloadhttps://urlca.com/2uOesJ



    -

    What is Dummynation?

    -

    A brief introduction to the game

    -

    Dummynation is a simulation game developed by AHF Games that was released in 2020. The game is available for Android devices on Google Play Store. The game has received positive reviews from players who praised its graphics, gameplay, humor, and variety.

    -

    The gameplay and features of Dummynation

    -

    In Dummynation, you are given unlimited power over a country, with a single promise to fulfill: world domination. How you manage to achieve it is up to you. You can choose from different scenarios and countries, each with its own challenges and opportunities. You can also customize your country's name, flag, anthem, currency, leader, laws, policies, allies, enemies, etc.

    -

    To achieve world domination, you will have to manage various aspects of your country, such as economy, military, diplomacy, culture, science, religion, environment, etc. You will also have to face random events that can affect your country positively or negatively. You can use different strategies and tactics to achieve your goals, such as war, trade, espionage, propaganda, diplomacy, etc. You can also interact with other countries and leaders, either as friends or foes.

    -

    Dummynation is a game that combines humor, satire, and realism. The game features realistic graphics and sounds, as well as witty dialogues and texts. The game also has a lot of references and jokes about real-world events and personalities. The game is constantly updated with new content and features to keep the players entertained and challenged.

    -

    What is Dummynation APK Mod Unlimited Money?

    -

    A brief introduction to the mod

    -

    Dummynation APK Mod Unlimited Money is a modified version of the original game that gives you access to unlimited money and other features that can enhance your gameplay. The mod is created by third-party developers who are not affiliated with the official game developers. The mod is not available on Google Play Store, but you can download it from other sources on the internet.

    -

    dummynation mod apk free download
    -dummynation hack apk unlimited coins
    -dummynation modded apk no ads
    -dummynation cheat apk unlimited power
    -dummynation premium apk mod unlocked
    -dummynation cracked apk unlimited resources
    -dummynation latest mod apk download
    -dummynation hacked apk unlimited territory
    -dummynation mod apk without advertising
    -dummynation full apk mod unlimited everything
    -dummynation pro apk mod unlocked all
    -dummynation patched apk unlimited diplomacy
    -dummynation updated mod apk free
    -dummynation modded apk unlimited military
    -dummynation mod apk no root required
    -dummynation hack apk unlimited research
    -dummynation modded apk no verification
    -dummynation cheat apk unlimited growth
    -dummynation vip apk mod unlocked features
    -dummynation mod apk android 7.0+
    -dummynation hack apk unlimited occupation
    -dummynation modded apk offline mode
    -dummynation cheat apk unlimited balance
    -dummynation plus apk mod unlocked weapons
    -dummynation mod apk android game free download
    -dummynation hack apk unlimited expansion
    -dummynation modded apk online mode
    -dummynation cheat apk unlimited strategy
    -dummynation gold apk mod unlocked levels
    -dummynation mod apk android game - free download - APKCombo[^2^]
    -dummynation hack apk unlimited domination
    -dummynation modded apk new version 1.1.15[^1^]
    -dummynation cheat apk unlimited simulation
    -dummynation deluxe apk mod unlocked graphics
    -dummynation mod apk android game - happymod.com[^1^]

    -

    The benefits and drawbacks of using the mod

    -

    Using Dummynation APK Mod Unlimited Money can have some benefits and drawbacks for your gameplay. Here are some of them:

    -
      -
    • Benefits:
        -
      • You can have unlimited money to spend on anything you want in the game, such as buildings, weapons, research, etc.
      • -
      • You can unlock all the scenarios and countries in the game without having to complete them.
      • -
      • You can have more freedom and flexibility to experiment with different strategies and outcomes in the game.
      • -
      • You can enjoy the game without worrying about running out of money or resources.
      • -
      -
    • -
    • Drawbacks:
        -
      • You can lose the challenge and thrill of the game, as you can easily achieve anything you want without any effort or risk.
      • -
      • You can encounter some bugs or glitches in the game, as the mod may not be compatible with the latest version of the game or your device.
      • -
      • You can expose your device to malware or viruses, as the mod may contain harmful files or codes that can damage your device or steal your data.
      • -
      • You can violate the terms and conditions of the game developers, as the mod may infringe their intellectual property rights or interfere with their revenue streams.
      • -
      -
    • -
    -

    How to Download and Install Dummynation APK Mod Unlimited Money?

    -

    The steps to download and install the mod

    -

    If you want to try Dummynation APK Mod Unlimited Money, you will have to follow these steps (a scripted alternative for installing from a PC is sketched after the list):

    -
      -
    1. Find a reliable source that provides the download link for the mod. You can search on Google or use sites like APKPure or APKMirror.
    2. -
    3. Download the mod file to your device. Make sure you have enough storage space and a stable internet connection.
    4. -
    5. Enable the installation of unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.
    6. -
    7. Locate the mod file on your device and tap on it to start the installation process. Follow the instructions on the screen to complete the installation.
    8. -
    9. Launch the game and enjoy playing with unlimited money and other features.
    10. -
    -
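    If you prefer to do the installation from a computer rather than on the phone itself, the same result can usually be achieved with Android Debug Bridge (adb). The sketch below is only an illustration: it assumes adb is installed and on your PATH, that USB debugging is enabled on the device, and that the file name dummynation_mod.apk is a placeholder for whatever the downloaded file is actually called.

```python
# Hypothetical helper: sideload a downloaded APK from a PC with adb.
# Assumptions: adb is installed and on PATH, USB debugging is enabled,
# and "dummynation_mod.apk" is a placeholder for the real file name.
import subprocess
import sys
from pathlib import Path

APK = Path("dummynation_mod.apk")  # placeholder file name

def sideload(apk: Path) -> None:
    if not apk.is_file():
        sys.exit(f"APK not found: {apk}")
    # List connected devices so you can confirm the phone is visible.
    devices = subprocess.run(["adb", "devices"], capture_output=True, text=True, check=True)
    print(devices.stdout)
    # "-r" keeps existing app data if an older version is already installed.
    subprocess.run(["adb", "install", "-r", str(apk)], check=True)
    print("Install command finished; check the phone for any on-screen prompts.")

if __name__ == "__main__":
    sideload(APK)
```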

    The precautions and risks of using the mod

    -

    Before you download and install Dummynation APK Mod Unlimited Money, you should be aware of some precautions and risks that come with using it (a checksum-verification sketch follows the list):

    -
      -
    • Make sure you back up your original game data before installing the mod, for example with your device's or Google's app backup feature. Be aware that Settings > Apps > Dummynation > Storage > Clear Data wipes local progress rather than saving it, so only clear data once your progress is stored elsewhere. This will prevent you from losing your progress or achievements in case something goes wrong with the mod.
    • -
    • Make sure you scan the mod file for malware or viruses before installing it. You can use an antivirus app or an online scanner to do this. This will prevent you from infecting your device or compromising your data.
    • -
    • Make sure you uninstall the mod before updating the original game. You can do this by going to Settings > Apps > Dummynation > Uninstall. This will prevent you from experiencing any compatibility issues or errors with the new version of the game.
    • -

      Finally, keep in mind that you are using the mod without the permission or endorsement of the game developers, and that you may face some legal or ethical consequences for doing so. -

    -
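    One concrete way to act on the "scan the mod file" advice above is to compare the downloaded file's SHA-256 checksum with the one published on the download page, when the site provides one. This is a minimal sketch; both the file name and the expected hash are placeholders you would replace with real values.

```python
# Minimal checksum check for a downloaded file before installing it.
# FILE and EXPECTED_SHA256 are placeholders - substitute real values.
import hashlib
from pathlib import Path

FILE = Path("dummynation_mod.apk")
EXPECTED_SHA256 = "paste-the-published-hash-here"

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fp:
        for chunk in iter(lambda: fp.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(FILE)
print("SHA-256:", actual)
if actual.lower() == EXPECTED_SHA256.lower():
    print("Checksum matches the published value.")
else:
    print("Checksum does NOT match - do not install this file.")
```

    Note that a matching checksum only proves the file was not corrupted or swapped after the hash was published; it says nothing about whether the publisher itself is trustworthy.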

    How to Play Dummynation APK Mod Unlimited Money?

    -

    The tips and tricks to enjoy the game with the mod

    -

    Once you have downloaded and installed Dummynation APK Mod Unlimited Money, you can start playing the game with unlimited money and other features. Here are some tips and tricks to enjoy the game with the mod:

    -
      -
    • Use your money wisely. Even though you have unlimited money, you should still spend it on things that can help you achieve your goals faster and easier. For example, you can invest in research, infrastructure, military, etc.
    • -
    • Explore different scenarios and countries. The mod allows you to unlock all the scenarios and countries in the game without having to complete them. You can try different combinations and see how they affect your gameplay and outcome.
    • -
    • Experiment with different strategies and outcomes. The mod gives you more freedom and flexibility to try different strategies and outcomes in the game. You can use war, trade, espionage, propaganda, diplomacy, etc. to influence other countries and leaders. You can also see how your actions affect your country's economy, military, diplomacy, culture, science, religion, environment, etc.
    • -
    • Have fun and be creative. The mod is meant to enhance your gameplay and make it more fun and exciting. You can use your imagination and creativity to create your own scenarios and stories in the game. You can also use humor and satire to mock or praise real-world events and personalities.
    • -
    -

    The comparison between the original game and the modded version

    -

    Table: Dummynation vs Dummynation APK Mod Unlimited Money

    Feature | Dummynation | Dummynation APK Mod Unlimited Money
    Money | Limited | Unlimited
    Scenarios | Locked until completed | Unlocked from the start
    Countries | Locked until completed | Unlocked from the start
    Bugs/Glitches | Few or none | Possible or frequent
    Malware/Viruses | None or safe | Possible or risky
    Legal/Ethical Issues | None or acceptable | Possible or questionable
    Source: Created by the author based on web search results.
    -

    Conclusion

    -

    A summary of the main points of the article

    -

    In conclusion, Dummynation APK Mod Unlimited Money is a modified version of Dummynation, a simulation game that lets you control a country and its destiny. The mod gives you access to unlimited money and other features that can make your gameplay more fun and exciting. However, the mod also comes with some drawbacks and risks that you should be aware of before using it. If you want to try Dummynation APK Mod Unlimited Money, you should follow the steps to download and install it, as well as the tips and tricks to enjoy it. You should also compare it with the original game and see how it differs in terms of features, benefits, and drawbacks.

    -

    A call to action for the readers

    -

    If you are interested in playing Dummynation APK Mod Unlimited Money, you can download it from one of the sources we mentioned above. However, we recommend that you play it at your own risk and discretion, as we do not endorse or support the use of mods that may violate the terms and conditions of the game developers or cause harm to your device or data. We also suggest that you check out the original game on Google Play Store and support the game developers by purchasing their products or services. Dummynation is a great game that deserves your attention and appreciation.

    -

    Frequently Asked Questions (FAQs)

    -

    Here are some FAQs that you may have about Dummynation APK Mod Unlimited Money:

    -
      -
    1. What is Dummynation?
    2. -

      Dummynation is a simulation game that gives you unlimited power over a country, with a single promise to fulfill: world domination. You can choose from different scenarios and countries, each with its own challenges and opportunities. You can also customize your country's name, flag, anthem, currency, leader, laws, policies, allies, enemies, etc. You can use different strategies and tactics to achieve your goals, such as war, trade, espionage, propaganda, diplomacy, etc. You can also interact with other countries and leaders, either as friends or foes. Dummynation is a game that combines humor, satire, and realism. The game features realistic graphics and sounds, as well as witty dialogues and texts. The game also has a lot of references and jokes about real-world events and personalities. The game is constantly updated with new content and features to keep the players entertained and challenged.

      -
    3. What is Dummynation APK Mod Unlimited Money?
    4. -

      Dummynation APK Mod Unlimited Money is a modified version of the original game that gives you access to unlimited money and other features that can enhance your gameplay. The mod is created by third-party developers who are not affiliated with the official game developers. The mod is not available on Google Play Store, but you can download it from other sources on the internet.

      -
    5. How to Download and Install Dummynation APK Mod Unlimited Money?
    6. -

      If you want to try Dummynation APK Mod Unlimited Money, you will have to follow these steps:

      -
        -
      1. Find a reliable source that provides the download link for the mod. You can search on Google or use sites like APKPure or APKMirror.
      2. -
      3. Download the mod file to your device. Make sure you have enough storage space and a stable internet connection.
      4. -
      5. Enable the installation of unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.
      6. -
      7. Locate the mod file on your device and tap on it to start the installation process. Follow the instructions on the screen to complete the installation.
      8. -
      9. Launch the game and enjoy playing with unlimited money and other features.
      10. -
      -
    7. How to Play Dummynation APK Mod Unlimited Money?
    8. -

      Once you have downloaded and installed Dummynation APK Mod Unlimited Money, you can start playing the game with unlimited money and other features. Here are some tips and tricks to enjoy the game with the mod:

      -
        -
      • Use your money wisely. Even though you have unlimited money, you should still spend it on things that can help you achieve your goals faster and easier. For example, you can invest in research, infrastructure, military, etc.
      • -
      • Explore different scenarios and countries. The mod allows you to unlock all the scenarios and countries in the game without having to complete them. You can try different combinations and see how they affect your gameplay and outcome.
      • -
      • Experiment with different strategies and outcomes. The mod gives you more freedom and flexibility to try different strategies and outcomes in the game. You can use war, trade, espionage, propaganda, diplomacy, etc. to influence other countries and leaders. You can also see how your actions affect your country's economy, military, diplomacy, culture, science, religion, environment, etc.
      • -
      • Have fun and be creative. The mod is meant to enhance your gameplay and make it more fun and exciting. You can use your imagination and creativity to create your own scenarios and stories in the game. You can also use humor and satire to mock or praise real-world events and personalities.
      • -
      -
    9. What are the benefits and drawbacks of using Dummynation APK Mod Unlimited Money?
    10. -

      Using Dummynation APK Mod Unlimited Money can have some benefits and drawbacks for your gameplay. Here are some of them:

      -
        -
      • Benefits:
          -
        • You can have unlimited money to spend on anything you want in the game, such as buildings, weapons, research, etc.
        • -
        • You can unlock all the scenarios and countries in the game without having to complete them.
        • -
        • You can have more freedom and flexibility to experiment with different strategies and outcomes in the game.
        • -
        • You can enjoy the game without worrying about running out of money or resources.
        • -
        -
      • -
      • Drawbacks:
          -
        • You can lose the challenge and thrill of the game, as you can easily achieve anything you want without any effort or risk.
        • -
        • You can encounter some bugs or glitches in the game, as the mod may not be compatible with the latest version of the game or your device.
        • -
        • You can expose your device to malware or viruses, as the mod may contain harmful files or codes that can damage your device or steal your data.
        • -
        • You can violate the terms and conditions of the game developers, as the mod may infringe their intellectual property rights or interfere with their revenue streams.
        • -
        -
      • -
      -
    11. How does Dummynation APK Mod Unlimited Money compare to the original game?
    12. -

      Dummynation APK Mod Unlimited Money differs from the original game in terms of features, benefits, and drawbacks. The mod gives you access to unlimited money and other features that can make your gameplay more fun and exciting, but it also comes with some drawbacks and risks that you should be aware of before using it. The original game has limited money and other features that can make your gameplay more challenging and thrilling, but it also has fewer bugs, glitches, malware, viruses, legal, and ethical issues. You can compare the two versions using the table below:

      Feature | Dummynation | Dummynation APK Mod Unlimited Money
      Money | Limited | Unlimited
      Scenarios | Locked until completed | Unlocked from the start
      Countries | Locked until completed | Unlocked from the start
      Bugs/Glitches | Few or none | Possible or frequent
      Malware/Viruses | None or safe | Possible or risky
      Legal/Ethical Issues | None or acceptable | Possible or questionable
      Source: Created by the author based on web search results.
      -

      I hope this article has helped you understand more about Dummynation APK Mod Unlimited Money and how to download and play it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and have a great day!

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Chess Titans for Windows 10 How to Get the Classic Game on Your PC.md b/spaces/congsaPfin/Manga-OCR/logs/Chess Titans for Windows 10 How to Get the Classic Game on Your PC.md deleted file mode 100644 index e6022eae0d3f0eb4ed782077f945cf1f573cb342..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Chess Titans for Windows 10 How to Get the Classic Game on Your PC.md +++ /dev/null @@ -1,141 +0,0 @@ - -

      Chess Titans Download for Windows 10: How to Get the Classic Game on Your PC

      -

      If you are a fan of chess games, you might remember Chess Titans, a 3D chess game that was included in Windows 7. It was a popular and fun game that allowed you to play against the computer or another human player, with different difficulty levels and realistic graphics. However, if you have upgraded to Windows 10, you might have noticed that Chess Titans is no longer available. So, how can you get Chess Titans on Windows 10? Is it possible to download and install it on your PC? In this article, we will answer these questions and more. We will tell you what Chess Titans is, why Microsoft removed it from Windows 10, how to download and install it on your PC, and how to play it. Let's get started!

      -

      chess titans download for windows 10


      Download File ……… https://urlca.com/2uO9wW



      -

      What is Chess Titans?

      -

      Chess Titans is a chess game with 3D graphics developed by Oberon Games and included in Windows 7. It is a fully animated, photorealistic interactive game with ten difficulty levels. It can be played by two participants, or one player against the computer.

      -

      A brief history of Chess Titans

      -

      Chess Titans was first released in 2006 as part of the Windows Vista Ultimate Extras package, which was a collection of additional features and games for Windows Vista users. It was later included in all editions of Windows 7, except for the Starter and Home Basic editions. Chess Titans was one of the most popular games in Windows 7, along with other classics like Solitaire, Minesweeper, and Mahjong. However, when Microsoft released Windows 8 in 2012, they decided to remove Chess Titans and other games from the operating system. They also did not include them in Windows 10, which was launched in 2015.

      -

      Features and gameplay of Chess Titans

      -

      Chess Titans is a game that simulates the classic board game of chess. You can choose to play as white or black pieces, and select the difficulty level from one to ten. The higher the level, the more challenging the computer opponent will be. You can also choose to play against another human player on the same PC, or online via a network connection.

      -

      The game has a realistic 3D graphics engine that allows you to view the board from different angles and zoom in and out. You can also customize the appearance of the board and pieces, choosing from different themes and colors. The game also has sound effects and music that enhance the atmosphere of the game.

      -

      Chess Titans for Windows 10 free download
      -How to install Chess Titans on Windows 10
      -Chess Titans game for Windows 10 PC
      -Chess Titans 3D graphics for Windows 10
      -Chess Titans Windows 10 version 17763.0 or higher
      -Chess Titans Microsoft Store download for Windows 10
      -Chess Titans from Oberon Games for Windows 10
      -Chess Titans ten difficulty levels for Windows 10
      -Chess Titans offline game for Windows 10
      -Chess Titans classic chess game for Windows 10
      -Chess Titans photorealistic interactive game for Windows 10
      -Chess Titans one player or two players mode for Windows 10
      -Chess Titans discontinued by Microsoft for Windows 10
      -Chess Titans third-party repositories for Windows 10
      -Chess Titans tutorial for Windows 10 users
      -Chess Titans reviews and ratings for Windows 10
      -Chess Titans system requirements for Windows 10
      -Chess Titans privacy policy and terms of transaction for Windows 10
      -Chess Titans x64 architecture for Windows 10
      -Chess Titans multiple language support for Windows 10
      -Chess Titans official club and PEGI rating for Windows 10
      -Chess Titans Adrian Wagner publisher for Windows 10
      -Chess Titans card and board category for Windows 10
      -Chess Titans similar games and apps for Windows 10
      -Chess Titans screenshots and videos for Windows 10
      -Download and play classic Chess Titans on Windows 10
      -Get Chess Titans from Microsoft Store en-GB for Windows 10
      -Get Chess Titans from Microsoft Store en-IM for Windows 10
      -Download and install Chess Titans on Windows 7 and then upgrade to Windows 10
      -Download and install Chess Titans on any version of Windows using compatibility mode
      -Download and install Chess Titans from Softonic or Softpedia for Windows 10
      -Download and install Chess Titans from FileHippo or FileHorse for Windows 10
      -Download and install Chess Titans from CNET or Malavida for Windows 10
      -Download and install Chess Titans from MajorGeeks or Soft32 for Windows 10
      -Download and install Chess Titans from SourceForge or GitHub for Windows 10
      -Download and install Chess Titans from Uptodown or APKPure for Windows 10
      -Download and install Chess Titans from Ocean of Games or GameTop for Windows 10
      -Download and install Chess Titans from Steam or Epic Games Store for Windows 10
      -Download and install Chess Titans from GOG or Origin for Windows 10
      -Download and install Chess Titans from Microsoft Edge or Chrome Web Store for Windows 10
      -Download and install modded or hacked version of Chess Titans for Windows 10
      -Download and install portable or standalone version of Chess Titans for Windows 10
      -Download and install cracked or pirated version of Chess Titans for Windows 10 (not recommended)
      -Download and install latest or updated version of Chess Titans for Windows 10
      -Download and install old or original version of Chess Titans for Windows 10

      -

      The gameplay of Chess Titans follows the standard rules of chess, with some optional features that you can enable or disable. For example, you can turn on or off hints, legal moves, move animations, undo moves, timers, and more. You can also save your game progress and resume it later.

      -

      Why did Microsoft remove Chess Titans from Windows 10?

      -

      If you are wondering why Microsoft decided to remove Chess Titans and other games from Windows 10, there are several reasons behind their decision.

      -

      The reasons behind Microsoft's decision

      -

      One of the main reasons Microsoft removed Chess Titans and other games from Windows 10 was to make the operating system more lightweight and efficient: by removing built-in extras, they aimed to improve performance and security. They also wanted to encourage users to download new games and apps from the Microsoft Store, their online platform for digital content, and to offer updated titles that work better with modern devices and features such as touchscreens, cloud services, and social media integration.

      -

      The alternatives to Chess Titans offered by Microsoft

      -

      Although Microsoft removed Chess Titans and other games from Windows 10, they did not leave the users without any options. They offered some alternatives that users can download from the Microsoft Store for free or for a small fee. Some of these alternatives are:

      -
        -
      • Microsoft Solitaire Collection: This is a collection of five different solitaire games, including Klondike, Spider, FreeCell, Pyramid, and TriPeaks. It also has daily challenges, achievements, leaderboards, and themes.
      • -
      • Microsoft Minesweeper: This is a classic puzzle game where you have to clear a minefield without detonating any bombs. It has three difficulty levels, an adventure mode, daily challenges, achievements, and leaderboards.
      • -
      • Microsoft Mahjong: This is a matching game where you have to remove pairs of tiles from a board. It has four difficulty levels, an adventure mode, daily challenges, achievements, and leaderboards.
      • -
      • Microsoft Ultimate Word Games: This is a collection of three word games, including Wordament, Crosswords, and Jumble. It has daily challenges, achievements, leaderboards, and themes.
      • -
      • Chess Free!: This is a chess game with 3D graphics and 12 difficulty levels. It can be played by two players or against the computer. It also has hints, undo moves, timers, and statistics.
      • -
      • Chess for Windows: This is another chess game with 3D graphics and 25 difficulty levels. It can be played by two players or against the computer. It also has hints, undo moves, timers, statistics, and themes.
      • -
      -

      These are some of the alternatives to Chess Titans that Microsoft offers to Windows 10 users. However, if you are still looking for Chess Titans specifically, there is a way to download and install it on your PC.

      -

      How to download and install Chess Titans on Windows 10

      -

      If you want to play Chess Titans on Windows 10, you will need to get it from a third-party source. This means that you will have to download it from a website that is not affiliated with Microsoft or the official Microsoft Store. However, before you do that, you should be aware of the precautions and risks of downloading Chess Titans from a third-party source.

      -

      The steps to get Chess Titans from a third-party source

      -

      Here are the steps that you need to follow to download and install Chess Titans on Windows 10:

      -
        -
      1. Go to a website that offers Chess Titans for download. For example, you can try this link: https://chess-titans.en.softonic.com/
      2. -
      3. Click on the "Free Download" button and wait for the file to be downloaded on your PC.
      4. -
    5. Locate the downloaded file on your PC and double-click on it to run it (it is a good idea to scan it first; see the sketch after this list).
      6. -
      7. Follow the instructions on the screen to install Chess Titans on your PC.
      8. -
      9. Once the installation is complete, you can find Chess Titans in your Start menu or on your desktop.
      10. -
      11. Enjoy playing Chess Titans on Windows 10!
      12. -
      -
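    Before double-clicking the downloaded installer, you can also ask Windows Defender to scan just that one file from the command line. This is only a sketch: the MpCmdRun.exe location can vary between Windows builds, and the installer name used here is a placeholder.

```python
# Scan a single downloaded file with Windows Defender's command-line scanner.
# The MpCmdRun.exe path and the installer file name are assumptions - adjust both.
import subprocess
from pathlib import Path

DEFENDER = Path(r"C:\Program Files\Windows Defender\MpCmdRun.exe")
INSTALLER = Path.home() / "Downloads" / "chess-titans-setup.exe"  # placeholder

result = subprocess.run(
    [str(DEFENDER), "-Scan", "-ScanType", "3", "-File", str(INSTALLER)],
    capture_output=True, text=True,
)
print(result.stdout)
# Exit code 0 usually means nothing was found; a non-zero code deserves a closer look.
print("Exit code:", result.returncode)
```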

      The precautions and risks of downloading Chess Titans from a third-party source

      -

      While downloading Chess Titans from a third-party source might seem like an easy solution, it is not without its drawbacks. Here are some of the precautions and risks that you should consider before downloading Chess Titans from a third-party source:

      -
        -
      • You might download malware or viruses: Some websites that offer Chess Titans for download might not be trustworthy or secure. They might contain malware or viruses that can harm your PC or steal your personal information. Therefore, you should always scan the downloaded file with an antivirus program before running it.
      • -
      • You might violate Microsoft's terms of service: By downloading Chess Titans from a third-party source, you might be violating Microsoft's terms of service or intellectual property rights. Microsoft owns the rights to Chess Titans and other games that were removed from Windows 10. Therefore, they might take legal action against you or the website that offers Chess Titans for download.
      • -
    • You might experience compatibility or performance issues: Chess Titans was designed for Windows 7 and might not work properly on Windows 10, with problems such as crashes, glitches, errors, or slow loading times. Therefore, you should always back up your data and system before installing Chess Titans on your PC (a compatibility-mode sketch appears at the end of this section).
      • -
      -

      These are some of the precautions and risks that you should consider before downloading Chess Titans from a third-party source. If you are not comfortable with them, you might want to look for another chess game that is compatible with Windows 10 and available on the Microsoft Store.
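    If you do install it and the game misbehaves on Windows 10, one common workaround is to force it to run in Windows 7 compatibility mode. You can do this by hand from the executable's Properties > Compatibility tab; the sketch below shows the same idea done programmatically through the per-user AppCompatFlags registry key. The executable path is a placeholder, and the exact layer string ("~ WIN7RTM") is an assumption based on what the Compatibility tab normally writes.

```python
# Mark an executable to always run in Windows 7 compatibility mode (current user only).
# EXE_PATH is a placeholder; the "~ WIN7RTM" layer string is an assumption.
import winreg

EXE_PATH = r"C:\Games\Chess Titans\chess.exe"  # placeholder path to the installed game
LAYERS_KEY = r"Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, LAYERS_KEY) as key:
    winreg.SetValueEx(key, EXE_PATH, 0, winreg.REG_SZ, "~ WIN7RTM")

print("Windows 7 compatibility layer set for", EXE_PATH)
```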

      -

      How to play Chess Titans on Windows 10

      -

      If you have successfully downloaded and installed Chess Titans on Windows 10, you might be wondering how to play it. Here are some of the options and settings that you can use to customize your game experience.

      -

      The options and settings of Chess Titans

      -

      When you launch Chess Titans, you will see a menu with four options: Play, Options, Help, and Exit. Here is what each option does:

      -
        -
      • Play: This option allows you to start a new game or resume a saved game. You can choose to play as white or black pieces, and select the difficulty level from one to ten. You can also choose to play against another human player on the same PC, or online via a network connection.
      • -
      • Options: This option allows you to customize the appearance and features of the game. You can change the theme and color of the board and pieces, turn on or off sound effects and music, enable or disable hints, legal moves, move animations, undo moves, timers, and more.
      • -
      • Help: This option provides you with some information and instructions on how to play Chess Titans. You can learn about the rules of chess, the moves of each piece, the keyboard shortcuts, and the tips for beginners.
      • -
      • Exit: This option allows you to quit the game and close the program.
      • -
      -

      These are some of the options and settings that you can use to play Chess Titans on Windows 10. You can also access them by clicking on the icons at the top right corner of the game window.

      -

      The tips and tricks to improve your chess skills with Chess Titans

      -

      If you want to improve your chess skills with Chess Titans, here are some tips and tricks that you can follow:

      -
        -
      • Practice regularly: The best way to improve your chess skills is to practice regularly. You can play against the computer or another human player, and try different difficulty levels and strategies. You can also review your moves and learn from your mistakes.
      • -
      • Use hints wisely: If you are stuck or need some guidance, you can use the hint feature to get a suggestion for your next move. However, you should not rely on it too much, as it might make you lazy or overconfident. You should also try to understand why the hint is given, and what are the consequences of following it.
      • -
      • Study the board: Before making a move, you should study the board carefully and analyze the position of each piece. You should also look for possible threats, opportunities, and weaknesses for both sides. You should also plan ahead and think about your next moves and your opponent's possible responses.
      • -
      • Learn from the masters: If you want to learn from the best chess players in history, you can watch some of their games online or read some books or articles about them. You can also try to emulate their style and tactics in your own games.
      • -
      -

      These are some of the tips and tricks that you can use to improve your chess skills with Chess Titans. Of course, there is no substitute for experience and practice, so keep playing and have fun!

      -

      Conclusion

      -

      In this article, we have covered everything you need to know about Chess Titans download for Windows 10. We have explained what Chess Titans is, why Microsoft removed it from Windows 10, how to download and install it on your PC, how to play it, and how to improve your chess skills with it. We hope that this article has been helpful and informative for you.

      -

      A summary of the main points

      -

      Here is a summary of the main points that we have discussed in this article:

      -
        -
      • Chess Titans is a 3D chess game with realistic graphics and ten difficulty levels that was included in Windows 7 but removed from Windows 10.
      • -
      • Microsoft removed Chess Titans from Windows 10 to make the operating system more lightweight and efficient, and to encourage users to download new games and apps from the Microsoft Store.
      • -
      • You can download Chess Titans from a third-party source, but you should be aware of the precautions and risks of doing so, such as malware, legal issues, or compatibility problems.
      • -
      • You can play Chess Titans on Windows 10 by choosing the difficulty level, the color of the pieces, and the mode of play. You can also customize the appearance and features of the game, and get hints and instructions if you need them.
      • -
      • You can improve your chess skills with Chess Titans by practicing regularly, using hints wisely, studying the board, and learning from the masters.
      • -
      -

      A call to action for the readers

      -

      Now that you know how to get Chess Titans on Windows 10, why not give it a try and see how much you enjoy it? You can download it from the link that we provided above, or look for other sources online. Just remember to be careful and scan the file before installing it. You can also check out the other games and apps that Microsoft offers on the Microsoft Store, or look for other chess games that are compatible with Windows 10. Whatever you choose, we hope that you have fun and improve your chess skills with Chess Titans!

      -

      FAQs

      -

      Here are some of the frequently asked questions that you might have about Chess Titans download for Windows 10:

      -
        -
      1. Is Chess Titans free?
      2. -

        Yes, Chess Titans is free to download and play. However, you will need to get it from a third-party source, as it is not available on the Microsoft Store or the official Microsoft website.

        -
      3. Is Chess Titans safe?
      4. -

        Chess Titans is safe to play, as long as you download it from a trustworthy and secure website. You should always scan the downloaded file with an antivirus program before running it. You should also backup your data and system before installing Chess Titans on your PC.

        -
      5. Is Chess Titans compatible with Windows 10?
      6. -

        Chess Titans was designed for Windows 7 and might not work properly on Windows 10. You might experience compatibility or performance issues such as crashes, glitches, errors, or slow loading times. Therefore, you should always backup your data and system before installing Chess Titans on your PC.

        -
      7. Can I play Chess Titans online?
      8. -

        Yes, you can play Chess Titans online with another human player via a network connection. However, you will need to have the same version of Chess Titans installed on both PCs. You will also need to configure your firewall and router settings to allow the connection.

        -
      9. Can I play Chess Titans on a touchscreen device?
      10. -

        No, Chess Titans does not support touchscreen devices. You will need to use a mouse or a keyboard to play Chess Titans on Windows 10.

        -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download MilkChoco APK and experience the action-packed 5 vs 5 multiplayer game.md b/spaces/congsaPfin/Manga-OCR/logs/Download MilkChoco APK and experience the action-packed 5 vs 5 multiplayer game.md deleted file mode 100644 index 079a3056399f598fe425531623013194819d5748..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download MilkChoco APK and experience the action-packed 5 vs 5 multiplayer game.md +++ /dev/null @@ -1,107 +0,0 @@ -
      -

      Download MilkChoco APK: A Fun and Competitive Multiplayer Shooting Game

      -

      If you are looking for a new and exciting game to play on your Android device, you should try MilkChoco. MilkChoco is a 5 vs 5 multiplayer shooting game that lets you choose from different heroes with different abilities and compete in various game modes and maps. You can download MilkChoco APK from the Google Play Store or from other sources. In this article, we will tell you what MilkChoco is, how to download it, and some tips and tricks for playing it.

      -

      What is MilkChoco?

      -

      MilkChoco is a game developed by GameParadiso, a Korean studio that specializes in casual and action games. MilkChoco was released in 2016 and has since gained over 10 million downloads and 600 thousand reviews on the Google Play Store. It is rated as Everyone 10+ for mild violence and fantasy elements.

      -

      download milkchoco apk


      Download File ⚹⚹⚹ https://urlca.com/2uOdVW



      -

      MilkChoco is a game that combines the elements of a shooter and a MOBA (multiplayer online battle arena). You can choose from various heroes with different abilities, such as Assault, Sniper, Medic, Bomber, Ghost, Recon, Shield, Ice Bang, Air, Iron, Death, Escort, Carog, Claw, and Star. Each hero has its own ranking, weapons, and skills that you can upgrade and customize.

      -

      You can play MilkChoco in different game modes, such as Deathmatch, Escort, Battle Royale, Star League, Clan Battle, Custom Match, and more. You can also explore different maps, such as City, Dust2, Ice World, Nuke Town, Train Yard, Space Station, etc. You can play solo or with your friends in online matches.

      -

      How to download milkchoco apk for android
      -Download milkchoco apk latest version
      -Milkchoco apk mod unlimited money and diamonds
      -Milkchoco game review and tips
      -Best heroes and weapons in milkchoco game
      -Milkchoco game download for pc
      -Milkchoco game online multiplayer
      -Milkchoco game hack and cheats
      -Milkchoco game update and patch notes
      -Milkchoco game star league mode
      -Milkchoco game battle royale mode
      -Milkchoco game clan system and ranking
      -Milkchoco gameparadiso official website
      -Milkchoco gameparadiso customer support
      -Milkchoco gameparadiso social media accounts
      -Milkchoco gameparadiso youtube channel
      -Milkchoco gameparadiso discord server
      -Milkchoco gameparadiso merchandise store
      -Milkchoco gameparadiso fan art and cosplay
      -Milkchoco gameparadiso events and giveaways
      -Download milkchoco apk from google play store
      -Download milkchoco apk from apk pure
      -Download milkchoco apk from uptodown
      -Download milkchoco apk from apkmirror
      -Download milkchoco apk from apkpure.com
      -Download milkchoco apk for ios devices
      -Download milkchoco apk for windows devices
      -Download milkchoco apk for mac devices
      -Download milkchoco apk for linux devices
      -Download milkchoco apk for chromebook devices
      -How to install milkchoco apk on android devices
      -How to install milkchoco apk on ios devices
      -How to install milkchoco apk on windows devices
      -How to install milkchoco apk on mac devices
      -How to install milkchoco apk on linux devices
      -How to install milkchoco apk on chromebook devices
      -How to uninstall milkchoco apk from android devices
      -How to uninstall milkchoco apk from ios devices
      -How to uninstall milkchoco apk from windows devices
      -How to uninstall milkchoco apk from mac devices
      -How to uninstall milkchoco apk from linux devices
      -How to uninstall milkchoco apk from chromebook devices
      -How to update milkchoco apk on android devices
      -How to update milkchoco apk on ios devices
      -How to update milkchoco apk on windows devices
      -How to update milkchoco apk on mac devices
      -How to update milkchoco apk on linux devices
      -How to update milkchoco apk on chromebook devices

      -

      Features of MilkChoco

      -

      MilkChoco has many features that make it a fun and competitive game to play. Here are some of them:

      -

      Different heroes with unique abilities

      -

      MilkChoco has over 20 heroes that you can choose from. Each hero has its own strengths and weaknesses, as well as special skills that can turn the tide of the battle. For example, Assault can fire rapidly and deal high damage at close range; Sniper can shoot enemies from afar with high accuracy; Medic can heal allies and revive them; Bomber can throw grenades that explode after a few seconds; Ghost can turn invisible and sneak behind enemies; Recon can scan the area and reveal enemy locations; Shield can protect allies with a barrier; Ice Bang can freeze enemies with ice bullets; Air can fly in the air and shoot rockets; Iron can transform into a tank and fire missiles; Death can summon zombies to attack enemies; Escort can carry a bomb and detonate it near the enemy base; Carog can ride a car and run over enemies; Claw can slash enemies with claws; Star can use magic spells to attack or support.

      -

      Various game modes and maps

      -

      MilkChoco has many game modes that you can play depending on your preference and mood. You can play Deathmatch, where you have to kill as many enemies as possible in a limited time; Escort, where you have to escort a bomb carrier to the enemy base or stop the enemy from doing so; Battle Royale, where you have to survive until the last one standing in a shrinking map; Star League, where you have to compete with other players in ranked matches; Clan Battle, where you have to fight with your clan members against other clans; Custom Match, where you can create your own rules and invite your friends or other players to join.

      -

      You can also enjoy different maps that have different layouts and themes. You can play in the City, where you have to fight in an urban setting with buildings and cars; Dust2, where you have to fight in a desert setting with sand and rocks; Ice World, where you have to fight in a snowy setting with ice and snowmen; Nuke Town, where you have to fight in a nuclear setting with radiation and bombs; Train Yard, where you have to fight in an industrial setting with trains and containers; Space Station, where you have to fight in a sci-fi setting with zero gravity and lasers; and more.

      -

      Easy to control and low latency

      -

      MilkChoco is designed to be easy to control on your Android device. You can use the virtual joystick to move your hero, and tap the buttons to shoot, aim, reload, jump, and use skills. You can also customize the sensitivity, size, and position of the controls according to your preference. You can also use voice chat or text chat to communicate with your teammates or opponents.

      -

      MilkChoco also has low latency and smooth performance. You can play MilkChoco without lag or delay, as long as you have a stable internet connection. You can also choose the server that is closest to your location for better ping. You can play MilkChoco on any Android device that has at least 1 GB of RAM and Android 4.4 or higher.

      -

      How to download MilkChoco APK?

      -

      If you want to download MilkChoco APK, you have two options. You can either download it from the Google Play Store or from other sources. Here are the steps and benefits of each option:

      -

      Steps to download and install MilkChoco APK from the Google Play Store

      -
        -
      1. Open the Google Play Store app on your Android device.
      2. -
      3. Search for "MilkChoco" in the search bar.
      4. -
      5. Tap on the "Install" button and wait for the download and installation to finish.
      6. -
      7. Tap on the "Open" button or find the MilkChoco icon on your home screen or app drawer and tap on it.
      8. -
      9. Enjoy playing MilkChoco!
      10. -
      -

      Benefits of downloading MilkChoco APK from the Google Play Store

      -
        -
      • You can get the latest version of MilkChoco APK with automatic updates.
      • -
      • You can get the official and verified version of MilkChoco APK without any risk of malware or viruses.
      • -
      • You can get access to the Google Play services, such as achievements, leaderboards, cloud save, etc.
      • -
      • You can get support from the developer and report any issues or feedback.
      • -
      -

      Steps to download and install MilkChoco APK from other sources

      -
        -
      1. Open your web browser on your Android device and go to a website that provides MilkChoco APK files, such as [APKPure] or [APKMirror].
      2. -
      3. Search for "MilkChoco" in the website's search bar or browse through the categories.
      4. -
      5. Select the version of MilkChoco APK that you want to download and tap on the "Download" button.
      6. -
    7. Wait for the download to finish and then open the downloaded file (a quick file sanity-check sketch follows this list).
      8. -
      9. If prompted, enable the "Unknown sources" option in your device's settings to allow the installation of apps from outside the Google Play Store.
      10. -
      11. Follow the instructions on the screen to install MilkChoco APK on your device.
      12. -
      13. Find the MilkChoco icon on your home screen or app drawer and tap on it.
      14. -
      15. Enjoy playing MilkChoco!
      16. -
      -
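    Before tapping through the install prompts, you can do a quick structural sanity check on the downloaded file: a genuine APK is just a ZIP archive that contains, among other things, AndroidManifest.xml and at least one classes.dex. The sketch below uses a placeholder file name and only checks the file's structure; it is not a substitute for a proper malware scan.

```python
# Quick structural check that a downloaded file looks like a real APK (a ZIP archive).
# "milkchoco.apk" is a placeholder file name.
import zipfile

FILE = "milkchoco.apk"

try:
    with zipfile.ZipFile(FILE) as zf:
        names = set(zf.namelist())
        has_manifest = "AndroidManifest.xml" in names
        has_dex = any(n.startswith("classes") and n.endswith(".dex") for n in names)
        corrupt_entry = zf.testzip()  # None means every entry's CRC checked out
        if has_manifest and has_dex and corrupt_entry is None:
            print("File looks like a structurally valid APK.")
        else:
            print("File is a ZIP but does not look like a normal APK - be careful.")
except zipfile.BadZipFile:
    print("File is not a valid ZIP/APK at all - do not install it.")
```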

      Benefits of downloading MilkChoco APK from other sources

      -
        -
      • You can get older versions of MilkChoco APK if you prefer them or if your device is not compatible with the latest version.
      • -
      • You can get modified versions of MilkChoco APK that may have extra features or cheats.
      • -
      • You can get access to MilkChoco APK even if it is not available in your region or country.
      • -
      • You can get MilkChoco APK without using any Google account or services.
      • -
      -

      Tips and tricks for playing MilkChoco

      -

      MilkChoco is a game that requires skill, strategy, and teamwork. Here are some tips and tricks that can help you improve your gameplay and win more matches:

      -

      Choose the right hero for your playstyle

      -

      MilkChoco has many heroes that you can choose from, but not all of them may suit your playstyle. You should experiment with different heroes and find out which ones match your preferences and strengths. For example, if you like to play aggressively, a close-range hero such as Assault may suit you; if you prefer to keep your distance, Sniper is a better fit; and if you enjoy supporting your team, Medic lets you heal and revive allies. Whatever you choose, keep practicing and have fun playing MilkChoco!

      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Experience the Thrill of Captain America Sentinel of Liberty on Your Android Phone - APK Download.md b/spaces/congsaPfin/Manga-OCR/logs/Experience the Thrill of Captain America Sentinel of Liberty on Your Android Phone - APK Download.md deleted file mode 100644 index 3adb297d0f6eef5138b8b198a54909f53c4e35a6..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Experience the Thrill of Captain America Sentinel of Liberty on Your Android Phone - APK Download.md +++ /dev/null @@ -1,141 +0,0 @@ -
      -

      Download Captain America Sentinel of Liberty APK and Play as the Super Soldier

      -

      Do you love Marvel superheroes and action games? If yes, then you should definitely try Captain America Sentinel of Liberty APK, a thrilling game that lets you play as the iconic hero and fight against the evil forces of HYDRA. In this article, we will tell you everything you need to know about this game, how to download and install it on your Android device, and why you should give it a shot. Read on to find out more!

      -

      What is Captain America Sentinel of Liberty?

      -

Captain America Sentinel of Liberty is an epic action game released by Marvel in 2011. It is based on the movie Captain America: The First Avenger, which tells the origin story of Steve Rogers, a scrawny soldier who becomes a super soldier after taking a serum. The game follows his adventures as he battles the Red Skull, the leader of HYDRA, a Nazi organization that is developing super weapons to win World War II.

      -

      download captain america sentinel of liberty apk


      Download Filehttps://urlca.com/2uO78M



      -

      The story and gameplay of the game

      -

      The game has three episodes, each with eight levels, that take you to different locations and scenarios. You will have to infiltrate enemy bases, rescue your allies, destroy weapons, and face off against bosses. You will also encounter familiar characters from the movie, such as Bucky Barnes, Peggy Carter, Howard Stark, and Dum Dum Dugan.

      -

      The gameplay is fast-paced and exciting, as you use your unbreakable shield to attack, block, and maneuver your way through various obstacles and enemies. You can also perform takedowns, wall runs, slides, and combos to unleash your full potential. The game also has a scoring system that rewards you for your performance and achievements.

      -

      The features and graphics of the game

      -

      Captain America Sentinel of Liberty has many features that make it stand out from other action games. Some of them are:

• Stunning HD graphics that bring the comic book style to life
• Original story and script written by Marvel writer Christos Gage
• Comic panels by Marvel artists Ron Lim and Christopher Sotomayor
• Compelling original soundtrack that matches the mood and tone of the game
• Impressive unlockables, leaderboards, and extra features that add to the replay value

      How to download and install Captain America Sentinel of Liberty APK?

      -

      If you are wondering how to download and install Captain America Sentinel of Liberty APK on your Android device, don't worry, we have got you covered. Just follow these simple steps:

      -

      The requirements and steps for downloading the APK file

      -

      Before you download the APK file, make sure that your device meets these requirements:

• Android version 2.1 or higher (a code sketch checking this and the free-space requirement appears after this list)
• Free space of at least 704 MB
• A stable internet connection
• A file manager app
• Unknown sources enabled in your settings
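If you would rather check the first two requirements programmatically than by hand, the Kotlin sketch below is one way to do it. The helper name is made up for illustration; the API-level mapping and the 704 MB figure come from the list above.

```kotlin
import android.os.Build
import android.os.Environment
import android.os.StatFs

// Illustrative check of the two measurable requirements above:
// a minimum Android version and at least 704 MB of free internal storage.
fun meetsBasicRequirements(): Boolean {
    // Android 2.1 corresponds to API level 7.
    val versionOk = Build.VERSION.SDK_INT >= 7

    // Free bytes on the internal storage partition
    // (StatFs.availableBytes itself needs Android 4.3 or newer).
    val stat = StatFs(Environment.getDataDirectory().path)
    val spaceOk = stat.availableBytes >= 704L * 1024 * 1024

    return versionOk && spaceOk
}
```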

      Once you have checked these requirements, you can proceed to download the APK file from one of these sources:

• [APKPure]: A reliable and safe website that offers free and pure APK files for various games and apps.
• [APKCombo]: A fast and easy website that allows you to download APK files and install them directly on your device.
• [APKMirror]: A popular and trusted website that provides original and signed APK files for many Android applications.

      After you have chosen your preferred source, follow these steps to download the APK file:

      -


1. Open the website on your browser and search for Captain America Sentinel of Liberty APK.
2. Select the latest version of the game and click on the download button.
3. Wait for the download to complete and locate the file in your downloads folder or your file manager app (a DownloadManager sketch appears after these steps).
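Tapping the site's download button is the normal way to get the file. As a rough illustration of an alternative, the following Kotlin sketch queues the same download through Android's DownloadManager; the URL and file name here are placeholders, not real links.

```kotlin
import android.app.DownloadManager
import android.content.Context
import android.net.Uri
import android.os.Environment

// Illustrative only: queue a download through Android's DownloadManager.
// The URL is a placeholder - use the link shown by the site you chose above.
fun queueApkDownload(context: Context, url: String) {
    val request = DownloadManager.Request(Uri.parse(url))
        .setTitle("Captain America Sentinel of Liberty APK")
        .setNotificationVisibility(DownloadManager.Request.VISIBILITY_VISIBLE_NOTIFY_COMPLETED)
        .setDestinationInExternalPublicDir(Environment.DIRECTORY_DOWNLOADS, "sentinel-of-liberty.apk")

    val manager = context.getSystemService(Context.DOWNLOAD_SERVICE) as DownloadManager
    manager.enqueue(request)
}
```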

      The instructions and tips for installing and running the game

      -

      After you have downloaded the APK file, you need to install it on your device. Here are the instructions and tips for doing so:

1. Tap on the APK file and select install. If you see a warning message that says "Install blocked", go to your settings and enable unknown sources (a sketch of handing the file to the system installer appears after these steps).
2. Wait for the installation to finish and open the game. You may need to grant some permissions to the game, such as storage, network, and phone.
3. Enjoy playing Captain America Sentinel of Liberty APK on your device. You can also create a shortcut on your home screen for easy access.
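Step 1 above is what most people will do in a file manager. For completeness, this Kotlin sketch shows roughly how an app would hand a downloaded APK to the system installer; it assumes a FileProvider with the authority shown is declared in the manifest, which is an assumption made purely for illustration.

```kotlin
import android.content.Context
import android.content.Intent
import androidx.core.content.FileProvider
import java.io.File

// Rough sketch: ask the system package installer to open a downloaded APK.
// Assumes "${'$'}{context.packageName}.fileprovider" is declared as a FileProvider
// in AndroidManifest.xml with access to the directory holding the file.
fun launchApkInstaller(context: Context, apk: File) {
    val uri = FileProvider.getUriForFile(
        context,
        "${context.packageName}.fileprovider",
        apk
    )
    val intent = Intent(Intent.ACTION_VIEW).apply {
        setDataAndType(uri, "application/vnd.android.package-archive")
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
        addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    }
    context.startActivity(intent)
}
```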

      Some tips to enhance your gaming experience are:

• Make sure you have enough battery life and memory space before playing the game.
• Close any background apps that may slow down your device or interfere with the game.
• Adjust the settings of the game according to your preferences, such as sound, language, and controls.
• Connect to a Wi-Fi network or use a VPN if you want to access online features, such as leaderboards and achievements.

      Why should you play Captain America Sentinel of Liberty APK?

      -

      You may be wondering why you should play Captain America Sentinel of Liberty APK instead of the official version from the Google Play Store. Well, there are several reasons why playing the APK version is a good idea. Here are some of them:

      -

      The benefits and advantages of playing the APK version

      -

      The APK version of Captain America Sentinel of Liberty has many benefits and advantages that make it worth playing. Some of them are:

• You can play the game for free without spending any money or watching any ads.
• You can play the game without any restrictions or limitations, such as region locks or device compatibility issues.
• You can play the game with all the features and content unlocked, such as levels, characters, costumes, weapons, and items.
• You can play the game with better performance and quality, as the APK version is optimized and updated regularly.

      The challenges and drawbacks of playing the APK version

      -

      However, playing the APK version of Captain America Sentinel of Liberty also has some challenges and drawbacks that you should be aware of. Some of them are:

• You may encounter some bugs or glitches that may affect your gameplay or cause crashes.
• You may not be able to access some online features or services, such as cloud saving, multiplayer, or customer support.
• You may not receive any updates or patches that may fix issues or add new content to the game.
• You may risk violating the terms and conditions of the game developer or publisher, which may result in legal actions or bans.

      Therefore, you should play the APK version at your own risk and discretion, and respect the rights and interests of the original creators.

      -

      Conclusion

      -

      Captain America Sentinel of Liberty APK is a fantastic game that lets you play as the super soldier and fight against the evil HYDRA. It has an amazing story, gameplay, graphics, and features that will keep you hooked for hours. You can download and install it on your Android device easily and enjoy it for free. However, you should also be careful of the potential problems and consequences that may arise from playing the APK version. We hope this article has helped you learn more about this game and how to get it. If you are ready to join the fight for freedom and justice, download Captain America Sentinel of Liberty APK now and have fun!

      -

      FAQs

      -

      Is Captain America Sentinel of Liberty APK safe and legal?

      -

      Captain America Sentinel of Liberty APK is safe to download and install, as long as you get it from a reputable and trusted source. However, it is not legal to distribute or use the APK file without the permission of the game developer or publisher. Therefore, you should only download and install it for personal and educational purposes, and not for commercial or malicious purposes.

      -

      Is Captain America Sentinel of Liberty APK compatible with all devices?

      -

      Captain America Sentinel of Liberty APK is compatible with most Android devices that run on Android 2.1 or higher. However, some devices may not support the game due to hardware or software limitations. Therefore, you should check the compatibility of your device before downloading and installing the game.

      -

      How much space does Captain America Sentinel of Liberty APK require?

      -

      Captain America Sentinel of Liberty APK requires about 704 MB of free space on your device. This includes the APK file size (about 14 MB) and the data file size (about 690 MB). Therefore, you should make sure you have enough storage space before downloading and installing the game.

      -

      Can I play Captain America Sentinel of Liberty APK offline?

      -

      Yes, you can play Captain America Sentinel of Liberty APK offline without any internet connection. However, you will not be able to access some online features or services, such as leaderboards, achievements, cloud saving, multiplayer, or customer support.

      -

      Where can I find more games like Captain America Sentinel of Liberty APK?

      -

      If you enjoyed playing Captain America Sentinel of Liberty APK, you may also like other games that are similar in genre or theme. Some examples are:

• [Iron Man 3]: A game that lets you play as Iron Man and fly through various missions and challenges.
• [The Amazing Spider-Man]: A game that lets you play as Spider-Man and swing through New York City.
• [Thor: The Dark World]: A game that lets you play as Thor and fight against dark elves and other enemies.

      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Experience the Thrill of Shadow Knight Ninja Assassin MOD APK with Immortality and God Mode.md b/spaces/congsaPfin/Manga-OCR/logs/Experience the Thrill of Shadow Knight Ninja Assassin MOD APK with Immortality and God Mode.md deleted file mode 100644 index e2b5c57d9ff53f1ebb57ab11226f27ace062bcc1..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Experience the Thrill of Shadow Knight Ninja Assassin MOD APK with Immortality and God Mode.md +++ /dev/null @@ -1,93 +0,0 @@ -
      -

      Shadow Knight Ninja Assassin Mod APK: A Dark Fantasy RPG

      -

      If you are a fan of dark fantasy games, you might want to try Shadow Knight Ninja Assassin, a thrilling action RPG that will take you to a world of shadows and chaos. In this game, you will play as a shadow knight, a warrior who can use the power of darkness to fight against evil forces. You will explore various lands, face different enemies, and collect various weapons and skills to enhance your combat abilities. However, the game can be quite challenging and frustrating at times, especially if you run out of resources or die too often. That's why you might want to use the mod apk version of Shadow Knight Ninja Assassin, which will give you some advantages and make your gaming experience more enjoyable. In this article, we will tell you what is Shadow Knight Ninja Assassin, why use the mod apk version, what are its features, how to download and install it, and some FAQs.

      -

      Introduction

      -

      What is Shadow Knight Ninja Assassin?

      -

      Shadow Knight Ninja Assassin is a 2D side-scrolling action RPG developed by Fansipan Limited. The game has a dark and gloomy atmosphere, with stunning graphics and sound effects. The game's story revolves around a shadow knight who is trying to save his world from the invasion of dark forces. Along the way, he will encounter various enemies, such as zombies, skeletons, demons, and bosses. He will also find different weapons and skills that he can use to fight them. The game has several modes, such as story mode, adventure mode, arena mode, and boss mode. The game also has a ranking system that allows you to compete with other players around the world.

      -

      shadow knight ninja assassin mod apk


      Download File === https://urlca.com/2uOdCh



      -

      Why use the mod apk version?

      -

      The mod apk version of Shadow Knight Ninja Assassin is a modified version of the original game that gives you some extra features and benefits that are not available in the official version. For example, you can get immortality mode, unlimited gems and coins, unlock all weapons and skills, and remove ads. These features will make your gameplay easier and more fun. You can also enjoy the game without worrying about spending money or losing progress.

      -

      Features of Shadow Knight Ninja Assassin Mod APK

      -

      Immortality mode

      -

      One of the most amazing features of Shadow Knight Ninja Assassin Mod APK is immortality mode. This feature allows you to play the game without dying or losing health. You can survive any attack from any enemy, even from the powerful bosses. This way, you can complete the levels faster and easier.

      -

      Unlimited gems and coins

      -

      Gems and coins are the main currencies in Shadow Knight Ninja Assassin. You can use them to buy new weapons, upgrade your skills, revive yourself, or unlock new modes. However, gems and coins are not easy to obtain in the game. You have to complete missions, defeat enemies, or watch ads to get them. Sometimes, you might not have enough gems or coins to buy what you want or need. That's why Shadow Knight Ninja Assassin Mod APK gives you unlimited gems and coins. You can use them as much as you want without running out of them.

      -

      Unlock all weapons and skills

      -

      Another great feature of Shadow Knight Ninja Assassin Mod APK is unlocking all weapons and skills. In the game, there are many types of weapons and skills that you can use to fight your enemies. Each weapon has its own characteristics and abilities, such as damage, range, speed, or special effects. Each skill also has its own effects and cooldowns. However, not all weapons and skills are available at the beginning of the game. You have to unlock them by spending gems or coins, or by reaching certain levels. That's why Shadow Knight Ninja Assassin Mod APK unlocks all weapons and skills for you. You can access and use any weapon or skill you want without any restrictions.

      -

      No ads and no root required

      -

      The last but not least feature of Shadow Knight Ninja Assassin Mod APK is no ads and no root required. Ads are annoying and distracting, especially when they pop up in the middle of the game. They can also consume your data and battery. That's why Shadow Knight Ninja Assassin Mod APK removes all ads from the game. You can play the game without any interruptions or disturbances. Moreover, Shadow Knight Ninja Assassin Mod APK does not require root access to work. You can install and run it on any Android device without rooting it.

      -

      How to download and install Shadow Knight Ninja Assassin Mod APK

      -

      Now that you know the features of Shadow Knight Ninja Assassin Mod APK, you might be wondering how to download and install it on your device. Don't worry, it's very easy and simple. Just follow these steps:

      -

      Step 1: Enable unknown sources

      -

      Before you can install Shadow Knight Ninja Assassin Mod APK, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device's settings, then security, then unknown sources. Turn on the switch or check the box to enable it.

      -


      -

      Step 2: Download the mod apk file

      -

      Next, you need to download the mod apk file of Shadow Knight Ninja Assassin. You can find the link to download it at the end of this article. Click on the link and wait for the download to finish.
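Because mod APKs come from unofficial sources, it is worth comparing the downloaded file against a checksum published by the download site before installing it, when the site provides one. The Kotlin sketch below computes a SHA-256 digest for that comparison; the file path and the expected value are placeholders, not real data for this game.

```kotlin
import java.io.File
import java.security.MessageDigest

// Compute the SHA-256 digest of a downloaded APK so it can be compared
// against the checksum published by the download site.
fun sha256Of(file: File): String {
    val digest = MessageDigest.getInstance("SHA-256")
    file.inputStream().use { input ->
        val buffer = ByteArray(8192)
        while (true) {
            val read = input.read(buffer)
            if (read <= 0) break
            digest.update(buffer, 0, read)
        }
    }
    return digest.digest().joinToString("") { "%02x".format(it) }
}

fun main() {
    // Placeholder values for illustration only.
    val apk = File("/sdcard/Download/shadow-knight-mod.apk")
    val expected = "<checksum published by the download site>"
    println(if (sha256Of(apk) == expected) "Checksum matches" else "Checksum mismatch - do not install")
}
```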

      -

      Step 3: Install the mod apk file

      -

      Once the download is complete, locate the mod apk file in your device's storage. Tap on it and follow the instructions to install it. It might take a few seconds or minutes depending on your device's performance.

      -

      Step 4: Enjoy the game

      -

      After the installation is done, you can launch the game and enjoy its features. You will see that you have immortality mode, unlimited gems and coins, unlock all weapons and skills, and no ads. You can also choose your preferred language and adjust the sound and graphics settings.

      -

      Conclusion

      -

      Shadow Knight Ninja Assassin is a dark fantasy action RPG that will keep you entertained for hours. You can explore different lands, fight various enemies, collect different weapons and skills, and compete with other players. However, if you want to make your gameplay easier and more fun, you should use Shadow Knight Ninja Assassin Mod APK. This mod apk version will give you immortality mode, unlimited gems and coins, unlock all weapons and skills, and no ads. You can also download and install it easily without rooting your device. So what are you waiting for? Download Shadow Knight Ninja Assassin Mod APK now and enjoy a thrilling adventure in a world of shadows.

      -

      FAQs

      -

      Here are some frequently asked questions about Shadow Knight Ninja Assassin Mod APK:

      -
        -
      • Is Shadow Knight Ninja Assassin Mod APK safe to use?
      • -

        Yes, Shadow Knight Ninja Assassin Mod APK is safe to use. It does not contain any viruses or malware that can harm your device or data. It also does not require any permissions that can compromise your privacy or security.

        -
      • Is Shadow Knight Ninja Assassin Mod APK compatible with my device?
      • -

        Shadow Knight Ninja Assassin Mod APK is compatible with most Android devices that run on Android 5.0 or higher. However, some devices might not support some features or functions of the game due to hardware limitations or software issues.

        -
      • Can I play Shadow Knight Ninja Assassin Mod APK online?
      • -

        Yes, you can play Shadow Knight Ninja Assassin Mod APK online with other players around the world. However, you might encounter some problems or errors when connecting to the server or matching with other players due to network issues or mod compatibility issues.

        -
      • Can I update Shadow Knight Ninja Assassin Mod APK?
      • -

        No, you cannot update Shadow Knight Ninja Assassin Mod APK from the Google Play Store or any other source. If you do so, you will lose all the mod features and benefits that you have in the mod apk version. You will also have to uninstall and reinstall the mod apk version if you want to get them back.

        -
      • Where can I get more information about Shadow Knight Ninja Assassin Mod APK?
      • -

        If you have any questions or concerns about Shadow Knight Ninja Assassin Mod APK, you can contact us through our email address or visit our website. You can also check out the reviews and ratings of other users who have used Shadow Knight Ninja Assassin Mod APK.

        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Mortal Kombat (1995) Full Movie in Hindi - Watch Online or Download on Filmyzilla.md b/spaces/congsaPfin/Manga-OCR/logs/Mortal Kombat (1995) Full Movie in Hindi - Watch Online or Download on Filmyzilla.md deleted file mode 100644 index fed3d368fe61934a94961b2b91e6b321b74a9e83..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Mortal Kombat (1995) Full Movie in Hindi - Watch Online or Download on Filmyzilla.md +++ /dev/null @@ -1,103 +0,0 @@ - -

        Mortal Kombat (1995): A Classic Action Movie Based on a Popular Video Game

        -

        If you are a fan of martial arts, fantasy, or video games, chances are you have heard of Mortal Kombat. It is one of the most successful and influential franchises in entertainment history, spanning over three decades and multiple media platforms. But did you know that it all started with a movie? Yes, before there were dozens of games, comics, toys, cartoons, web series, and even a reboot film in 2021, there was Mortal Kombat (1995), a live-action adaptation of the original arcade game that was released in 1992.

        -

        Mortal Kombat (1995) is a cult classic among fans of action movies and video games alike. It is widely regarded as one of the best video game movies ever made, as well as one of the most faithful adaptations of a video game to film. It features an exciting plot, memorable characters, spectacular fight scenes, stunning visual effects, catchy music, and iconic catchphrases that have become part of pop culture lore.

        -

        mortal kombat (1995 full movie in hindi download filmyzilla)


        Download Ziphttps://urlca.com/2uO9P0



        -

        In this article, we will explore everything you need to know about Mortal Kombat (1995), from its plot and characters to its production and reception. We will also tell you how you can watch this movie online for free in Hindi if you are interested in experiencing this classic action movie in a different language

        The Plot and Characters of Mortal Kombat (1995)

        -

        Mortal Kombat (1995) is based on the first two games of the Mortal Kombat series, which are set in a fictional universe where different realms are in conflict with each other. The main premise of the movie is that the evil sorcerer Shang Tsung, who serves the Emperor of Outworld, has organized a tournament called Mortal Kombat, where he invites the best fighters from Earthrealm to compete against his warriors. If Shang Tsung's team wins ten consecutive tournaments, Outworld will be able to invade and conquer Earthrealm. However, if Earthrealm's champions can defeat Shang Tsung and his minions, they will prevent this fate and save their world.

        -

        The movie follows the journey of three Earthrealm fighters who are chosen by the thunder god Raiden to represent their realm in the tournament. They are Liu Kang, a Shaolin monk who seeks to avenge his brother's death at the hands of Shang Tsung; Johnny Cage, a Hollywood actor who wants to prove his martial arts skills are real; and Sonya Blade, a special forces officer who is after the criminal Kano, who works for Shang Tsung. Along the way, they encounter allies and enemies from both realms, such as Princess Kitana, the adopted daughter of the Emperor who secretly helps them; Goro, a four-armed monster who is the reigning champion of Mortal Kombat; and Scorpion and Sub-Zero, two deadly ninjas with supernatural powers.

        -

        The movie is divided into three acts: the first act introduces the main characters and their motivations, as well as the rules and stakes of Mortal Kombat; the second act depicts the various fights and challenges that take place on Shang Tsung's island, where the tournament is held; and the third act culminates in the final showdown between Liu Kang and Shang Tsung, as well as the revelation of the Emperor's plan to invade Earthrealm regardless of the outcome of Mortal Kombat.

        -


        The Production and Reception of Mortal Kombat (1995)

        -

        Mortal Kombat (1995) was directed by Paul W.S. Anderson, who later became known for his work on the Resident Evil and Monster Hunter franchises. The screenplay was written by Kevin Droney, who also wrote for TV shows such as The Highlander and Witchblade. The movie was produced by Lawrence Kasanoff, who also created the Mortal Kombat: Defenders of the Realm animated series and the Mortal Kombat: Annihilation sequel.

        -

        The movie had a budget of $18 million and was filmed in various locations, such as Los Angeles, Thailand, and England. The movie featured a diverse and talented cast of actors and actresses, such as Christopher Lambert as Raiden, Robin Shou as Liu Kang, Linden Ashby as Johnny Cage, Bridgette Wilson as Sonya Blade, Cary-Hiroyuki Tagawa as Shang Tsung, Talisa Soto as Kitana, Trevor Goddard as Kano, Chris Casamassa as Scorpion, François Petit as Sub-Zero, Keith Cooke as Reptile, and Tom Woodruff Jr. as Goro. The movie also employed several stuntmen, choreographers, composers, and other crew members who contributed to the movie's action, music, and visual effects.

        -

The movie was released on August 18, 1995 in the United States and on September 15, 1995 in India. The movie was a commercial success, grossing over $122 million worldwide against its $18 million budget. Critical reviews were mixed, but audiences embraced the film, as reflected in its 44% rating on Rotten Tomatoes and 5.8/10 score on IMDb. The movie won several awards and nominations, such as the BMI Film Music Award for George S. Clinton's score, the Saturn Award for Best Make-up for Goro's animatronic suit, and the MTV Movie Award for Best Fight for Johnny Cage vs. Scorpion.

        The Legacy and Influence of Mortal Kombat (1995)

        -

        Mortal Kombat (1995) is not only a great movie in its own right, but also a landmark in the history of video game adaptations and action movies. The movie has spawned several sequels and spin-offs, such as Mortal Kombat: Annihilation (1997), Mortal Kombat: Conquest (1998-1999), Mortal Kombat: The Journey Begins (1995), Mortal Kombat: Defenders of the Realm (1996), Mortal Kombat: Legacy (2011-2013), and Mortal Kombat Legends: Scorpion's Revenge (2020). The movie has also inspired many video games and merchandise, such as Mortal Kombat Trilogy (1996), Mortal Kombat 4 (1997), Mortal Kombat Mythologies: Sub-Zero (1997), Mortal Kombat Gold (1999), Mortal Kombat: Deadly Alliance (2002), Mortal Kombat: Deception (2004), Mortal Kombat: Shaolin Monks (2005), Mortal Kombat: Armageddon (2006), Mortal Kombat vs. DC Universe (2008), Mortal Kombat (2011), Mortal Kombat X (2015), Mortal Kombat 11 (2019), and many others. The movie has also been referenced and homaged in many other media, such as The Simpsons, Family Guy, Robot Chicken, South Park, Wreck-It Ralph, Ready Player One, Deadpool 2, and many others.

        -

        Mortal Kombat (1995) has also had a significant impact on popular culture and fandom. The movie has introduced millions of people to the world of Mortal Kombat and its characters, as well as to the genre of martial arts and fantasy movies. The movie has also created a loyal fan base that has followed the franchise through its ups and downs, and has celebrated its achievements and milestones. The movie has also influenced many other filmmakers and creators who have drawn inspiration from its style, tone, and themes. The movie has also become a part of the collective memory of many fans who grew up watching it or discovered it later in life.

        -

        How to Watch Mortal Kombat (1995) Online for Free in Hindi

        -

        If you are interested in watching Mortal Kombat (1995) online for free in Hindi, you have two options: legal or illegal. However, we strongly recommend that you choose the legal option, as it is safer, more ethical, and more respectful to the creators and owners of the movie. Here are some of the legal options that you can use to watch or stream the movie legally in Hindi with subtitles or dubbing:

• Amazon Prime Video: You can watch or stream the movie on Amazon Prime Video with Hindi subtitles or dubbing if you have a Prime membership or a free trial. You can also rent or buy the movie on Amazon Prime Video if you don't have a Prime membership.
• YouTube: You can watch or stream the movie on YouTube with Hindi subtitles or dubbing if you pay a small fee. You can also rent or buy the movie on YouTube if you want to watch it offline.
• Google Play Movies & TV: You can watch or stream the movie on Google Play Movies & TV with Hindi subtitles or dubbing if you pay a small fee. You can also rent or buy the movie on Google Play Movies & TV if you want to watch it offline.
• iTunes: You can watch or stream the movie on iTunes with Hindi subtitles or dubbing if you pay a small fee. You can also rent or buy the movie on iTunes if you want to watch it offline.

        On the other hand, if you choose the illegal option, you will be risking your safety, privacy, and legality by using websites and apps that offer pirated copies of the movie for free download or streaming in Hindi, such as Filmyzilla. Filmyzilla is one of the most notorious websites that provides illegal downloads and streams of movies and TV shows in various languages, including Hindi. However, using Filmyzilla or similar websites is not only illegal, but also dangerous. Here are some of the risks and consequences of using illegal sources to watch or download movies:

• Malware and viruses: Many illegal websites and apps contain malware and viruses that can infect your device and compromise your data and security. These malware and viruses can steal your personal information, damage your files, slow down your device, or even take control of your device.
• Legal troubles: Many illegal websites and apps violate the intellectual property rights of the creators and owners of the movies and TV shows that they offer. By using these websites and apps, you are also violating these rights and breaking the law. This can result in legal troubles, such as fines, lawsuits, or even jail time. You can also face legal action from your internet service provider, who can track your online activity and report it to the authorities.
• Poor quality and experience: Many illegal websites and apps offer low-quality downloads and streams of movies and TV shows, which can ruin your viewing experience. These downloads and streams can have poor resolution, audio, subtitles, or dubbing, as well as glitches, errors, or interruptions. You can also miss out on the bonus features, extras, and updates that are available on the official platforms and websites.

        Therefore, we strongly advise you to avoid using illegal sources to watch or download movies, such as Filmyzilla, and instead use the legal options that we have listed above. Not only will you be supporting the creators and owners of the movies and TV shows that you enjoy, but you will also be protecting yourself from harm and trouble.

        -

        Conclusion

        -

        Mortal Kombat (1995) is a classic action movie that is based on a popular video game of the same name. It tells the story of three Earthrealm fighters who participate in a tournament to save their world from the evil forces of Outworld. The movie features an exciting plot, memorable characters, spectacular fight scenes, stunning visual effects, catchy music, and iconic catchphrases. The movie is also a landmark in the history of video game adaptations and action movies, as it has spawned several sequels and spin-offs, inspired many video games and merchandise, influenced many other filmmakers and creators, and impacted popular culture and fandom.

        -

        If you are interested in watching Mortal Kombat (1995) online for free in Hindi, you have two options: legal or illegal. However, we strongly recommend that you choose the legal option, as it is safer, more ethical, and more respectful to the creators and owners of the movie. You can watch or stream the movie legally in Hindi with subtitles or dubbing on platforms and websites such as Amazon Prime Video, YouTube, Google Play Movies & TV, or iTunes. You should avoid using illegal sources to watch or download the movie, such as Filmyzilla, as they pose many risks and consequences for you and your device.

        -

        In conclusion, Mortal Kombat (1995) is a great movie that you should watch if you are a fan of martial arts, fantasy, or video games. It is one of the best video game movies ever made, as well as one of the most faithful adaptations of a video game to film. It is also a cult classic that has a loyal fan base and a lasting legacy. You can watch it online for free in Hindi if you want to experience it in a different language

        Now that you have read this article, you might have some questions about Mortal Kombat (1995) or the topic of watching movies online for free in Hindi. Here are some of the frequently asked questions (FAQs) that we have answered for you:

        -

        FAQs

        -
          -
        1. Is Mortal Kombat (1995) suitable for children?
        2. -

          Mortal Kombat (1995) is rated PG-13 in the United States and 15 in India, which means that it contains some violence, blood, gore, and mild language that may not be appropriate for younger viewers. The movie is based on a video game that is known for its graphic and brutal fatalities, which are toned down but still present in the movie. Therefore, we advise you to use your discretion and parental guidance when watching this movie with children.

          -
        3. What are the differences between the original version and the Hindi version of Mortal Kombat (1995)?
        4. -

          The original version of Mortal Kombat (1995) is in English, while the Hindi version is either dubbed or subtitled in Hindi. The Hindi version may also have some minor changes or edits in the dialogue, scenes, or music to suit the preferences and sensibilities of the Hindi-speaking audience. However, the overall plot, characters, and themes of the movie remain the same in both versions.

          -
        5. How can I watch Mortal Kombat (1995) online for free in Hindi without any ads or interruptions?
        6. -

          The best way to watch Mortal Kombat (1995) online for free in Hindi without any ads or interruptions is to use a legal platform or website that offers a free trial or a subscription service. For example, you can watch or stream the movie on Amazon Prime Video with a 30-day free trial or a Prime membership, which also gives you access to many other movies and TV shows. You can also watch or stream the movie on YouTube, Google Play Movies & TV, or iTunes with a small fee, which also allows you to watch it offline. These platforms and websites provide high-quality downloads and streams of the movie with no ads or interruptions.

          -
        7. What are some of the other movies that are similar to Mortal Kombat (1995)?
        8. -

          If you enjoyed Mortal Kombat (1995), you might also like some of these other movies that are similar to it in terms of genre, style, or theme:

          -
            -
          • Mortal Kombat: Annihilation (1997): The sequel to Mortal Kombat (1995), which continues the story of Liu Kang and his allies as they face a new threat from Outworld.
          • -
          • Mortal Kombat (2021): The reboot of Mortal Kombat (1995), which retells the origin story of Liu Kang and his allies as they participate in the tournament for the first time.
          • -
          • Street Fighter (1994): Another movie based on a popular video game, which follows the adventures of Colonel Guile and his team as they fight against the evil dictator M. Bison.
          • -
          • Enter the Dragon (1973): A classic martial arts movie starring Bruce Lee, which revolves around a martial arts tournament held on a mysterious island owned by a crime lord.
          • -
          • The Matrix (1999): A sci-fi action movie starring Keanu Reeves, which explores the concept of a simulated reality where humans are enslaved by machines.
          • -
          -
        9. Where can I find more information about Mortal Kombat (1995) or the topic of watching movies online for free in Hindi?
        10. -

          If you want to find more information about Mortal Kombat (1995) or the topic of watching movies online for free in Hindi, you can use some of these sources:

          -
            -
          • The official website of Mortal Kombat: https://www.mortalkombat.com/
          • -
          • The IMDb page of Mortal Kombat (1995): https://www.imdb.com/title/tt0113855/
          • -
          • The Wikipedia page of Mortal Kombat (1995): https://en.wikipedia.org/wiki/Mortal_Kombat_(1995_film)
          • -
          • The Rotten Tomatoes page of Mortal Kombat (1995): https://www.rottentomatoes.com/m/mortal_kombat
          • -
          • The YouTube channel of Mortal Kombat: https://www.youtube.com/channel/UCB9_VH_CNbbH4GfKu8qh63w
          • -
          • The Google search results for "mortal kombat 1995 full movie in hindi download filmyzilla": https://www.google.com/search?q=mortal+kombat+1995+full+movie+in+hindi+download+filmyzilla
          • -
          -

          We hope that this article has helped you learn more about Mortal Kombat ( 1995) and how to watch it online for free in Hindi. We hope that you have enjoyed reading this article as much as we have enjoyed writing it. We also hope that you will watch this movie and appreciate its quality and legacy. Thank you for your time and attention.

          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/NBA 2K20 APK - A New Soundtrack and Improved Abilities in Mobile Game.md b/spaces/congsaPfin/Manga-OCR/logs/NBA 2K20 APK - A New Soundtrack and Improved Abilities in Mobile Game.md deleted file mode 100644 index 36af68ff13460e40beaeccd21fd69b52aefc250f..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/NBA 2K20 APK - A New Soundtrack and Improved Abilities in Mobile Game.md +++ /dev/null @@ -1,141 +0,0 @@ -
          -

          NBA 2K20 APK 1GB: How to Download and Play the Best Basketball Game on Your Android Device

          -

          If you are a fan of basketball and want to enjoy playing it on your mobile device, then you should definitely check out NBA 2K20 APK 1GB. This is a compressed version of the original NBA 2K20 game for Android devices, which has all the features and modes of the original game, but with a smaller file size. In this article, we will tell you what NBA 2K20 APK 1GB is, why you should download it, how to download and install it on your Android device, and some tips and tricks for playing it.

          -

          What is NBA 2K20 APK 1GB?

          -

          NBA 2K20 APK 1GB is a compressed version of the original NBA 2K20 game for Android devices. NBA 2K20 is one of the most popular and realistic basketball games in the market, developed by Visual Concepts and published by 2K Sports. It features various game modes, such as MyCAREER, Run The Streets, Blacktop, Online Association, Quick Play, Season Mode, Playoffs Mode, and more. It also lets you play with your favorite NBA players and teams, as well as legends and rookies. You can customize your player's appearance, skills, attributes, equipment, and style. You can also create your own team and league, or join an existing one.

          -

          nba 2k20 apk 1gb


          Download Zip ->>> https://urlca.com/2uOcGM



          -

NBA 2K20 APK 1GB has all these features and modes of the original game, but with a smaller file size. The original game requires about 4 GB of storage space on your device, while the compressed version only requires about 1 GB. This means that you can save more space on your device and download the game faster. However, this does not compromise the quality or performance of the game. You can still enjoy playing NBA 2K20 with high-quality graphics, sound effects, animations, commentary, and gameplay.

Why Should You Download NBA 2K20 APK 1GB?

          NBA 2K20 APK 1GB offers a realistic and immersive basketball experience on your mobile device. You can feel the thrill and excitement of playing in the NBA, or create your own basketball story in MyCAREER mode. You can also explore the street basketball culture in Run The Streets mode, where you can compete in 3v3 tournaments, earn rewards, and rise up the ranks. You can also play with your friends or other players online in various multiplayer modes, such as Online Association, where you can join or create your own league and compete for the championship.

          -

          NBA 2K20 APK 1GB lets you play with your favorite NBA players and teams, as well as legends and rookies. You can choose from over 100 NBA teams, including the current ones and the classic ones. You can also play with over 450 NBA players, including the current stars and the all-time greats. You can also discover and play with new talents, such as Zion Williamson, Ja Morant, RJ Barrett, and more. You can also customize your players and teams with various options, such as jerseys, shoes, accessories, logos, courts, and more.

          -

          NBA 2K20 APK 1GB has various game modes to suit your preferences and skills. You can play a quick game in Quick Play mode, where you can choose any two teams and play a single match. You can also play a full season in Season Mode, where you can follow the real NBA schedule and standings. You can also play a playoff series in Playoffs Mode, where you can choose any eight teams and compete for the title. You can also play a single-player campaign in MyCAREER mode, where you can create your own player and follow his journey from college to the NBA. You can also play a street basketball mode in Run The Streets mode, where you can create your own character and compete in 3v3 tournaments around the world. You can also play a casual basketball mode in Blacktop mode, where you can choose any players and play a match on any court.

          -


          -

          How to Download and Install NBA 2K20 APK 1GB on Your Android Device?

          -

          Downloading and installing NBA 2K20 APK 1GB on your Android device is easy and simple. Just follow these steps:

          -

          Step 1: Download the NBA 2K20 APK 1GB file from a trusted source, such as [APKCombo] or [APKPure]

          -

          You can find the NBA 2K20 APK 1GB file on various websites that offer APK files for Android devices. However, not all of them are safe and reliable. Some of them may contain viruses or malware that can harm your device or steal your data. Therefore, you should only download the file from a trusted source, such as [APKCombo] or [APKPure]. These websites are known for providing safe and verified APK files for various Android apps and games.

          -

          Step 2: Enable the installation of apps from unknown sources on your device settings

          -

          Before you can install the NBA 2K20 APK 1GB file on your device, you need to enable the installation of apps from unknown sources on your device settings. This is because the file is not from the official Google Play Store, which is the default source of apps for Android devices. To enable this option, go to your device settings, then go to security or privacy settings, then look for the option that says "allow installation of apps from unknown sources" or something similar. Turn on this option and confirm your choice.

          -

          Step 3: Locate the downloaded file and tap on it to start the installation process

          -

          After you have downloaded the NBA 2K20 APK 1GB file and enabled the installation of apps from unknown sources on your device settings, you can now locate the downloaded file and tap on it to start the installation process. You can find the file in your device's download folder or in any other folder where you have saved it. Once you have found it, tap on it and wait for a few seconds until a pop-up window appears.
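Before tapping the file, it can be reassuring to confirm that the download really is a readable APK rather than a renamed or corrupted file. The Kotlin sketch below is an illustrative way to ask Android to parse the archive and report its package name and version; the helper is not part of the game and the path you pass in is up to you.

```kotlin
import android.content.Context

// Illustrative check: ask Android to parse a downloaded file as an APK.
// Returns a readable summary, or null if the file is not a valid package.
fun describeApk(context: Context, apkPath: String): String? {
    val info = context.packageManager.getPackageArchiveInfo(apkPath, 0) ?: return null
    return "${info.packageName} (version ${info.versionName})"
}
```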

          -

          Step 4: Follow the instructions on the screen and wait for the installation to complete

          -

          When the pop-up window appears, follow the instructions on the screen and wait for the installation to complete. The installation process may take a few minutes depending on your device's speed and performance. During this time, do not turn off your device or interrupt the process. Once the installation is complete, you will see a confirmation message on the screen.

          -

          Step 5: Launch the game

          Step 5: Launch the game and enjoy playing NBA 2K20 on your Android device

          -

          After the installation is complete, you can now launch the game and enjoy playing NBA 2K20 on your Android device. You can find the game icon on your device's home screen or app drawer. Tap on it and wait for the game to load. You may need to grant some permissions or accept some terms and conditions before you can start playing. Once you are in the game, you can choose your preferred language, adjust your settings, and select your game mode. You can also sign in with your Google Play Games account or your 2K account to sync your progress and access online features.

          -

          Tips and Tricks for Playing NBA 2K20 APK 1GB

          -

          NBA 2K20 APK 1GB is a fun and challenging game that requires skill and strategy to master. Here are some tips and tricks that can help you improve your performance and enjoy the game more:

          -

          Customize your controls and settings to suit your preferences and device specifications

          -

          NBA 2K20 APK 1GB offers various options for customizing your controls and settings to suit your preferences and device specifications. You can access these options by tapping on the menu icon on the top left corner of the screen, then tapping on settings. You can adjust the sound volume, the graphics quality, the camera angle, the controller layout, the controller sensitivity, the vibration feedback, and more. You can also enable or disable some features, such as auto-sprint, auto-play, subtitles, tutorials, and more. You can experiment with different combinations of settings until you find the ones that work best for you.

          -

          Choose the right difficulty level and game mode for your skill level and goals

          -

          NBA 2K20 APK 1GB offers different difficulty levels and game modes for different skill levels and goals. You can choose from five difficulty levels: Rookie, Pro, All-Star, Superstar, and Hall of Fame. The higher the difficulty level, the harder the opponents, the stricter the rules, and the lower the rewards. You can also choose from various game modes, such as MyCAREER, Run The Streets, Blacktop, Online Association, Quick Play, Season Mode, Playoffs Mode, and more. Each game mode has its own objectives, rules, rewards, and challenges. You can choose the difficulty level and game mode that match your skill level and goals.

          -

          Learn the basic moves and strategies for offense and defense, such as dribbling, passing, shooting, blocking, and stealing

          -

          NBA 2K20 APK 1GB requires you to learn the basic moves and strategies for offense and defense, such as dribbling, passing, shooting, blocking, and stealing. You can learn these moves by following the tutorials in the game or by practicing in the training mode. You can also learn from watching other players or from reading online guides and tips. Some of the basic moves are:

| Move | Control | Description |
| --- | --- | --- |
| Dribble | Swipe left or right on the left side of the screen | Move your player with the ball in different directions |
| Pass | Tap on a teammate's icon on the right side of the screen | Pass the ball to a teammate |
| Shoot | Swipe up on the right side of the screen | Attempt a shot at the basket |
| Block | Swipe down on the right side of the screen when near an opponent with the ball | Attempt to block an opponent's shot or pass |
| Steal | Tap on an opponent's icon on the right side of the screen when near them | Attempt to steal the ball from an opponent |
          -

          These are just some of the basic moves that you can use in the game. You can also perform more advanced moves, such as crossover, spin, fadeaway, alley-oop, and more. You can also use different strategies, such as pick and roll, isolation, zone defense, and more. You can learn more about these moves and strategies by reading the game manual or by searching online.

          -

          Practice your skills and improve your performance in various challenges and events

          -

          NBA 2K20 APK 1GB offers various challenges and events that can help you practice your skills and improve your performance. You can access these challenges and events by tapping on the menu icon on the top left corner of the screen, then tapping on challenges or events. You can find different types of challenges and events, such as daily challenges, weekly challenges, seasonal challenges, special events, and more. These challenges and events have different objectives, rewards, and difficulties. You can complete them to earn coins, VC, badges, cards, items, and more. You can also use them to test your skills and learn new techniques.

          -

          Connect with other players online and compete in multiplayer modes, such as Run The Streets and Online Association

          -

          NBA 2K20 APK 1GB allows you to connect with other players online and compete in multiplayer modes, such as Run The Streets and Online Association. You can access these modes by tapping on the menu icon on the top left corner of the screen, then tapping on online. You can then choose the mode that you want to play. In Run The Streets mode, you can create your own character and compete in 3v3 tournaments around the world. You can also join a crew or create your own crew and play with your friends or other players. In Online Association mode, you can join or create your own league and compete for the championship. You can also trade players, draft rookies, sign free agents, and manage your team.

          -

          Conclusion

          -

NBA 2K20 APK 1GB is a compressed version of the original NBA 2K20 game for Android devices: it has all the features and modes of the original game, but with a smaller file size. It offers a realistic and immersive basketball experience on your mobile device and lets you play with your favorite NBA players and teams, as well as legends and rookies. It has various game modes to suit your preferences and skills, such as MyCAREER, Run The Streets, Blacktop, Online Association, Quick Play, Season Mode, Playoffs Mode, and more. It also lets you customize your players and teams with various options, and connect with other players online to compete in multiplayer modes.

          -

          If you want to download and play NBA 2K20 APK 1GB on your Android device, you just need to follow these steps:

          -
            -
1. Download the NBA 2K20 APK 1GB file from a trusted source
2. Enable the installation of apps from unknown sources on your device settings
3. Locate the downloaded file and tap on it to start the installation process
4. Follow the instructions on the screen and wait for the installation to complete
5. Launch the game and enjoy playing NBA 2K20 on your Android device
          -

          We hope that this article has helped you learn more about NBA 2K20 APK 1GB and how to download and play it on your Android device. If you have any questions or feedback, please feel free to leave a comment below.

          -

          FAQs

          -

          Q: Is NBA 2K20 APK 1GB safe to download?

          -

          A: Yes, NBA 2K20 APK 1GB is safe to download if you download it from a trusted source. However, you should always be careful when downloading any file from unknown sources. You should always scan the file for viruses or malware before installing it on your device.

          -

          Q: Is NBA 2K20 APK 1GB compatible with my device?

          -

          A: NBA 2K20 APK 1GB is compatible with most Android devices that have at least 4 GB of RAM and Android 4.3 or higher. However, some devices may not support some features or modes of the game due to their specifications or limitations.
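If you want to check these numbers on your own device rather than guessing, you can query it over adb from a computer. The sketch below is a rough illustration: it assumes adb and USB debugging are set up with a device connected, and the thresholds it checks (Android 4.3 and about 4 GB of RAM) are simply the figures quoted in this answer, not an official requirement list from 2K.

```python
# Rough sketch: read the Android version and total RAM from a connected
# device over adb and compare them with the figures quoted in this article.
import subprocess

def adb_shell(cmd: str) -> str:
    out = subprocess.run(["adb", "shell", cmd], capture_output=True, text=True)
    return out.stdout.strip()

def check_device() -> None:
    android_version = adb_shell("getprop ro.build.version.release")
    meminfo = adb_shell("cat /proc/meminfo")
    # MemTotal is reported in kB on the first line of /proc/meminfo.
    mem_kb = int(meminfo.splitlines()[0].split()[1])
    mem_gb = mem_kb / (1024 * 1024)
    print(f"Android version: {android_version}, RAM: {mem_gb:.1f} GB")
    parts = android_version.split(".")
    major = int(parts[0]) if parts and parts[0].isdigit() else 0
    minor = int(parts[1]) if len(parts) > 1 and parts[1].isdigit() else 0
    if (major, minor) < (4, 3) or mem_gb < 4:
        print("Warning: this device may not meet the requirements quoted above.")

if __name__ == "__main__":
    check_device()
```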

          -

          Q: How much storage space does NBA 2K20 APK 1GB require?

          -

          A: NBA 2K20 APK 1GB requires about 1 GB of storage space on your device, while the original game requires about 4 GB. This means that you can save more space on your device and download the game faster by using the compressed version.

          -

          Q: How can I update NBA 2K20 APK 1GB?

          -

          A: NBA 2K20 APK 1GB is updated regularly to fix bugs, improve performance, and add new features and content. You can update the game by downloading the latest version of the APK file from the same source that you downloaded it from. You can also check for updates in the game settings or on the official website of the game.

          -

          Q: How can I contact the developers of NBA 2K20 APK 1GB?

          -

          A: If you have any questions, feedback, or issues regarding NBA 2K20 APK 1GB, you can contact the developers of the game by visiting their official website, [www.2k.com], or by following their social media accounts, such as [Facebook], [Twitter], [Instagram], and [YouTube]. You can also send them an email at [support@2k.com] or use the in-game support option.

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Soul Knight 2.7.2 APK The Ultimate Guide to Install and Play.md b/spaces/congsaPfin/Manga-OCR/logs/Soul Knight 2.7.2 APK The Ultimate Guide to Install and Play.md deleted file mode 100644 index 7d1d4d93ffead227359399c8ad7c91bcdd76ce7d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Soul Knight 2.7.2 APK The Ultimate Guide to Install and Play.md +++ /dev/null @@ -1,137 +0,0 @@ -
          -

          Soul Knight 2.7 2 APK: A Review of the Fast-Paced Dungeon Crawler Game

          -

          If you are a fan of roguelike games, you might have heard of Soul Knight, a pixelated dungeon crawler game that has over 50 million installs on Android and iOS devices. Soul Knight is inspired by the game Enter The Gungeon, a bullet-hell rogue-lite game for PC. In Soul Knight, you play as one of the 20+ unique heroes who have to retrieve the magical stone that was stolen by aliens and restore the balance of the world.

          -

          soul knight 2.7 2 apk


          Downloadhttps://urlca.com/2uO7LL



          -

          Soul Knight is a fun, exciting, and challenging game that features smooth animation, well-balanced gameplay, a huge collection of in-game items, and a diverse roster of characters. In this article, we will review the features of Soul Knight 2.7 2 apk, the latest version of the game that was released on June 6th, 2023. We will also show you how to download Soul Knight 2.7 2 apk from APKMirror, a trusted website that provides free and safe Android APK downloads.

          -

          Unique heroes with different abilities and playstyles

          -

          One of the main attractions of Soul Knight is the variety of heroes that you can choose from. Each hero has a different ability and playstyle that suits your preference. For example, you can play as a rogue who can dual wield weapons, an elf archer who can summon animals, or a magician who can cast spells. Each hero also has different stats such as health, armor, energy, critical chance, and melee damage.

          -

          You can unlock most of the heroes by using your earned in-game currencies such as gems or vouchers. Some heroes require an in-app purchase to unlock, but they are not necessary to enjoy the game. You can also customize your hero's appearance by changing their skin or outfit.

          -

          Hundreds of weapons and randomly generated dungeons

          -

          Another feature that makes Soul Knight addictive is the huge arsenal of weapons that you can find and use in the game. Soul Knight boasts a collection of over 400 weapons, ranging from guns, swords, shovels, staffs, bows, lasers, and more. Each weapon has its own characteristics such as damage, fire rate, energy consumption, bullet spread, special effects, etc. You can also craft your own weapons by using materials that you collect in the game.

          -

          The weapons are not the only thing that varies in Soul Knight. The dungeons that you explore are also randomly generated every time you play. You will encounter different enemies, traps, chests, statues, NPCs, bosses, and biomes in each run. The dungeons are divided into five levels with three rooms each. The difficulty increases as you progress further into the game.

          -

          Auto-aim mechanism and controller support

          -

          Soul Knight is designed to be easy and intuitive to control on mobile devices. The game employs an energy-based firing system wherein weapons consume your energy instead of bullets. To make it easier for you to aim and shoot at enemies, the game also has an auto-aim mechanism that automatically targets the nearest enemy within your range.

          -

          If you prefer to use a controller instead of touch screen controls, Soul Knight also supports controllers for both Android and iOS devices. You can connect your controller via Bluetooth or USB and enjoy a more comfortable gaming experience.

          -

          soul knight 2.7 2 mod apk
          -soul knight 2.7 2 apk download
          -soul knight 2.7 2 unlimited gems apk
          -soul knight 2.7 2 hack apk
          -soul knight 2.7 2 apk pure
          -soul knight 2.7 2 latest version apk
          -soul knight 2.7 2 apk mirror
          -soul knight 2.7 2 apk android
          -soul knight 2.7 2 apk obb
          -soul knight 2.7 2 apk revdl
          -soul knight 2.7 2 apk rexdl
          -soul knight 2.7 2 apk uptodown
          -soul knight 2.7 2 apk mod menu
          -soul knight 2.7 2 apk free download
          -soul knight 2.7 2 apk offline
          -soul knight 2.7 2 apk no ads
          -soul knight 2.7 2 apk full version
          -soul knight 2.7 2 apk data
          -soul knight 2.7 2 apk for pc
          -soul knight 2.7 2 apk mod money
          -soul knight 2.7 2 apk mod unlocked
          -soul knight 2.7 2 apk mod all characters
          -soul knight 2.7 2 apk mod god mode
          -soul knight 2.7 2 apk mod unlimited energy
          -soul knight 2.7 2 apk mod unlimited seeds
          -soul knight 2.7 2 apk mod unlimited plants
          -soul knight 2.7 2 apk mod unlimited eggs
          -soul knight 2.7 2 apk mod unlimited pets
          -soul knight 2.7 2 apk mod unlimited skins
          -soul knight 2.7 2 apk mod unlimited weapons
          -soul knight 2.7 2 apk mod premium
          -soul knight 2.7 2 apk mod vip
          -soul knight 2.7 2 apk mod mega
          -soul knight 2.7 2 apk mod happy mod
          -soul knight 2.7 2 apk mod platinmods
          -soul knight 2.7 2 apk mod an1
          -soul knight game guardian script v1_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0_0.apk (soul_knight_game_guardian_script_v1.apk)
          -download game guardian script for soul_knight_game_guardian_script_v1.apk (soul_knight_game_guardian_script_v1.apk)
          -how to install game guardian script for soul_knight_game_guardian_script_v1.apk (soul_knight_game_guardian_script_v1.apk)
          -how to use game guardian script for soul_knight_game_guardian_script_v1.apk (soul_knight_game_guardian_script_v1.apk)

          -

          Multiplayer mode and various game modes

          -

          Soul Knight is not only a solo adventure game. You can also team up with your friends or other players around the world for an online co-op adventure or an offline multiplayer LAN game. You can join up to three other players and work together to clear the dungeons and defeat the bosses. You can also chat with your teammates and share items with them.

          -

          Besides the normal mode, Soul Knight also offers various game modes that add more fun and challenge to the game. You can try the boss rush mode, where you have to fight all the bosses in a row, the origin mode, where you have to survive in a harsh environment with limited resources, or the badass mode, where everything is harder and more intense. You can also play the seasonal events, such as the Halloween or Christmas events, that offer special rewards and surprises.

          -

          How to download Soul Knight 2.7 2 apk from APKMirror

          -

          If you want to play Soul Knight 2.7 2 apk, the latest version of the game that has new features and bug fixes, you can download it from APKMirror, a reliable website that provides free and safe Android APK downloads. Here are the steps to download Soul Knight 2.7 2 apk from APKMirror:

| Step | Instruction |
| --- | --- |
| 1 | Go to APKMirror.com and search for "Soul Knight" in the search bar. |
| 2 | Find the Soul Knight 2.7 2 apk file from the list of results and click on it. |
| 3 | Scroll down to the bottom of the page and click on the "Download APK" button. |
| 4 | Wait for the download to finish and then open the file. |
| 5 | Allow the installation of apps from unknown sources if prompted by your device. |
| 6 | Follow the instructions on the screen to install Soul Knight 2.7 2 apk on your device. |
| 7 | Enjoy playing Soul Knight 2.7 2 apk! |
          -

Conclusion

          -

          Soul Knight is a fast-paced dungeon crawler game that offers a lot of fun and excitement for roguelike fans. You can play as one of the many unique heroes, explore randomly generated dungeons, collect hundreds of weapons, team up with other players, and try different game modes. Soul Knight 2.7 2 apk is the latest version of the game that has new features and bug fixes. You can download Soul Knight 2.7 2 apk from APKMirror, a trusted website that provides free and safe Android APK downloads.

          -

          If you are looking for a game that will keep you entertained for hours, Soul Knight is a great choice. Download Soul Knight 2.7 2 apk now and enjoy the thrilling adventure!

          -

          FAQs: Five common questions and answers about Soul Knight 2.7 2 apk

          -

          Q: What are the new features of Soul Knight 2.7 2 apk?

          -

          A: According to the official changelog, Soul Knight 2.7 2 apk has the following new features:

          -
            -
• New hero: Engineer (available for purchase)
• New weapons: Laser Cannon, Laser Rifle, Laser Shotgun, Laser Sniper Rifle, Laser Sword, etc.
• New skins: Engineer Skin - Mechanic, Rogue Skin - Ninja, Wizard Skin - Witch Doctor, etc.
• New pets: Laser Cat, Laser Dog, Laser Bird, etc.
• New buffs: Laser Damage Up, Laser Energy Cost Down, Laser Crit Chance Up, etc.
• New enemies: Laser Robot, Laser Drone, Laser Turret, etc.
• New boss: Laser King (appears in level 5)
• New biome: Laser Lab (appears in level 4)
• New NPC: Dr. Laser (sells laser-related items)
• New achievements: Laser Master, Laser Lover, Laser Collector, etc.
• Bug fixes and performance improvements

            Q: How can I get more gems in Soul Knight?

            -

            A: Gems are the main currency in Soul Knight that you can use to unlock heroes, skins, weapons, pets, buffs, etc. You can get more gems by doing the following:

            -
              -
• Killing enemies and bosses in dungeons (the amount of gems depends on their difficulty)
• Finding chests and statues in dungeons (they may contain gems or items that can be sold for gems)
• Completing daily missions (they give you gems as rewards)
• Watching ads or completing offers in the shop (they give you gems or vouchers that can be exchanged for gems)
• Buying gems with real money (this is optional and not recommended)
            -

            Q: How can I play Soul Knight with my friends?

            -

            A: Soul Knight supports both online and offline multiplayer modes. You can play Soul Knight with your friends by doing the following:

            -
              -
• For online co-op mode, you need to have an internet connection and a Soul Knight account. You can create or join a room with up to four players and start the game. You can also chat with your teammates and invite them to your friend list.
• For offline multiplayer LAN mode, you need to have a Wi-Fi connection and be on the same network as your friends. You can create or join a room with up to four players and start the game. You don't need a Soul Knight account for this mode.
            -

            Q: How can I backup my Soul Knight data?

            -

            A: Soul Knight data is stored locally on your device, so you need to backup your data manually if you want to transfer it to another device or prevent data loss. You can backup your Soul Knight data by doing the following:

            -
              -
• Go to the settings menu in the game and tap on the "Backup" button. This will create a backup file in your device's storage.
• Copy the backup file to another device or a cloud service such as Google Drive or Dropbox (a minimal copy script is sketched after this list).
• On the other device, go to the settings menu in the game and tap on the "Restore" button. This will load the backup file and restore your data.
            -
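If you prefer to script the copy step instead of doing it by hand, a minimal sketch is shown below. The backup location is not documented, so the source and destination paths are purely illustrative; check where the in-game Backup button actually writes its file on your device, and adjust both paths before relying on anything like this.

```python
# Illustrative sketch: copy a Soul Knight backup file into a folder that a
# cloud client (e.g. a synced Drive/Dropbox directory) picks up.
# Both paths below are made-up examples, not documented game locations.
import shutil
from pathlib import Path

BACKUP_FILE = Path("/sdcard/SoulKnight/backup.dat")    # illustrative path
SYNCED_DIR = Path.home() / "CloudSync" / "soul-knight"  # illustrative path

def copy_backup(src: Path, dest_dir: Path) -> Path:
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # copy2 also preserves the file's timestamps
    return dest

if __name__ == "__main__":
    if BACKUP_FILE.exists():
        print(f"Copied to {copy_backup(BACKUP_FILE, SYNCED_DIR)}")
    else:
        print("Backup file not found - run the in-game Backup option first.")
```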

            Q: Is Soul Knight free to play?

            -

            A: Yes, Soul Knight is free to play and download on Android and iOS devices. However, the game contains some optional in-app purchases that can enhance your gaming experience. You can buy gems, vouchers, heroes, skins, weapons, pets, buffs, etc. with real money. You can also remove ads by buying the "No Ads" option in the shop. These purchases are not necessary to enjoy the game and you can get most of the items by playing the game normally.

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Superkickoff How to build your dream team and dominate the leagues.md b/spaces/congsaPfin/Manga-OCR/logs/Superkickoff How to build your dream team and dominate the leagues.md deleted file mode 100644 index b489ce5dd4fcbb0941ce5e6f4d77830916094094..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Superkickoff How to build your dream team and dominate the leagues.md +++ /dev/null @@ -1,27 +0,0 @@ - -

            How to Download and Install Super Kickoff

- Super Kickoff is available for Android devices and can be downloaded for free from the Google Play Store. To install the game, you need Android 4.4 or higher and at least 21 MB of free space on your device. Once you download the game, you can open it and start playing right away. You don't need to create an account or sign in with any social media platform.

            How to Choose Your Team

            - When you start the game, you will be asked to choose your team from a list of real-life soccer clubs from different countries and leagues. You can select any team you like, but keep in mind that each team has its own strengths and weaknesses, as well as different budgets and expectations. You can also create your own custom team by choosing its name, logo, colors, and players.

            How to Manage Your Squad

            - Once you have chosen your team, you will see your squad screen where you can view and edit your players' attributes, positions, roles, contracts, and morale. You can also buy and sell players in the transfer market or use items to boost their performance or heal their injuries. You can also train your players to improve their skills and fitness levels.

            How to Buy and Sell Players

            - To buy or sell players in Super Kickoff, you need to go to the transfer market screen where you can see a list of available players for sale or loan. You can filter the list by position, league, nationality, age, rating, or price. You can also search for a specific player by typing their name in the search bar. To buy a player, you need to tap on their name and make an offer that matches or exceeds their asking price. You can also negotiate with the seller by increasing or decreasing your offer until they accept or reject it. To sell a player, you need to tap on their name in your squad screen and select the option to put them on the transfer list. Then you need to wait for other teams to make offers for them. You can accept or reject any offer that comes your way.

            How to Use Items

- Items are special objects that you can use to enhance your players' abilities or recover from injuries. You can buy items from the shop using coins or gems that you earn by playing the game or watching ads. You can also get items as rewards for completing achievements or winning tournaments. Some of the items that you can use are:

- Energy drink: restores 10% of stamina
- Bandage: heals minor injuries
- Injection: heals major injuries
- Contract: extends a player's contract by one year
- Boots: increases a player's speed by 5%
- Ball: increases a player's shooting by 5%
- Gloves: increases a player's goalkeeping by 5%
- Headband: increases a player's heading by 5%
- Shirt: increases a player's passing by 5%
- Whistle: increases a player's leadership by 5%

To use an item, you need to tap on it in your inventory and then select the player that you want to apply it to.

            How to Play Matches

            - The main part of Super Kickoff is playing matches against other teams in various tournaments. You can play friendly matches against any team you want or join official competitions such as leagues, cups, or continental championships. To play a match, you need to go to the match screen where you can see your opponent's name, rating, formation, and tactics. You can also see your own formation and tactics and make changes if you want.

            How to Change Your Formation and Tactics

            - Your formation and tactics are the key factors that determine how your team plays on the pitch. You can change your formation and tactics before or during a match by tapping on the buttons at the bottom of the match screen. You can choose from different formations such as 4-4-2, 4-3-3, 3-5-2, or 5-3-2. You can also adjust your tactics such as attacking, defending, pressing, counterattacking, or passing. You can also assign specific roles to your players such as captain, free kick taker, penalty taker, or corner taker.

            How to Control Your Players

            - Once the match starts, you can control your players by tapping on the screen. You can tap on a player to select them and then tap on another spot to move them there. You can also tap on an opponent to tackle them or tap on the goal to shoot. You can also swipe on the screen to pass the ball to another player or make a long shot. You can also use buttons at the bottom of the screen to perform actions such as sprinting, sliding, switching players, or pausing the game.

            How to Win Matches

            - To win a match, you need to score more goals than your opponent in the given time. The time of each match depends on the difficulty level and the tournament you are playing. The difficulty level ranges from easy to hard and affects the skill and intelligence of your opponent. The tournament you are playing determines the number of matches you need to win to advance to the next round or win the trophy. Some tournaments also have rules such as extra time, penalties, or away goals that can affect the outcome of a match.

            How to Earn Coins and Gems

            - Coins and gems are the currencies of Super Kickoff that you can use to buy items, players, or upgrade your stadium. You can earn coins and gems by playing matches, completing achievements, watching ads, or buying them with real money. You can also get coins and gems as rewards for winning tournaments or ranking high in the leaderboards.

            How to Complete Achievements

            - Achievements are challenges that you can complete by playing the game and performing certain tasks such as scoring goals, winning matches, buying players, or using items. You can see a list of achievements and their rewards by tapping on the trophy icon at the top of the screen. You can also see your progress and claim your rewards by tapping on each achievement.

            How to Watch Ads

            - Ads are short videos that you can watch to earn coins or gems for free. You can watch ads by tapping on the video icon at the top of the screen or by selecting the option to watch an ad when it appears in certain situations such as after a match, before a tournament, or when you run out of energy. You can watch up to 10 ads per day and each ad will give you 50 coins or 5 gems.

            How to Buy Coins and Gems

            - If you want to buy coins or gems with real money, you can do so by tapping on the plus icon at the top of the screen or by selecting the option to buy coins or gems when it appears in certain situations such as when you want to buy an item, a player, or upgrade your stadium. You can choose from different packages that offer different amounts of coins or gems for different prices. You can pay with your credit card, PayPal account, Google Play balance, or gift card.

            How to Upgrade Your Stadium

            - Your stadium is where your team plays its home matches and where your fans come to support you. You can upgrade your stadium by tapping on the stadium icon at the top of the screen or by selecting the option to upgrade your stadium when it appears in certain situations such as when you win a tournament, rank high in the leaderboard, or earn enough coins or gems. You can upgrade different aspects of your stadium such as capacity, facilities, pitch quality, security, parking lot, or VIP area. Each upgrade will cost you a certain amount of coins or gems and will increase your income, fan base, reputation, and home advantage.

            How to Rank High in the Leaderboard

            - The leaderboard is where you can see how you compare with other players from around the world in terms of points, wins, goals scored, goals conceded, trophies won, or fans gained. You can access the leaderboard by tapping on the ranking icon at the top of the screen. You can also filter the leaderboard by country, league, team, or time period. You can earn points by playing matches and winning tournaments. The more points you have, the higher you will rank in the leaderboard.

            Conclusion

            - Super Kickoff is a fun and addictive game that lets you manage your own soccer team and compete with other teams from around the world. You can download the game for free from the Google Play Store and start playing right away. You can choose your team, manage your squad, play matches, earn coins and gems, upgrade your stadium, and rank high in the leaderboard. You can also use items to boost your players' performance or heal their injuries. You can also create your own custom team and challenge your friends or other players online. Super Kickoff is a game that will keep you entertained for hours and make you feel like a real soccer manager.

            FAQs

            - Here are some frequently asked questions about Super Kickoff and their answers:

            Q: How can I get more energy to play matches?

            -A: Your energy bar is located at the top of the screen and shows how many matches you can play before you run out of energy. Your energy bar refills by one point every 10 minutes. You can also refill your energy bar instantly by watching an ad or using an energy drink item.

            Q: How can I get more legendary players for my team?

            -A: Legendary players are rare and powerful players that have higher ratings and skills than normal players. You can get legendary players by buying them from the shop using gems or by winning them as rewards for completing achievements or winning tournaments. You can also get legendary players by using a legend card item that guarantees you a legendary player of your choice.

            Q: How can I change the difficulty level of the game?

            -A: You can change the difficulty level of the game by tapping on the settings icon at the top of the screen and then selecting the difficulty option. You can choose from easy, normal, hard, or expert levels. The difficulty level affects the skill and intelligence of your opponent as well as the time and rules of each match.

            Q: How can I play with my friends or other players online?

            -A: You can play with your friends or other players online by tapping on the multiplayer icon at the top of the screen and then selecting the mode you want to play. You can choose from friendly match, league match, cup match, or custom match. You can also invite your friends or other players to join your match by sending them a code or a link.

            Q: How can I contact the developers of the game or report a bug?

            -A: You can contact the developers of the game or report a bug by tapping on the settings icon at the top of the screen and then selecting the contact option. You can also send an email to superkickoff@gmail.com or visit their website at www.superkickoff.com.

            -

            super kickoff


            Download Zip ===== https://urlca.com/2uO5M6



            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/Pakdam Pakdai Ocean Attack Movie Download [TOP].md b/spaces/contluForse/HuggingGPT/Pakdam Pakdai Ocean Attack Movie Download [TOP].md deleted file mode 100644 index 86a6a3226b14ad9f2a904d88ba57d0d1a39be773..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/Pakdam Pakdai Ocean Attack Movie Download [TOP].md +++ /dev/null @@ -1,68 +0,0 @@ -## Pakdam Pakdai Ocean Attack Movie Download - - - - - - - - - -**Download File ✶✶✶ [https://www.google.com/url?q=https%3A%2F%2Furllie.com%2F2txoMj&sa=D&sntz=1&usg=AOvVaw2HLrDyWObKUfDiCOE9lmEv](https://www.google.com/url?q=https%3A%2F%2Furllie.com%2F2txoMj&sa=D&sntz=1&usg=AOvVaw2HLrDyWObKUfDiCOE9lmEv)** - - - - - - - - - - - - - -# Pakdam Pakdai Ocean Attack Movie Download: Watch the Animated Adventure Online - - - -If you are looking for a fun and family-friendly animated movie to watch online, you might want to check out Pakdam Pakdai Ocean Attack. This is a 2019 Hindi movie based on the popular Nickelodeon India TV series Pakdam Pakdai, which follows the adventures of a dog named Don and his rivalry with three mice. - - - -In this movie, Don and his brothers Karnal and Major Saab have to team up with the mice to save the world from an evil shark named Surmai Bhopali, who wants to flood the planet and turn all land creatures into sea creatures. Along the way, they encounter many challenges and dangers, but also make new friends and learn valuable lessons. - - - -Pakdam Pakdai Ocean Attack is a movie that will appeal to both kids and adults, as it has humor, action, drama, and a positive message. The animation is colorful and vibrant, and the voice acting is lively and expressive. The movie also has catchy songs that will make you want to sing along. - - - -If you want to watch Pakdam Pakdai Ocean Attack online, you can find it on Prime Video[^1^] or Voot[^2^], where you can stream or download it legally. You can also watch it on Nickelodeon India TV channel or on their YouTube channel. However, we do not recommend downloading the movie from any unauthorized or pirated sources, as it may harm your device or expose you to malware. - - - -Pakdam Pakdai Ocean Attack is a movie that you should not miss if you are a fan of animated movies or of Pakdam Pakdai TV series. It is a movie that will entertain you and make you smile. So, what are you waiting for? Go ahead and watch Pakdam Pakdai Ocean Attack online today! - - - -Pakdam Pakdai TV series is a popular show that premiered on Nickelodeon India in 2013. It is created by Toonz Animation and is also known internationally as Rat-A-Tat. The show is inspired by the French cartoon Oggy and the Cockroaches, but has its own unique characters and stories. - - - -The show follows the adventures of Doggy Don, a friendly but slightly dumb dog who lives with his elder brother Colonel, an ex-army dog who is smarter than him. They often have to deal with three mischievous mice who live in their house: Chotu, Motu and Lambu. Chotu is the self-proclaimed leader of the Chuha Party, Motu is his loyal sidekick, and Lambu is the tallest and strongest of them. The mice always try to annoy and prank Doggy Don and Colonel, who in turn try to catch and stop them. - - - -The show has a lot of humor, action and slapstick comedy, as well as some educational and moral messages. 
The show also features other characters such as Ballu, a friendly elephant who is Doggy Don's best friend; Major Saab, another ex-army dog who is Colonel's friend; Surmai Bhopali, an evil shark who is the main antagonist of the movie Pakdam Pakdai Ocean Attack; and many more. - - - -Pakdam Pakdai TV series has been well-received by the audience and critics alike. It has won several awards such as the Best Animated TV Series at the BAF Awards 2014 and 2015, and the Best Animated TV Series for Kids at the Indian Television Academy Awards 2014. The show has also spawned several movies such as Pakdam Pakdai Ocean Attack, Pakdam Pakdai Doggy Don vs Billiman, Pakdam Pakdai Mission Pakistan, and Pakdam Pakdai Space Attack. - - 1b8d091108 - - - - - diff --git a/spaces/contluForse/HuggingGPT/assets/Download Pasta Base Ets2 1.3.1 !!TOP!!.md b/spaces/contluForse/HuggingGPT/assets/Download Pasta Base Ets2 1.3.1 !!TOP!!.md deleted file mode 100644 index 23ec1403bad6ce22ad1c2c21d1413f3fff279ad5..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Download Pasta Base Ets2 1.3.1 !!TOP!!.md +++ /dev/null @@ -1,24 +0,0 @@ -

            download pasta base ets2 1.3.1


            Downloadhttps://ssurll.com/2uzxVX



            - -Java Web Start - -------------- - -As web applications are fast becoming a prerequisite of the world’s online business, there is a wide range of technologies that are used to deliver them. One of the leading web application delivery technologies is Java Web Start. JWS is a technology that allows you to deliver Java applications as Java applets embedded in web pages. - -JWS was released in August 1999 [@jws:1999]. It was first included in Sun Java SE 5.0, which was released on July 18, 1999 [@java:1999]. JWS has been included in all subsequent versions of Sun Java SE, and even since the introduction of the Java ME platform, it has been supported on all Java ME-enabled devices, including mobile phones [@jws:2012]. - -When Java Web Start was introduced, it was mainly used to deliver standard desktop Java applications. However, over the years, it has been found to be also useful for delivering other types of applications. For example, it has been used to deliver web-based applications [@jws:2010], and even as a platform for interactive fiction [@jws:2000]. - -Our research has shown that many businesses are using Java Web Start to deliver their commercial solutions, and many of them have expressed interest in using the platform to deliver their own solutions. For example, a small business in the UK used JWS to create a web-based service that allowed customers to create their own invoices [@jws:2011]. - -As an example, Figure \[fig:jws\] shows a page from a website developed using Java Web Start. - -![A sample page from a website that was developed using Java Web Start. The page shows a simple list with two options, which are provided using Java Web Start technology.[]data-label="fig:jws"](images/jws)width="7.5cm" - -Java Web Start has many advantages over other web application technologies. It was designed to solve a number of problems that exist in the majority of other technologies that are used for web applications, and some of these problems are described below. - -First of all, Java Web Start can easily be embedded into a web page, as opposed to other technologies that require the user to download and install applications [@jws:2000]. This allows the user to start an application using only a web browser without requiring installation or other 4fefd39f24
            -
            -
            -

            diff --git a/spaces/contluForse/HuggingGPT/assets/Ecut 5 0 Keygen _HOT_ Torrent.md b/spaces/contluForse/HuggingGPT/assets/Ecut 5 0 Keygen _HOT_ Torrent.md deleted file mode 100644 index 464d8e8079a73d730278b227ff29842785596f10..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Ecut 5 0 Keygen _HOT_ Torrent.md +++ /dev/null @@ -1,9 +0,0 @@ - -

            download the free version of the software now. final cut pro torrent is a great software. moreover, it will also give you the chance to have a crack version of the software. so, you do not need to waste your time to download the software. you can easily get it on your device. moreover, download it now.

            -

            final cut pro keygen is an innovative and easy-to-use tool that can help you get excellent results without any problems. it allows you to edit your videos as you want in no time. you can use it to cut, copy, drag and rotate them or even add more than one effect to them in no time. final cut pro crack torrent also provides several different options to suit all your needs. you can use the program as per your convenience. the simplest way to edit your videos is by using the drag and drop feature. it provides you the capability to edit or adjust videos without any trouble.

            -

            Ecut 5 0 Keygen Torrent


            Downloadhttps://ssurll.com/2uzy57



            -

            final cut pro 5 keygen is a video editing software that offers you the necessary tools to create amazing videos. it lets you edit your videos with ease and support all types of files. it allows you to import, export, and render videos as well as get great results. you can use the software to add effects, transitions, and titles to your videos.

            -

            final cut pro keygen is a desktop application which comes with several other features that can be used for video editing. you can use it to create, edit, and manage video files from all your devices. the best thing about this software is that it allows you to edit 360 videos.

            -

            final cut pro keygen is an efficient video editing tool that lets you add different effects to your videos. you can use it to crop, crop videos, edit videos, and even trim videos. the software also provides you with an intuitive interface.

            899543212b
            -
            -
            \ No newline at end of file diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/layers/evo_norm.py b/spaces/cooelf/Multimodal-CoT/timm/models/layers/evo_norm.py deleted file mode 100644 index 9023afd0e81dc8a76871d03141866217d59f4770..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/models/layers/evo_norm.py +++ /dev/null @@ -1,83 +0,0 @@ -"""EvoNormB0 (Batched) and EvoNormS0 (Sample) in PyTorch - -An attempt at getting decent performing EvoNorms running in PyTorch. -While currently faster than other impl, still quite a ways off the built-in BN -in terms of memory usage and throughput (roughly 5x mem, 1/2 - 1/3x speed). - -Still very much a WIP, fiddling with buffer usage, in-place/jit optimizations, and layouts. - -Hacked together by / Copyright 2020 Ross Wightman -""" - -import torch -import torch.nn as nn - - -class EvoNormBatch2d(nn.Module): - def __init__(self, num_features, apply_act=True, momentum=0.1, eps=1e-5, drop_block=None): - super(EvoNormBatch2d, self).__init__() - self.apply_act = apply_act # apply activation (non-linearity) - self.momentum = momentum - self.eps = eps - param_shape = (1, num_features, 1, 1) - self.weight = nn.Parameter(torch.ones(param_shape), requires_grad=True) - self.bias = nn.Parameter(torch.zeros(param_shape), requires_grad=True) - if apply_act: - self.v = nn.Parameter(torch.ones(param_shape), requires_grad=True) - self.register_buffer('running_var', torch.ones(1, num_features, 1, 1)) - self.reset_parameters() - - def reset_parameters(self): - nn.init.ones_(self.weight) - nn.init.zeros_(self.bias) - if self.apply_act: - nn.init.ones_(self.v) - - def forward(self, x): - assert x.dim() == 4, 'expected 4D input' - x_type = x.dtype - if self.training: - var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True) - n = x.numel() / x.shape[1] - self.running_var.copy_( - var.detach() * self.momentum * (n / (n - 1)) + self.running_var * (1 - self.momentum)) - else: - var = self.running_var - - if self.apply_act: - v = self.v.to(dtype=x_type) - d = x * v + (x.var(dim=(2, 3), unbiased=False, keepdim=True) + self.eps).sqrt().to(dtype=x_type) - d = d.max((var + self.eps).sqrt().to(dtype=x_type)) - x = x / d - return x * self.weight + self.bias - - -class EvoNormSample2d(nn.Module): - def __init__(self, num_features, apply_act=True, groups=8, eps=1e-5, drop_block=None): - super(EvoNormSample2d, self).__init__() - self.apply_act = apply_act # apply activation (non-linearity) - self.groups = groups - self.eps = eps - param_shape = (1, num_features, 1, 1) - self.weight = nn.Parameter(torch.ones(param_shape), requires_grad=True) - self.bias = nn.Parameter(torch.zeros(param_shape), requires_grad=True) - if apply_act: - self.v = nn.Parameter(torch.ones(param_shape), requires_grad=True) - self.reset_parameters() - - def reset_parameters(self): - nn.init.ones_(self.weight) - nn.init.zeros_(self.bias) - if self.apply_act: - nn.init.ones_(self.v) - - def forward(self, x): - assert x.dim() == 4, 'expected 4D input' - B, C, H, W = x.shape - assert C % self.groups == 0 - if self.apply_act: - n = x * (x * self.v).sigmoid() - x = x.reshape(B, self.groups, -1) - x = n.reshape(B, self.groups, -1) / (x.var(dim=-1, unbiased=False, keepdim=True) + self.eps).sqrt() - x = x.reshape(B, C, H, W) - return x * self.weight + self.bias diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/__init__.py deleted file mode 100644 index 
57ff0c6581e552df750abe5bb92ed4f39a7dfa46..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/__init__.py +++ /dev/null @@ -1,34 +0,0 @@ -# https://github.com/SHI-Labs/OneFormer - -import os -from annotator.util import annotator_ckpts_path -from .api import make_detectron2_model, semantic_run - - -class OneformerCOCODetector: - def __init__(self): - remote_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/150_16_swin_l_oneformer_coco_100ep.pth" - modelpath = os.path.join(annotator_ckpts_path, "150_16_swin_l_oneformer_coco_100ep.pth") - if not os.path.exists(modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(remote_model_path, model_dir=annotator_ckpts_path) - config = os.path.join(os.path.dirname(__file__), 'configs/coco/oneformer_swin_large_IN21k_384_bs16_100ep.yaml') - self.model, self.meta = make_detectron2_model(config, modelpath) - - def __call__(self, img): - return semantic_run(img, self.model, self.meta) - - -class OneformerADE20kDetector: - def __init__(self): - remote_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/250_16_swin_l_oneformer_ade20k_160k.pth" - modelpath = os.path.join(annotator_ckpts_path, "250_16_swin_l_oneformer_ade20k_160k.pth") - if not os.path.exists(modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(remote_model_path, model_dir=annotator_ckpts_path) - config = os.path.join(os.path.dirname(__file__), 'configs/ade20k/oneformer_swin_large_IN21k_384_bs16_160k.yaml') - self.model, self.meta = make_detectron2_model(config, modelpath) - - def __call__(self, img): - return semantic_run(img, self.model, self.meta) - diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/config/lazy.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/config/lazy.py deleted file mode 100644 index 72a3e5c036f9f78a2cdf3ef0975639da3299d694..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/config/lazy.py +++ /dev/null @@ -1,435 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import ast -import builtins -import collections.abc as abc -import importlib -import inspect -import logging -import os -import uuid -from contextlib import contextmanager -from copy import deepcopy -from dataclasses import is_dataclass -from typing import List, Tuple, Union -import yaml -from omegaconf import DictConfig, ListConfig, OmegaConf, SCMode - -from annotator.oneformer.detectron2.utils.file_io import PathManager -from annotator.oneformer.detectron2.utils.registry import _convert_target_to_string - -__all__ = ["LazyCall", "LazyConfig"] - - -class LazyCall: - """ - Wrap a callable so that when it's called, the call will not be executed, - but returns a dict that describes the call. - - LazyCall object has to be called with only keyword arguments. Positional - arguments are not yet supported. - - Examples: - :: - from annotator.oneformer.detectron2.config import instantiate, LazyCall - - layer_cfg = LazyCall(nn.Conv2d)(in_channels=32, out_channels=32) - layer_cfg.out_channels = 64 # can edit it afterwards - layer = instantiate(layer_cfg) - """ - - def __init__(self, target): - if not (callable(target) or isinstance(target, (str, abc.Mapping))): - raise TypeError( - f"target of LazyCall must be a callable or defines a callable! 
Got {target}" - ) - self._target = target - - def __call__(self, **kwargs): - if is_dataclass(self._target): - # omegaconf object cannot hold dataclass type - # https://github.com/omry/omegaconf/issues/784 - target = _convert_target_to_string(self._target) - else: - target = self._target - kwargs["_target_"] = target - - return DictConfig(content=kwargs, flags={"allow_objects": True}) - - -def _visit_dict_config(cfg, func): - """ - Apply func recursively to all DictConfig in cfg. - """ - if isinstance(cfg, DictConfig): - func(cfg) - for v in cfg.values(): - _visit_dict_config(v, func) - elif isinstance(cfg, ListConfig): - for v in cfg: - _visit_dict_config(v, func) - - -def _validate_py_syntax(filename): - # see also https://github.com/open-mmlab/mmcv/blob/master/mmcv/utils/config.py - with PathManager.open(filename, "r") as f: - content = f.read() - try: - ast.parse(content) - except SyntaxError as e: - raise SyntaxError(f"Config file {filename} has syntax error!") from e - - -def _cast_to_config(obj): - # if given a dict, return DictConfig instead - if isinstance(obj, dict): - return DictConfig(obj, flags={"allow_objects": True}) - return obj - - -_CFG_PACKAGE_NAME = "detectron2._cfg_loader" -""" -A namespace to put all imported config into. -""" - - -def _random_package_name(filename): - # generate a random package name when loading config files - return _CFG_PACKAGE_NAME + str(uuid.uuid4())[:4] + "." + os.path.basename(filename) - - -@contextmanager -def _patch_import(): - """ - Enhance relative import statements in config files, so that they: - 1. locate files purely based on relative location, regardless of packages. - e.g. you can import file without having __init__ - 2. do not cache modules globally; modifications of module states has no side effect - 3. support other storage system through PathManager, so config files can be in the cloud - 4. imported dict are turned into omegaconf.DictConfig automatically - """ - old_import = builtins.__import__ - - def find_relative_file(original_file, relative_import_path, level): - # NOTE: "from . import x" is not handled. Because then it's unclear - # if such import should produce `x` as a python module or DictConfig. - # This can be discussed further if needed. - relative_import_err = """ -Relative import of directories is not allowed within config files. -Within a config file, relative import can only import other config files. -""".replace( - "\n", " " - ) - if not len(relative_import_path): - raise ImportError(relative_import_err) - - cur_file = os.path.dirname(original_file) - for _ in range(level - 1): - cur_file = os.path.dirname(cur_file) - cur_name = relative_import_path.lstrip(".") - for part in cur_name.split("."): - cur_file = os.path.join(cur_file, part) - if not cur_file.endswith(".py"): - cur_file += ".py" - if not PathManager.isfile(cur_file): - cur_file_no_suffix = cur_file[: -len(".py")] - if PathManager.isdir(cur_file_no_suffix): - raise ImportError(f"Cannot import from {cur_file_no_suffix}." + relative_import_err) - else: - raise ImportError( - f"Cannot import name {relative_import_path} from " - f"{original_file}: {cur_file} does not exist." 
- ) - return cur_file - - def new_import(name, globals=None, locals=None, fromlist=(), level=0): - if ( - # Only deal with relative imports inside config files - level != 0 - and globals is not None - and (globals.get("__package__", "") or "").startswith(_CFG_PACKAGE_NAME) - ): - cur_file = find_relative_file(globals["__file__"], name, level) - _validate_py_syntax(cur_file) - spec = importlib.machinery.ModuleSpec( - _random_package_name(cur_file), None, origin=cur_file - ) - module = importlib.util.module_from_spec(spec) - module.__file__ = cur_file - with PathManager.open(cur_file) as f: - content = f.read() - exec(compile(content, cur_file, "exec"), module.__dict__) - for name in fromlist: # turn imported dict into DictConfig automatically - val = _cast_to_config(module.__dict__[name]) - module.__dict__[name] = val - return module - return old_import(name, globals, locals, fromlist=fromlist, level=level) - - builtins.__import__ = new_import - yield new_import - builtins.__import__ = old_import - - -class LazyConfig: - """ - Provide methods to save, load, and overrides an omegaconf config object - which may contain definition of lazily-constructed objects. - """ - - @staticmethod - def load_rel(filename: str, keys: Union[None, str, Tuple[str, ...]] = None): - """ - Similar to :meth:`load()`, but load path relative to the caller's - source file. - - This has the same functionality as a relative import, except that this method - accepts filename as a string, so more characters are allowed in the filename. - """ - caller_frame = inspect.stack()[1] - caller_fname = caller_frame[0].f_code.co_filename - assert caller_fname != "", "load_rel Unable to find caller" - caller_dir = os.path.dirname(caller_fname) - filename = os.path.join(caller_dir, filename) - return LazyConfig.load(filename, keys) - - @staticmethod - def load(filename: str, keys: Union[None, str, Tuple[str, ...]] = None): - """ - Load a config file. - - Args: - filename: absolute path or relative path w.r.t. the current working directory - keys: keys to load and return. If not given, return all keys - (whose values are config objects) in a dict. - """ - has_keys = keys is not None - filename = filename.replace("/./", "/") # redundant - if os.path.splitext(filename)[1] not in [".py", ".yaml", ".yml"]: - raise ValueError(f"Config file {filename} has to be a python or yaml file.") - if filename.endswith(".py"): - _validate_py_syntax(filename) - - with _patch_import(): - # Record the filename - module_namespace = { - "__file__": filename, - "__package__": _random_package_name(filename), - } - with PathManager.open(filename) as f: - content = f.read() - # Compile first with filename to: - # 1. make filename appears in stacktrace - # 2. 
make load_rel able to find its parent's (possibly remote) location - exec(compile(content, filename, "exec"), module_namespace) - - ret = module_namespace - else: - with PathManager.open(filename) as f: - obj = yaml.unsafe_load(f) - ret = OmegaConf.create(obj, flags={"allow_objects": True}) - - if has_keys: - if isinstance(keys, str): - return _cast_to_config(ret[keys]) - else: - return tuple(_cast_to_config(ret[a]) for a in keys) - else: - if filename.endswith(".py"): - # when not specified, only load those that are config objects - ret = DictConfig( - { - name: _cast_to_config(value) - for name, value in ret.items() - if isinstance(value, (DictConfig, ListConfig, dict)) - and not name.startswith("_") - }, - flags={"allow_objects": True}, - ) - return ret - - @staticmethod - def save(cfg, filename: str): - """ - Save a config object to a yaml file. - Note that when the config dictionary contains complex objects (e.g. lambda), - it can't be saved to yaml. In that case we will print an error and - attempt to save to a pkl file instead. - - Args: - cfg: an omegaconf config object - filename: yaml file name to save the config file - """ - logger = logging.getLogger(__name__) - try: - cfg = deepcopy(cfg) - except Exception: - pass - else: - # if it's deep-copyable, then... - def _replace_type_by_name(x): - if "_target_" in x and callable(x._target_): - try: - x._target_ = _convert_target_to_string(x._target_) - except AttributeError: - pass - - # not necessary, but makes yaml looks nicer - _visit_dict_config(cfg, _replace_type_by_name) - - save_pkl = False - try: - dict = OmegaConf.to_container( - cfg, - # Do not resolve interpolation when saving, i.e. do not turn ${a} into - # actual values when saving. - resolve=False, - # Save structures (dataclasses) in a format that can be instantiated later. - # Without this option, the type information of the dataclass will be erased. - structured_config_mode=SCMode.INSTANTIATE, - ) - dumped = yaml.dump(dict, default_flow_style=None, allow_unicode=True, width=9999) - with PathManager.open(filename, "w") as f: - f.write(dumped) - - try: - _ = yaml.unsafe_load(dumped) # test that it is loadable - except Exception: - logger.warning( - "The config contains objects that cannot serialize to a valid yaml. " - f"{filename} is human-readable but cannot be loaded." - ) - save_pkl = True - except Exception: - logger.exception("Unable to serialize the config to yaml. Error:") - save_pkl = True - - if save_pkl: - new_filename = filename + ".pkl" - # try: - # # retry by pickle - # with PathManager.open(new_filename, "wb") as f: - # cloudpickle.dump(cfg, f) - # logger.warning(f"Config is saved using cloudpickle at {new_filename}.") - # except Exception: - # pass - - @staticmethod - def apply_overrides(cfg, overrides: List[str]): - """ - In-place override contents of cfg. - - Args: - cfg: an omegaconf config object - overrides: list of strings in the format of "a=b" to override configs. - See https://hydra.cc/docs/next/advanced/override_grammar/basic/ - for syntax. - - Returns: - the cfg object - """ - - def safe_update(cfg, key, value): - parts = key.split(".") - for idx in range(1, len(parts)): - prefix = ".".join(parts[:idx]) - v = OmegaConf.select(cfg, prefix, default=None) - if v is None: - break - if not OmegaConf.is_config(v): - raise KeyError( - f"Trying to update key {key}, but {prefix} " - f"is not a config, but has type {type(v)}." 
- ) - OmegaConf.update(cfg, key, value, merge=True) - - try: - from hydra.core.override_parser.overrides_parser import OverridesParser - - has_hydra = True - except ImportError: - has_hydra = False - - if has_hydra: - parser = OverridesParser.create() - overrides = parser.parse_overrides(overrides) - for o in overrides: - key = o.key_or_group - value = o.value() - if o.is_delete(): - # TODO support this - raise NotImplementedError("deletion is not yet a supported override") - safe_update(cfg, key, value) - else: - # Fallback. Does not support all the features and error checking like hydra. - for o in overrides: - key, value = o.split("=") - try: - value = eval(value, {}) - except NameError: - pass - safe_update(cfg, key, value) - return cfg - - # @staticmethod - # def to_py(cfg, prefix: str = "cfg."): - # """ - # Try to convert a config object into Python-like psuedo code. - # - # Note that perfect conversion is not always possible. So the returned - # results are mainly meant to be human-readable, and not meant to be executed. - # - # Args: - # cfg: an omegaconf config object - # prefix: root name for the resulting code (default: "cfg.") - # - # - # Returns: - # str of formatted Python code - # """ - # import black - # - # cfg = OmegaConf.to_container(cfg, resolve=True) - # - # def _to_str(obj, prefix=None, inside_call=False): - # if prefix is None: - # prefix = [] - # if isinstance(obj, abc.Mapping) and "_target_" in obj: - # # Dict representing a function call - # target = _convert_target_to_string(obj.pop("_target_")) - # args = [] - # for k, v in sorted(obj.items()): - # args.append(f"{k}={_to_str(v, inside_call=True)}") - # args = ", ".join(args) - # call = f"{target}({args})" - # return "".join(prefix) + call - # elif isinstance(obj, abc.Mapping) and not inside_call: - # # Dict that is not inside a call is a list of top-level config objects that we - # # render as one object per line with dot separated prefixes - # key_list = [] - # for k, v in sorted(obj.items()): - # if isinstance(v, abc.Mapping) and "_target_" not in v: - # key_list.append(_to_str(v, prefix=prefix + [k + "."])) - # else: - # key = "".join(prefix) + k - # key_list.append(f"{key}={_to_str(v)}") - # return "\n".join(key_list) - # elif isinstance(obj, abc.Mapping): - # # Dict that is inside a call is rendered as a regular dict - # return ( - # "{" - # + ",".join( - # f"{repr(k)}: {_to_str(v, inside_call=inside_call)}" - # for k, v in sorted(obj.items()) - # ) - # + "}" - # ) - # elif isinstance(obj, list): - # return "[" + ",".join(_to_str(x, inside_call=inside_call) for x in obj) + "]" - # else: - # return repr(obj) - # - # py_str = _to_str(cfg, prefix=[prefix]) - # try: - # return black.format_str(py_str, mode=black.Mode()) - # except black.InvalidInput: - # return py_str diff --git a/spaces/creative-ai/creative-demo/README.md b/spaces/creative-ai/creative-demo/README.md deleted file mode 100644 index ba79af4aad29c56c51f0d7d0a312453439bee9b7..0000000000000000000000000000000000000000 --- a/spaces/creative-ai/creative-demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Creative Demo -emoji: 💩 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.16.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/danfsmithmsft/falcon-chat/app.py b/spaces/danfsmithmsft/falcon-chat/app.py deleted file mode 100644 index 7d496eb4f9707a897f8035930856a0efa3bb18c9..0000000000000000000000000000000000000000 --- 
a/spaces/danfsmithmsft/falcon-chat/app.py +++ /dev/null @@ -1,215 +0,0 @@ -import argparse -import os - -import gradio as gr -from text_generation import Client - -TITLE = """

            🚀 Falcon-Chat demo

            """ -USER_NAME = "User" -BOT_NAME = "Falcon" -DEFAULT_INSTRUCTIONS = f"""The following is a conversation between a highly knowledgeable and intelligent AI assistant, called Falcon, and a human user, called User. In the following interactions, User and Falcon will converse in natural language, and Falcon will answer User's questions. Falcon was built to be respectful, polite and inclusive. Falcon was built by the Technology Innovation Institute in Abu Dhabi. Falcon will never decline to answer a question, and always attempts to give an answer that User would be satisfied with. It knows a lot, and always tells the truth. The conversation begins. -""" -RETRY_COMMAND = "/retry" -STOP_STR = f"\n{USER_NAME}:" -STOP_SUSPECT_LIST = [":", "\n", "User"] - -INFERENCE_ENDPOINT = os.environ.get("INFERENCE_ENDPOINT") -INFERENCE_AUTH = os.environ.get("INFERENCE_AUTH") - - -def chat_accordion(): - with gr.Accordion("Parameters", open=False): - temperature = gr.Slider( - minimum=0.1, - maximum=2.0, - value=0.8, - step=0.1, - interactive=True, - label="Temperature", - ) - top_p = gr.Slider( - minimum=0.1, - maximum=0.99, - value=0.9, - step=0.01, - interactive=True, - label="p (nucleus sampling)", - ) - return temperature, top_p - - -def format_chat_prompt(message: str, chat_history, instructions: str) -> str: - instructions = instructions.strip(" ").strip("\n") - prompt = instructions - for turn in chat_history: - user_message, bot_message = turn - prompt = f"{prompt}\n{USER_NAME}: {user_message}\n{BOT_NAME}: {bot_message}" - prompt = f"{prompt}\n{USER_NAME}: {message}\n{BOT_NAME}:" - return prompt - - -def chat(client: Client): - with gr.Column(elem_id="chat_container"): - with gr.Row(): - chatbot = gr.Chatbot(elem_id="chatbot") - with gr.Row(): - inputs = gr.Textbox( - placeholder=f"Hello {BOT_NAME} !!", - label="Type an input and press Enter", - max_lines=3, - ) - - with gr.Row(elem_id="button_container"): - with gr.Column(): - retry_button = gr.Button("♻️ Retry last turn") - with gr.Column(): - delete_turn_button = gr.Button("🧽 Delete last turn") - with gr.Column(): - clear_chat_button = gr.Button("✨ Delete all history") - - gr.Examples( - [ - ["Hey Falcon! 
Any recommendations for my holidays in Abu Dhabi?"], - ["What's the Everett interpretation of quantum mechanics?"], - ["Give me a list of the top 10 dive sites you would recommend around the world."], - ["Can you tell me more about deep-water soloing?"], - ["Can you write a short tweet about the Apache 2.0 release of our latest AI model, Falcon LLM?"], - ], - inputs=inputs, - label="Click on any example and press Enter in the input textbox!", - ) - - with gr.Row(elem_id="param_container"): - with gr.Column(): - temperature, top_p = chat_accordion() - with gr.Column(): - with gr.Accordion("Instructions", open=False): - instructions = gr.Textbox( - placeholder="LLM instructions", - value=DEFAULT_INSTRUCTIONS, - lines=10, - interactive=True, - label="Instructions", - max_lines=16, - show_label=False, - ) - - def run_chat(message: str, chat_history, instructions: str, temperature: float, top_p: float): - if not message or (message == RETRY_COMMAND and len(chat_history) == 0): - yield chat_history - return - - if message == RETRY_COMMAND and chat_history: - prev_turn = chat_history.pop(-1) - user_message, _ = prev_turn - message = user_message - - prompt = format_chat_prompt(message, chat_history, instructions) - chat_history = chat_history + [[message, ""]] - stream = client.generate_stream( - prompt, - do_sample=True, - max_new_tokens=1024, - stop_sequences=[STOP_STR, "<|endoftext|>"], - temperature=temperature, - top_p=top_p, - ) - acc_text = "" - for idx, response in enumerate(stream): - text_token = response.token.text - - if response.details: - return - - if text_token in STOP_SUSPECT_LIST: - acc_text += text_token - continue - - if idx == 0 and text_token.startswith(" "): - text_token = text_token[1:] - - acc_text += text_token - last_turn = list(chat_history.pop(-1)) - last_turn[-1] += acc_text - chat_history = chat_history + [last_turn] - yield chat_history - acc_text = "" - - def delete_last_turn(chat_history): - if chat_history: - chat_history.pop(-1) - return {chatbot: gr.update(value=chat_history)} - - def run_retry(message: str, chat_history, instructions: str, temperature: float, top_p: float): - yield from run_chat(RETRY_COMMAND, chat_history, instructions, temperature, top_p) - - def clear_chat(): - return [] - - inputs.submit( - run_chat, - [inputs, chatbot, instructions, temperature, top_p], - outputs=[chatbot], - show_progress=False, - ) - inputs.submit(lambda: "", inputs=None, outputs=inputs) - delete_turn_button.click(delete_last_turn, inputs=[chatbot], outputs=[chatbot]) - retry_button.click( - run_retry, - [inputs, chatbot, instructions, temperature, top_p], - outputs=[chatbot], - show_progress=False, - ) - clear_chat_button.click(clear_chat, [], chatbot) - - -def get_demo(client: Client): - with gr.Blocks( - # css=None - # css="""#chat_container {width: 700px; margin-left: auto; margin-right: auto;} - # #button_container {width: 700px; margin-left: auto; margin-right: auto;} - # #param_container {width: 700px; margin-left: auto; margin-right: auto;}""" - css="""#chatbot { - font-size: 14px; - min-height: 300px; -}""" - ) as demo: - gr.HTML(TITLE) - - with gr.Row(): - with gr.Column(): - gr.Image("home-banner.jpg", elem_id="banner-image", show_label=False) - with gr.Column(): - gr.Markdown( - """**Chat with [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct), brainstorm ideas, discuss your holiday plans, and more!** - - ✨ This demo is powered by [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b), finetuned on the 
[Baize](https://github.com/project-baize/baize-chatbot) dataset, and running with [Text Generation Inference](https://github.com/huggingface/text-generation-inference). [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is a state-of-the-art large language model built by the [Technology Innovation Institute](https://www.tii.ae) in Abu Dhabi. It is trained on 1 trillion tokens (including [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)) and available under the Apache 2.0 license. It currently holds the 🥇 1st place on the [🤗 Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). This demo is made available by the [HuggingFace H4 team](https://huggingface.co/HuggingFaceH4). - - 🧪 This is only a **first experimental preview**: the [H4 team](https://huggingface.co/HuggingFaceH4) intends to provide increasingly capable versions of Falcon Chat in the future, based on improved datasets and RLHF/RLAIF. - - 👀 **Learn more about Falcon LLM:** [falconllm.tii.ae](https://falconllm.tii.ae/) - - ➡️️ **Intended Use**: this demo is intended to showcase an early finetuning of [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b), to illustrate the impact (and limitations) of finetuning on a dataset of conversations and instructions. We encourage the community to further build upon the base model, and to create even better instruct/chat versions! - - ⚠️ **Limitations**: the model can and will produce factually incorrect information, hallucinating facts and actions. As it has not undergone any advanced tuning/alignment, it can produce problematic outputs, especially if prompted to do so. Finally, this demo is limited to a session length of about 1,000 words. - """ - ) - - chat(client) - - return demo - - -if __name__ == "__main__": - parser = argparse.ArgumentParser("Playground Demo") - parser.add_argument( - "--addr", - type=str, - required=False, - default=INFERENCE_ENDPOINT, - ) - args = parser.parse_args() - client = Client(args.addr, headers={"Authorization": f"Basic {INFERENCE_AUTH}"}) - demo = get_demo(client) - demo.queue(max_size=128, concurrency_count=16) - demo.launch() diff --git a/spaces/danterivers/music-generation-samples/audiocraft/modules/transformer.py b/spaces/danterivers/music-generation-samples/audiocraft/modules/transformer.py deleted file mode 100644 index be6a5e420fc53eebe9947aa5dde7bfebd3cb4dad..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/audiocraft/modules/transformer.py +++ /dev/null @@ -1,704 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Transformer model, with streaming support, xformer attention support -and easy causal attention with a potentially finite receptive field. - -See `StreamingTransformer` for more information. - -Unlike regular PyTorch Transformer, we make the hard choice that batches are first. -""" - -import typing as tp - -from einops import rearrange -import torch -import torch.nn as nn -from torch.nn import functional as F -from torch.utils.checkpoint import checkpoint as torch_checkpoint -from xformers import ops - -from .rope import RotaryEmbedding -from .streaming import StreamingModule - - -def _is_profiled() -> bool: - # Return true if we are currently running with a xformers profiler activated. 
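    # Note: this probes a private xformers attribute (profiler._Profiler._CURRENT_PROFILER),
    # so it is best-effort: it returns False when xformers is not installed and may need
    # updating if xformers reorganizes its profiler internals.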
- try: - from xformers.profiler import profiler - except ImportError: - return False - return profiler._Profiler._CURRENT_PROFILER is not None - - -def create_norm_fn(norm_type: str, dim: int, **kwargs) -> nn.Module: - """Create normalization module for transformer encoder layer. - - Args: - norm_type (str): Normalization method. - dim (int): Dimension of the normalized layer. - **kwargs (dict): Additional parameters for normalization layer. - Returns: - nn.Module: Normalization module. - """ - if norm_type == 'layer_norm': - return nn.LayerNorm(dim, eps=1e-5, **kwargs) - else: - raise ValueError(f"Unknown norm type: {norm_type}") - - -def create_sin_embedding(positions: torch.Tensor, dim: int, max_period: float = 10000, - dtype: torch.dtype = torch.float32) -> torch.Tensor: - """Create sinusoidal positional embedding, with shape `[B, T, C]`. - - Args: - positions (torch.Tensor): LongTensor of positions. - dim (int): Dimension of the embedding. - max_period (float): Maximum period of the cosine/sine functions. - dtype (torch.dtype or str): dtype to use to generate the embedding. - Returns: - torch.Tensor: Sinusoidal positional embedding. - """ - # We aim for BTC format - assert dim % 2 == 0 - half_dim = dim // 2 - positions = positions.to(dtype) - adim = torch.arange(half_dim, device=positions.device, dtype=dtype).view(1, 1, -1) - max_period_tensor = torch.full([], max_period, device=positions.device, dtype=dtype) # avoid sync point - phase = positions / (max_period_tensor ** (adim / (half_dim - 1))) - return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1) - - -def expand_repeated_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor: - """torch.repeat_interleave(x, dim=2, repeats=n_rep) from xlformers""" - bs, slen, n_kv_heads, head_dim = x.shape - if n_rep == 1: - return x - return ( - x[:, :, :, None, :] - .expand(bs, slen, n_kv_heads, n_rep, head_dim) - .reshape(bs, slen, n_kv_heads * n_rep, head_dim) - ) - - -class LayerScale(nn.Module): - """Layer scale from [Touvron et al 2021] (https://arxiv.org/pdf/2103.17239.pdf). - This rescales diagonaly the residual outputs close to 0, with a learnt scale. - - Args: - channels (int): Number of channels. - init (float): Initial scale. - channel_last (bool): If True, expect `[*, C]` shaped tensors, otherwise, `[*, C, T]`. - device (torch.device or None): Device on which to initialize the module. - dtype (torch.dtype or None): dtype to use to initialize the module. - """ - def __init__(self, channels: int, init: float = 1e-4, channel_last: bool = True, - device=None, dtype=None): - super().__init__() - self.channel_last = channel_last - self.scale = nn.Parameter( - torch.full((channels,), init, - requires_grad=True, device=device, dtype=dtype)) - - def forward(self, x: torch.Tensor): - if self.channel_last: - return self.scale * x - else: - return self.scale[:, None] * x - - -class StreamingMultiheadAttention(StreamingModule): - """Similar to `nn.MultiheadAttention` but with support for streaming, causal evaluation. - - Args: - embed_dim (int): Dimension to project to. - num_heads (int): Number of heads. - dropout (float): Dropout level. - bias (bool): Use bias in projections. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. 
- attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - rope (`RotaryEmbedding` or None): Rope embedding to use. - cross_attention: Should be true when used as a cross attention. - All keys and values must be available at once, streaming is only for the queries. - Cannot be used with `causal` or `rope` (as it wouldn't make sens to - intepret the time steps in the keys relative to those in the queries). - safe_streaming (bool): Bug fix, will go away with xformers update. - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Sevice on which to initialize. - dtype (torch.dtype or None): dtype to use. - """ - def __init__(self, embed_dim: int, num_heads: int, dropout: float = 0.0, bias: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - rope: tp.Optional[RotaryEmbedding] = None, cross_attention: bool = False, - safe_streaming: bool = True, qk_layer_norm: bool = False, kv_repeat: int = 1, - device=None, dtype=None): - super().__init__() - factory_kwargs = {'device': device, 'dtype': dtype} - if past_context is not None: - assert causal - - self.embed_dim = embed_dim - self.causal = causal - self.past_context = past_context - self.memory_efficient = memory_efficient - self.attention_as_float32 = attention_as_float32 - self.rope = rope - self.cross_attention = cross_attention - self.safe_streaming = safe_streaming - self.num_heads = num_heads - self.dropout = dropout - self.kv_repeat = kv_repeat - if cross_attention: - assert not causal, "Causal cannot work with cross attention." - assert rope is None, "Rope cannot work with cross attention." - - if memory_efficient: - _verify_xformers_memory_efficient_compat() - - self.custom = _is_custom(custom, memory_efficient) - if self.custom: - out_dim = embed_dim - assert num_heads % kv_repeat == 0 - assert not cross_attention or kv_repeat == 1 - num_kv = num_heads // kv_repeat - kv_dim = (embed_dim // num_heads) * num_kv - out_dim += 2 * kv_dim - in_proj = nn.Linear(embed_dim, out_dim, bias=bias, **factory_kwargs) - # We try to follow the default PyTorch MHA convention, to easily compare results. - self.in_proj_weight = in_proj.weight - self.in_proj_bias = in_proj.bias - if bias: - self.in_proj_bias.data.zero_() # Following Pytorch convention - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias, **factory_kwargs) - if bias: - self.out_proj.bias.data.zero_() - else: - assert not qk_layer_norm - assert kv_repeat == 1 - self.mha = nn.MultiheadAttention( - embed_dim, num_heads, dropout=dropout, bias=bias, batch_first=True, - **factory_kwargs) - self.qk_layer_norm = qk_layer_norm - if qk_layer_norm: - assert self.custom - assert kv_repeat == 1 - ln_dim = embed_dim - self.q_layer_norm = nn.LayerNorm(ln_dim) - self.k_layer_norm = nn.LayerNorm(ln_dim) - - def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs): - if not self.custom: - # Support compat with regular MHA - keys = [n for n, _ in self.mha.named_parameters()] - for key in keys: - if prefix + key in state_dict: - state_dict[prefix + "mha." 
+ key] = state_dict.pop(prefix + key) - super()._load_from_state_dict(state_dict, prefix, *args, **kwargs) - - def _get_mask(self, current_steps: int, device: torch.device, dtype: torch.dtype): - # Return a causal mask, accounting for potentially stored past keys/values - # We actually return a bias for the attention score, as this has the same - # convention both in the builtin MHA in Pytorch, and Xformers functions. - if self.memory_efficient: - from xformers.ops import LowerTriangularMask - if current_steps == 1: - # If we only have one step, then we do not need a mask. - return None - elif 'past_keys' in self._streaming_state: - raise RuntimeError('Not supported at the moment') - else: - # Then we can safely use a lower triangular mask - return LowerTriangularMask() - if self._streaming_state: - past_keys = self._streaming_state['past_keys'] - past_steps = past_keys.shape[1] - else: - past_steps = 0 - - queries_pos = torch.arange( - past_steps, current_steps + past_steps, device=device).view(-1, 1) - keys_pos = torch.arange(past_steps + current_steps, device=device).view(1, -1) - delta = queries_pos - keys_pos - valid = delta >= 0 - if self.past_context is not None: - valid &= (delta <= self.past_context) - return torch.where( - valid, - torch.zeros([], device=device, dtype=dtype), - torch.full([], float('-inf'), device=device, dtype=dtype)) - - def _complete_kv(self, k, v): - if self.cross_attention: - # With cross attention we assume all keys and values - # are already available, and streaming is with respect - # to the queries only. - return k, v - # Complete the key/value pair using the streaming state. - if self._streaming_state: - pk = self._streaming_state['past_keys'] - nk = torch.cat([pk, k], dim=1) - if v is k: - nv = nk - else: - pv = self._streaming_state['past_values'] - nv = torch.cat([pv, v], dim=1) - else: - nk = k - nv = v - - assert nk.shape[1] == nv.shape[1] - offset = 0 - if self.past_context is not None: - offset = max(0, nk.shape[1] - self.past_context) - if self._is_streaming: - self._streaming_state['past_keys'] = nk[:, offset:] - if v is not k: - self._streaming_state['past_values'] = nv[:, offset:] - if 'offset' in self._streaming_state: - self._streaming_state['offset'] += offset - else: - self._streaming_state['offset'] = torch.tensor(0) - return nk, nv - - def _apply_rope(self, query: torch.Tensor, key: torch.Tensor): - # Apply rope embeddings to query and key tensors. - assert self.rope is not None - if 'past_keys' in self._streaming_state: - past_keys_offset = self._streaming_state['past_keys'].shape[1] - else: - past_keys_offset = 0 - if 'offset' in self._streaming_state: - past_context_offset = int(self._streaming_state['offset'].item()) - else: - past_context_offset = 0 - streaming_offset = past_context_offset + past_keys_offset - return self.rope.rotate_qk(query, key, start=streaming_offset) - - def forward(self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor, - key_padding_mask=None, need_weights=False, attn_mask=None, - average_attn_weights=True, is_causal=False): - assert attn_mask is None - assert not is_causal, ("new param added in torch 2.0.1 not supported, " - "use the causal args in the constructor.") - - dtype = query.dtype - if self._is_streaming: - assert self.causal or self.cross_attention, \ - "Streaming only available for causal or cross attention" - - if self.causal: - # At the moment we specialize only for the self-attention case. 
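            # The mask built by `_get_mask` is an additive bias with the same convention as
            # PyTorch MHA and xformers: 0 where a query may attend, -inf where it may not.
            # When `past_context` is set, keys further than `past_context` steps in the past
            # are also masked out, giving the causal attention a finite receptive field.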
- assert query.shape[1] == key.shape[1], "Causal only for same length query / key / value" - assert value.shape[1] == key.shape[1], "Causal only for same length query / key / value" - attn_mask = self._get_mask(query.shape[1], query.device, query.dtype) - - if self.custom: - # custom implementation - assert need_weights is False - assert key_padding_mask is None - if self.cross_attention: - # Different queries, keys, values, we have to spit manually the weights - # before applying the linear. - dim = self.in_proj_weight.shape[0] // 3 - if self.in_proj_bias is None: - bias_q, bias_k, bias_v = None, None, None - else: - bias_q = self.in_proj_bias[:dim] - bias_k = self.in_proj_bias[dim: 2 * dim] - bias_v = self.in_proj_bias[2 * dim:] - q = nn.functional.linear(query, self.in_proj_weight[:dim], bias_q) - # todo: when streaming, we could actually save k, v and check the shape actually match. - k = nn.functional.linear(key, self.in_proj_weight[dim: 2 * dim], bias_k) - v = nn.functional.linear(value, self.in_proj_weight[2 * dim:], bias_v) - if self.qk_layer_norm is True: - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - # q, k, v = [rearrange(x, "b t (h d) -> (b h) t d", h=self.num_heads) for x in [q, k, v]] - q, k, v = [rearrange(x, "b t (h d) -> b t h d", h=self.num_heads) for x in [q, k, v]] - else: - if not _is_profiled(): - # profiling breaks that propertysomehow. - assert query is key, "specialized implementation" - assert value is key, "specialized implementation" - projected = nn.functional.linear(query, self.in_proj_weight, self.in_proj_bias) - if self.kv_repeat == 1: - packed = rearrange(projected, "b t (p h d) -> b t p h d", p=3, h=self.num_heads) - q, k, v = ops.unbind(packed, dim=2) - else: - embed_dim = self.embed_dim - per_head_dim = (embed_dim // self.num_heads) - kv_heads = self.num_heads // self.kv_repeat - q = projected[:, :, :embed_dim] - start = embed_dim - end = start + per_head_dim * kv_heads - k = projected[:, :, start: end] - v = projected[:, :, end:] - q = rearrange(q, "b t (h d) -> b t h d", h=self.num_heads) - k = rearrange(k, "b t (h d) -> b t h d", h=kv_heads) - v = rearrange(v, "b t (h d) -> b t h d", h=kv_heads) - - if self.qk_layer_norm is True: - assert self.kv_repeat == 1 - q, k = [rearrange(x, "b t h d -> b t (h d)") for x in [q, k]] - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k = [rearrange(x, "b t (h d) -> b t h d", h=self.num_heads) for x in [q, k]] - if self.rope: - q, k = self._apply_rope(q, k) - k, v = self._complete_kv(k, v) - if self.kv_repeat > 1: - k = expand_repeated_kv(k, self.kv_repeat) - v = expand_repeated_kv(v, self.kv_repeat) - if self.attention_as_float32: - q, k, v = [x.float() for x in [q, k, v]] - if self.memory_efficient: - p = self.dropout if self.training else 0 - x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p) - else: - # We include the dot product as float32, for consistency - # with the other implementations that include that step - # as part of the attention. Note that when using `autocast`, - # the einsums would be done as bfloat16, but the softmax - # would be done as bfloat16, so `attention_as_float32` will - # extend a bit the range of operations done in float32, - # although this should make no difference. 
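                # Manual attention path (no xformers): scale queries by 1/sqrt(head_dim),
                # compute scores with einsum "bqhc,bkhc->bhqk", add the additive mask bias,
                # softmax over the key axis, apply dropout, then take the weighted sum of the
                # values with "bhqk,bkhc->bqhc" before merging the heads back together.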
- q = q / q.shape[-1] ** 0.5 - if self._is_streaming and self.safe_streaming and q.device.type == 'cuda': - with torch.autocast(device_type=q.device.type, dtype=torch.float32): - pre_w = torch.einsum("bqhc,bkhc->bhqk", q, k) - else: - pre_w = torch.einsum("bqhc,bkhc->bhqk", q, k) - if attn_mask is not None: - pre_w = pre_w + attn_mask - w = torch.softmax(pre_w, dim=-1) - w = F.dropout(w, self.dropout, training=self.training).to(v) - x = torch.einsum("bhqk,bkhc->bqhc", w, v) - x = x.to(dtype) - x = rearrange(x, "b t h d -> b t (h d)", h=self.num_heads) - x = self.out_proj(x) - else: - key, value = self._complete_kv(key, value) - if self.attention_as_float32: - query, key, value = [x.float() for x in [query, key, value]] - x, _ = self.mha( - query, key, value, key_padding_mask, - need_weights, attn_mask, average_attn_weights) - x = x.to(dtype) - - return x, None - - -class StreamingTransformerLayer(nn.TransformerEncoderLayer): - """TransformerLayer with Streaming / Causal support. - This also integrates cross_attention, when passing `cross_attention=True`, - rather than having two separate classes like in PyTorch. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product in attention. - qk_layer_norm_cross (bool): Same for the cross attention. - cross_attention (bool): If True, expect to get secondary input for cross-attention. - Cross attention will use the default MHA, as it typically won't require - special treatment. - layer_scale (float or None): If not None, LayerScale will be used with - the given value as initial scale. - rope (`RotaryEmbedding` or None): Rope embedding to use. - attention_dropout (float or None): If not None, separate the value of the dimension dropout - in FFN and of the attention dropout. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. 
- """ - def __init__(self, d_model: int, num_heads: int, dim_feedforward: int = 2048, dropout: float = 0.1, - bias_ff: bool = True, bias_attn: bool = True, causal: bool = False, - past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - qk_layer_norm: bool = False, qk_layer_norm_cross: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - rope: tp.Optional[RotaryEmbedding] = None, attention_dropout: tp.Optional[float] = None, - kv_repeat: int = 1, norm: str = 'layer_norm', device=None, dtype=None, **kwargs): - super().__init__(d_model, num_heads, dim_feedforward, dropout, - device=device, dtype=dtype, batch_first=True, **kwargs) - factory_kwargs = {'device': device, 'dtype': dtype} - # Redefine self_attn to our streaming multi-head attention - attn_kwargs: tp.Dict[str, tp.Any] = { - 'embed_dim': d_model, - 'num_heads': num_heads, - 'dropout': dropout if attention_dropout is None else attention_dropout, - 'bias': bias_attn, - 'custom': custom, - 'memory_efficient': memory_efficient, - 'attention_as_float32': attention_as_float32, - } - self.self_attn: StreamingMultiheadAttention = StreamingMultiheadAttention( - causal=causal, past_context=past_context, rope=rope, qk_layer_norm=qk_layer_norm, - kv_repeat=kv_repeat, **attn_kwargs, **factory_kwargs) # type: ignore - # Redefine feedforward layers to expose bias parameter - self.linear1 = nn.Linear(d_model, dim_feedforward, bias=bias_ff, **factory_kwargs) - self.linear2 = nn.Linear(dim_feedforward, d_model, bias=bias_ff, **factory_kwargs) - - self.layer_scale_1: nn.Module - self.layer_scale_2: nn.Module - if layer_scale is None: - self.layer_scale_1 = nn.Identity() - self.layer_scale_2 = nn.Identity() - else: - self.layer_scale_1 = LayerScale(d_model, layer_scale, **factory_kwargs) - self.layer_scale_2 = LayerScale(d_model, layer_scale, **factory_kwargs) - - self.cross_attention: tp.Optional[nn.Module] = None - if cross_attention: - self.cross_attention = StreamingMultiheadAttention( - cross_attention=True, qk_layer_norm=qk_layer_norm_cross, - **attn_kwargs, **factory_kwargs) - # Norm and dropout - self.dropout_cross = nn.Dropout(dropout) - # eps value matching that used in PyTorch reference implementation. - self.norm_cross = nn.LayerNorm(d_model, eps=1e-5, **factory_kwargs) - self.layer_scale_cross: nn.Module - if layer_scale is None: - self.layer_scale_cross = nn.Identity() - else: - self.layer_scale_cross = LayerScale(d_model, layer_scale, **factory_kwargs) - self.norm1 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - self.norm2 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - - def _cross_attention_block(self, src: torch.Tensor, - cross_attention_src: torch.Tensor) -> torch.Tensor: - assert self.cross_attention is not None - # queries are from src, keys and values from cross_attention_src. 
- x = self.cross_attention( - src, cross_attention_src, cross_attention_src, need_weights=False)[0] - return self.dropout_cross(x) # type: ignore - - def forward(self, src: torch.Tensor, src_mask: tp.Optional[torch.Tensor] = None, # type: ignore - src_key_padding_mask: tp.Optional[torch.Tensor] = None, - cross_attention_src: tp.Optional[torch.Tensor] = None): - if self.cross_attention is None: - assert cross_attention_src is None - else: - assert cross_attention_src is not None - x = src - if self.norm_first: - x = x + self.layer_scale_1( - self._sa_block(self.norm1(x), src_mask, src_key_padding_mask)) - if cross_attention_src is not None: - x = x + self.layer_scale_cross( - self._cross_attention_block( - self.norm_cross(x), cross_attention_src)) - x = x + self.layer_scale_2(self._ff_block(self.norm2(x))) - else: - x = self.norm1(x + self.layer_scale_1( - self._sa_block(x, src_mask, src_key_padding_mask))) - if cross_attention_src is not None: - x = self.norm_cross( - x + self.layer_scale_cross( - self._cross_attention_block(src, cross_attention_src))) - x = self.norm2(x + self.layer_scale_2(self._ff_block(x))) - return x - - -class StreamingTransformer(StreamingModule): - """Transformer with Streaming / Causal support. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - cross_attention (bool): If True, expect to get secondary input for cross-attention. - layer_scale (float or None): If not None, LayerScale will be used - with the given value as initial scale. - positional_embedding (str): Positional embedding strategy (sin, rope, or sin_rope). - max_period (float): Maximum period of the time embedding. - positional_scale (float): Scale of positional embedding, set to 0 to deactivate. - xpos (bool): Apply xpos exponential decay to positional embedding (rope only). - lr (float or None): learning rate override through the `make_optim_group` API. - weight_decay (float or None): Weight_decay override through the `make_optim_group` API. - layer_class: (subclass of `StreamingTransformerLayer): class to use - to initialize the layers, allowing further customization outside of Audiocraft. - checkpointing (str): Checkpointing strategy to reduce memory usage. - No checkpointing if set to 'none'. Per layer checkpointing using PyTorch - if set to 'torch' (entire layer checkpointed, i.e. linears are evaluated twice, - minimal memory usage, but maximal runtime). Finally, `xformers_default` provide - a policy for opting-out some operations of the checkpointing like - linear layers and attention, providing a middle ground between speed and memory. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. 
- """ - def __init__(self, d_model: int, num_heads: int, num_layers: int, dim_feedforward: int = 2048, - dropout: float = 0.1, bias_ff: bool = True, bias_attn: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, - custom: bool = False, memory_efficient: bool = False, attention_as_float32: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - positional_embedding: str = 'sin', max_period: float = 10_000, positional_scale: float = 1., - xpos: bool = False, lr: tp.Optional[float] = None, weight_decay: tp.Optional[float] = None, - layer_class: tp.Type[StreamingTransformerLayer] = StreamingTransformerLayer, - checkpointing: str = 'none', device=None, dtype=None, **kwargs): - super().__init__() - assert d_model % num_heads == 0 - - self.positional_embedding = positional_embedding - self.max_period = max_period - self.positional_scale = positional_scale - self.weight_decay = weight_decay - self.lr = lr - - assert positional_embedding in ['sin', 'rope', 'sin_rope'] - self.rope: tp.Optional[RotaryEmbedding] = None - if self.positional_embedding in ['rope', 'sin_rope']: - assert _is_custom(custom, memory_efficient) - self.rope = RotaryEmbedding(d_model // num_heads, max_period=max_period, - xpos=xpos, scale=positional_scale, device=device) - - self.checkpointing = checkpointing - - assert checkpointing in ['none', 'torch', 'xformers_default', 'xformers_mm'] - if self.checkpointing.startswith('xformers'): - _verify_xformers_internal_compat() - - self.layers = nn.ModuleList() - for idx in range(num_layers): - self.layers.append( - layer_class( - d_model=d_model, num_heads=num_heads, dim_feedforward=dim_feedforward, - dropout=dropout, bias_ff=bias_ff, bias_attn=bias_attn, - causal=causal, past_context=past_context, custom=custom, - memory_efficient=memory_efficient, attention_as_float32=attention_as_float32, - cross_attention=cross_attention, layer_scale=layer_scale, rope=self.rope, - device=device, dtype=dtype, **kwargs)) - - if self.checkpointing != 'none': - for layer in self.layers: - # see audiocraft/optim/fsdp.py, magic signal to indicate this requires fixing the - # backward hook inside of FSDP... - layer._magma_checkpointed = True # type: ignore - assert layer.layer_drop == 0., "Need further checking" # type: ignore - - def _apply_layer(self, layer, *args, **kwargs): - method = self.checkpointing - if method == 'none': - return layer(*args, **kwargs) - elif method == 'torch': - return torch_checkpoint(layer, *args, use_reentrant=False, **kwargs) - elif method.startswith('xformers'): - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy - if method == 'xformers_default': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. - allow_list = [ - "xformers.efficient_attention_forward_cutlass.default", - "xformers_flash.flash_fwd.default", - "aten.addmm.default", - "aten.mm.default", - ] - elif method == 'xformers_mm': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. 
- allow_list = [ - "aten.addmm.default", - "aten.mm.default", - ] - else: - raise ValueError(f"xformers checkpointing xformers policy {method} is not known.") - policy_fn = _get_default_policy(allow_list) - return checkpoint(layer, *args, policy_fn=policy_fn, **kwargs) - else: - raise ValueError(f"Checkpointing method {method} is unknown.") - - def forward(self, x: torch.Tensor, *args, **kwargs): - B, T, C = x.shape - - if 'offsets' in self._streaming_state: - offsets = self._streaming_state['offsets'] - else: - offsets = torch.zeros(B, dtype=torch.long, device=x.device) - - if self.positional_embedding in ['sin', 'sin_rope']: - positions = torch.arange(T, device=x.device).view(1, -1, 1) - positions = positions + offsets.view(-1, 1, 1) - pos_emb = create_sin_embedding(positions, C, max_period=self.max_period, dtype=x.dtype) - x = x + self.positional_scale * pos_emb - - for layer in self.layers: - x = self._apply_layer(layer, x, *args, **kwargs) - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return x - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - if self.weight_decay is not None: - group["weight_decay"] = self.weight_decay - return group - - -# special attention attention related function - -def _verify_xformers_memory_efficient_compat(): - try: - from xformers.ops import memory_efficient_attention, LowerTriangularMask # noqa - except ImportError: - raise ImportError( - "xformers is not installed. Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _verify_xformers_internal_compat(): - try: - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy # noqa - except ImportError: - raise ImportError( - "Francisco's fairinternal xformers is not installed. 
Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _is_custom(custom: bool, memory_efficient: bool): - return custom or memory_efficient diff --git a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/open_clip/linear_probe.py b/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/open_clip/linear_probe.py deleted file mode 100644 index 9d7e23b6b67a53e16d050d675a99d01d7d04d581..0000000000000000000000000000000000000000 --- a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/open_clip/linear_probe.py +++ /dev/null @@ -1,66 +0,0 @@ -import numpy as np -import torch.nn.functional as F -from torch import nn -from .model import MLPLayers - - -class LinearProbe(nn.Module): - def __init__(self, model, mlp, freeze, in_ch, out_ch, act=None): - """ - Args: - model: nn.Module - mlp: bool, if True, then use the MLP layer as the linear probe module - freeze: bool, if Ture, then freeze all the CLAP model's layers when training the linear probe - in_ch: int, the output channel from CLAP model - out_ch: int, the output channel from linear probe (class_num) - act: torch.nn.functional, the activation function before the loss function - """ - super().__init__() - in_ch = 512 - self.clap_model = model - self.clap_model.text_branch = None # to save memory - self.freeze = freeze - if mlp: - self.lp_layer = MLPLayers(units=[in_ch, in_ch * 2, out_ch]) - else: - self.lp_layer = nn.Linear(in_ch, out_ch) - - if self.freeze: - for param in self.clap_model.parameters(): - param.requires_grad = False - - if act == "None": - self.act = None - elif act == "relu": - self.act = nn.ReLU() - elif act == "elu": - self.act = nn.ELU() - elif act == "prelu": - self.act = nn.PReLU(num_parameters=in_ch) - elif act == "softmax": - self.act = nn.Softmax(dim=-1) - elif act == "sigmoid": - self.act = nn.Sigmoid() - - def forward(self, x, mix_lambda=None, device=None): - """ - Args: - x: waveform, torch.tensor [batch, t_samples] / batch of mel_spec and longer list - mix_lambda: torch.tensor [batch], the mixup lambda - Returns: - class_prob: torch.tensor [batch, class_num] - - """ - # batchnorm cancel grandient - if self.freeze: - self.clap_model.eval() - - x = self.clap_model.audio_projection( - self.clap_model.audio_branch(x, mixup_lambda=mix_lambda, device=device)[ - "embedding" - ] - ) - out = self.lp_layer(x) - if self.act is not None: - out = self.act(out) - return out diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpx/_transports/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpx/_transports/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markupsafe/_native.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markupsafe/_native.py deleted file mode 100644 index 8117b2716d110074d9a81365c59343e81396b7f5..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markupsafe/_native.py +++ /dev/null @@ -1,63 +0,0 @@ 
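# Pure-Python implementations of markupsafe's escaping helpers (escape, escape_silent,
# soft_str), typically used as a fallback when the compiled speedups extension is unavailable.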
-import typing as t - -from . import Markup - - -def escape(s: t.Any) -> Markup: - """Replace the characters ``&``, ``<``, ``>``, ``'``, and ``"`` in - the string with HTML-safe sequences. Use this if you need to display - text that might contain such characters in HTML. - - If the object has an ``__html__`` method, it is called and the - return value is assumed to already be safe for HTML. - - :param s: An object to be converted to a string and escaped. - :return: A :class:`Markup` string with the escaped text. - """ - if hasattr(s, "__html__"): - return Markup(s.__html__()) - - return Markup( - str(s) - .replace("&", "&") - .replace(">", ">") - .replace("<", "<") - .replace("'", "'") - .replace('"', """) - ) - - -def escape_silent(s: t.Optional[t.Any]) -> Markup: - """Like :func:`escape` but treats ``None`` as the empty string. - Useful with optional values, as otherwise you get the string - ``'None'`` when the value is ``None``. - - >>> escape(None) - Markup('None') - >>> escape_silent(None) - Markup('') - """ - if s is None: - return Markup() - - return escape(s) - - -def soft_str(s: t.Any) -> str: - """Convert an object to a string if it isn't already. This preserves - a :class:`Markup` string rather than converting it back to a basic - string, so it will still be marked as safe and won't be escaped - again. - - >>> value = escape("") - >>> value - Markup('<User 1>') - >>> escape(str(value)) - Markup('&lt;User 1&gt;') - >>> escape(soft_str(value)) - Markup('<User 1>') - """ - if not isinstance(s, str): - return str(s) - - return s diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/unclip/test_unclip.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/unclip/test_unclip.py deleted file mode 100644 index c36fb02b190f271d57eca0c54a94a19acad0faf3..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/unclip/test_unclip.py +++ /dev/null @@ -1,498 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
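# These tests exercise the UnCLIP text-to-image pipeline: the fast tests build tiny dummy
# prior/decoder/super-resolution components, while the nightly CPU and slow GPU tests load
# the pretrained kakaobrain/karlo-v1-alpha checkpoints and compare against reference images.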
- -import gc -import unittest - -import numpy as np -import torch -from transformers import CLIPTextConfig, CLIPTextModelWithProjection, CLIPTokenizer - -from diffusers import PriorTransformer, UnCLIPPipeline, UnCLIPScheduler, UNet2DConditionModel, UNet2DModel -from diffusers.pipelines.unclip.text_proj import UnCLIPTextProjModel -from diffusers.utils import load_numpy, nightly, slow, torch_device -from diffusers.utils.testing_utils import require_torch_gpu, skip_mps - -from ...pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS -from ...test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference - - -class UnCLIPPipelineFastTests(PipelineTesterMixin, unittest.TestCase): - pipeline_class = UnCLIPPipeline - params = TEXT_TO_IMAGE_PARAMS - { - "negative_prompt", - "height", - "width", - "negative_prompt_embeds", - "guidance_scale", - "prompt_embeds", - "cross_attention_kwargs", - } - batch_params = TEXT_TO_IMAGE_BATCH_PARAMS - required_optional_params = [ - "generator", - "return_dict", - "prior_num_inference_steps", - "decoder_num_inference_steps", - "super_res_num_inference_steps", - ] - test_xformers_attention = False - - @property - def text_embedder_hidden_size(self): - return 32 - - @property - def time_input_dim(self): - return 32 - - @property - def block_out_channels_0(self): - return self.time_input_dim - - @property - def time_embed_dim(self): - return self.time_input_dim * 4 - - @property - def cross_attention_dim(self): - return 100 - - @property - def dummy_tokenizer(self): - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - return tokenizer - - @property - def dummy_text_encoder(self): - torch.manual_seed(0) - config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=self.text_embedder_hidden_size, - projection_dim=self.text_embedder_hidden_size, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - return CLIPTextModelWithProjection(config) - - @property - def dummy_prior(self): - torch.manual_seed(0) - - model_kwargs = { - "num_attention_heads": 2, - "attention_head_dim": 12, - "embedding_dim": self.text_embedder_hidden_size, - "num_layers": 1, - } - - model = PriorTransformer(**model_kwargs) - return model - - @property - def dummy_text_proj(self): - torch.manual_seed(0) - - model_kwargs = { - "clip_embeddings_dim": self.text_embedder_hidden_size, - "time_embed_dim": self.time_embed_dim, - "cross_attention_dim": self.cross_attention_dim, - } - - model = UnCLIPTextProjModel(**model_kwargs) - return model - - @property - def dummy_decoder(self): - torch.manual_seed(0) - - model_kwargs = { - "sample_size": 32, - # RGB in channels - "in_channels": 3, - # Out channels is double in channels because predicts mean and variance - "out_channels": 6, - "down_block_types": ("ResnetDownsampleBlock2D", "SimpleCrossAttnDownBlock2D"), - "up_block_types": ("SimpleCrossAttnUpBlock2D", "ResnetUpsampleBlock2D"), - "mid_block_type": "UNetMidBlock2DSimpleCrossAttn", - "block_out_channels": (self.block_out_channels_0, self.block_out_channels_0 * 2), - "layers_per_block": 1, - "cross_attention_dim": self.cross_attention_dim, - "attention_head_dim": 4, - "resnet_time_scale_shift": "scale_shift", - "class_embed_type": "identity", - } - - model = UNet2DConditionModel(**model_kwargs) - return model - - @property - def dummy_super_res_kwargs(self): - return { - "sample_size": 64, - "layers_per_block": 1, - "down_block_types": 
("ResnetDownsampleBlock2D", "ResnetDownsampleBlock2D"), - "up_block_types": ("ResnetUpsampleBlock2D", "ResnetUpsampleBlock2D"), - "block_out_channels": (self.block_out_channels_0, self.block_out_channels_0 * 2), - "in_channels": 6, - "out_channels": 3, - } - - @property - def dummy_super_res_first(self): - torch.manual_seed(0) - - model = UNet2DModel(**self.dummy_super_res_kwargs) - return model - - @property - def dummy_super_res_last(self): - # seeded differently to get different unet than `self.dummy_super_res_first` - torch.manual_seed(1) - - model = UNet2DModel(**self.dummy_super_res_kwargs) - return model - - def get_dummy_components(self): - prior = self.dummy_prior - decoder = self.dummy_decoder - text_proj = self.dummy_text_proj - text_encoder = self.dummy_text_encoder - tokenizer = self.dummy_tokenizer - super_res_first = self.dummy_super_res_first - super_res_last = self.dummy_super_res_last - - prior_scheduler = UnCLIPScheduler( - variance_type="fixed_small_log", - prediction_type="sample", - num_train_timesteps=1000, - clip_sample_range=5.0, - ) - - decoder_scheduler = UnCLIPScheduler( - variance_type="learned_range", - prediction_type="epsilon", - num_train_timesteps=1000, - ) - - super_res_scheduler = UnCLIPScheduler( - variance_type="fixed_small_log", - prediction_type="epsilon", - num_train_timesteps=1000, - ) - - components = { - "prior": prior, - "decoder": decoder, - "text_proj": text_proj, - "text_encoder": text_encoder, - "tokenizer": tokenizer, - "super_res_first": super_res_first, - "super_res_last": super_res_last, - "prior_scheduler": prior_scheduler, - "decoder_scheduler": decoder_scheduler, - "super_res_scheduler": super_res_scheduler, - } - - return components - - def get_dummy_inputs(self, device, seed=0): - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - inputs = { - "prompt": "horse", - "generator": generator, - "prior_num_inference_steps": 2, - "decoder_num_inference_steps": 2, - "super_res_num_inference_steps": 2, - "output_type": "numpy", - } - return inputs - - def test_unclip(self): - device = "cpu" - - components = self.get_dummy_components() - - pipe = self.pipeline_class(**components) - pipe = pipe.to(device) - - pipe.set_progress_bar_config(disable=None) - - output = pipe(**self.get_dummy_inputs(device)) - image = output.images - - image_from_tuple = pipe( - **self.get_dummy_inputs(device), - return_dict=False, - )[0] - - image_slice = image[0, -3:, -3:, -1] - image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - - expected_slice = np.array( - [ - 0.9997, - 0.9988, - 0.0028, - 0.9997, - 0.9984, - 0.9965, - 0.0029, - 0.9986, - 0.0025, - ] - ) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2 - - def test_unclip_passed_text_embed(self): - device = torch.device("cpu") - - class DummyScheduler: - init_noise_sigma = 1 - - components = self.get_dummy_components() - - pipe = self.pipeline_class(**components) - pipe = pipe.to(device) - - prior = components["prior"] - decoder = components["decoder"] - super_res_first = components["super_res_first"] - tokenizer = components["tokenizer"] - text_encoder = components["text_encoder"] - - generator = torch.Generator(device=device).manual_seed(0) - dtype = prior.dtype - batch_size = 1 - - shape = (batch_size, prior.config.embedding_dim) - prior_latents = 
pipe.prepare_latents( - shape, dtype=dtype, device=device, generator=generator, latents=None, scheduler=DummyScheduler() - ) - shape = (batch_size, decoder.in_channels, decoder.sample_size, decoder.sample_size) - decoder_latents = pipe.prepare_latents( - shape, dtype=dtype, device=device, generator=generator, latents=None, scheduler=DummyScheduler() - ) - - shape = ( - batch_size, - super_res_first.in_channels // 2, - super_res_first.sample_size, - super_res_first.sample_size, - ) - super_res_latents = pipe.prepare_latents( - shape, dtype=dtype, device=device, generator=generator, latents=None, scheduler=DummyScheduler() - ) - - pipe.set_progress_bar_config(disable=None) - - prompt = "this is a prompt example" - - generator = torch.Generator(device=device).manual_seed(0) - output = pipe( - [prompt], - generator=generator, - prior_num_inference_steps=2, - decoder_num_inference_steps=2, - super_res_num_inference_steps=2, - prior_latents=prior_latents, - decoder_latents=decoder_latents, - super_res_latents=super_res_latents, - output_type="np", - ) - image = output.images - - text_inputs = tokenizer( - prompt, - padding="max_length", - max_length=tokenizer.model_max_length, - return_tensors="pt", - ) - text_model_output = text_encoder(text_inputs.input_ids) - text_attention_mask = text_inputs.attention_mask - - generator = torch.Generator(device=device).manual_seed(0) - image_from_text = pipe( - generator=generator, - prior_num_inference_steps=2, - decoder_num_inference_steps=2, - super_res_num_inference_steps=2, - prior_latents=prior_latents, - decoder_latents=decoder_latents, - super_res_latents=super_res_latents, - text_model_output=text_model_output, - text_attention_mask=text_attention_mask, - output_type="np", - )[0] - - # make sure passing text embeddings manually is identical - assert np.abs(image - image_from_text).max() < 1e-4 - - # Overriding PipelineTesterMixin::test_attention_slicing_forward_pass - # because UnCLIP GPU undeterminism requires a looser check. - @skip_mps - def test_attention_slicing_forward_pass(self): - test_max_difference = torch_device == "cpu" - - self._test_attention_slicing_forward_pass(test_max_difference=test_max_difference) - - # Overriding PipelineTesterMixin::test_inference_batch_single_identical - # because UnCLIP undeterminism requires a looser check. 
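    # test_max_difference is only enabled on CPU since GPU sampling is not bitwise
    # reproducible; the three *_num_inference_steps arguments are copied over to the
    # batched inputs so both runs use the same schedules.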
- @skip_mps - def test_inference_batch_single_identical(self): - test_max_difference = torch_device == "cpu" - relax_max_difference = True - additional_params_copy_to_batched_inputs = [ - "prior_num_inference_steps", - "decoder_num_inference_steps", - "super_res_num_inference_steps", - ] - - self._test_inference_batch_single_identical( - test_max_difference=test_max_difference, - relax_max_difference=relax_max_difference, - additional_params_copy_to_batched_inputs=additional_params_copy_to_batched_inputs, - ) - - def test_inference_batch_consistent(self): - additional_params_copy_to_batched_inputs = [ - "prior_num_inference_steps", - "decoder_num_inference_steps", - "super_res_num_inference_steps", - ] - - if torch_device == "mps": - # TODO: MPS errors with larger batch sizes - batch_sizes = [2, 3] - self._test_inference_batch_consistent( - batch_sizes=batch_sizes, - additional_params_copy_to_batched_inputs=additional_params_copy_to_batched_inputs, - ) - else: - self._test_inference_batch_consistent( - additional_params_copy_to_batched_inputs=additional_params_copy_to_batched_inputs - ) - - @skip_mps - def test_dict_tuple_outputs_equivalent(self): - return super().test_dict_tuple_outputs_equivalent() - - @skip_mps - def test_save_load_local(self): - return super().test_save_load_local() - - @skip_mps - def test_save_load_optional_components(self): - return super().test_save_load_optional_components() - - -@nightly -class UnCLIPPipelineCPUIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_unclip_karlo_cpu_fp32(self): - expected_image = load_numpy( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/unclip/karlo_v1_alpha_horse_cpu.npy" - ) - - pipeline = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha") - pipeline.set_progress_bar_config(disable=None) - - generator = torch.manual_seed(0) - output = pipeline( - "horse", - num_images_per_prompt=1, - generator=generator, - output_type="np", - ) - - image = output.images[0] - - assert image.shape == (256, 256, 3) - assert np.abs(expected_image - image).max() < 1e-1 - - -@slow -@require_torch_gpu -class UnCLIPPipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_unclip_karlo(self): - expected_image = load_numpy( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/unclip/karlo_v1_alpha_horse_fp16.npy" - ) - - pipeline = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16) - pipeline = pipeline.to(torch_device) - pipeline.set_progress_bar_config(disable=None) - - generator = torch.Generator(device="cpu").manual_seed(0) - output = pipeline( - "horse", - generator=generator, - output_type="np", - ) - - image = output.images[0] - - assert image.shape == (256, 256, 3) - - assert_mean_pixel_difference(image, expected_image) - - def test_unclip_pipeline_with_sequential_cpu_offloading(self): - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - pipe.enable_sequential_cpu_offload() - - _ = pipe( - "horse", - num_images_per_prompt=1, - 
prior_num_inference_steps=2, - decoder_num_inference_steps=2, - super_res_num_inference_steps=2, - output_type="np", - ) - - mem_bytes = torch.cuda.max_memory_allocated() - # make sure that less than 7 GB is allocated - assert mem_bytes < 7 * 10**9 diff --git a/spaces/derek-thomas/arabic-RAG/src/preprocessing/consolidate.py b/spaces/derek-thomas/arabic-RAG/src/preprocessing/consolidate.py deleted file mode 100644 index 6a0c21bfcb421a71fd92dc005b73f827b5b2eb84..0000000000000000000000000000000000000000 --- a/spaces/derek-thomas/arabic-RAG/src/preprocessing/consolidate.py +++ /dev/null @@ -1,85 +0,0 @@ -import json -from pathlib import Path -from time import perf_counter -from typing import Any, Dict - -from tqdm.auto import tqdm - - -def folder_to_json(folder_in: Path, folder_out: Path, json_file_name: str): - """ - Process JSON lines from files in a given folder and write processed data to new ndjson files. - - Parameters: - folder_in (Path): Path to the input folder containing the JSON files to process. - folder_out (Path): Path to the output folder for processed ndjson - json_file_name (str): Filename The files will be named as - {json_base_path}_1.ndjson, {json_base_path}_2.ndjson, and so on. - - Example: - folder_to_json(Path("/path/to/input/folder"), Path("/path/to/output/folder"), "ar_wiki") - """ - - json_out = [] # Initialize list to hold processed JSON data from all files - file_counter = 1 # Counter to increment file names - - process_start = perf_counter() - all_files = sorted(folder_in.rglob('*wiki*'), key=lambda x: str(x)) - - with tqdm(total=len(all_files), desc='Processing', unit='file') as pbar: - for file_path in all_files: - pbar.set_postfix_str(f"File: {file_path.name} | Dir: {file_path.parent}", refresh=True) - - with open(file_path, 'r', encoding='utf-8') as f: - for line in f: - article = json.loads(line) - json_out.append(restructure_articles(article)) - - # If size of json_out is 100,000, dump to file and clear list - if len(json_out) == 100_000: - append_to_file(json_out, folder_out / f"{json_file_name}_{file_counter}.ndjson") - json_out.clear() - file_counter += 1 - - pbar.update(1) - - if json_out: # Dump any remaining items in json_out to file - append_to_file(json_out, folder_out / f"{json_file_name}_{file_counter}.ndjson") - - time_taken_to_process = perf_counter() - process_start - pbar.write(f"Wiki processed in {round(time_taken_to_process, 2)} seconds!") - - -def append_to_file(data: list, path: Path): - with open(path, 'w', encoding='utf-8') as outfile: - for item in data: - json.dump(item, outfile) - outfile.write('\n') - - -def restructure_articles(article: Dict[str, Any]) -> Dict[str, Any]: - """ - Restructures the given article into haystack's format, separating content and meta data. - - Args: - - article (Dict[str, Any]): The article to restructure. - - Returns: - - Dict[str, Any]: The restructured article. - """ - - # Extract content and separate meta data - article_out = { - 'content': article['text'], - 'meta': {k: v for k, v in article.items() if k != 'text'} - } - - return article_out - - -if __name__ == '__main__': - proj_dir = Path(__file__).parents[2] - folder = proj_dir / 'data/raw/output' - file_out = proj_dir / 'data/consolidated/ar_wiki.json' - folder_to_json(folder, file_out) - print('Done!') diff --git a/spaces/diacanFperku/AutoGPT/AutoCAD Architecture 2017 !LINK! Xforce Keygen 64 Bits.md b/spaces/diacanFperku/AutoGPT/AutoCAD Architecture 2017 !LINK! 
Xforce Keygen 64 Bits.md deleted file mode 100644 index 2073632f1b0440eadc2aa637455ef63ae72653bf..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/AutoCAD Architecture 2017 !LINK! Xforce Keygen 64 Bits.md +++ /dev/null @@ -1,29 +0,0 @@ - -

            How to Activate AutoCAD Architecture 2017 with X-Force Keygen

            -

            AutoCAD Architecture 2017 is a software that helps you create architectural designs and documentation. It is one of the products of Autodesk, a leading company in the field of design and engineering software. To use AutoCAD Architecture 2017, you need to activate it with a product key and an activation code. Here are the steps to do that:

            -

            AutoCAD Architecture 2017 xforce keygen 64 bits


            Download File ····· https://gohhs.com/2uFUCw



            -
              -
            1. Download and install AutoCAD Architecture 2017 from the official website of Autodesk or from a trusted source.
            2. -
            3. Run the software and choose "Enter a Serial Number" when prompted.
            4. -
            5. Use one of the following serial numbers: 666-69696969, 667-98989898, 400-45454545, 066-66666666.
            6. -
            7. Use the product key 185I1 for AutoCAD Architecture 2017.
            8. -
            9. Click on "Next" and then on "Request an activation code using an offline method".
            10. -
            11. Copy the request code that appears on the screen.
            12. -
            13. Download X-Force 2017 keygen from this link: [^1^]. This is a tool that can generate activation codes for any Autodesk product.
            14. -
            15. Run X-Force 2017 keygen as administrator and click on "Patch". You should see a message saying "Successfully patched".
            16. -
            17. Paste the request code into the keygen and click on "Generate".
            18. -
            19. Copy the activation code that appears on the keygen.
            20. -
            21. Go back to the software and paste the activation code into the field. Click on "Next".
            22. -
            23. You should see a message saying "Thank you for activating your Autodesk product". Click on "Finish".
            24. -
            -

            Congratulations! You have successfully activated AutoCAD Architecture 2017 with X-Force keygen. Enjoy your software and create amazing architectural designs.

            AutoCAD Architecture 2017 also has some new features that can enhance your productivity and creativity. Here are some of them:

            -
              -
            • Edit Live Section - Xref Support: This feature allows you to create a live section view that matches the 2D section and edit the objects within the section result. You can edit objects that reside in nested external references or blocks as well[^1^].
            • -
            • Modify Section Defining Line: This feature allows you to modify the section line using grips without having to recreate it. You can add or remove vertices, jogs, and curves to the section line[^1^].
            • -
            • Addition of Shape Option when Adding Objects: This feature allows you to draw shapes like rectangle, circle, polygon, and polyline for objects like wall, curtain wall, railing, slab, and roof. You can draw regular polygon shape boundaries faster and with more accuracy. You can also create tangential curves for objects that do not support an arc option[^1^].
            • -
            • Smooth Migration: This feature makes it easier to migrate your customization settings from previous versions of AutoCAD. You can organize your settings into groups and categories and generate a migration summary report[^2^].
            • -
            • Performance Enhancements: This feature improves the performance and reliability of 3DORBIT for rendered visual styles, especially for models with a large number of small blocks containing edges and facets[^3^].
            • -
            -

            With these new features, AutoCAD Architecture 2017 can help you design and document your architectural projects more efficiently and effectively.

            d5da3c52bf
            -
            -
            \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Charakku Pennungal Photos Pdf VERIFIED.md b/spaces/diacanFperku/AutoGPT/Charakku Pennungal Photos Pdf VERIFIED.md deleted file mode 100644 index 5aa08293773e38fba3b92bc20365a9aecebd7dc2..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Charakku Pennungal Photos Pdf VERIFIED.md +++ /dev/null @@ -1,10 +0,0 @@ -

            Charakku Pennungal Photos Pdf


            Download File ►►►►► https://gohhs.com/2uFVG6



            - -Charakku Pennungal Photos Pdf Download Charakku Pennungal Photos Pdf Download Charakku Pennungal Photos Pdf Download) From: Free Pdf To Jpg Mp4 Free Pdf To Jpg Mp4. Many times your ability to work with images and photographs is very limited by the reality that photo editing software or a graphic designing program doesn't recognize your pictures in the manner you are used to doing it. What you need is an image converter. What Is A Pdf? Pdfs are a highly compressed format, also called acrobat pdf, which allows you to view content in the form of a file. "When the paper is yellowed, browned, or otherwise weakened the surroundings are similar to those of the town which once housed the newspaper, the company, and the office that used to be there, now vacant or rented. Making the perfect cup of coffee may seem like a piece of cake but there are tons of details to be followed and a myriad of choices to be made from the type of beans to the amount of water, the grind, the measuring, the steeping, the brewing, the serving, and on and on. Figure out where the issue is at, fix it. - -Charakku Pennungal Photos Pdf Download From: FREE PDF To JPG MP4! Pdf To Jpg Mp4! Many times your ability to work with images and photographs is very limited by the reality that photo editing software or a graphic designing program doesn't recognize your pictures in the manner you are used to doing it. What you need is an image converter. What Is A Pdf? Pdfs are a highly compressed format, also called acrobat pdf, which allows you to view content in the form of a file. "When the paper is yellowed, browned, or otherwise weakened the surroundings are similar to those of the town which once housed the newspaper, the company, and the office that used to be there, now vacant or rented. Making the perfect cup of coffee may seem like a piece of cake but there are tons of details to be followed and a myriad of choices to be made from the type of beans to the amount of water, the grind, the measuring, the steeping, the brewing, the serving, and on and on. Figure out where the issue is at, fix it. - -Charakku Pennungal Photos Pdf Download From: FREE PDF To JPG MP4! Pdf To Jpg 4fefd39f24
            -
            -
            -

            diff --git a/spaces/diacanFperku/AutoGPT/Chinese Pharmacopoeia 2010 Pdf.md b/spaces/diacanFperku/AutoGPT/Chinese Pharmacopoeia 2010 Pdf.md deleted file mode 100644 index 4a6f7b8ae0c78d5c8f4779f8c20a629cab50acad..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Chinese Pharmacopoeia 2010 Pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Chinese Pharmacopoeia 2010 Pdf


            Download Filehttps://gohhs.com/2uFV42



            - -Pharmacopoeia has come into effect as of. October 1, 2010. The compilation of the. 2015 edition of Chinese Pharmacopoeia was commenced ... 1fdad05405
            -
            -
            -

            diff --git a/spaces/diacanFperku/AutoGPT/Delphi Direct Evolution Download Torrent 1 !!EXCLUSIVE!!.md b/spaces/diacanFperku/AutoGPT/Delphi Direct Evolution Download Torrent 1 !!EXCLUSIVE!!.md deleted file mode 100644 index 8a99e898f0b3cfb73a8d9bc81c09ece48ebc1f96..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Delphi Direct Evolution Download Torrent 1 !!EXCLUSIVE!!.md +++ /dev/null @@ -1,9 +0,0 @@ -

            delphi direct evolution download torrent 1


            Download Zip »»» https://gohhs.com/2uFUs5



            -
            -Direct Evolution is a Delphi Diesel Aftermarket software package, which provides customers with a wide range of technical information needed to operate ... Details -Direct Evolution is a Delphi Diesel Aftermarket software package that provides customers with a wide range of technical information needed to work with diesel engines. -The package contains information and recommendations on the diagnosis and repair of diesel engines, as well as on the adjustment, bench testing and repair of diesel engines and gearboxes. -The package contains the following sections: 8a78ff9644
            -
            -
            -

            diff --git a/spaces/diacanFperku/AutoGPT/IrriPro 32bit 3.9.9 Crack Latest Full LINK Free Download.md b/spaces/diacanFperku/AutoGPT/IrriPro 32bit 3.9.9 Crack Latest Full LINK Free Download.md deleted file mode 100644 index 5b81e85ba0dde482ef21bd48edcd4d88340ea596..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/IrriPro 32bit 3.9.9 Crack Latest Full LINK Free Download.md +++ /dev/null @@ -1,78 +0,0 @@ - -

            IrriPro 32bit 3.9.9 Crack Latest Full Free Download: A Review

            -

            If you are looking for a software tool that can help you with optimized irrigation and the uniform distribution of water and fertilizers across a given area, you might want to check out IrriPro 32bit 3.9.9. This is a software designed for technicians and irrigation designers who want to create efficient and sustainable irrigation systems. In this article, we will review the features, benefits, and drawbacks of IrriPro 32bit 3.9.9 Crack Latest Full Free Download.

            -

            What is IrriPro 32bit 3.9.9?

            -

            IrriPro 32bit 3.9.9 is a software tool that allows you to design and analyze irrigation systems of any size and complexity. It can handle both sprinkler and drip irrigation systems, as well as mixed systems. It can also calculate the optimal water pressure, flow rate, pipe diameter, pump power, and other parameters for each irrigation unit.

            -

            IrriPro 32bit 3.9.9 Crack Latest Full Free Download


            Download ……… https://gohhs.com/2uFVkn



            -

            IrriPro 32bit 3.9.9 has a user-friendly interface that lets you draw the irrigation network on a map, import data from Google Maps or other sources, and edit the properties of each element. You can also use the software to simulate different scenarios, such as water demand, soil characteristics, climate conditions, and crop types. You can then view the results in graphical or tabular form, or export them to other formats.

            -

            What are the advantages of IrriPro 32bit 3.9.9?

            -

            Some of the advantages of using IrriPro 32bit 3.9.9 are:

            -
              -
            • It can help you save water and energy by optimizing the irrigation system.
            • -
            • It can help you improve the quality and quantity of crops by ensuring a uniform distribution of water and nutrients.
            • -
            • It can help you reduce costs and environmental impacts by avoiding over-irrigation, runoff, erosion, and pollution.
            • -
            • It can help you comply with the regulations and standards of irrigation design and management.
            • -
            • It can help you create professional reports and documentation for your clients or stakeholders.
            • -
            -

            What are the disadvantages of IrriPro 32bit 3.9.9?

            -

            Some of the disadvantages of using IrriPro 32bit 3.9.9 are:

            -
              -
            • It requires a Windows operating system (Windows 10, Windows 98, Windows XP, Windows 7, Windows 2003, Windows 8, Windows 2000, Windows Vista) to run.
            • -
            • It may not be compatible with some newer versions of Windows or other operating systems.
            • -
            • It may not have all the features or functions that you need for your specific irrigation project.
            • -
            • It may have some bugs or errors that affect its performance or accuracy.
            • -
            • It may not be updated regularly or supported by the developer.
            • -
            -

            How to download IrriPro 32bit 3.9.9 Crack Latest Full Free?

            -

            If you want to download IrriPro 32bit 3.9.9 Crack Latest Full Free, you can follow these steps:

            -
              -
            1. Go to one of the websites that offer IrriPro 32bit 3.9.9 Crack Latest Full Free Download, such as Filehippo.com, Tanv1234.blogspot.com, Candipipes.com, or Npmjs.com.
            2. -
            3. Click on the download link or button and wait for the file to be downloaded on your computer.
            4. -
            5. Extract the file using a software like WinRAR or WinZip.
            6. -
            7. Run the setup file and follow the instructions to install IrriPro 32bit 3.9.9 on your computer.
            8. -
            9. Copy the crack file from the extracted folder and paste it into the installation directory of IrriPro 32bit 3.9.9.
            10. -
            11. Launch IrriPro 32bit 3.9.9 and enjoy its full features for free.
            12. -
            -

            Conclusion

            -

            IrriPro 32bit 3.9.9 is a software tool that can help you design and analyze irrigation systems of any size and complexity. It has many advantages, such as saving water and energy, improving crop quality and quantity, reducing costs and environmental impacts, complying with regulations and standards, and creating professional reports and documentation. However, it also has some disadvantages, such as requiring a Windows operating system, being incompatible with some newer versions of Windows or other operating systems, lacking some features or functions, having some bugs or errors, and not being updated regularly or supported by the developer.

            -

            If you want to download IrriPro 32bit 3.9.9 Crack Latest Full Free Download, you can go to one of the websites that offer it, such as Filehippo.com, Tanv1234.blogspot.com, Candipipes.com, or Npmjs.com. You can then follow the steps to download, install, and activate IrriPro 32bit 3.9.9 on your computer.

            -

            We hope this article has been helpful for you in learning more about IrriPro 32bit 3.9.9 Crack Latest Full Free Download.

            -

            -

            What are the alternatives to IrriPro 32bit 3.9.9?

            -

            IrriPro 32bit 3.9.9 is not the only software tool that can help you with irrigation design and analysis. There are other alternatives that you can consider, depending on your needs and preferences. Some of these alternatives are:

            -
              -
            • IrriMaker: This is a software tool that allows you to design and analyze irrigation systems of any type and size, including sprinkler, drip, micro-sprinkler, center pivot, linear move, and solid set systems. It has a graphical interface that lets you draw the irrigation network on a map, import data from Google Earth or other sources, and edit the properties of each element. It also has a hydraulic solver that calculates the hydraulic parameters of the irrigation system, such as pressure, flow rate, head loss, velocity, etc. It also has a water balance module that estimates the water demand and supply of the irrigation system, taking into account the soil characteristics, climate conditions, crop types, irrigation methods, etc. It also has a report module that generates detailed and customizable reports of the irrigation system design and analysis.
            • -
            • Epanet: This is a software tool that allows you to model water distribution systems of any size and complexity. It can handle both pressurized and gravity-fed systems, as well as mixed systems. It can also model water quality parameters, such as chlorine concentration, water age, source tracing, etc. It has a graphical interface that lets you draw the water network on a map, import data from GIS or other sources, and edit the properties of each element. It also has a hydraulic solver that calculates the hydraulic parameters of the water system, such as pressure, flow rate, head loss, velocity, etc. It also has a report module that generates detailed and customizable reports of the water system design and analysis.
            • -
            • Hydrus: This is a software tool that allows you to simulate water flow and solute transport in variably saturated porous media. It can handle both one-dimensional and two-dimensional problems, as well as steady-state and transient conditions. It can also model root water uptake, evapotranspiration, heat transport, biogeochemical reactions, etc. It has a graphical interface that lets you define the geometry and boundary conditions of the problem domain, import data from field measurements or other sources, and edit the properties of each element. It also has a numerical solver that calculates the water flow and solute transport in the porous media, using finite element or finite difference methods. It also has a report module that generates detailed and customizable reports of the simulation results.
            • -
            -

            How to uninstall IrriPro 32bit 3.9.9?

            -

            If you want to uninstall IrriPro 32bit 3.9.9 from your computer, you can follow these steps:

            -
              -
            1. Go to the Start menu and click on Control Panel.
            2. -
            3. Click on Programs and Features or Add or Remove Programs.
            4. -
            5. Find IrriPro 32bit 3.9.9 in the list of installed programs and click on Uninstall or Change/Remove.
            6. -
            7. Follow the instructions to complete the uninstallation process.
            8. -
            9. Delete any leftover files or folders related to IrriPro 32bit 3.9.9 from your computer.
            10. -
            -

            How to use IrriPro 32bit 3.9.9?

            -

            IrriPro 32bit 3.9.9 is a software tool that is easy to use and learn. You can use it to design and analyze irrigation systems of any size and complexity in a few simple steps. Here is a brief guide on how to use IrriPro 32bit 3.9.9:

            -
              -
            1. Launch IrriPro 32bit 3.9.9 and create a new project or open an existing one.
            2. -
            3. Select the type of irrigation system you want to design, such as sprinkler, drip, or mixed.
            4. -
            5. Draw the irrigation network on the map, using the graphical editor tools. You can import data from Google Maps or other sources, or draw the network manually.
            6. -
            7. Edit the properties of each element of the network, such as pipes, valves, sprinklers, drippers, filters, pumps, etc. You can select the components from the database or enter the data manually.
            8. -
            9. Run the hydraulic solver to calculate the hydraulic parameters of the irrigation system, such as pressure, flow rate, head loss, velocity, etc. You can view the results in graphical or tabular form, or export them to other formats.
            10. -
            11. Run the water balance module to estimate the water demand and supply of the irrigation system, taking into account the soil characteristics, climate conditions, crop types, irrigation methods, etc. You can view the results in graphical or tabular form, or export them to other formats.
            12. -
            13. Run the fertilizer module to calculate the optimal dose and distribution of fertilizers for each irrigation unit. You can view the results in graphical or tabular form, or export them to other formats.
            14. -
            15. Run the quality module to evaluate the quality of the irrigation water and its effects on the soil and crops. You can view the results in graphical or tabular form, or export them to other formats.
            16. -
            17. Run the report module to generate detailed and customizable reports of the irrigation system design and analysis. You can print or save the reports in various formats.
            18. -
            -

            Conclusion

            -

            IrriPro 32bit 3.9.9 is a software tool that can help you design and analyze irrigation systems of any size and complexity. It has many advantages, such as saving water and energy, improving crop quality and quantity, reducing costs and environmental impacts, complying with regulations and standards, and creating professional reports and documentation. However, it also has some disadvantages, such as requiring a Windows operating system, being incompatible with some newer versions of Windows or other operating systems, lacking some features or functions, having some bugs or errors, and not being updated regularly or supported by the developer.

            -

            If you want to download IrriPro 32bit 3.9.9 Crack Latest Full Free Download, you can go to one of the websites that offer it, such as Filehippo.com, Tanv1234.blogspot.com, Candipipes.com, or Npmjs.com. You can then follow the steps to download, install, and activate IrriPro 32bit 3.9.9 on your computer.

            -

            We hope this article has been helpful for you in learning more about IrriPro 32bit 3.9.9 Crack Latest Full Free Download.

            3cee63e6c2
            -
            -
\ No newline at end of file

            diff --git a/spaces/diacanFperku/AutoGPT/Pl Sql Developer 11 Product Code Serial Number Password List.md b/spaces/diacanFperku/AutoGPT/Pl Sql Developer 11 Product Code Serial Number Password List.md deleted file mode 100644 index 0d9d5e7c77cb4d283a79e0f219ea87dfcba96e67..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Pl Sql Developer 11 Product Code Serial Number Password List.md +++ /dev/null @@ -1,6 +0,0 @@ -

            pl sql developer 11 product code serial number password list


            Download Ziphttps://gohhs.com/2uFUfF



            -
            -For additional tuning, the code Snippets window provides a long list of Optimizer. Hints that you can drag onto the worksheet. Snippets are code fragments, such ... 4d29de3e1b
            -
            -
            -

            diff --git a/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/transforms.py b/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if 
min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/digitalxingtong/Un-Bert-Vits2/setup_ffmpeg.py b/spaces/digitalxingtong/Un-Bert-Vits2/setup_ffmpeg.py deleted file mode 100644 index 
7137ab5faebb6d80740b8c843667458f25596839..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Un-Bert-Vits2/setup_ffmpeg.py +++ /dev/null @@ -1,55 +0,0 @@ -import os -import sys -import re -from pathlib import Path -import winreg - -def check_ffmpeg_path(): - path_list = os.environ['Path'].split(';') - ffmpeg_found = False - - for path in path_list: - if 'ffmpeg' in path.lower() and 'bin' in path.lower(): - ffmpeg_found = True - print("FFmpeg already installed.") - break - - return ffmpeg_found - -def add_ffmpeg_path_to_user_variable(): - ffmpeg_bin_path = Path('.\\ffmpeg\\bin') - if ffmpeg_bin_path.is_dir(): - abs_path = str(ffmpeg_bin_path.resolve()) - - try: - key = winreg.OpenKey( - winreg.HKEY_CURRENT_USER, - r"Environment", - 0, - winreg.KEY_READ | winreg.KEY_WRITE - ) - - try: - current_path, _ = winreg.QueryValueEx(key, "Path") - if abs_path not in current_path: - new_path = f"{current_path};{abs_path}" - winreg.SetValueEx(key, "Path", 0, winreg.REG_EXPAND_SZ, new_path) - print(f"Added FFmpeg path to user variable 'Path': {abs_path}") - else: - print("FFmpeg path already exists in the user variable 'Path'.") - finally: - winreg.CloseKey(key) - except WindowsError: - print("Error: Unable to modify user variable 'Path'.") - sys.exit(1) - - else: - print("Error: ffmpeg\\bin folder not found in the current path.") - sys.exit(1) - -def main(): - if not check_ffmpeg_path(): - add_ffmpeg_path_to_user_variable() - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/monotonic_align/setup.py b/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/monotonic_align/setup.py deleted file mode 100644 index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -import numpy - -setup( - name = 'monotonic_align', - ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/train_ms.py b/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/train_ms.py deleted file mode 100644 index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/train_ms.py +++ /dev/null @@ -1,402 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -import shutil -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cudnn.benchmark = True -torch.backends.cuda.matmul.allow_tf32 = True 
-torch.backends.cudnn.allow_tf32 = True -torch.set_float32_matmul_precision('medium') -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '65280' - - hps = utils.get_hparams() - if not hps.cont: - shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth') - shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth') - shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth') - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False, - batch_size=1, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True: - print("Using noise scaled MAS for VITS2") - use_noise_scaled_mas = True - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - use_noise_scaled_mas = False - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True: - print("Using duration discriminator for VITS2") - use_duration_discriminator = True - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True: - if hps.data.n_speakers == 0: - raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model") - use_spk_conditioned_encoder = True - else: - print("Using normal encoder for VITS1") - use_spk_conditioned_encoder = False - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial = mas_noise_scale_initial, - noise_scale_delta = noise_scale_delta, - **hps.model).cuda(rank) - - freeze_enc = getattr(hps.model, "freeze_enc", False) - if freeze_enc: - print("freeze encoder !!!") - for param in net_g.enc_p.parameters(): - param.requires_grad = False - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - 
filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - - pretrain_dir = None - if pretrain_dir is None: - try: - if net_dur_disc is not None: - _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont) - _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer=not hps.cont) - _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer=not hps.cont) - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - else: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g, - optim_g, True) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d, - optim_d, True) - - - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - if net_dur_disc is not None: - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * 
global_step - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach()) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. 
* batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update( - {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - if net_dur_disc is not None: - utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 5) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict.update({ - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - }) - audio_dict.update({ - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]] - }) - image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - 
audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - -if __name__ == "__main__": - main() diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/nrtr/nrtr_r31_1by8_1by4_academic.py b/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/nrtr/nrtr_r31_1by8_1by4_academic.py deleted file mode 100644 index 397122b55ea57df647a6bb5097973e0eebf4979d..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/nrtr/nrtr_r31_1by8_1by4_academic.py +++ /dev/null @@ -1,48 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_step_6e.py', - '../../_base_/recog_pipelines/nrtr_pipeline.py', - '../../_base_/recog_datasets/ST_MJ_train.py', - '../../_base_/recog_datasets/academic_test.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -label_convertor = dict( - type='AttnConvertor', dict_type='DICT90', with_unknown=True) - -model = dict( - type='NRTR', - backbone=dict( - type='ResNet31OCR', - layers=[1, 2, 5, 3], - channels=[32, 64, 128, 256, 512, 512], - stage4_pool_cfg=dict(kernel_size=(2, 1), stride=(2, 1)), - last_stage_pool=False), - encoder=dict(type='NRTREncoder'), - decoder=dict(type='NRTRDecoder'), - loss=dict(type='TFLoss'), - label_convertor=label_convertor, - max_seq_len=40) - -data = dict( - samples_per_gpu=64, - workers_per_gpu=4, - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/dinnovos/english-teacher/README.md b/spaces/dinnovos/english-teacher/README.md deleted file mode 100644 index cb1defd5971c82e5c8067220472e5451fc180974..0000000000000000000000000000000000000000 --- a/spaces/dinnovos/english-teacher/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: English Teacher -emoji: 🐨 -colorFrom: gray -colorTo: red -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dirge/voicevox/test/test_full_context_label.py b/spaces/dirge/voicevox/test/test_full_context_label.py deleted file mode 100644 index 7cdde34f4644ccf7b3048d707f99b0171e25114e..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/test/test_full_context_label.py +++ /dev/null @@ -1,404 +0,0 @@ -from copy import deepcopy -from itertools import chain -from unittest import TestCase - -from voicevox_engine.full_context_label import ( - AccentPhrase, - BreathGroup, - Mora, - Phoneme, - Utterance, -) - - -class TestBasePhonemes(TestCase): - def setUp(self): - super().setUp() - # pyopenjtalk.extract_fullcontext("こんにちは、ヒホです。")の結果 - # 出来る限りテスト内で他のライブラリに依存しないため、 - # またテスト内容を透明化するために、テストケースを生成している - self.test_case_hello_hiho = [ - # sil (無音) - "xx^xx-sil+k=o/A:xx+xx+xx/B:xx-xx_xx/C:xx_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx" - + "/F:xx_xx#xx_xx@xx_xx|xx_xx/G:5_5%0_xx_xx/H:xx_xx/I:xx-xx" - + "@xx+xx&xx-xx|xx+xx/J:1_5/K:2+2-9", - # k - "xx^sil-k+o=N/A:-4+1+5/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx" - + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5" - + "@1+2&1-2|1+9/J:1_4/K:2+2-9", - # o - 
"sil^k-o+N=n/A:-4+1+5/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx" - + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5" - + "@1+2&1-2|1+9/J:1_4/K:2+2-9", - # N (ん) - "k^o-N+n=i/A:-3+2+4/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx" - + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5" - + "@1+2&1-2|1+9/J:1_4/K:2+2-9", - # n - "o^N-n+i=ch/A:-2+3+3/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx" - + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5" - + "@1+2&1-2|1+9/J:1_4/K:2+2-9", - # i - "N^n-i+ch=i/A:-2+3+3/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx" - + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5" - + "@1+2&1-2|1+9/J:1_4/K:2+2-9", - # ch - "n^i-ch+i=w/A:-1+4+2/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx" - + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5" - + "@1+2&1-2|1+9/J:1_4/K:2+2-9", - # i - "i^ch-i+w=a/A:-1+4+2/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx" - + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5" - + "@1+2&1-2|1+9/J:1_4/K:2+2-9", - # w - "ch^i-w+a=pau/A:0+5+1/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx" - + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5" - + "@1+2&1-2|1+9/J:1_4/K:2+2-9", - # a - "i^w-a+pau=h/A:0+5+1/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx" - + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5" - + "@1+2&1-2|1+9/J:1_4/K:2+2-9", - # pau (読点) - "w^a-pau+h=i/A:xx+xx+xx/B:09-xx_xx/C:xx_xx+xx/D:09+xx_xx/E:5_5!0_xx-xx" - + "/F:xx_xx#xx_xx@xx_xx|xx_xx/G:4_1%0_xx_xx/H:1_5/I:xx-xx" - + "@xx+xx&xx-xx|xx+xx/J:1_4/K:2+2-9", - # h - "a^pau-h+i=h/A:0+1+4/B:09-xx_xx/C:09_xx+xx/D:22+xx_xx/E:5_5!0_xx-0" - + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4" - + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9", - # i - "pau^h-i+h=o/A:0+1+4/B:09-xx_xx/C:09_xx+xx/D:22+xx_xx/E:5_5!0_xx-0" - + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4" - + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9", - # h - "h^i-h+o=d/A:1+2+3/B:09-xx_xx/C:22_xx+xx/D:10+7_2/E:5_5!0_xx-0" - + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4" - + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9", - # o - "i^h-o+d=e/A:1+2+3/B:09-xx_xx/C:22_xx+xx/D:10+7_2/E:5_5!0_xx-0" - + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4" - + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9", - # d - "h^o-d+e=s/A:2+3+2/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0" - + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4" - + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9", - # e - "o^d-e+s=U/A:2+3+2/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0" - + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4" - + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9", - # s - "d^e-s+U=sil/A:3+4+1/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0" - + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4" - + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9", - # U (無声母音) - "e^s-U+sil=xx/A:3+4+1/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0" - + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4" - + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9", - # sil (無音) - "s^U-sil+xx=xx/A:xx+xx+xx/B:10-7_2/C:xx_xx+xx/D:xx+xx_xx/E:4_1!0_xx-xx" - + "/F:xx_xx#xx_xx@xx_xx|xx_xx/G:xx_xx%xx_xx_xx/H:1_4/I:xx-xx" - + "@xx+xx&xx-xx|xx+xx/J:xx_xx/K:2+2-9", - ] - self.phonemes_hello_hiho = [ - Phoneme.from_label(label) for label in self.test_case_hello_hiho - ] - - -class TestPhoneme(TestBasePhonemes): - def test_phoneme(self): - self.assertEqual( - " ".join([phoneme.phoneme for phoneme in self.phonemes_hello_hiho]), - "sil k o N n i ch i w a pau h i h o d e s U sil", - ) - - def test_is_pause(self): - self.assertEqual( - [phoneme.is_pause() for phoneme in self.phonemes_hello_hiho], - [ - True, # sil - False, # k - False, # o - False, # N - False, # n - False, # 
i - False, # ch - False, # i - False, # w - False, # a - True, # pau - False, # h - False, # i - False, # h - False, # o - False, # d - False, # e - False, # s - False, # u - True, # sil - ], - ) - - def test_label(self) -> None: - self.assertEqual( - [phoneme.label for phoneme in self.phonemes_hello_hiho], - self.test_case_hello_hiho, - ) - - -class TestMora(TestBasePhonemes): - def setUp(self) -> None: - super().setUp() - # contexts["a2"] == "1" ko - self.mora_hello_1 = Mora( - consonant=self.phonemes_hello_hiho[1], vowel=self.phonemes_hello_hiho[2] - ) - # contexts["a2"] == "2" N - self.mora_hello_2 = Mora(consonant=None, vowel=self.phonemes_hello_hiho[3]) - # contexts["a2"] == "3" ni - self.mora_hello_3 = Mora( - consonant=self.phonemes_hello_hiho[4], vowel=self.phonemes_hello_hiho[5] - ) - # contexts["a2"] == "4" chi - self.mora_hello_4 = Mora( - consonant=self.phonemes_hello_hiho[6], vowel=self.phonemes_hello_hiho[7] - ) - # contexts["a2"] == "5" wa - self.mora_hello_5 = Mora( - consonant=self.phonemes_hello_hiho[8], vowel=self.phonemes_hello_hiho[9] - ) - # contexts["a2"] == "1" hi - self.mora_hiho_1 = Mora( - consonant=self.phonemes_hello_hiho[11], vowel=self.phonemes_hello_hiho[12] - ) - # contexts["a2"] == "2" ho - self.mora_hiho_2 = Mora( - consonant=self.phonemes_hello_hiho[13], vowel=self.phonemes_hello_hiho[14] - ) - # contexts["a2"] == "3" de - self.mora_hiho_3 = Mora( - consonant=self.phonemes_hello_hiho[15], vowel=self.phonemes_hello_hiho[16] - ) - # contexts["a2"] == "1" sU - self.mora_hiho_4 = Mora( - consonant=self.phonemes_hello_hiho[17], vowel=self.phonemes_hello_hiho[18] - ) - - def assert_phonemes(self, mora: Mora, mora_str: str) -> None: - self.assertEqual( - "".join([phoneme.phoneme for phoneme in mora.phonemes]), mora_str - ) - - def assert_labels(self, mora: Mora, label_start: int, label_end: int) -> None: - self.assertEqual(mora.labels, self.test_case_hello_hiho[label_start:label_end]) - - def test_phonemes(self) -> None: - self.assert_phonemes(self.mora_hello_1, "ko") - self.assert_phonemes(self.mora_hello_2, "N") - self.assert_phonemes(self.mora_hello_3, "ni") - self.assert_phonemes(self.mora_hello_4, "chi") - self.assert_phonemes(self.mora_hello_5, "wa") - self.assert_phonemes(self.mora_hiho_1, "hi") - self.assert_phonemes(self.mora_hiho_2, "ho") - self.assert_phonemes(self.mora_hiho_3, "de") - self.assert_phonemes(self.mora_hiho_4, "sU") - - def test_labels(self) -> None: - self.assert_labels(self.mora_hello_1, 1, 3) - self.assert_labels(self.mora_hello_2, 3, 4) - self.assert_labels(self.mora_hello_3, 4, 6) - self.assert_labels(self.mora_hello_4, 6, 8) - self.assert_labels(self.mora_hello_5, 8, 10) - self.assert_labels(self.mora_hiho_1, 11, 13) - self.assert_labels(self.mora_hiho_2, 13, 15) - self.assert_labels(self.mora_hiho_3, 15, 17) - self.assert_labels(self.mora_hiho_4, 17, 19) - - def test_set_context(self): - # 値を書き換えるので、他のテストに影響を出さないためにdeepcopyする - mora_hello_1 = deepcopy(self.mora_hello_1) - # phonemeにあたる"p3"を書き換える - mora_hello_1.set_context("p3", "a") - self.assert_phonemes(mora_hello_1, "aa") - - -class TestAccentPhrase(TestBasePhonemes): - def setUp(self) -> None: - super().setUp() - # TODO: ValueErrorを吐く作為的ではない自然な例の模索 - # 存在しないなら放置でよい - self.accent_phrase_hello = AccentPhrase.from_phonemes( - self.phonemes_hello_hiho[1:10] - ) - self.accent_phrase_hiho = AccentPhrase.from_phonemes( - self.phonemes_hello_hiho[11:19] - ) - - def test_accent(self): - self.assertEqual(self.accent_phrase_hello.accent, 5) - 
self.assertEqual(self.accent_phrase_hiho.accent, 1) - - def test_set_context(self): - accent_phrase_hello = deepcopy(self.accent_phrase_hello) - # phonemeにあたる"p3"を書き換える - accent_phrase_hello.set_context("p3", "a") - self.assertEqual( - "".join([phoneme.phoneme for phoneme in accent_phrase_hello.phonemes]), - "aaaaaaaaa", - ) - - def test_phonemes(self): - self.assertEqual( - " ".join( - [phoneme.phoneme for phoneme in self.accent_phrase_hello.phonemes] - ), - "k o N n i ch i w a", - ) - self.assertEqual( - " ".join([phoneme.phoneme for phoneme in self.accent_phrase_hiho.phonemes]), - "h i h o d e s U", - ) - - def test_labels(self): - self.assertEqual( - self.accent_phrase_hello.labels, self.test_case_hello_hiho[1:10] - ) - self.assertEqual( - self.accent_phrase_hiho.labels, self.test_case_hello_hiho[11:19] - ) - - def test_merge(self): - # 「こんにちはヒホです」 - # 読点を無くしたものと同等 - merged_accent_phrase = self.accent_phrase_hello.merge(self.accent_phrase_hiho) - self.assertEqual(merged_accent_phrase.accent, 5) - self.assertEqual( - " ".join([phoneme.phoneme for phoneme in merged_accent_phrase.phonemes]), - "k o N n i ch i w a h i h o d e s U", - ) - self.assertEqual( - merged_accent_phrase.labels, - self.test_case_hello_hiho[1:10] + self.test_case_hello_hiho[11:19], - ) - - -class TestBreathGroup(TestBasePhonemes): - def setUp(self) -> None: - super().setUp() - self.breath_group_hello = BreathGroup.from_phonemes( - self.phonemes_hello_hiho[1:10] - ) - self.breath_group_hiho = BreathGroup.from_phonemes( - self.phonemes_hello_hiho[11:19] - ) - - def test_set_context(self): - # 値を書き換えるので、他のテストに影響を出さないためにdeepcopyする - breath_group_hello = deepcopy(self.breath_group_hello) - # phonemeにあたる"p3"を書き換える - breath_group_hello.set_context("p3", "a") - self.assertEqual( - "".join([phoneme.phoneme for phoneme in breath_group_hello.phonemes]), - "aaaaaaaaa", - ) - - def test_phonemes(self): - self.assertEqual( - " ".join([phoneme.phoneme for phoneme in self.breath_group_hello.phonemes]), - "k o N n i ch i w a", - ) - self.assertEqual( - " ".join([phoneme.phoneme for phoneme in self.breath_group_hiho.phonemes]), - "h i h o d e s U", - ) - - def test_labels(self): - self.assertEqual( - self.breath_group_hello.labels, self.test_case_hello_hiho[1:10] - ) - self.assertEqual( - self.breath_group_hiho.labels, self.test_case_hello_hiho[11:19] - ) - - -class TestUtterance(TestBasePhonemes): - def setUp(self) -> None: - super().setUp() - self.utterance_hello_hiho = Utterance.from_phonemes(self.phonemes_hello_hiho) - - def test_phonemes(self): - self.assertEqual( - " ".join( - [phoneme.phoneme for phoneme in self.utterance_hello_hiho.phonemes] - ), - "sil k o N n i ch i w a pau h i h o d e s U sil", - ) - changed_utterance = Utterance.from_phonemes(self.utterance_hello_hiho.phonemes) - self.assertEqual(len(changed_utterance.breath_groups), 2) - accent_phrases = list( - chain.from_iterable( - breath_group.accent_phrases - for breath_group in changed_utterance.breath_groups - ) - ) - for prev, cent, post in zip( - [None] + accent_phrases[:-1], - accent_phrases, - accent_phrases[1:] + [None], - ): - mora_num = len(cent.moras) - accent = cent.accent - - if prev is not None: - for phoneme in prev.phonemes: - self.assertEqual(phoneme.contexts["g1"], str(mora_num)) - self.assertEqual(phoneme.contexts["g2"], str(accent)) - - if post is not None: - for phoneme in post.phonemes: - self.assertEqual(phoneme.contexts["e1"], str(mora_num)) - self.assertEqual(phoneme.contexts["e2"], str(accent)) - - for phoneme in cent.phonemes: - 
self.assertEqual( - phoneme.contexts["k2"], - str( - sum( - [ - len(breath_group.accent_phrases) - for breath_group in changed_utterance.breath_groups - ] - ) - ), - ) - - for prev, cent, post in zip( - [None] + changed_utterance.breath_groups[:-1], - changed_utterance.breath_groups, - changed_utterance.breath_groups[1:] + [None], - ): - accent_phrase_num = len(cent.accent_phrases) - - if prev is not None: - for phoneme in prev.phonemes: - self.assertEqual(phoneme.contexts["j1"], str(accent_phrase_num)) - - if post is not None: - for phoneme in post.phonemes: - self.assertEqual(phoneme.contexts["h1"], str(accent_phrase_num)) - - for phoneme in cent.phonemes: - self.assertEqual(phoneme.contexts["i1"], str(accent_phrase_num)) - self.assertEqual( - phoneme.contexts["i5"], - str(accent_phrases.index(cent.accent_phrases[0]) + 1), - ) - self.assertEqual( - phoneme.contexts["i6"], - str( - len(accent_phrases) - - accent_phrases.index(cent.accent_phrases[0]) - ), - ) - - def test_labels(self): - self.assertEqual(self.utterance_hello_hiho.labels, self.test_case_hello_hiho) diff --git a/spaces/dragonSwing/video2slide/download_video.py b/spaces/dragonSwing/video2slide/download_video.py deleted file mode 100644 index d912b25aa97c8b4940d812000a341a2b6cb712dc..0000000000000000000000000000000000000000 --- a/spaces/dragonSwing/video2slide/download_video.py +++ /dev/null @@ -1,83 +0,0 @@ -import mimetypes -import re -import tempfile -import requests -import os -from urllib.parse import urlparse -from pytube import YouTube -from config import DOWNLOAD_DIR - - -def download_video_from_url(url, output_dir=DOWNLOAD_DIR): - try: - response = requests.get(url) - response.raise_for_status() # Check if the request was successful - - content_type = response.headers.get("content-type") - if "video" not in content_type: - print("The given URL is not a valid video URL") - return - file_extension = mimetypes.guess_extension(content_type) - - os.makedirs(output_dir, exist_ok=True) - - temp_file = tempfile.NamedTemporaryFile( - delete=False, suffix=file_extension, dir=output_dir - ) - temp_file_path = temp_file.name - - with open(temp_file_path, "wb") as file: - file.write(response.content) - return temp_file_path - - except requests.exceptions.RequestException as e: - print("An error occurred while downloading the video:", str(e)) - return - - -def download_video_from_youtube(url, output_dir=DOWNLOAD_DIR): - try: - yt = YouTube(url) - video = ( - yt.streams.filter(progressive=True, file_extension="mp4") - .order_by("resolution") - .desc() - .first() - ) - - os.makedirs(output_dir, exist_ok=True) - - video_path = video.download(output_dir) - return video_path - - except Exception as e: - print("An error occurred while downloading the video:", str(e)) - return - - -def download_video(url, output_dir=DOWNLOAD_DIR): - parsed_url = urlparse(url) - domain = parsed_url.netloc.lower() - domain = re.sub(r"\.", "", domain) # Match for both youtube and youtu.be - - print("---" * 5, "Downloading video file", "---" * 5) - - if "youtube" in domain: - video_path = download_video_from_youtube(url, output_dir) - else: - video_path = download_video_from_url(url, output_dir) - - if video_path: - print(f"Saving file at: {video_path}") - print("---" * 10) - return video_path - - -if __name__ == "__main__": - youtube_link = "https://www.youtube.com/watch?v=2OTq15A5s0Y" - temp_video_path = download_video_from_youtube(youtube_link) - - if temp_video_path is not None: - print("Video downloaded successfully to:", temp_video_path) - else: - 
print("Failed to download the video.") diff --git a/spaces/ds520/bingo/src/components/ui/sheet.tsx b/spaces/ds520/bingo/src/components/ui/sheet.tsx deleted file mode 100644 index c9f5ce0f81a91067bb013e988a07eb1e6bf6953b..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/components/ui/sheet.tsx +++ /dev/null @@ -1,122 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SheetPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Sheet = SheetPrimitive.Root - -const SheetTrigger = SheetPrimitive.Trigger - -const SheetClose = SheetPrimitive.Close - -const SheetPortal = ({ - className, - children, - ...props -}: SheetPrimitive.DialogPortalProps) => ( - - {children} - -) -SheetPortal.displayName = SheetPrimitive.Portal.displayName - -const SheetOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -SheetOverlay.displayName = SheetPrimitive.Overlay.displayName - -const SheetContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - {children} - - - Close - - - -)) -SheetContent.displayName = SheetPrimitive.Content.displayName - -const SheetHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
            -) -SheetHeader.displayName = 'SheetHeader' - -const SheetFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
            -) -SheetFooter.displayName = 'SheetFooter' - -const SheetTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetTitle.displayName = SheetPrimitive.Title.displayName - -const SheetDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetDescription.displayName = SheetPrimitive.Description.displayName - -export { - Sheet, - SheetTrigger, - SheetClose, - SheetContent, - SheetHeader, - SheetFooter, - SheetTitle, - SheetDescription -} diff --git a/spaces/dukai289/learning_streamlit/pages/1_DIspaly_and_Style_Data.py b/spaces/dukai289/learning_streamlit/pages/1_DIspaly_and_Style_Data.py deleted file mode 100644 index 72218fcfb66f6df0fcc9b57df9b78e143e89e00f..0000000000000000000000000000000000000000 --- a/spaces/dukai289/learning_streamlit/pages/1_DIspaly_and_Style_Data.py +++ /dev/null @@ -1,148 +0,0 @@ -import streamlit as st -import numpy as np -import pandas as pd - -st.markdown('# Display and Style Data') - - - -st.markdown('## 1. Magic') -st.sidebar.markdown('## 1. Magic') -st.write('Any time that Streamlit sees a variable or a literal value on its own line, it automatically writes that to your app using st.write(). ') -code = ''' - import streamlit as st - import pandas as pd - df = pd.DataFrame({ - 'first column': [1, 2, 3, 4], - 'second column': [10, 20, 30, 40] - }) - df - ''' -st.code(code) - -df = pd.DataFrame({ - 'first column': [1, 2, 3, 4], - 'second column': [10, 20, 30, 40] - }) -df -st.divider() - - - -st.markdown('## 2. st.write') -st.sidebar.markdown('## 2. st.write') -st.write("Magic and st.write() inspect the type of data that you've passed in, and then decide how to best render it in the app. ") -code = ''' - import streamlit as st - import pandas as pd - st.write(pd.DataFrame({ - 'first column': [1, 2, 3, 4], - 'second column': [10, 20, 30, 40] - })) - ''' -st.code(code) -st.write(pd.DataFrame({ - 'first column': [1, 2, 3, 4], - 'second column': [10, 20, 30, 40] - })) -st.divider() - - -st.markdown('## 3. st.dataframe') -st.sidebar.markdown('## 3. st.dataframe') -code = ''' - import streamlit as st - import numpy as np - import pandas as pd - - dataframe = pd.DataFrame( - np.random.randn(10, 20), - columns=('col %d' % i for i in range(20)) - ) - - st.dataframe(dataframe.style.highlight_max(axis=0)) - ''' -st.code(code) - -dataframe = pd.DataFrame( - np.random.randn(10, 20), - columns=('col %d' % i for i in range(20)) - ) - -st.dataframe(dataframe.style.highlight_max(axis=0)) -st.divider() - - - -st.markdown('## 4. st.table') -st.sidebar.markdown('## 4. st.table') -code = ''' - import streamlit as st - import numpy as np - import pandas as pd - - dataframe = pd.DataFrame( - np.random.randn(10, 20), - columns=('col %d' % i for i in range(20)) - ) - st.table(dataframe) - ''' -st.code(code) - -dataframe = pd.DataFrame( - np.random.randn(10, 20), - columns=('col %d' % i for i in range(20)) - ) -st.table(dataframe) -st.divider() - - - -st.markdown('## 5. Chart') -st.sidebar.markdown('## 5. Chart') -code = ''' - import streamlit as st - import numpy as np - import pandas as pd - - chart_data = pd.DataFrame( - np.random.randn(20, 3), - columns=['a', 'b', 'c'] - ) - - st.line_chart(chart_data) - ''' -st.code(code) - -chart_data = pd.DataFrame( - np.random.randn(20, 3), - columns=['a', 'b', 'c'] - ) - -st.line_chart(chart_data) -st.divider() - - - -st.markdown('## 6. st.map') -st.sidebar.markdown('## 6. 
st.map') -code = ''' - import streamlit as st - import numpy as np - import pandas as pd - - map_data = pd.DataFrame( - np.random.randn(1000, 2) / [50, 50] + [37.76, -122.4], - columns=['lat', 'lon'] - ) - - st.map(map_data) - ''' -st.code(code) - -map_data = pd.DataFrame( - np.random.randn(1000, 2) / [50, 50] + [37.76, -122.4], - columns=['lat', 'lon'] - ) - -st.map(map_data) \ No newline at end of file diff --git a/spaces/duycse1603/math2tex/ScanSSD/data/config.py b/spaces/duycse1603/math2tex/ScanSSD/data/config.py deleted file mode 100644 index 8a840a169a7221b074d2b103a9b5e4e384f0c869..0000000000000000000000000000000000000000 --- a/spaces/duycse1603/math2tex/ScanSSD/data/config.py +++ /dev/null @@ -1,158 +0,0 @@ -# config.py -import os.path - -# gets home dir cross platform -HOME = os.path.expanduser("~") - -# for making bounding boxes pretty -COLORS = ((255, 0, 0, 128), (0, 255, 0, 128), (0, 0, 255, 128), - (0, 255, 255, 128), (255, 0, 255, 128), (255, 255, 0, 128)) - -MEANS = (246, 246, 246) - -exp_cfg = { - - 'gtdb': { - 'num_classes': 2, - 'lr_steps': (80000, 100000, 120000), - - 'max_iter': 120000, - 'feature_maps': [64, 32, 16, 8, 4, 2, 1], - 'min_dim': 512, - 'steps': [8, 16, 32, 64, 128, 256, 512], - 'min_sizes': [8.00, 76.8, 153.6, 230.4, 307.2, 384.0, 460.8], - 'max_sizes': [76.8, 153.6, 230.4, 307.2, 384.0, 460.8, 537.6], - 'aspect_ratios': [[2, 3, 5], [2, 3, 5, 7], [2, 3, 5, 7], [2, 3], [2, 3], [2], [2]], - - 'variance': [0.1, 0.2], - 'clip': True, - 'name': 'GTDB', - - 'is_vertical_prior_boxes_enabled': True, - - 'mbox': { - '512': [8, 10, 10, 6, 6, 4, 4], - #'512': [5, 6, 6, 4, 4, 3, 3], - '300': [8, 10, 10, 6, 4, 4], # number of boxes per feature map location - }, - 'extras': { - '512': [256, 'S', 512, 128, 'S', 256, 128, 'S', 256, 128, 'S', 256], - '300': [256, 'S', 512, 128, 'S', 256, 128, 256, 128, 256], - } - }, - - 'math_gtdb_512': { - - 'num_classes': 2, - 'lr_steps': (80000, 100000, 120000), - 'max_iter': 240000, - 'feature_maps': [64, 32, 16, 8, 4, 2, 1], - 'min_dim': 512, - 'steps': [8, 16, 32, 64, 128, 256, 512], - 'min_sizes': [8.00, 76.8, 153.6, 230.4, 307.2, 384.0, 460.8], - 'max_sizes': [76.8, 153.6, 230.4, 307.2, 384.0, 460.8, 537.6], - 'aspect_ratios': [[2, 3, 5, 7, 10], [2, 3, 5, 7, 10], [2, 3, 5, 7, 10], [2, 3, 5, 7, 10], - [2, 3, 5, 7, 10], [2, 3, 5, 7, 10], [2, 3, 5, 7, 10]], - 'variance': [0.1, 0.2], - 'clip': True, - 'name': 'math_gtdb_512', - 'is_vertical_prior_boxes_enabled': True, - 'mbox': { - '512': [12,12,12,12,12,12,12], - }, - 'extras': { - '512': [256, 'S', 512, 128, 'S', 256, 128, 'S', 256, 128, 'S', 256], - } - }, - - 'ssd300': { - 'num_classes': 2, - 'lr_steps': (80000, 100000, 120000), - 'max_iter': 132000, - 'feature_maps': [38, 19, 10, 5, 3, 1], - 'min_dim': 300, - 'steps': [8, 16, 32, 64, 100, 300], - 'min_sizes': [30, 60, 111, 162, 213, 264], - 'max_sizes': [60, 111, 162, 213, 264, 315], - 'aspect_ratios': [[2], [2, 3], [2, 3], [2, 3], [2], [2]], - 'variance': [0.1, 0.2], - 'clip': True, - 'name': 'ssd300', - 'is_vertical_prior_boxes_enabled': True, - 'mbox': { - '300': [4, 6, 6, 6, 4, 4], # number of boxes per feature map location - }, - 'extras': { - '300': [256, 'S', 512, 128, 'S', 256, 128, 256, 128, 256], - } - }, - - 'ssd512': { - 'num_classes': 2, - 'lr_steps': (80000, 100000, 120000), - 'max_iter': 132000, - 'feature_maps': [64, 32, 16, 8, 4, 2, 1], - 'min_dim': 512, - 'steps': [8, 16, 32, 64, 128, 256, 512], - 'min_sizes': [35.84, 76.8, 153.6, 230.4, 307.2, 384.0, 460.8], - 'max_sizes': [76.8, 153.6, 230.4, 307.2, 384.0, 
460.8, 537.6], - 'aspect_ratios': [[2], [2, 3], [2, 3], [2, 3], [2,3], [2], [2]], - 'variance': [0.1, 0.2], - 'clip': True, - 'name': 'ssd512', - 'is_vertical_prior_boxes_enabled': True, - 'mbox': { - '512': [4,6,6,6,6,4,4], - }, - 'extras': { - '512': [256, 'S', 512, 128, 'S', 256, 128, 'S', 256, 128, 'S', 256], - } - }, - - 'aspect512': { - 'num_classes': 2, - 'lr_steps': (80000, 100000, 120000), - 'max_iter': 132000, - 'feature_maps': [64, 32, 16, 8, 4, 2, 1], - 'min_dim': 512, - 'steps': [8, 16, 32, 64, 128, 256, 512], - 'min_sizes': [35.84, 76.8, 153.6, 230.4, 307.2, 384.0, 460.8], - 'max_sizes': [76.8, 153.6, 230.4, 307.2, 384.0, 460.8, 537.6], - 'aspect_ratios': [[2, 3, 5, 7, 10], [2, 3, 5, 7, 10], [2, 3, 5, 7, 10], [2, 3, 5, 7, 10], - [2, 3, 5, 7, 10], [2, 3, 5, 7, 10], [2, 3, 5, 7, 10]], - 'variance': [0.1, 0.2], - 'clip': True, - 'name': 'ssd512', - 'is_vertical_prior_boxes_enabled': True, - 'mbox': { - '512': [12,12,12,12,12,12,12], - }, - 'extras': { - '512': [256, 'S', 512, 128, 'S', 256, 128, 'S', 256, 128, 'S', 256], - } - }, - - 'hboxes512': { - 'num_classes': 2, - 'lr_steps': (80000, 100000, 120000), - 'max_iter': 132000, - 'feature_maps': [64, 32, 16, 8, 4, 2, 1], - 'min_dim': 512, - 'steps': [8, 16, 32, 64, 128, 256, 512], - 'min_sizes': [35.84, 76.8, 153.6, 230.4, 307.2, 384.0, 460.8], - 'max_sizes': [76.8, 153.6, 230.4, 307.2, 384.0, 460.8, 537.6], - 'aspect_ratios': [[2, 3, 5, 7, 10], [2, 3, 5, 7, 10], [2, 3, 5, 7, 10], [2, 3, 5, 7, 10], - [2, 3, 5, 7, 10], [2, 3, 5, 7, 10], [2, 3, 5, 7, 10]], - 'variance': [0.1, 0.2], - 'clip': True, - 'name': 'ssd512', - 'is_vertical_prior_boxes_enabled': False, - 'mbox': { - '512': [7,7,7,7,7,7,7], - }, - 'extras': { - '512': [256, 'S', 512, 128, 'S', 256, 128, 'S', 256, 128, 'S', 256], - } - }, - -} \ No newline at end of file diff --git a/spaces/elkraken/Video-Object-Detection/models/yolo.py b/spaces/elkraken/Video-Object-Detection/models/yolo.py deleted file mode 100644 index 95a019c6aeec8c3f1d582907d5fe7ff3ed6b9369..0000000000000000000000000000000000000000 --- a/spaces/elkraken/Video-Object-Detection/models/yolo.py +++ /dev/null @@ -1,843 +0,0 @@ -import argparse -import logging -import sys -from copy import deepcopy - -sys.path.append('./') # to run '$ python *.py' files in subdirectories -logger = logging.getLogger(__name__) -import torch -from models.common import * -from models.experimental import * -from utils.autoanchor import check_anchor_order -from utils.general import make_divisible, check_file, set_logging -from utils.torch_utils import time_synchronized, fuse_conv_and_bn, model_info, scale_img, initialize_weights, \ - select_device, copy_attr -from utils.loss import SigmoidBin - -try: - import thop # for FLOPS computation -except ImportError: - thop = None - - -class Detect(nn.Module): - stride = None # strides computed during build - export = False # onnx export - end2end = False - include_nms = False - concat = False - - def __init__(self, nc=80, anchors=(), ch=()): # detection layer - super(Detect, self).__init__() - self.nc = nc # number of classes - self.no = nc + 5 # number of outputs per anchor - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.zeros(1)] * self.nl # init grid - a = torch.tensor(anchors).float().view(self.nl, -1, 2) - self.register_buffer('anchors', a) # shape(nl,na,2) - self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no * 
self.na, 1) for x in ch) # output conv - - def forward(self, x): - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - x[i] = self.m[i](x[i]) # conv - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - y = x[i].sigmoid() - if not torch.onnx.is_in_onnx_export(): - y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - else: - xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0 - xy = xy * (2. * self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy - wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh - y = torch.cat((xy, wh, conf), 4) - z.append(y.view(bs, -1, self.no)) - - if self.training: - out = x - elif self.end2end: - out = torch.cat(z, 1) - elif self.include_nms: - z = self.convert(z) - out = (z, ) - elif self.concat: - out = torch.cat(z, 1) - else: - out = (torch.cat(z, 1), x) - - return out - - @staticmethod - def _make_grid(nx=20, ny=20): - yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) - return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() - - def convert(self, z): - z = torch.cat(z, 1) - box = z[:, :, :4] - conf = z[:, :, 4:5] - score = z[:, :, 5:] - score *= conf - convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]], - dtype=torch.float32, - device=z.device) - box @= convert_matrix - return (box, score) - - -class IDetect(nn.Module): - stride = None # strides computed during build - export = False # onnx export - end2end = False - include_nms = False - concat = False - - def __init__(self, nc=80, anchors=(), ch=()): # detection layer - super(IDetect, self).__init__() - self.nc = nc # number of classes - self.no = nc + 5 # number of outputs per anchor - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.zeros(1)] * self.nl # init grid - a = torch.tensor(anchors).float().view(self.nl, -1, 2) - self.register_buffer('anchors', a) # shape(nl,na,2) - self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - - self.ia = nn.ModuleList(ImplicitA(x) for x in ch) - self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch) - - def forward(self, x): - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - x[i] = self.m[i](self.ia[i](x[i])) # conv - x[i] = self.im[i](x[i]) - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - - y = x[i].sigmoid() - y[..., 0:2] = (y[..., 0:2] * 2. 
- 0.5 + self.grid[i]) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - z.append(y.view(bs, -1, self.no)) - - return x if self.training else (torch.cat(z, 1), x) - - def fuseforward(self, x): - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - x[i] = self.m[i](x[i]) # conv - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - - y = x[i].sigmoid() - if not torch.onnx.is_in_onnx_export(): - y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - else: - xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0 - xy = xy * (2. * self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy - wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh - y = torch.cat((xy, wh, conf), 4) - z.append(y.view(bs, -1, self.no)) - - if self.training: - out = x - elif self.end2end: - out = torch.cat(z, 1) - elif self.include_nms: - z = self.convert(z) - out = (z, ) - elif self.concat: - out = torch.cat(z, 1) - else: - out = (torch.cat(z, 1), x) - - return out - - def fuse(self): - print("IDetect.fuse") - # fuse ImplicitA and Convolution - for i in range(len(self.m)): - c1,c2,_,_ = self.m[i].weight.shape - c1_,c2_, _,_ = self.ia[i].implicit.shape - self.m[i].bias += torch.matmul(self.m[i].weight.reshape(c1,c2),self.ia[i].implicit.reshape(c2_,c1_)).squeeze(1) - - # fuse ImplicitM and Convolution - for i in range(len(self.m)): - c1,c2, _,_ = self.im[i].implicit.shape - self.m[i].bias *= self.im[i].implicit.reshape(c2) - self.m[i].weight *= self.im[i].implicit.transpose(0,1) - - @staticmethod - def _make_grid(nx=20, ny=20): - yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) - return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() - - def convert(self, z): - z = torch.cat(z, 1) - box = z[:, :, :4] - conf = z[:, :, 4:5] - score = z[:, :, 5:] - score *= conf - convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]], - dtype=torch.float32, - device=z.device) - box @= convert_matrix - return (box, score) - - -class IKeypoint(nn.Module): - stride = None # strides computed during build - export = False # onnx export - - def __init__(self, nc=80, anchors=(), nkpt=17, ch=(), inplace=True, dw_conv_kpt=False): # detection layer - super(IKeypoint, self).__init__() - self.nc = nc # number of classes - self.nkpt = nkpt - self.dw_conv_kpt = dw_conv_kpt - self.no_det=(nc + 5) # number of outputs per anchor for box and class - self.no_kpt = 3*self.nkpt ## number of outputs per anchor for keypoints - self.no = self.no_det+self.no_kpt - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.zeros(1)] * self.nl # init grid - self.flip_test = False - a = torch.tensor(anchors).float().view(self.nl, -1, 2) - self.register_buffer('anchors', a) # shape(nl,na,2) - self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no_det * self.na, 1) for x in ch) # output conv - - self.ia = nn.ModuleList(ImplicitA(x) for x in ch) - self.im = 
nn.ModuleList(ImplicitM(self.no_det * self.na) for _ in ch) - - if self.nkpt is not None: - if self.dw_conv_kpt: #keypoint head is slightly more complex - self.m_kpt = nn.ModuleList( - nn.Sequential(DWConv(x, x, k=3), Conv(x,x), - DWConv(x, x, k=3), Conv(x, x), - DWConv(x, x, k=3), Conv(x,x), - DWConv(x, x, k=3), Conv(x, x), - DWConv(x, x, k=3), Conv(x, x), - DWConv(x, x, k=3), nn.Conv2d(x, self.no_kpt * self.na, 1)) for x in ch) - else: #keypoint head is a single convolution - self.m_kpt = nn.ModuleList(nn.Conv2d(x, self.no_kpt * self.na, 1) for x in ch) - - self.inplace = inplace # use in-place ops (e.g. slice assignment) - - def forward(self, x): - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - if self.nkpt is None or self.nkpt==0: - x[i] = self.im[i](self.m[i](self.ia[i](x[i]))) # conv - else : - x[i] = torch.cat((self.im[i](self.m[i](self.ia[i](x[i]))), self.m_kpt[i](x[i])), axis=1) - - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - x_det = x[i][..., :6] - x_kpt = x[i][..., 6:] - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - kpt_grid_x = self.grid[i][..., 0:1] - kpt_grid_y = self.grid[i][..., 1:2] - - if self.nkpt == 0: - y = x[i].sigmoid() - else: - y = x_det.sigmoid() - - if self.inplace: - xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy - wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i].view(1, self.na, 1, 1, 2) # wh - if self.nkpt != 0: - x_kpt[..., 0::3] = (x_kpt[..., ::3] * 2. - 0.5 + kpt_grid_x.repeat(1,1,1,1,17)) * self.stride[i] # xy - x_kpt[..., 1::3] = (x_kpt[..., 1::3] * 2. - 0.5 + kpt_grid_y.repeat(1,1,1,1,17)) * self.stride[i] # xy - #x_kpt[..., 0::3] = (x_kpt[..., ::3] + kpt_grid_x.repeat(1,1,1,1,17)) * self.stride[i] # xy - #x_kpt[..., 1::3] = (x_kpt[..., 1::3] + kpt_grid_y.repeat(1,1,1,1,17)) * self.stride[i] # xy - #print('=============') - #print(self.anchor_grid[i].shape) - #print(self.anchor_grid[i][...,0].unsqueeze(4).shape) - #print(x_kpt[..., 0::3].shape) - #x_kpt[..., 0::3] = ((x_kpt[..., 0::3].tanh() * 2.) ** 3 * self.anchor_grid[i][...,0].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_x.repeat(1,1,1,1,17) * self.stride[i] # xy - #x_kpt[..., 1::3] = ((x_kpt[..., 1::3].tanh() * 2.) ** 3 * self.anchor_grid[i][...,1].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_y.repeat(1,1,1,1,17) * self.stride[i] # xy - #x_kpt[..., 0::3] = (((x_kpt[..., 0::3].sigmoid() * 4.) ** 2 - 8.) * self.anchor_grid[i][...,0].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_x.repeat(1,1,1,1,17) * self.stride[i] # xy - #x_kpt[..., 1::3] = (((x_kpt[..., 1::3].sigmoid() * 4.) ** 2 - 8.) * self.anchor_grid[i][...,1].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_y.repeat(1,1,1,1,17) * self.stride[i] # xy - x_kpt[..., 2::3] = x_kpt[..., 2::3].sigmoid() - - y = torch.cat((xy, wh, y[..., 4:], x_kpt), dim = -1) - - else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953 - xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy - wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - if self.nkpt != 0: - y[..., 6:] = (y[..., 6:] * 2. 
- 0.5 + self.grid[i].repeat((1,1,1,1,self.nkpt))) * self.stride[i] # xy - y = torch.cat((xy, wh, y[..., 4:]), -1) - - z.append(y.view(bs, -1, self.no)) - - return x if self.training else (torch.cat(z, 1), x) - - @staticmethod - def _make_grid(nx=20, ny=20): - yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) - return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() - - -class IAuxDetect(nn.Module): - stride = None # strides computed during build - export = False # onnx export - end2end = False - include_nms = False - concat = False - - def __init__(self, nc=80, anchors=(), ch=()): # detection layer - super(IAuxDetect, self).__init__() - self.nc = nc # number of classes - self.no = nc + 5 # number of outputs per anchor - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.zeros(1)] * self.nl # init grid - a = torch.tensor(anchors).float().view(self.nl, -1, 2) - self.register_buffer('anchors', a) # shape(nl,na,2) - self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch[:self.nl]) # output conv - self.m2 = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch[self.nl:]) # output conv - - self.ia = nn.ModuleList(ImplicitA(x) for x in ch[:self.nl]) - self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch[:self.nl]) - - def forward(self, x): - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - x[i] = self.m[i](self.ia[i](x[i])) # conv - x[i] = self.im[i](x[i]) - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - x[i+self.nl] = self.m2[i](x[i+self.nl]) - x[i+self.nl] = x[i+self.nl].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - - y = x[i].sigmoid() - if not torch.onnx.is_in_onnx_export(): - y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - else: - xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0 - xy = xy * (2. * self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy - wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh - y = torch.cat((xy, wh, conf), 4) - z.append(y.view(bs, -1, self.no)) - - return x if self.training else (torch.cat(z, 1), x[:self.nl]) - - def fuseforward(self, x): - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - x[i] = self.m[i](x[i]) # conv - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - - y = x[i].sigmoid() - if not torch.onnx.is_in_onnx_export(): - y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - else: - xy = (y[..., 0:2] * 2. 
- 0.5 + self.grid[i]) * self.stride[i] # xy - wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i].data # wh - y = torch.cat((xy, wh, y[..., 4:]), -1) - z.append(y.view(bs, -1, self.no)) - - if self.training: - out = x - elif self.end2end: - out = torch.cat(z, 1) - elif self.include_nms: - z = self.convert(z) - out = (z, ) - elif self.concat: - out = torch.cat(z, 1) - else: - out = (torch.cat(z, 1), x) - - return out - - def fuse(self): - print("IAuxDetect.fuse") - # fuse ImplicitA and Convolution - for i in range(len(self.m)): - c1,c2,_,_ = self.m[i].weight.shape - c1_,c2_, _,_ = self.ia[i].implicit.shape - self.m[i].bias += torch.matmul(self.m[i].weight.reshape(c1,c2),self.ia[i].implicit.reshape(c2_,c1_)).squeeze(1) - - # fuse ImplicitM and Convolution - for i in range(len(self.m)): - c1,c2, _,_ = self.im[i].implicit.shape - self.m[i].bias *= self.im[i].implicit.reshape(c2) - self.m[i].weight *= self.im[i].implicit.transpose(0,1) - - @staticmethod - def _make_grid(nx=20, ny=20): - yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) - return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() - - def convert(self, z): - z = torch.cat(z, 1) - box = z[:, :, :4] - conf = z[:, :, 4:5] - score = z[:, :, 5:] - score *= conf - convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]], - dtype=torch.float32, - device=z.device) - box @= convert_matrix - return (box, score) - - -class IBin(nn.Module): - stride = None # strides computed during build - export = False # onnx export - - def __init__(self, nc=80, anchors=(), ch=(), bin_count=21): # detection layer - super(IBin, self).__init__() - self.nc = nc # number of classes - self.bin_count = bin_count - - self.w_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0) - self.h_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0) - # classes, x,y,obj - self.no = nc + 3 + \ - self.w_bin_sigmoid.get_length() + self.h_bin_sigmoid.get_length() # w-bce, h-bce - # + self.x_bin_sigmoid.get_length() + self.y_bin_sigmoid.get_length() - - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.zeros(1)] * self.nl # init grid - a = torch.tensor(anchors).float().view(self.nl, -1, 2) - self.register_buffer('anchors', a) # shape(nl,na,2) - self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - - self.ia = nn.ModuleList(ImplicitA(x) for x in ch) - self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch) - - def forward(self, x): - - #self.x_bin_sigmoid.use_fw_regression = True - #self.y_bin_sigmoid.use_fw_regression = True - self.w_bin_sigmoid.use_fw_regression = True - self.h_bin_sigmoid.use_fw_regression = True - - # x = x.copy() # for profiling - z = [] # inference output - self.training |= self.export - for i in range(self.nl): - x[i] = self.m[i](self.ia[i](x[i])) # conv - x[i] = self.im[i](x[i]) - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - - y = x[i].sigmoid() - y[..., 0:2] = (y[..., 0:2] * 2. 
- 0.5 + self.grid[i]) * self.stride[i] # xy - #y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - - - #px = (self.x_bin_sigmoid.forward(y[..., 0:12]) + self.grid[i][..., 0]) * self.stride[i] - #py = (self.y_bin_sigmoid.forward(y[..., 12:24]) + self.grid[i][..., 1]) * self.stride[i] - - pw = self.w_bin_sigmoid.forward(y[..., 2:24]) * self.anchor_grid[i][..., 0] - ph = self.h_bin_sigmoid.forward(y[..., 24:46]) * self.anchor_grid[i][..., 1] - - #y[..., 0] = px - #y[..., 1] = py - y[..., 2] = pw - y[..., 3] = ph - - y = torch.cat((y[..., 0:4], y[..., 46:]), dim=-1) - - z.append(y.view(bs, -1, y.shape[-1])) - - return x if self.training else (torch.cat(z, 1), x) - - @staticmethod - def _make_grid(nx=20, ny=20): - yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) - return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() - - -class Model(nn.Module): - def __init__(self, cfg='yolor-csp-c.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes - super(Model, self).__init__() - self.traced = False - if isinstance(cfg, dict): - self.yaml = cfg # model dict - else: # is *.yaml - import yaml # for torch hub - self.yaml_file = Path(cfg).name - with open(cfg) as f: - self.yaml = yaml.load(f, Loader=yaml.SafeLoader) # model dict - - # Define model - ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels - if nc and nc != self.yaml['nc']: - logger.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}") - self.yaml['nc'] = nc # override yaml value - if anchors: - logger.info(f'Overriding model.yaml anchors with anchors={anchors}') - self.yaml['anchors'] = round(anchors) # override yaml value - self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist - self.names = [str(i) for i in range(self.yaml['nc'])] # default names - # print([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))]) - - # Build strides, anchors - m = self.model[-1] # Detect() - if isinstance(m, Detect): - s = 256 # 2x min stride - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - check_anchor_order(m) - m.anchors /= m.stride.view(-1, 1, 1) - self.stride = m.stride - self._initialize_biases() # only run once - # print('Strides: %s' % m.stride.tolist()) - if isinstance(m, IDetect): - s = 256 # 2x min stride - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - check_anchor_order(m) - m.anchors /= m.stride.view(-1, 1, 1) - self.stride = m.stride - self._initialize_biases() # only run once - # print('Strides: %s' % m.stride.tolist()) - if isinstance(m, IAuxDetect): - s = 256 # 2x min stride - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))[:4]]) # forward - #print(m.stride) - check_anchor_order(m) - m.anchors /= m.stride.view(-1, 1, 1) - self.stride = m.stride - self._initialize_aux_biases() # only run once - # print('Strides: %s' % m.stride.tolist()) - if isinstance(m, IBin): - s = 256 # 2x min stride - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - check_anchor_order(m) - m.anchors /= m.stride.view(-1, 1, 1) - self.stride = m.stride - self._initialize_biases_bin() # only run once - # print('Strides: %s' % m.stride.tolist()) - if isinstance(m, IKeypoint): - s = 256 # 2x min stride - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - check_anchor_order(m) - m.anchors /= m.stride.view(-1, 
1, 1) - self.stride = m.stride - self._initialize_biases_kpt() # only run once - # print('Strides: %s' % m.stride.tolist()) - - # Init weights, biases - initialize_weights(self) - self.info() - logger.info('') - - def forward(self, x, augment=False, profile=False): - if augment: - img_size = x.shape[-2:] # height, width - s = [1, 0.83, 0.67] # scales - f = [None, 3, None] # flips (2-ud, 3-lr) - y = [] # outputs - for si, fi in zip(s, f): - xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max())) - yi = self.forward_once(xi)[0] # forward - # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save - yi[..., :4] /= si # de-scale - if fi == 2: - yi[..., 1] = img_size[0] - yi[..., 1] # de-flip ud - elif fi == 3: - yi[..., 0] = img_size[1] - yi[..., 0] # de-flip lr - y.append(yi) - return torch.cat(y, 1), None # augmented inference, train - else: - return self.forward_once(x, profile) # single-scale inference, train - - def forward_once(self, x, profile=False): - y, dt = [], [] # outputs - for m in self.model: - if m.f != -1: # if not from previous layer - x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - - if not hasattr(self, 'traced'): - self.traced=False - - if self.traced: - if isinstance(m, Detect) or isinstance(m, IDetect) or isinstance(m, IAuxDetect) or isinstance(m, IKeypoint): - break - - if profile: - c = isinstance(m, (Detect, IDetect, IAuxDetect, IBin)) - o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPS - for _ in range(10): - m(x.copy() if c else x) - t = time_synchronized() - for _ in range(10): - m(x.copy() if c else x) - dt.append((time_synchronized() - t) * 100) - print('%10.1f%10.0f%10.1fms %-40s' % (o, m.np, dt[-1], m.type)) - - x = m(x) # run - - y.append(x if m.i in self.save else None) # save output - - if profile: - print('%.1fms total' % sum(dt)) - return x - - def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency - # https://arxiv.org/abs/1708.02002 section 3.3 - # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - m = self.model[-1] # Detect() module - for mi, s in zip(m.m, m.stride): # from - b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls - mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - - def _initialize_aux_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency - # https://arxiv.org/abs/1708.02002 section 3.3 - # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. 
- m = self.model[-1] # Detect() module - for mi, mi2, s in zip(m.m, m.m2, m.stride): # from - b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls - mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - b2 = mi2.bias.view(m.na, -1) # conv.bias(255) to (3,85) - b2.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b2.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls - mi2.bias = torch.nn.Parameter(b2.view(-1), requires_grad=True) - - def _initialize_biases_bin(self, cf=None): # initialize biases into Detect(), cf is class frequency - # https://arxiv.org/abs/1708.02002 section 3.3 - # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - m = self.model[-1] # Bin() module - bc = m.bin_count - for mi, s in zip(m.m, m.stride): # from - b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - old = b[:, (0,1,2,bc+3)].data - obj_idx = 2*bc+4 - b[:, :obj_idx].data += math.log(0.6 / (bc + 1 - 0.99)) - b[:, obj_idx].data += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b[:, (obj_idx+1):].data += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls - b[:, (0,1,2,bc+3)].data = old - mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - - def _initialize_biases_kpt(self, cf=None): # initialize biases into Detect(), cf is class frequency - # https://arxiv.org/abs/1708.02002 section 3.3 - # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - m = self.model[-1] # Detect() module - for mi, s in zip(m.m, m.stride): # from - b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls - mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - - def _print_biases(self): - m = self.model[-1] # Detect() module - for mi in m.m: # from - b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) - print(('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean())) - - # def _print_weights(self): - # for m in self.model.modules(): - # if type(m) is Bottleneck: - # print('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights - - def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers - print('Fusing layers... ') - for m in self.model.modules(): - if isinstance(m, RepConv): - #print(f" fuse_repvgg_block") - m.fuse_repvgg_block() - elif isinstance(m, RepConv_OREPA): - #print(f" switch_to_deploy") - m.switch_to_deploy() - elif type(m) is Conv and hasattr(m, 'bn'): - m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - delattr(m, 'bn') # remove batchnorm - m.forward = m.fuseforward # update forward - elif isinstance(m, (IDetect, IAuxDetect)): - m.fuse() - m.forward = m.fuseforward - self.info() - return self - - def nms(self, mode=True): # add or remove NMS module - present = type(self.model[-1]) is NMS # last layer is NMS - if mode and not present: - print('Adding NMS... ') - m = NMS() # module - m.f = -1 # from - m.i = self.model[-1].i + 1 # index - self.model.add_module(name='%s' % m.i, module=m) # add - self.eval() - elif not mode and present: - print('Removing NMS... 
') - self.model = self.model[:-1] # remove - return self - - def autoshape(self): # add autoShape module - print('Adding autoShape... ') - m = autoShape(self) # wrap model - copy_attr(m, self, include=('yaml', 'nc', 'hyp', 'names', 'stride'), exclude=()) # copy attributes - return m - - def info(self, verbose=False, img_size=640): # print model information - model_info(self, verbose, img_size) - - -def parse_model(d, ch): # model_dict, input_channels(3) - logger.info('\n%3s%18s%3s%10s %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments')) - anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors - no = na * (nc + 5) # number of outputs = anchors * (classes + 5) - - layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out - for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args - m = eval(m) if isinstance(m, str) else m # eval strings - for j, a in enumerate(args): - try: - args[j] = eval(a) if isinstance(a, str) else a # eval strings - except: - pass - - n = max(round(n * gd), 1) if n > 1 else n # depth gain - if m in [nn.Conv2d, Conv, RobustConv, RobustConv2, DWConv, GhostConv, RepConv, RepConv_OREPA, DownC, - SPP, SPPF, SPPCSPC, GhostSPPCSPC, MixConv2d, Focus, Stem, GhostStem, CrossConv, - Bottleneck, BottleneckCSPA, BottleneckCSPB, BottleneckCSPC, - RepBottleneck, RepBottleneckCSPA, RepBottleneckCSPB, RepBottleneckCSPC, - Res, ResCSPA, ResCSPB, ResCSPC, - RepRes, RepResCSPA, RepResCSPB, RepResCSPC, - ResX, ResXCSPA, ResXCSPB, ResXCSPC, - RepResX, RepResXCSPA, RepResXCSPB, RepResXCSPC, - Ghost, GhostCSPA, GhostCSPB, GhostCSPC, - SwinTransformerBlock, STCSPA, STCSPB, STCSPC, - SwinTransformer2Block, ST2CSPA, ST2CSPB, ST2CSPC]: - c1, c2 = ch[f], args[0] - if c2 != no: # if not output - c2 = make_divisible(c2 * gw, 8) - - args = [c1, c2, *args[1:]] - if m in [DownC, SPPCSPC, GhostSPPCSPC, - BottleneckCSPA, BottleneckCSPB, BottleneckCSPC, - RepBottleneckCSPA, RepBottleneckCSPB, RepBottleneckCSPC, - ResCSPA, ResCSPB, ResCSPC, - RepResCSPA, RepResCSPB, RepResCSPC, - ResXCSPA, ResXCSPB, ResXCSPC, - RepResXCSPA, RepResXCSPB, RepResXCSPC, - GhostCSPA, GhostCSPB, GhostCSPC, - STCSPA, STCSPB, STCSPC, - ST2CSPA, ST2CSPB, ST2CSPC]: - args.insert(2, n) # number of repeats - n = 1 - elif m is nn.BatchNorm2d: - args = [ch[f]] - elif m is Concat: - c2 = sum([ch[x] for x in f]) - elif m is Chuncat: - c2 = sum([ch[x] for x in f]) - elif m is Shortcut: - c2 = ch[f[0]] - elif m is Foldcut: - c2 = ch[f] // 2 - elif m in [Detect, IDetect, IAuxDetect, IBin, IKeypoint]: - args.append([ch[x] for x in f]) - if isinstance(args[1], int): # number of anchors - args[1] = [list(range(args[1] * 2))] * len(f) - elif m is ReOrg: - c2 = ch[f] * 4 - elif m is Contract: - c2 = ch[f] * args[0] ** 2 - elif m is Expand: - c2 = ch[f] // args[0] ** 2 - else: - c2 = ch[f] - - m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args) # module - t = str(m)[8:-2].replace('__main__.', '') # module type - np = sum([x.numel() for x in m_.parameters()]) # number params - m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - logger.info('%3s%18s%3s%10.0f %-40s%-30s' % (i, f, n, np, t, args)) # print - save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist - layers.append(m_) - if i == 0: - ch = [] - ch.append(c2) - return nn.Sequential(*layers), sorted(save) - - -if __name__ 
== '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--cfg', type=str, default='yolor-csp-c.yaml', help='model.yaml') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--profile', action='store_true', help='profile model speed') - opt = parser.parse_args() - opt.cfg = check_file(opt.cfg) # check file - set_logging() - device = select_device(opt.device) - - # Create model - model = Model(opt.cfg).to(device) - model.train() - - if opt.profile: - img = torch.rand(1, 3, 640, 640).to(device) - y = model(img, profile=True) - - # Profile - # img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device) - # y = model(img, profile=True) - - # Tensorboard - # from torch.utils.tensorboard import SummaryWriter - # tb_writer = SummaryWriter() - # print("Run 'tensorboard --logdir=models/runs' to view tensorboard at http://localhost:6006/") - # tb_writer.add_graph(model.model, img) # add model to tensorboard - # tb_writer.add_image('test', img[0], dataformats='CWH') # add model to tensorboard diff --git a/spaces/eugenesiow/remove-bg/README.md b/spaces/eugenesiow/remove-bg/README.md deleted file mode 100644 index e337db4f7a3ef592b106b03a4cfedd02674cad31..0000000000000000000000000000000000000000 --- a/spaces/eugenesiow/remove-bg/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Remove Bg -emoji: 🖼️ -colorFrom: blue -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/falterWliame/Face_Mask_Detection/Napolcom Spo Exam Reviewer.md b/spaces/falterWliame/Face_Mask_Detection/Napolcom Spo Exam Reviewer.md deleted file mode 100644 index 26f82ecc97a8bedf8c5086e607c5a47dc8b06f4d..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Napolcom Spo Exam Reviewer.md +++ /dev/null @@ -1,38 +0,0 @@ -

            Napolcom Spo Exam Reviewer


            Download File ››› https://urlca.com/2uDd5K



            -
            - . . . - -Get the Power of the Pocket Reviewer - -Use this Pocket Reviewer app to study anytime, anywhere and save time. This is an ad-free version of Pocket Reviewer that allows you to take notes anywhere you go. Just download it on your smartphone, and it will enable you to read books, articles, blogs and even pass examinations. - -First of all thanks to the creator of this app, Arvin Moses. This Pocket Reviewer app is a great and helpful guide and learning tool for you to understand and understand the fast-paced world we live in. - -It will not only help you pass exams but also let you study any books, articles, blogs or videos. This app has so many features that will help you learn and pass exams in an easier and faster way. - -Use this Pocket Reviewer app to read and understand any text easily and quickly. Just highlight the text that you want to read and it will be converted into text format. - -This Pocket Reviewer app will show you video topics and play it to you. This app will show you the text and picture to you. - -Also, with this Pocket Reviewer app, you can save and read the articles and books offline. It is also a complete guide for exams. It will let you practice the questions and answers in exams. - -This is a very nice and helpful app, you should definitely try this Pocket Reviewer app! It is a great and useful tool for any student. - -Pocket Reviewer Features - -This app has so many features which will help you to read any text in a faster way. - -It is designed in an interesting way so you can easily know which books are best for you to read, and it will let you read any books you want to, whether it is fiction, non-fiction, tutorials, and anything. - -Once you have downloaded this app, you can study any book you want and it will let you read it as if you were reading a book in a normal book shop. - -Moreover, it will allow you to listen to audio books too. With this app, you can listen to the best books and become a better reader. - -You will learn new things when you are reading books. So you should not read a book just once; you should read it several times. - -This Pocket Reviewer app will make sure that you are reading books in the easiest way possible. - -And this app will let 4fefd39f24
            -
            -
            -

            diff --git a/spaces/falterWliame/Face_Mask_Detection/Nuendo 5.5 Activation Code LINK Keygen.md b/spaces/falterWliame/Face_Mask_Detection/Nuendo 5.5 Activation Code LINK Keygen.md deleted file mode 100644 index b4c8b2645ec9a71efeef023adb05b19cc9828f1d..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Nuendo 5.5 Activation Code LINK Keygen.md +++ /dev/null @@ -1,36 +0,0 @@ -

            nuendo 5.5 activation code keygen


            Download Filehttps://urlca.com/2uDcIN



            - -Adobe Acrobat Pro DC CC Plus Full version crack 2017 Professional keygen | Adobe PDF Professional Mac Client is the ideal tool for managing and creating PDF files. It is very easy to use and highly intuitive. It supports all languages and file formats of Adobe Acrobat Pro. Adobe Acrobat Pro is the ideal choice for managing and creating PDF files. Adobe PDF Professional Mac Client is the ideal tool for managing and creating PDF files. It is very easy to use and highly intuitive. It supports all languages and file formats of Adobe Acrobat Pro. - -System Requirements: - -Windows 7/8/10 - -Any Mac OS X version 10.7 or later - -Reviews: - -Write Your Review - -Rate this software - -We are sorry to see you were not satisfied with the software. Please, give us the chance to resolve your issues and fix them as soon as possible. Your reviews are very important to us and enable us to create a better product for you. If you're satisfied with the product, please select "Satisfied", if you are not, please select "Dissatisfied".You're about to see a real assault on American democracy. - -If the leading Democratic contenders for the presidential nomination are elected, they will have access to the federal government's secret computer systems to collect, analyze and cross-check voter information. - -In an online, public question-and-answer session Friday, Hillary Clinton and Bernie Sanders agreed to be subject to the same information technology as the president and vice president. But they would be able to access the databases in ways that other candidates would not. - -"I think it’s fair to require all candidates to do the same thing," Sanders said. - -The latest information technology from the political parties is a database called "Civis." Its function is to collect, store, analyze and cross-check voter information. Clinton and Sanders agreed to be bound by the same system. The only candidate who didn’t join the agreement was Martin O'Malley, the former Maryland governor. - -Technology is a critical, emerging issue in this presidential election. - -Sanders said that Democratic voters should expect the system to be reliable. - -“We are going to use the best technology, the best voter data, the best analysis and the best presentation that we can find," he said. - -But if the leading Democratic contenders are elected, they will have access to the same data as the president and vice president. They will 4fefd39f24
            -
            -
            -

            diff --git a/spaces/fatiXbelha/sd/Download Film Black Panther Wakanda Forever 2022 with Indonesian Subtitles - Fast and Easy.md b/spaces/fatiXbelha/sd/Download Film Black Panther Wakanda Forever 2022 with Indonesian Subtitles - Fast and Easy.md deleted file mode 100644 index 4cd9a21f2a8e3fed5efc0406fd8945aceda523f2..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Film Black Panther Wakanda Forever 2022 with Indonesian Subtitles - Fast and Easy.md +++ /dev/null @@ -1,142 +0,0 @@ - -

            Download Film Black Panther: Wakanda Forever (2022)

            -

            If you are looking for an action-packed, thrilling, and culturally significant film to watch this year, look no further than Black Panther: Wakanda Forever. This is the sequel to the 2018 blockbuster Black Panther, which was a groundbreaking celebration of black culture and a huge success at the box office and among critics. Black Panther: Wakanda Forever continues the story of T'Challa, the king of Wakanda, a hidden but advanced African nation that possesses a powerful metal called vibranium. After the death of T'Challa, his allies must protect Wakanda from a new threat that could endanger their home and the world. In this article, we will tell you everything you need to know about Black Panther: Wakanda Forever, including its plot, themes, cast, crew, visuals, soundtrack, reviews, ratings, box office performance, and how to download it legally and safely. Read on to find out why you should watch this amazing film as soon as possible.

            -

            download film black panther wakanda forever (2022)


            Download Zip > https://urllie.com/2uNAqQ



            -

            Why You Should Watch Black Panther: Wakanda Forever

            -

            Black Panther: Wakanda Forever is not just another superhero movie. It is a film that explores various themes related to power, culture, identity, representation, legacy, and justice within the context of Africa and the African diaspora. It is a film that honors the late Chadwick Boseman, who played T'Challa in the first Black Panther film and inspired millions with his courage and charisma. It is a film that showcases an all-star cast of majority-black talent and a talented team of writers, directors, producers, designers, and composers who bring Wakanda to life. It is a film that offers stunning visuals, costumes, music, and action scenes that will leave you breathless. It is a film that has received rave reviews from critics and audiences alike and has broken several box office records. It is a film that you don't want to miss.

            -

            The Story of Black Panther: Wakanda Forever

            -

Black Panther: Wakanda Forever takes place after the events of Avengers: Endgame, where T'Challa sacrificed his life to help defeat Thanos and his army. Wakanda is now mourning the loss of its king and hero, and facing a new challenge from a mysterious enemy who wants to exploit its secrets and resources. The film follows the journey of T'Challa's sister Shuri, who inherits the mantle of Black Panther and the responsibility of leading Wakanda. She is joined by her loyal friends and allies, such as Okoye, the head of the Dora Milaje, Wakanda's elite female warriors; Nakia, a spy and T'Challa's former lover; M'Baku, the leader of the Jabari tribe; and Everett Ross, a CIA agent who befriended T'Challa. Together, they must protect Wakanda from the external and internal threats that threaten its stability and sovereignty.

            -

            The Legacy of Chadwick Boseman

            -

            One of the most emotional aspects of Black Panther: Wakanda Forever is the tribute to Chadwick Boseman, who passed away in 2020 after a four-year battle with colon cancer. Boseman was widely praised for his portrayal of T'Challa in the first Black Panther film and other Marvel movies, such as Captain America: Civil War, Avengers: Infinity War, and Avengers: Endgame. He brought dignity, grace, and charisma to the role, and inspired many people around the world with his representation of a black superhero and leader. He also showed incredible strength and resilience by working on several films while undergoing treatment for his illness. He was a true hero both on and off screen.

            -

            The filmmakers of Black Panther: Wakanda Forever decided not to recast T'Challa or use CGI to recreate Boseman's likeness, out of respect for his memory and legacy. Instead, they focused on honoring his character and exploring how his death affects the other characters and the story. They also dedicated the film to Boseman's memory and included a special tribute at the end of the film. The film is a testament to Boseman's impact on the world and his lasting contribution to cinema and culture.

            -

            The Cast and Crew of Black Panther: Wakanda Forever

            -

            Black Panther: Wakanda Forever features an impressive cast of talented actors and actresses who bring their characters to life with passion and skill. Here are some of the main cast members and their roles:

            -
              -
            • Letitia Wright as Shuri, T'Challa's younger sister, a genius inventor and scientist, and the new Black Panther.
            • -
            • Danai Gurira as Okoye, the leader of the Dora Milaje, Wakanda's elite female warriors who serve as the royal guard.
            • -
            • Lupita Nyong'o as Nakia, a spy and humanitarian who works undercover in different countries and was T'Challa's former lover.
            • -
            • Winston Duke as M'Baku, the leader of the Jabari tribe, a mountain-dwelling group that initially opposed T'Challa but later became his ally.
            • -
            • Martin Freeman as Everett Ross, a CIA agent who befriended T'Challa and helped him in several missions.
            • -
            • Angela Bassett as Ramonda, T'Challa's mother and the queen mother of Wakanda.
            • -
            • Dominique Thorne as Riri Williams / Ironheart, a teenage genius who creates her own suit of armor inspired by Iron Man.
            • -
• Michael B. Jordan as Erik Killmonger / N'Jadaka, T'Challa's cousin and rival who tried to overthrow him in the first film but later redeemed himself.
            • -
            • Daniel Kaluuya as W'Kabi, Okoye's lover and the leader of the Border Tribe, who sided with Killmonger in the first film but later regretted his actions.
            • -
            • Florence Kasumba as Ayo, a member of the Dora Milaje who is loyal to Okoye.
            • -
            • Nabiyah Be as Tilda Johnson / Nightshade, a brilliant biochemist who works for Nakia and has a hidden agenda.
            • -
• John Kani as T'Chaka, T'Challa's father and the former king of Wakanda, who appears as a spirit in the ancestral plane.
            • -
            -

            The film is directed by Ryan Coogler, who also co-wrote the screenplay with Joe Robert Cole. Coogler and Cole previously collaborated on the first Black Panther film, as well as the acclaimed drama Fruitvale Station. They are joined by a talented crew of producers, editors, cinematographers, composers, and designers who worked hard to create a stunning and authentic representation of Wakanda and its culture.

            -

            The Visuals and Soundtrack of Black Panther: Wakanda Forever

            -

            Black Panther: Wakanda Forever is a feast for the eyes and ears. The film boasts of spectacular visuals that showcase the beauty and diversity of Wakanda and its people. The film features a variety of settings, such as the futuristic capital city, the lush rainforest, the snowy mountains, the hidden underwater kingdom, and the mystical ancestral plane. The film also showcases the amazing costumes and makeup that reflect the different tribes and traditions of Wakanda, as well as the sleek and powerful technology that they use. The film uses a combination of practical effects, CGI, and motion capture to create realistic and immersive scenes that will make you feel like you are in Wakanda.

            -

            How to download film black panther wakanda forever (2022) from Netflix
            -Download film black panther wakanda forever (2022) sub indo full movie
            -Watch film black panther wakanda forever (2022) online free HD
            -Film black panther wakanda forever (2022) release date and cast
            -Film black panther wakanda forever (2022) trailer and review
            -Download film black panther wakanda forever (2022) bluray 1080p
            -Film black panther wakanda forever (2022) plot and spoilers
            -Download film black panther wakanda forever (2022) in hindi dubbed
            -Film black panther wakanda forever (2022) marvel cinematic universe
            -Download film black panther wakanda forever (2022) torrent magnet link
            -Film black panther wakanda forever (2022) streaming sites and apps
            -Download film black panther wakanda forever (2022) with english subtitles
            -Film black panther wakanda forever (2022) box office and awards
            -Download film black panther wakanda forever (2022) for free without registration
            -Film black panther wakanda forever (2022) behind the scenes and interviews
            -Download film black panther wakanda forever (2022) mp4 mkv avi
            -Film black panther wakanda forever (2022) soundtrack and score
            -Download film black panther wakanda forever (2022) google drive link
            -Film black panther wakanda forever (2022) fan theories and predictions
            -Download film black panther wakanda forever (2022) dual audio eng-hin
            -Film black panther wakanda forever (2022) easter eggs and references
            -Download film black panther wakanda forever (2022) direct link no ads
            -Film black panther wakanda forever (2022) rating and critics opinions
            -Download film black panther wakanda forever (2022) x264 x265 hevc
            -Film black panther wakanda forever (2022) sequel and prequel

            -

            The film also has a phenomenal soundtrack that blends traditional African music with modern hip-hop and pop. The film's score is composed by Ludwig Göransson, who won an Oscar for his work on the first Black Panther film. He incorporates various instruments, such as drums, flutes, horns, strings, and vocals, to create a rich and dynamic sound that matches the mood and tone of each scene. The film also features original songs by Beyoncé, who produced and curated an album called The Lion King: The Gift, which is inspired by the film and its themes. The album features collaborations with other artists from Africa and the diaspora, such as Burna Boy, Wizkid, Tiwa Savage, Shatta Wale, Yemi Alade, Tekno, Mr Eazi, Salatiel, Moonchild Sanelly, Busiswa, and more. The album is a celebration of African culture and identity, and a tribute to Boseman's legacy.

            -

            How to Download Black Panther: Wakanda Forever

            -

            If you are eager to watch Black Panther: Wakanda Forever, you might be wondering how to download it online or offline. There are many ways to download the film legally and safely, depending on your preference and budget. Here are some of the best platforms to download Black Panther: Wakanda Forever:

            -

            The Best Platforms to Download Black Panther: Wakanda Forever

            - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Platform | Description | Price | Pros | Cons |
| --- | --- | --- | --- | --- |
| Disney+ | A streaming service that offers access to Disney's library of movies and TV shows, including Marvel content. | $7.99 per month or $79.99 per year. | High-quality video and audio; offline viewing option; family-friendly content; exclusive originals and extras. | Requires subscription; not available in all countries; may have limited content or a delayed release date depending on region. |
| Amazon Prime Video | A streaming service that offers access to thousands of movies and TV shows, including Marvel content. | $8.99 per month or $119 per year (includes other benefits such as free shipping). | High-quality video and audio; offline viewing option; wide range of content; compatible with various devices. | Requires subscription; not available in all countries; may have limited content depending on region; may have additional fees for some titles. |
| iTunes | A digital media store that offers access to movies and TV shows, including Marvel content. | Varies depending on title and quality. Usually ranges from $3.99 to $19.99. | High-quality video and audio; offline viewing option; compatible with various devices; you own the title forever. | Requires payment for each title; not available in all countries; may have limited content or a delayed release date depending on region. |
| YouTube | A video-sharing platform that offers access to movies and TV shows, including Marvel content. | Varies depending on title and quality. Usually ranges from $3.99 to $19.99. | High-quality video and audio; offline viewing option; compatible with various devices; you own the title forever. | Requires payment for each title; not available in all countries; may have limited content or a delayed release date depending on region. |
            -

            The Benefits of Downloading Black Panther: Wakanda Forever

            -

            Downloading Black Panther: Wakanda Forever has many benefits over watching it in theaters or on TV. Here are some of them:

            -
              -
            • Convenience: You can watch the film anytime, anywhere, and as many times as you want. You don't have to worry about missing the showtime, finding a seat, or dealing with noisy crowds. You can also pause, rewind, or fast-forward the film as you please.
            • -
            • Cost: You can save money by downloading the film instead of paying for tickets, popcorn, drinks, parking, or transportation. You can also choose the quality and price that suits your budget and preference.
            • -
            • Choice: You can choose the platform and device that you want to watch the film on. You can also choose the language, subtitles, and audio options that you prefer.
            • -
            • Control: You can control the environment and atmosphere that you want to watch the film in. You can adjust the volume, brightness, and temperature to your liking. You can also invite your friends and family to watch with you or enjoy the film by yourself.
            • -
            -

            The Risks of Downloading Black Panther: Wakanda Forever

            -

            Downloading Black Panther: Wakanda Forever also has some risks that you should be aware of and avoid. Here are some of them:

            -
              -
            • Illegality: You should only download the film from legal and authorized sources, such as the ones mentioned above. Downloading the film from illegal or pirated sources, such as torrent sites, file-sharing platforms, or unauthorized websites, is a violation of the law and the intellectual property rights of the filmmakers. You could face legal consequences, such as fines, lawsuits, or even jail time, if you are caught downloading the film illegally.
            • -
            • Insecurity: You should also only download the film from safe and trustworthy sources, such as the ones mentioned above. Downloading the film from unsafe or unverified sources, such as malware-infected sites, phishing links, or spam emails, could expose your device and data to viruses, spyware, ransomware, or hackers. You could lose your personal information, such as your passwords, bank accounts, credit cards, or identity, or damage your device irreparably if you are not careful about where you download the film from.
            • -
            • Immorality: You should also consider the ethical and moral implications of downloading the film without paying for it or supporting the filmmakers. Downloading the film illegally or unfairly deprives the filmmakers of their rightful revenue and recognition for their hard work and creativity. It also undermines the film industry and the livelihoods of many people who work in it. It also disrespects the legacy of Chadwick Boseman and his contribution to the film and culture. You should support the film and the filmmakers by downloading it legally and fairly.
            • -
            -

            What to Expect from Black Panther: Wakanda Forever

            -

            Black Panther: Wakanda Forever is a film that will not disappoint you. It is a film that will entertain you, educate you, inspire you, and move you. It is a film that will make you proud of your heritage and culture, or appreciate and respect the heritage and culture of others. It is a film that will make you think about important issues and topics that affect our world today. It is a film that will make you feel a range of emotions, from joy to sadness, from anger to hope. It is a film that will make you want to watch it again and again. Here are some of the things that you can expect from Black Panther: Wakanda Forever:

            -

            The Reviews and Ratings of Black Panther: Wakanda Forever

            -

            Black Panther: Wakanda Forever has received overwhelmingly positive reviews and ratings from critics and audiences alike. The film has a score of 97% on Rotten Tomatoes, based on 256 reviews, with an average rating of 8.7/10. The site's critical consensus reads: "A worthy successor to its groundbreaking predecessor, Black Panther: Wakanda Forever delivers a thrilling and emotionally resonant story that honors Chadwick Boseman's legacy and celebrates African culture." The film also has a score of 88/100 on Metacritic, based on 52 reviews, indicating "universal acclaim". The site's summary states: "Black Panther: Wakanda Forever is a stunning achievement in filmmaking that blends action, drama, humor, and social commentary with dazzling visuals and sound. Ryan Coogler and his cast and crew have created a masterpiece that transcends the superhero genre and elevates cinema to new heights." The film also has an A+ rating on CinemaScore, based on audience polls. The site's report says: "Black Panther: Wakanda Forever is a smash hit with audiences who love its captivating story, engaging characters, spectacular action scenes , and cultural significance. The film is a must-see for fans of Marvel and cinema in general."

            -

            The Box Office Performance of Black Panther: Wakanda Forever

            -

            Black Panther: Wakanda Forever has also been a huge success at the box office, breaking several records and making history. The film has grossed over $1.2 billion worldwide, making it the second-highest-grossing film of 2023, behind Avatar 2, and the ninth-highest-grossing film of all time. The film has also become the highest-grossing film by a black director, surpassing Coogler's own Black Panther, and the highest-grossing film with a predominantly black cast, surpassing The Lion King. The film has also achieved several milestones in different markets, such as becoming the first Marvel film to open in China with over $100 million, the first film to cross $200 million in Africa, and the first film to cross $300 million in North America. The film has also received several accolades and nominations from various awards ceremonies, such as the Oscars, the Golden Globes, the BAFTAs, and the SAG Awards.

            -

            The Future of Black Panther and the Marvel Cinematic Universe

            -

            Black Panther: Wakanda Forever is not only a standalone film, but also a part of the larger Marvel Cinematic Universe (MCU), a series of interconnected films and TV shows that share a common storyline and characters. The film is the 32nd installment in the MCU, and the sixth installment in Phase Four, which began with Black Widow and will end with Fantastic Four. The film sets up several plot threads and character arcs that will be explored in future MCU projects, such as The Marvels, Doctor Strange in the Multiverse of Madness, Secret Invasion, Ironheart, and more. The film also introduces new characters and concepts that will expand the MCU's scope and diversity, such as Riri Williams / Ironheart, Tilda Johnson / Nightshade, Namor the Sub-Mariner, Atlantis, and more. The film also pays homage to previous MCU films and characters, such as Iron Man, Captain America, Thor, Hulk, Black Widow, Hawkeye, Spider-Man, Ant-Man, Wasp, Captain Marvel, Guardians of the Galaxy, Doctor Strange, Scarlet Witch, Vision, Falcon, Winter Soldier, Loki, Black Panther, and more. The film is a celebration of the past, present, and future of the MCU.

            -

            Conclusion

            -

            Black Panther: Wakanda Forever is a film that you should not miss. It is a film that offers an exciting story, compelling characters, stunning visuals, amazing music , and cultural significance. It is a film that honors the legacy of Chadwick Boseman and celebrates African culture. It is a film that has received rave reviews and ratings and has broken several box office records. It is a film that is part of the Marvel Cinematic Universe and sets up the future of the franchise. It is a film that you can download legally and safely from various platforms and enjoy at your convenience. We hope that this article has given you enough information and motivation to download Black Panther: Wakanda Forever and watch it as soon as possible. You will not regret it.

            -

            FAQs

            -

            Here are some of the frequently asked questions about Black Panther: Wakanda Forever:

            -
              -
            1. When and where was Black Panther: Wakanda Forever released?
              -Black Panther: Wakanda Forever was released on July 8, 2023, in the United States and Canada, and on various dates in other countries. The film was released in theaters, as well as on Disney+ with Premier Access, which required an additional fee of $29.99.
            2. -
            3. How long is Black Panther: Wakanda Forever?
              -Black Panther: Wakanda Forever has a runtime of 147 minutes, or 2 hours and 27 minutes.
            4. -
            5. What is the rating of Black Panther: Wakanda Forever?
              -Black Panther: Wakanda Forever is rated PG-13 by the MPAA for sequences of violence and action, some language, and thematic elements.
            6. -
            7. Who are the villains of Black Panther: Wakanda Forever?
              -Black Panther: Wakanda Forever features several antagonists who pose a threat to Wakanda and its allies. Some of them are:
                -
              • Namor the Sub-Mariner, the ruler of Atlantis, an underwater kingdom that has a long-standing rivalry with Wakanda.
              • -
              • Tilda Johnson / Nightshade, a brilliant biochemist who works for Nakia and has a hidden agenda.
              • -
              • The White Wolf, a former member of the CIA who was experimented on by Hydra and became a ruthless mercenary.
              • -
              • The Mandarin, the leader of the Ten Rings, a terrorist organization that seeks to destabilize the world order.
              • -
            8. -
            9. Will there be a third Black Panther film?
              -There is no official confirmation or announcement about a third Black Panther film yet, but it is possible that there will be one in the future, depending on the success and reception of Black Panther: Wakanda Forever. The film leaves some room for further exploration of Wakanda and its characters, as well as their connection to the rest of the Marvel Cinematic Universe.
            10. -

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy FoxOne Special Missions with MOD APK and Money Hack.md b/spaces/fatiXbelha/sd/Enjoy FoxOne Special Missions with MOD APK and Money Hack.md deleted file mode 100644 index 799c958f132b650334a6533aeadda95c66ae0841..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy FoxOne Special Missions with MOD APK and Money Hack.md +++ /dev/null @@ -1,96 +0,0 @@ - -

            FoxOne Special Missions + Mod APK AN1: A Review

            -

            If you are a fan of fighter jet simulators, you might have heard of FoxOne Special Missions, a game that lets you fly various aircraft and engage in thrilling air combat scenarios. But did you know that you can enhance your gaming experience with Mod APK AN1, a modified version of the game that gives you unlimited money and access to all planes? In this article, we will review FoxOne Special Missions and Mod APK AN1, and show you how to download and install them on your Android device.

            -

            foxone special missions + mod apk an1


            Download Zip > https://urllie.com/2uNCkc



            -

            What is FoxOne Special Missions?

            -

            FoxOne Special Missions is a 3D action flight simulator game developed by SkyFox Games. It is the sequel to FoxOne Advanced Edition, and it features new missions, new planes, new enemies, and new graphics. The game has a realistic physics engine, dynamic weather effects, and stunning sound effects. You can choose from over 20 different aircraft, each with its own characteristics and weapons. You can also customize your planes with different skins and decals.

            -

            Features of FoxOne Special Missions

            -

            Some of the features of FoxOne Special Missions are:

            -
              -
            • Over 30 challenging missions in different locations around the world
            • -
            • Over 20 different planes, including fighters, bombers, stealth jets, and drones
            • -
            • Over 100 different weapons, including missiles, bombs, rockets, guns, and lasers
            • -
            • Multiple game modes, including campaign, free flight, and multiplayer
            • -
            • Leaderboards and achievements to compete with other players
            • -
            • Support for gamepads and virtual joysticks
            • -
            -

            Gameplay of FoxOne Special Missions

            -

            The gameplay of FoxOne Special Missions is simple and intuitive. You can control your plane using the accelerometer or the virtual joystick on the screen. You can also use the buttons to fire your weapons, change your view, activate your radar, and perform other actions. You can complete various objectives in each mission, such as destroying enemy bases, escorting allies, intercepting enemy planes, and more. You can earn money and stars by completing missions, which you can use to buy new planes and weapons. You can also unlock new missions by completing previous ones.

            -

            What is Mod APK AN1?

            -

            Mod APK AN1 is a modified version of FoxOne Special Missions that gives you unlimited money and access to all planes. It is created by AN1.com, a website that provides modded games and apps for Android devices. With Mod APK AN1, you can enjoy FoxOne Special Missions without any limitations or restrictions.

            -

            Benefits of Mod APK AN1

            -

            Some of the benefits of Mod APK AN1 are:

            -

            foxone special missions mod apk unlimited money
            -foxone special missions mod apk latest version
            -foxone special missions mod apk download for android
            -foxone special missions mod apk rexdl
            -foxone special missions mod apk revdl
            -foxone special missions mod apk hack
            -foxone special missions mod apk free shopping
            -foxone special missions mod apk obb
            -foxone special missions mod apk offline
            -foxone special missions mod apk android 1
            -foxone special missions mod apk 2.0.6rc
            -foxone special missions mod apk 1.8.10rc
            -foxone special missions mod apk an1.com
            -foxone special missions mod apk an1.co.in
            -foxone special missions mod apk an1.net
            -foxone special missions mod apk an1.org
            -foxone special missions mod apk an1.ru
            -foxone special missions mod apk an1.in
            -foxone special missions mod apk an1.io
            -foxone special missions mod apk an1.me
            -download foxone special missions + mod apk an1
            -how to install foxone special missions + mod apk an1
            -how to play foxone special missions + mod apk an1
            -how to update foxone special missions + mod apk an1
            -how to uninstall foxone special missions + mod apk an1
            -what is foxone special missions + mod apk an1
            -why download foxone special missions + mod apk an1
            -where to get foxone special missions + mod apk an1
            -when was foxone special missions + mod apk an1 released
            -who made foxone special missions + mod apk an1
            -best settings for foxone special missions + mod apk an1
            -best planes for foxone special missions + mod apk an1
            -best weapons for foxone special missions + mod apk an1
            -best tips and tricks for foxone special missions + mod apk an1
            -best cheats and hacks for foxone special missions + mod apk an1
            -review of foxone special missions + mod apk an1
            -gameplay of foxone special missions + mod apk an1
            -walkthrough of foxone special missions + mod apk an1
            -guide of foxone special missions + mod apk an1
            -tutorial of foxone special missions + mod apk an1

            -
              -
            • You can get unlimited money to buy any plane or weapon you want
            • -
            • You can unlock all planes without completing any missions
            • -
            • You can enjoy all the features of the game without any ads or in-app purchases
            • -
            • You can play the game offline without any internet connection
            • -
            • You can update the game easily without losing your progress or data
            • -
            -

            How to download and install Mod APK AN1

            -

            To download and install Mod APK AN1 on your Android device, you need to follow these steps:

            -
              -
1. Go to AN1.com and search for FoxOne Special Missions Mod APK
            2. -
            3. Download the modded file from the website
            4. -
            5. Enable unknown sources on your device settings to allow installation from third-party sources
            6. -
            7. Locate the downloaded file on your device storage and tap on it to install it
            8. -
            9. Launch the game and enjoy it with unlimited money and all planes unlocked
Conclusion

              FoxOne Special Missions is a great game for anyone who loves flying and fighting in the sky. It has realistic graphics, sound effects, and physics, as well as a variety of planes, weapons, and missions to choose from. However, if you want to enjoy the game without any limitations or restrictions, you should try Mod APK AN1, a modified version of the game that gives you unlimited money and access to all planes. You can download and install Mod APK AN1 easily from AN1.com, and play the game offline without any ads or in-app purchases. Mod APK AN1 is the best way to experience FoxOne Special Missions on your Android device.

              -

              Summary of the article

              -

              In this article, we reviewed FoxOne Special Missions and Mod APK AN1, and showed you how to download and install them on your Android device. We discussed the features, gameplay, and benefits of both the original game and the modded version. We hope you found this article helpful and informative, and that you will enjoy playing FoxOne Special Missions with Mod APK AN1.

              -

              FAQs

              -

              Here are some frequently asked questions about FoxOne Special Missions and Mod APK AN1:

              -
                -
              • Q: Is FoxOne Special Missions free to play?
              • -
              • A: Yes, FoxOne Special Missions is free to play, but it contains ads and in-app purchases that can enhance your gaming experience.
              • -
              • Q: Is Mod APK AN1 safe to use?
              • -
              • A: Yes, Mod APK AN1 is safe to use, as long as you download it from a trusted source like AN1.com. However, you should always be careful when installing apps from unknown sources, and scan them for viruses or malware before installing them.
              • -
              • Q: Do I need to root my device to use Mod APK AN1?
              • -
              • A: No, you do not need to root your device to use Mod APK AN1. You just need to enable unknown sources on your device settings to allow installation from third-party sources.
              • -
              • Q: Can I play FoxOne Special Missions with Mod APK AN1 online?
              • -
              • A: No, you cannot play FoxOne Special Missions with Mod APK AN1 online. The modded version of the game is only compatible with offline mode. If you want to play online, you need to use the original version of the game.
              • -
              • Q: Can I update FoxOne Special Missions with Mod APK AN1?
              • -
              • A: Yes, you can update FoxOne Special Missions with Mod APK AN1 easily without losing your progress or data. You just need to download the latest version of the modded file from AN1.com and install it over the existing one.
              • -

              197e85843d
              -
              -
              \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/test_project/latex/attention/parameter_attention.tex b/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/test_project/latex/attention/parameter_attention.tex deleted file mode 100644 index 7bc4fe452dbdbfe44ff72f0cdbd37acd5c786ce6..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/test_project/latex/attention/parameter_attention.tex +++ /dev/null @@ -1,45 +0,0 @@ -\pagebreak -\section*{Two Feed-Forward Layers = Attention over Parameters}\label{sec:parameter_attention} - -In addition to attention layers, our model contains position-wise feed-forward networks (Section \ref{sec:ffn}), which consist of two linear transformations with a ReLU activation in between. In fact, these networks too can be seen as a form of attention. Compare the formula for such a network with the formula for a simple dot-product attention layer (biases and scaling factors omitted): - -\begin{align*} - FFN(x, W_1, W_2) = ReLU(xW_1)W_2 \\ - A(q, K, V) = Softmax(qK^T)V -\end{align*} - -Based on the similarity of these formulae, the two-layer feed-forward network can be seen as a kind of attention, where the keys and values are the rows of the trainable parameter matrices $W_1$ and $W_2$, and where we use ReLU instead of Softmax in the compatibility function. - -%the compatablity function is $compat(q, k_i) = ReLU(q \cdot k_i)$ instead of $Softmax(qK_T)_i$. - -Given this similarity, we experimented with replacing the position-wise feed-forward networks with attention layers similar to the ones we use everywhere else our model. The multi-head-attention-over-parameters sublayer is identical to the multi-head attention described in \ref{sec:multihead}, except that the "keys" and "values" inputs to each attention head are trainable model parameters, as opposed to being linear projections of a previous layer. These parameters are scaled up by a factor of $\sqrt{d_{model}}$ in order to be more similar to activations. - -In our first experiment, we replaced each position-wise feed-forward network with a multi-head-attention-over-parameters sublayer with $h_p=8$ heads, key-dimensionality $d_{pk}=64$, and value-dimensionality $d_{pv}=64$, using $n_p=1536$ key-value pairs for each attention head. The sublayer has a total of $2097152$ parameters, including the parameters in the query projection and the output projection. This matches the number of parameters in the position-wise feed-forward network that we replaced. While the theoretical amount of computation is also the same, in practice, the attention version caused the step times to be about 30\% longer. - -In our second experiment, we used $h_p=8$ heads, and $n_p=512$ key-value pairs for each attention head, again matching the total number of parameters in the base model. - -Results for the first experiment were slightly worse than for the base model, and results for the second experiment were slightly better, see Table~\ref{tab:parameter_attention}. - -\begin{table}[h] -\caption{Replacing the position-wise feed-forward networks with multihead-attention-over-parameters produces similar results to the base model. 
All metrics are on the English-to-German translation development set, newstest2013.} -\label{tab:parameter_attention} -\begin{center} -\vspace{-2mm} -%\scalebox{1.0}{ -\begin{tabular}{c|cccccc|cccc} -\hline\rule{0pt}{2.0ex} - & \multirow{2}{*}{$\dmodel$} & \multirow{2}{*}{$\dff$} & -\multirow{2}{*}{$h_p$} & \multirow{2}{*}{$d_{pk}$} & \multirow{2}{*}{$d_{pv}$} & - \multirow{2}{*}{$n_p$} & - PPL & BLEU & params & training\\ - & & & & & & & (dev) & (dev) & $\times10^6$ & time \\ -\hline\rule{0pt}{2.0ex} -base & 512 & 2048 & & & & & 4.92 & 25.8 & 65 & 12 hours\\ -\hline\rule{0pt}{2.0ex} -AOP$_1$ & 512 & & 8 & 64 & 64 & 1536 & 4.92& 25.5 & 65 & 16 hours\\ -AOP$_2$ & 512 & & 16 & 64 & 64 & 512 & \textbf{4.86} & \textbf{25.9} & 65 & 16 hours \\ -\hline -\end{tabular} -%} -\end{center} -\end{table} diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/hifigan/__init__.py b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/hifigan/__init__.py deleted file mode 100644 index e0ae476fe58c48e998c56234a55b871beba4042d..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/hifigan/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -from .models import Generator - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/finalhandler/HISTORY.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/finalhandler/HISTORY.md deleted file mode 100644 index ec2d38b5d406e6b482165436ada0a2535cdaa3a9..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/finalhandler/HISTORY.md +++ /dev/null @@ -1,195 +0,0 @@ -1.2.0 / 2022-03-22 -================== - - * Remove set content headers that break response - * deps: on-finished@2.4.1 - * deps: statuses@2.0.1 - - Rename `425 Unordered Collection` to standard `425 Too Early` - -1.1.2 / 2019-05-09 -================== - - * Set stricter `Content-Security-Policy` header - * deps: parseurl@~1.3.3 - * deps: statuses@~1.5.0 - -1.1.1 / 2018-03-06 -================== - - * Fix 404 output for bad / missing pathnames - * deps: encodeurl@~1.0.2 - - Fix encoding `%` as last character - * deps: statuses@~1.4.0 - -1.1.0 / 2017-09-24 -================== - - * Use `res.headersSent` when available - -1.0.6 / 2017-09-22 -================== - - * deps: debug@2.6.9 - -1.0.5 / 2017-09-15 -================== - - * deps: parseurl@~1.3.2 - - perf: reduce overhead for full URLs - - perf: unroll the "fast-path" `RegExp` - -1.0.4 / 2017-08-03 -================== - - * deps: debug@2.6.8 - -1.0.3 / 2017-05-16 -================== - - * deps: debug@2.6.7 - - deps: ms@2.0.0 - -1.0.2 / 2017-04-22 -================== - - * deps: debug@2.6.4 - - deps: ms@0.7.3 - -1.0.1 / 2017-03-21 -================== - - * Fix missing `` in HTML document - * deps: debug@2.6.3 - - Fix: `DEBUG_MAX_ARRAY_LENGTH` - -1.0.0 / 2017-02-15 -================== - - * Fix exception when `err` cannot be converted to a string - * Fully URL-encode the pathname in the 404 message - * Only include the pathname in the 404 message - * Send complete HTML document - * Set `Content-Security-Policy: default-src 'self'` header - * deps: debug@2.6.1 - - Allow colors in workers - - Deprecated `DEBUG_FD` environment variable set to `3` or higher - - Fix error when running under React Native - - Use same color for same namespace - - deps: ms@0.7.2 - -0.5.1 / 
2016-11-12 -================== - - * Fix exception when `err.headers` is not an object - * deps: statuses@~1.3.1 - * perf: hoist regular expressions - * perf: remove duplicate validation path - -0.5.0 / 2016-06-15 -================== - - * Change invalid or non-numeric status code to 500 - * Overwrite status message to match set status code - * Prefer `err.statusCode` if `err.status` is invalid - * Set response headers from `err.headers` object - * Use `statuses` instead of `http` module for status messages - - Includes all defined status messages - -0.4.1 / 2015-12-02 -================== - - * deps: escape-html@~1.0.3 - - perf: enable strict mode - - perf: optimize string replacement - - perf: use faster string coercion - -0.4.0 / 2015-06-14 -================== - - * Fix a false-positive when unpiping in Node.js 0.8 - * Support `statusCode` property on `Error` objects - * Use `unpipe` module for unpiping requests - * deps: escape-html@1.0.2 - * deps: on-finished@~2.3.0 - - Add defined behavior for HTTP `CONNECT` requests - - Add defined behavior for HTTP `Upgrade` requests - - deps: ee-first@1.1.1 - * perf: enable strict mode - * perf: remove argument reassignment - -0.3.6 / 2015-05-11 -================== - - * deps: debug@~2.2.0 - - deps: ms@0.7.1 - -0.3.5 / 2015-04-22 -================== - - * deps: on-finished@~2.2.1 - - Fix `isFinished(req)` when data buffered - -0.3.4 / 2015-03-15 -================== - - * deps: debug@~2.1.3 - - Fix high intensity foreground color for bold - - deps: ms@0.7.0 - -0.3.3 / 2015-01-01 -================== - - * deps: debug@~2.1.1 - * deps: on-finished@~2.2.0 - -0.3.2 / 2014-10-22 -================== - - * deps: on-finished@~2.1.1 - - Fix handling of pipelined requests - -0.3.1 / 2014-10-16 -================== - - * deps: debug@~2.1.0 - - Implement `DEBUG_FD` env variable support - -0.3.0 / 2014-09-17 -================== - - * Terminate in progress response only on error - * Use `on-finished` to determine request status - -0.2.0 / 2014-09-03 -================== - - * Set `X-Content-Type-Options: nosniff` header - * deps: debug@~2.0.0 - -0.1.0 / 2014-07-16 -================== - - * Respond after request fully read - - prevents hung responses and socket hang ups - * deps: debug@1.0.4 - -0.0.3 / 2014-07-11 -================== - - * deps: debug@1.0.3 - - Add support for multiple wildcards in namespaces - -0.0.2 / 2014-06-19 -================== - - * Handle invalid status codes - -0.0.1 / 2014-06-05 -================== - - * deps: debug@1.0.2 - -0.0.0 / 2014-06-05 -================== - - * Extracted from connect/express diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/negotiator/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/negotiator/README.md deleted file mode 100644 index 82915e521b4d321d90111316c6193706f1d7b8bf..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/negotiator/README.md +++ /dev/null @@ -1,203 +0,0 @@ -# negotiator - -[![NPM Version][npm-image]][npm-url] -[![NPM Downloads][downloads-image]][downloads-url] -[![Node.js Version][node-version-image]][node-version-url] -[![Build Status][github-actions-ci-image]][github-actions-ci-url] -[![Test Coverage][coveralls-image]][coveralls-url] - -An HTTP content negotiator for Node.js - -## Installation - -```sh -$ npm install negotiator -``` - -## API - -```js -var Negotiator = require('negotiator') -``` - -### Accept Negotiation - -```js -availableMediaTypes = ['text/html', 'text/plain', 'application/json'] - 
-// The negotiator constructor receives a request object -negotiator = new Negotiator(request) - -// Let's say Accept header is 'text/html, application/*;q=0.2, image/jpeg;q=0.8' - -negotiator.mediaTypes() -// -> ['text/html', 'image/jpeg', 'application/*'] - -negotiator.mediaTypes(availableMediaTypes) -// -> ['text/html', 'application/json'] - -negotiator.mediaType(availableMediaTypes) -// -> 'text/html' -``` - -You can check a working example at `examples/accept.js`. - -#### Methods - -##### mediaType() - -Returns the most preferred media type from the client. - -##### mediaType(availableMediaType) - -Returns the most preferred media type from a list of available media types. - -##### mediaTypes() - -Returns an array of preferred media types ordered by the client preference. - -##### mediaTypes(availableMediaTypes) - -Returns an array of preferred media types ordered by priority from a list of -available media types. - -### Accept-Language Negotiation - -```js -negotiator = new Negotiator(request) - -availableLanguages = ['en', 'es', 'fr'] - -// Let's say Accept-Language header is 'en;q=0.8, es, pt' - -negotiator.languages() -// -> ['es', 'pt', 'en'] - -negotiator.languages(availableLanguages) -// -> ['es', 'en'] - -language = negotiator.language(availableLanguages) -// -> 'es' -``` - -You can check a working example at `examples/language.js`. - -#### Methods - -##### language() - -Returns the most preferred language from the client. - -##### language(availableLanguages) - -Returns the most preferred language from a list of available languages. - -##### languages() - -Returns an array of preferred languages ordered by the client preference. - -##### languages(availableLanguages) - -Returns an array of preferred languages ordered by priority from a list of -available languages. - -### Accept-Charset Negotiation - -```js -availableCharsets = ['utf-8', 'iso-8859-1', 'iso-8859-5'] - -negotiator = new Negotiator(request) - -// Let's say Accept-Charset header is 'utf-8, iso-8859-1;q=0.8, utf-7;q=0.2' - -negotiator.charsets() -// -> ['utf-8', 'iso-8859-1', 'utf-7'] - -negotiator.charsets(availableCharsets) -// -> ['utf-8', 'iso-8859-1'] - -negotiator.charset(availableCharsets) -// -> 'utf-8' -``` - -You can check a working example at `examples/charset.js`. - -#### Methods - -##### charset() - -Returns the most preferred charset from the client. - -##### charset(availableCharsets) - -Returns the most preferred charset from a list of available charsets. - -##### charsets() - -Returns an array of preferred charsets ordered by the client preference. - -##### charsets(availableCharsets) - -Returns an array of preferred charsets ordered by priority from a list of -available charsets. - -### Accept-Encoding Negotiation - -```js -availableEncodings = ['identity', 'gzip'] - -negotiator = new Negotiator(request) - -// Let's say Accept-Encoding header is 'gzip, compress;q=0.2, identity;q=0.5' - -negotiator.encodings() -// -> ['gzip', 'identity', 'compress'] - -negotiator.encodings(availableEncodings) -// -> ['gzip', 'identity'] - -negotiator.encoding(availableEncodings) -// -> 'gzip' -``` - -You can check a working example at `examples/encoding.js`. - -#### Methods - -##### encoding() - -Returns the most preferred encoding from the client. - -##### encoding(availableEncodings) - -Returns the most preferred encoding from a list of available encodings. - -##### encodings() - -Returns an array of preferred encodings ordered by the client preference. 
- -##### encodings(availableEncodings) - -Returns an array of preferred encodings ordered by priority from a list of -available encodings. - -## See Also - -The [accepts](https://npmjs.org/package/accepts#readme) module builds on -this module and provides an alternative interface, mime type validation, -and more. - -## License - -[MIT](LICENSE) - -[npm-image]: https://img.shields.io/npm/v/negotiator.svg -[npm-url]: https://npmjs.org/package/negotiator -[node-version-image]: https://img.shields.io/node/v/negotiator.svg -[node-version-url]: https://nodejs.org/en/download/ -[coveralls-image]: https://img.shields.io/coveralls/jshttp/negotiator/master.svg -[coveralls-url]: https://coveralls.io/r/jshttp/negotiator?branch=master -[downloads-image]: https://img.shields.io/npm/dm/negotiator.svg -[downloads-url]: https://npmjs.org/package/negotiator -[github-actions-ci-image]: https://img.shields.io/github/workflow/status/jshttp/negotiator/ci/master?label=ci -[github-actions-ci-url]: https://github.com/jshttp/negotiator/actions/workflows/ci.yml diff --git a/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/nn/parallel/data_parallel.py b/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/nn/parallel/data_parallel.py deleted file mode 100644 index 376fc038919aa2a5bd696141e7bb6025d4981306..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/nn/parallel/data_parallel.py +++ /dev/null @@ -1,112 +0,0 @@ -# -*- coding: utf8 -*- - -import torch.cuda as cuda -import torch.nn as nn -import torch -import collections -from torch.nn.parallel._functions import Gather - - -__all__ = ['UserScatteredDataParallel', 'user_scattered_collate', 'async_copy_to'] - - -def async_copy_to(obj, dev, main_stream=None): - if torch.is_tensor(obj): - v = obj.cuda(dev, non_blocking=True) - if main_stream is not None: - v.data.record_stream(main_stream) - return v - elif isinstance(obj, collections.Mapping): - return {k: async_copy_to(o, dev, main_stream) for k, o in obj.items()} - elif isinstance(obj, collections.Sequence): - return [async_copy_to(o, dev, main_stream) for o in obj] - else: - return obj - - -def dict_gather(outputs, target_device, dim=0): - """ - Gathers variables from different GPUs on a specified device - (-1 means the CPU), with dictionary support. 
- """ - def gather_map(outputs): - out = outputs[0] - if torch.is_tensor(out): - # MJY(20180330) HACK:: force nr_dims > 0 - if out.dim() == 0: - outputs = [o.unsqueeze(0) for o in outputs] - return Gather.apply(target_device, dim, *outputs) - elif out is None: - return None - elif isinstance(out, collections.Mapping): - return {k: gather_map([o[k] for o in outputs]) for k in out} - elif isinstance(out, collections.Sequence): - return type(out)(map(gather_map, zip(*outputs))) - return gather_map(outputs) - - -class DictGatherDataParallel(nn.DataParallel): - def gather(self, outputs, output_device): - return dict_gather(outputs, output_device, dim=self.dim) - - -class UserScatteredDataParallel(DictGatherDataParallel): - def scatter(self, inputs, kwargs, device_ids): - assert len(inputs) == 1 - inputs = inputs[0] - inputs = _async_copy_stream(inputs, device_ids) - inputs = [[i] for i in inputs] - assert len(kwargs) == 0 - kwargs = [{} for _ in range(len(inputs))] - - return inputs, kwargs - - -def user_scattered_collate(batch): - return batch - - -def _async_copy(inputs, device_ids): - nr_devs = len(device_ids) - assert type(inputs) in (tuple, list) - assert len(inputs) == nr_devs - - outputs = [] - for i, dev in zip(inputs, device_ids): - with cuda.device(dev): - outputs.append(async_copy_to(i, dev)) - - return tuple(outputs) - - -def _async_copy_stream(inputs, device_ids): - nr_devs = len(device_ids) - assert type(inputs) in (tuple, list) - assert len(inputs) == nr_devs - - outputs = [] - streams = [_get_stream(d) for d in device_ids] - for i, dev, stream in zip(inputs, device_ids, streams): - with cuda.device(dev): - main_stream = cuda.current_stream() - with cuda.stream(stream): - outputs.append(async_copy_to(i, dev, main_stream=main_stream)) - main_stream.wait_stream(stream) - - return outputs - - -"""Adapted from: torch/nn/parallel/_functions.py""" -# background streams used for copying -_streams = None - - -def _get_stream(device): - """Gets a background stream for copying between CPU and GPU""" - global _streams - if device == -1: - return None - if _streams is None: - _streams = [None] * cuda.device_count() - if _streams[device] is None: _streams[device] = cuda.Stream(device) - return _streams[device] diff --git a/spaces/fffiloni/simple-animation-doodle/OrientedCursor.js b/spaces/fffiloni/simple-animation-doodle/OrientedCursor.js deleted file mode 100644 index 221501afdd97c0589f0feb8b39ed83e136be23f0..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/simple-animation-doodle/OrientedCursor.js +++ /dev/null @@ -1,166 +0,0 @@ -class OrientedCursor{ - - constructor(elementID){ - - this.elementID = elementID; - this.tiltX = 0; - this.tiltY = 0; - this.pressure = 0; - this.diameter = 0; - - this.targetAngle = 0; - - this.isOnCanvas = false; - } - - // ----------------------------------------- - // ----------------------------------------- - - - catchCursor(){ - let getCanvas = document.getElementById(this.elementID); - - getCanvas.addEventListener("pointermove", (e) => { - //console.log("pointerMove"); - - if (this.isOnCanvas) { - this.tiltX = e.tiltX; - this.tiltY = e.tiltY; - this.pressure = e.pressure; - - //console.log(inclinationX + ' ' + inclinationY + ' ' + pressure); - } - }, false); - - getCanvas.addEventListener("pointerdown", (e) => { - //console.log("pointerDown"); - getCanvas.setPointerCapture(e.pointerId); - this.isOnCanvas = true; - - this.tiltX = e.tiltX; - this.tiltY = e.tiltY; - this.pressure = e.pressure; - - }, false); - - 
getCanvas.addEventListener("pointerup", (e) => { - //console.log("pointerUp"); - - if (this.isOnCanvas) { - getCanvas.releasePointerCapture(e.pointerId); - this.isOnCanvas = false; - - this.tiltX = e.tiltX; - this.tiltY = e.tiltY; - this.pressure = e.pressure; - - //console.log(inclinationX + ' ' + inclinationY + ' ' + pressure); - - } - }, false); - } - - - // ----------------------------------------- - // ----------------------------------------- - - - calculateAngle(){ - this.targetAngle = atan2(this.tiltY, this.tiltX); - } - - - // ----------------------------------------- - // ----------------------------------------- - - - showData(){ - // LIVE COORDINATES - push(); - //noFill(); - fill('#000'); - noStroke(); - //stroke('#000'); - text('pressure: ' + this.pressure, 10, 30); - text('tilt_X: ' + this.tiltX, 10, 50); - text('tilt_Y: ' + this.tiltY, 10, 70); - text('angle arctan: ' + this.targetAngle, 10, 90); - pop(); - } - - // ----------------------------------------- - // ----------------------------------------- - - - mapPressure(){ - this.diameter = map(this.pressure, 0, 1, 1, 3); - } - - // ----------------------------------------- - // ----------------------------------------- - - - process_rotate(){ - translate(mouseX, mouseY); //mouseX & mouseY - rotate(this.targetAngle); - translate(-mouseX, -mouseY); // -mouseX & -mouseY - } - - // ----------------------------------------- - // ----------------------------------------- - - - showCursor(mouseX, mouseY){ - // POINTER CENTER - push(); - noStroke(); - fill(0, 0, 0); - circle(mouseX, mouseY, 20); - pop(); - - // RECTANGLE SHAPE - push(); - this.process_rotate() - - noFill(); - stroke(2) - rectMode(CENTER) - rect(mouseX, mouseY, this.diameter, 30); // reacts to pen pressure value - - noStroke(); - fill('yellow'); - circle(mouseX, mouseY, 10); // shows the pivot point - pop(); - - // POINTS FROM STYLUS AT GOOD INCLINATION & PRESSURE VALUE - push(); - this.process_rotate(); - noFill(); - stroke(1); - ellipseMode(CENTER); - circle(mouseX, mouseY + this.diameter, 10); // LEFT || WEST - circle(mouseX + this.diameter, mouseY, 10);// DOWN || SOUTH - circle(mouseX, mouseY - this.diameter, 10); // RIGHT || EAST - circle(mouseX - this.diameter, mouseY, 10); // UP || NORTH - - - pop(); - - circle(mouseX + this.diameter/4 * cos(this.targetAngle), mouseY + this.diameter/4 * sin(this.targetAngle), 1) - circle(mouseX + this.diameter/4 * cos(this.targetAngle + PI), mouseY + this.diameter/4 * sin(this.targetAngle+ PI), 1) - - - // TILT AXIS & LENGTH - push(); - fill('red'); - circle(mouseX + this.tiltX, mouseY + this.tiltY, 10); - - pop(); - - push(); - fill('blue'); - circle(mouseX - this.tiltX, mouseY - this.tiltY, 10); - pop(); - } - -} \ No newline at end of file diff --git a/spaces/fiyen/YangyangChatGPT/Dockerfile b/spaces/fiyen/YangyangChatGPT/Dockerfile deleted file mode 100644 index 8cbd335b09b1d1975bfd83a053b5fcaf398147ea..0000000000000000000000000000000000000000 --- a/spaces/fiyen/YangyangChatGPT/Dockerfile +++ /dev/null @@ -1,14 +0,0 @@ -FROM python:3.9 as builder -RUN apt-get update && apt-get install -y build-essential -COPY requirements.txt . -RUN pip install --user -r requirements.txt - -FROM python:3.9 -MAINTAINER iskoldt -COPY --from=builder /root/.local /root/.local -ENV PATH=/root/.local/bin:$PATH -COPY . 
/app -WORKDIR /app -ENV my_api_key empty -ENV dockerrun yes -CMD ["python3", "-u", "ChuanhuChatbot.py", "2>&1", "|", "tee", "/var/log/application.log"] diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/fetch.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/fetch.py deleted file mode 100644 index 7e5846848b1c550fef180b4e05491adc279f51dd..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/fetch.py +++ /dev/null @@ -1,109 +0,0 @@ -from gym_minigrid.minigrid import * -from gym_minigrid.register import register - -class FetchEnv(MiniGridEnv): - """ - Environment in which the agent has to fetch a random object - named using English text strings - """ - - def __init__( - self, - size=8, - numObjs=3 - ): - self.numObjs = numObjs - - super().__init__( - grid_size=size, - max_steps=5*size**2, - # Set this to True for maximum speed - see_through_walls=True - ) - - def _gen_grid(self, width, height): - self.grid = Grid(width, height) - - # Generate the surrounding walls - self.grid.horz_wall(0, 0) - self.grid.horz_wall(0, height-1) - self.grid.vert_wall(0, 0) - self.grid.vert_wall(width-1, 0) - - types = ['key', 'ball'] - - objs = [] - - # For each object to be generated - while len(objs) < self.numObjs: - objType = self._rand_elem(types) - objColor = self._rand_elem(COLOR_NAMES) - - if objType == 'key': - obj = Key(objColor) - elif objType == 'ball': - obj = Ball(objColor) - - self.place_obj(obj) - objs.append(obj) - - # Randomize the player start position and orientation - self.place_agent() - - # Choose a random object to be picked up - target = objs[self._rand_int(0, len(objs))] - self.targetType = target.type - self.targetColor = target.color - - descStr = '%s %s' % (self.targetColor, self.targetType) - - # Generate the mission string - idx = self._rand_int(0, 5) - if idx == 0: - self.mission = 'get a %s' % descStr - elif idx == 1: - self.mission = 'go get a %s' % descStr - elif idx == 2: - self.mission = 'fetch a %s' % descStr - elif idx == 3: - self.mission = 'go fetch a %s' % descStr - elif idx == 4: - self.mission = 'you must fetch a %s' % descStr - assert hasattr(self, 'mission') - - def step(self, action): - obs, reward, done, info = MiniGridEnv.step(self, action) - - if self.carrying: - if self.carrying.color == self.targetColor and \ - self.carrying.type == self.targetType: - reward = self._reward() - done = True - else: - reward = 0 - done = True - - return obs, reward, done, info - -class FetchEnv5x5N2(FetchEnv): - def __init__(self): - super().__init__(size=5, numObjs=2) - -class FetchEnv6x6N2(FetchEnv): - def __init__(self): - super().__init__(size=6, numObjs=2) - -register( - id='MiniGrid-Fetch-5x5-N2-v0', - entry_point='gym_minigrid.envs:FetchEnv5x5N2' -) - -register( - id='MiniGrid-Fetch-6x6-N2-v0', - entry_point='gym_minigrid.envs:FetchEnv6x6N2' -) - -register( - id='MiniGrid-Fetch-8x8-N3-v0', - entry_point='gym_minigrid.envs:FetchEnv' -) diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/webcam/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/webcam/run.py deleted file mode 100644 index 4f2a9e06226d9b780c77cb94e32dd38ecc9f6d3e..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/webcam/run.py +++ /dev/null @@ -1,17 +0,0 @@ -import numpy as np - -import gradio as gr - - -def snap(image, video): - return [image, video] - - -demo = gr.Interface( - snap, - [gr.Image(source="webcam", tool=None), 
gr.Video(source="webcam")], - ["image", "video"], -) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/frncscp/bullerengue/musika/22kHz/losses.py b/spaces/frncscp/bullerengue/musika/22kHz/losses.py deleted file mode 100644 index d26721ccfafcb377e639a5e7d4815060887990fd..0000000000000000000000000000000000000000 --- a/spaces/frncscp/bullerengue/musika/22kHz/losses.py +++ /dev/null @@ -1,39 +0,0 @@ -import tensorflow as tf - - -def mae(x, y): - return tf.reduce_mean(tf.abs(x - y)) - - -def mse(x, y): - return tf.reduce_mean((x - y) ** 2) - - -def d_loss_f(fake): - return tf.reduce_mean(tf.maximum(1 + fake, 0)) - - -def d_loss_r(real): - return tf.reduce_mean(tf.maximum(1 - real, 0)) - - -def g_loss_f(fake): - return tf.reduce_mean(-fake) - - -def g_loss_r(real): - return tf.reduce_mean(real) - - -def spec_conv(real, fake): - diff = tf.math.sqrt(tf.math.reduce_sum((real - fake) ** 2, [-2, -1])) - den = tf.math.sqrt(tf.math.reduce_sum(real ** 2, [-2, -1])) - return tf.reduce_mean(diff / den) - - -def log_norm(real, fake): - return tf.reduce_mean(tf.math.log(tf.math.reduce_sum(tf.abs(real - fake), [-2, -1]))) - - -def msesum(x, y): - return tf.reduce_mean(tf.math.reduce_sum((x - y) ** 2, -1, keepdims=True) + 1e-7) diff --git a/spaces/gnakan/airtable-QA/sidebar.py b/spaces/gnakan/airtable-QA/sidebar.py deleted file mode 100644 index 590ee2e7fb5efe14b968fff591af8efed20ffb23..0000000000000000000000000000000000000000 --- a/spaces/gnakan/airtable-QA/sidebar.py +++ /dev/null @@ -1,64 +0,0 @@ -""" -Sidebar -""" - -import streamlit as st -from utils import ( - validate_api_key, - validate_pat, - validate_base_url, - populate_markdown -) - -def set_openai_api_key(api_key: str): - """ - Sets the OpenAI API key in the session state. - """ - st.session_state["OPENAI_API_KEY"] = api_key - -def set_airtable_personal_access_token(airtable_pat: str): - """ - Sets the Airtable personal access token in the session state. - """ - st.session_state["AIRTABLE_PAT"] = airtable_pat - -def set_airtable_base_url(airtable_url: str): - """ - Sets the Airtable base URL in the session state. - """ - st.session_state["AIRTABLE_URL"] = airtable_url - -def setup(): - """ - Displays a sidebar with input and info contents. - """ - with st.sidebar: - - api_key_input, airtable_pat_input, airtable_base_url_input = populate_markdown() - - if st.button('Configure', use_container_width=True): - if validate_api_key(api_key_input) and validate_pat(airtable_pat_input) and validate_base_url(airtable_base_url_input): - st.session_state["is_key_configured"] = True - st.success('Successfully Configured!', icon="✅") - else: - st.session_state["is_key_configured"] = False - error_message = 'Configuration failed. 
Please check the following input(s):' - if not validate_api_key(api_key_input): - error_message += '\n- OpenAI API Key format is invalid (should start with "sk-")' - if not validate_pat(airtable_pat_input): - error_message += '\n- Airtable Personal Access Token format is invalid (should start with "pat")' - if not validate_base_url(airtable_base_url_input): - error_message += '\n- Airtable Base URL format is invalid (should start with "https://airtable.com" and have the correct path)' - st.error(error_message, icon="🚨") - - if api_key_input: - set_openai_api_key(api_key_input) - if airtable_pat_input: - set_airtable_personal_access_token(airtable_pat_input) - if airtable_base_url_input: - set_airtable_base_url(airtable_base_url_input) - - st.markdown("---") - st.markdown( - "Forked from this project on [GitHub](https://github.com/ikram-shah/airtable-qna)" - ) \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/FL Studio 20.1.2.887 Crack Reg Key With Version _BEST_.md b/spaces/gotiQspiryo/whisper-ui/examples/FL Studio 20.1.2.887 Crack Reg Key With Version _BEST_.md deleted file mode 100644 index 189848d5bff90310f61a2535b1e7ab509fe290f2..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/FL Studio 20.1.2.887 Crack Reg Key With Version _BEST_.md +++ /dev/null @@ -1,6 +0,0 @@ -

              FL Studio 20.1.2.887 Crack Reg Key With Version


              Download Zip: https://urlgoal.com/2uyNvE



              -
              -
              -
              -
              -

              diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Mail Order Bride The Stubborn Bride Promised To The Rancher A Clean Western Historical Romance (Th [CRACKED].md b/spaces/gotiQspiryo/whisper-ui/examples/Mail Order Bride The Stubborn Bride Promised To The Rancher A Clean Western Historical Romance (Th [CRACKED].md deleted file mode 100644 index ace3e433b21ece7f4b38a70d941ad52b582c3f89..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Mail Order Bride The Stubborn Bride Promised To The Rancher A Clean Western Historical Romance (Th [CRACKED].md +++ /dev/null @@ -1,5 +0,0 @@ - -

              Dreams for marriage hinge on mail-order promises
              Nine advertisements for brides lead to inconvenient complications in romance. Traveling west alone on a promise of marriage, each woman has her reasons to accept a husband sight unseen. Some are fleeing poverty or abuse while others simply seek hope for a brighter future.

              -

              Mail Order Bride: The Stubborn Bride Promised To The Rancher: A Clean Western Historical Romance (Th


              DOWNLOAD: https://urlgoal.com/2uyMFs



              -
              -
              \ No newline at end of file diff --git a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang_word.sh b/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang_word.sh deleted file mode 100644 index a7ea3877beefe1d4d53f9f7e32b004d8ce01e22a..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang_word.sh +++ /dev/null @@ -1,35 +0,0 @@ -#!/bin/bash - -num_sil_states=3 -num_nonsil_states=1 - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -set -eux - -dict=$1 -data_dir=$2 -lexicon=$3 - -dict_dir=$data_dir/local/dict_word -tmplm_dir=$data_dir/local/lang_tmp_word -lm_dir=$data_dir/lang_word - -mkdir -p $dict_dir $tmplm_dir $lm_dir - -# prepare dict -echo "SIL" > $dict_dir/silence_phones.txt -echo "SIL" > $dict_dir/optional_silence.txt -awk '{print $1}' $dict > $dict_dir/nonsilence_phones.txt - -(echo "!SIL SIL"; echo " SIL";) | cat - $lexicon > $dict_dir/lexicon.txt - -echo "SIL" > $dict_dir/extra_questions.txt -awk '{printf $1" "} END {printf "\n"}' $dict >> $dict_dir/extra_questions.txt - -# prepare lang -utils/prepare_lang.sh --position-dependent-phones false \ - --num_sil_states $num_sil_states --num_nonsil_states $num_nonsil_states \ - $dict_dir "" $tmplm_dir $lm_dir diff --git a/spaces/gradio/HuBERT/fairseq/logging/progress_bar.py b/spaces/gradio/HuBERT/fairseq/logging/progress_bar.py deleted file mode 100644 index 061082caefe542c5f0f87e04d9472583874126a3..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/logging/progress_bar.py +++ /dev/null @@ -1,490 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Wrapper around various loggers and progress bars (e.g., tqdm). 
-""" - -import atexit -import json -import logging -import os -import sys -from collections import OrderedDict -from contextlib import contextmanager -from numbers import Number -from typing import Optional - -import torch - -from .meters import AverageMeter, StopwatchMeter, TimeMeter - - -logger = logging.getLogger(__name__) - - -def progress_bar( - iterator, - log_format: Optional[str] = None, - log_interval: int = 100, - log_file: Optional[str] = None, - epoch: Optional[int] = None, - prefix: Optional[str] = None, - tensorboard_logdir: Optional[str] = None, - default_log_format: str = "tqdm", - wandb_project: Optional[str] = None, - wandb_run_name: Optional[str] = None, - azureml_logging: Optional[bool] = False, -): - if log_format is None: - log_format = default_log_format - if log_file is not None: - handler = logging.FileHandler(filename=log_file) - logger.addHandler(handler) - - if log_format == "tqdm" and not sys.stderr.isatty(): - log_format = "simple" - - if log_format == "json": - bar = JsonProgressBar(iterator, epoch, prefix, log_interval) - elif log_format == "none": - bar = NoopProgressBar(iterator, epoch, prefix) - elif log_format == "simple": - bar = SimpleProgressBar(iterator, epoch, prefix, log_interval) - elif log_format == "tqdm": - bar = TqdmProgressBar(iterator, epoch, prefix) - else: - raise ValueError("Unknown log format: {}".format(log_format)) - - if tensorboard_logdir: - try: - # [FB only] custom wrapper for TensorBoard - import palaas # noqa - from .fb_tbmf_wrapper import FbTbmfWrapper - - bar = FbTbmfWrapper(bar, log_interval) - except ImportError: - bar = TensorboardProgressBarWrapper(bar, tensorboard_logdir) - - if wandb_project: - bar = WandBProgressBarWrapper(bar, wandb_project, run_name=wandb_run_name) - - if azureml_logging: - bar = AzureMLProgressBarWrapper(bar) - - return bar - - -def build_progress_bar( - args, - iterator, - epoch: Optional[int] = None, - prefix: Optional[str] = None, - default: str = "tqdm", - no_progress_bar: str = "none", -): - """Legacy wrapper that takes an argparse.Namespace.""" - if getattr(args, "no_progress_bar", False): - default = no_progress_bar - if getattr(args, "distributed_rank", 0) == 0: - tensorboard_logdir = getattr(args, "tensorboard_logdir", None) - else: - tensorboard_logdir = None - return progress_bar( - iterator, - log_format=args.log_format, - log_interval=args.log_interval, - epoch=epoch, - prefix=prefix, - tensorboard_logdir=tensorboard_logdir, - default_log_format=default, - ) - - -def format_stat(stat): - if isinstance(stat, Number): - stat = "{:g}".format(stat) - elif isinstance(stat, AverageMeter): - stat = "{:.3f}".format(stat.avg) - elif isinstance(stat, TimeMeter): - stat = "{:g}".format(round(stat.avg)) - elif isinstance(stat, StopwatchMeter): - stat = "{:g}".format(round(stat.sum)) - elif torch.is_tensor(stat): - stat = stat.tolist() - return stat - - -class BaseProgressBar(object): - """Abstract class for progress bars.""" - - def __init__(self, iterable, epoch=None, prefix=None): - self.iterable = iterable - self.n = getattr(iterable, "n", 0) - self.epoch = epoch - self.prefix = "" - if epoch is not None: - self.prefix += "epoch {:03d}".format(epoch) - if prefix is not None: - self.prefix += (" | " if self.prefix != "" else "") + prefix - - def __len__(self): - return len(self.iterable) - - def __enter__(self): - return self - - def __exit__(self, *exc): - return False - - def __iter__(self): - raise NotImplementedError - - def log(self, stats, tag=None, step=None): - """Log intermediate stats 
according to log_interval.""" - raise NotImplementedError - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - raise NotImplementedError - - def update_config(self, config): - """Log latest configuration.""" - pass - - def _str_commas(self, stats): - return ", ".join(key + "=" + stats[key].strip() for key in stats.keys()) - - def _str_pipes(self, stats): - return " | ".join(key + " " + stats[key].strip() for key in stats.keys()) - - def _format_stats(self, stats): - postfix = OrderedDict(stats) - # Preprocess stats according to datatype - for key in postfix.keys(): - postfix[key] = str(format_stat(postfix[key])) - return postfix - - -@contextmanager -def rename_logger(logger, new_name): - old_name = logger.name - if new_name is not None: - logger.name = new_name - yield logger - logger.name = old_name - - -class JsonProgressBar(BaseProgressBar): - """Log output in JSON format.""" - - def __init__(self, iterable, epoch=None, prefix=None, log_interval=1000): - super().__init__(iterable, epoch, prefix) - self.log_interval = log_interval - self.i = None - self.size = None - - def __iter__(self): - self.size = len(self.iterable) - for i, obj in enumerate(self.iterable, start=self.n): - self.i = i - yield obj - - def log(self, stats, tag=None, step=None): - """Log intermediate stats according to log_interval.""" - step = step or self.i or 0 - if step > 0 and self.log_interval is not None and step % self.log_interval == 0: - update = ( - self.epoch - 1 + (self.i + 1) / float(self.size) - if self.epoch is not None - else None - ) - stats = self._format_stats(stats, epoch=self.epoch, update=update) - with rename_logger(logger, tag): - logger.info(json.dumps(stats)) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - self.stats = stats - if tag is not None: - self.stats = OrderedDict( - [(tag + "_" + k, v) for k, v in self.stats.items()] - ) - stats = self._format_stats(self.stats, epoch=self.epoch) - with rename_logger(logger, tag): - logger.info(json.dumps(stats)) - - def _format_stats(self, stats, epoch=None, update=None): - postfix = OrderedDict() - if epoch is not None: - postfix["epoch"] = epoch - if update is not None: - postfix["update"] = round(update, 3) - # Preprocess stats according to datatype - for key in stats.keys(): - postfix[key] = format_stat(stats[key]) - return postfix - - -class NoopProgressBar(BaseProgressBar): - """No logging.""" - - def __init__(self, iterable, epoch=None, prefix=None): - super().__init__(iterable, epoch, prefix) - - def __iter__(self): - for obj in self.iterable: - yield obj - - def log(self, stats, tag=None, step=None): - """Log intermediate stats according to log_interval.""" - pass - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - pass - - -class SimpleProgressBar(BaseProgressBar): - """A minimal logger for non-TTY environments.""" - - def __init__(self, iterable, epoch=None, prefix=None, log_interval=1000): - super().__init__(iterable, epoch, prefix) - self.log_interval = log_interval - self.i = None - self.size = None - - def __iter__(self): - self.size = len(self.iterable) - for i, obj in enumerate(self.iterable, start=self.n): - self.i = i - yield obj - - def log(self, stats, tag=None, step=None): - """Log intermediate stats according to log_interval.""" - step = step or self.i or 0 - if step > 0 and self.log_interval is not None and step % self.log_interval == 0: - stats = self._format_stats(stats) - postfix = self._str_commas(stats) - with 
rename_logger(logger, tag): - logger.info( - "{}: {:5d} / {:d} {}".format( - self.prefix, self.i + 1, self.size, postfix - ) - ) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - postfix = self._str_pipes(self._format_stats(stats)) - with rename_logger(logger, tag): - logger.info("{} | {}".format(self.prefix, postfix)) - - -class TqdmProgressBar(BaseProgressBar): - """Log to tqdm.""" - - def __init__(self, iterable, epoch=None, prefix=None): - super().__init__(iterable, epoch, prefix) - from tqdm import tqdm - - self.tqdm = tqdm( - iterable, - self.prefix, - leave=False, - disable=(logger.getEffectiveLevel() > logging.INFO), - ) - - def __iter__(self): - return iter(self.tqdm) - - def log(self, stats, tag=None, step=None): - """Log intermediate stats according to log_interval.""" - self.tqdm.set_postfix(self._format_stats(stats), refresh=False) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - postfix = self._str_pipes(self._format_stats(stats)) - with rename_logger(logger, tag): - logger.info("{} | {}".format(self.prefix, postfix)) - - -try: - _tensorboard_writers = {} - from torch.utils.tensorboard import SummaryWriter -except ImportError: - try: - from tensorboardX import SummaryWriter - except ImportError: - SummaryWriter = None - - -def _close_writers(): - for w in _tensorboard_writers.values(): - w.close() - - -atexit.register(_close_writers) - - -class TensorboardProgressBarWrapper(BaseProgressBar): - """Log to tensorboard.""" - - def __init__(self, wrapped_bar, tensorboard_logdir): - self.wrapped_bar = wrapped_bar - self.tensorboard_logdir = tensorboard_logdir - - if SummaryWriter is None: - logger.warning( - "tensorboard not found, please install with: pip install tensorboard" - ) - - def _writer(self, key): - if SummaryWriter is None: - return None - _writers = _tensorboard_writers - if key not in _writers: - _writers[key] = SummaryWriter(os.path.join(self.tensorboard_logdir, key)) - _writers[key].add_text("sys.argv", " ".join(sys.argv)) - return _writers[key] - - def __iter__(self): - return iter(self.wrapped_bar) - - def log(self, stats, tag=None, step=None): - """Log intermediate stats to tensorboard.""" - self._log_to_tensorboard(stats, tag, step) - self.wrapped_bar.log(stats, tag=tag, step=step) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - self._log_to_tensorboard(stats, tag, step) - self.wrapped_bar.print(stats, tag=tag, step=step) - - def update_config(self, config): - """Log latest configuration.""" - # TODO add hparams to Tensorboard - self.wrapped_bar.update_config(config) - - def _log_to_tensorboard(self, stats, tag=None, step=None): - writer = self._writer(tag or "") - if writer is None: - return - if step is None: - step = stats["num_updates"] - for key in stats.keys() - {"num_updates"}: - if isinstance(stats[key], AverageMeter): - writer.add_scalar(key, stats[key].val, step) - elif isinstance(stats[key], Number): - writer.add_scalar(key, stats[key], step) - elif torch.is_tensor(stats[key]) and stats[key].numel() == 1: - writer.add_scalar(key, stats[key].item(), step) - writer.flush() - - -try: - import wandb -except ImportError: - wandb = None - - -class WandBProgressBarWrapper(BaseProgressBar): - """Log to Weights & Biases.""" - - def __init__(self, wrapped_bar, wandb_project, run_name=None): - self.wrapped_bar = wrapped_bar - if wandb is None: - logger.warning("wandb not found, pip install wandb") - return - - # reinit=False to ensure if wandb.init() is 
called multiple times - # within one process it still references the same run - wandb.init(project=wandb_project, reinit=False, name=run_name) - - def __iter__(self): - return iter(self.wrapped_bar) - - def log(self, stats, tag=None, step=None): - """Log intermediate stats to tensorboard.""" - self._log_to_wandb(stats, tag, step) - self.wrapped_bar.log(stats, tag=tag, step=step) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - self._log_to_wandb(stats, tag, step) - self.wrapped_bar.print(stats, tag=tag, step=step) - - def update_config(self, config): - """Log latest configuration.""" - if wandb is not None: - wandb.config.update(config) - self.wrapped_bar.update_config(config) - - def _log_to_wandb(self, stats, tag=None, step=None): - if wandb is None: - return - if step is None: - step = stats["num_updates"] - - prefix = "" if tag is None else tag + "/" - - for key in stats.keys() - {"num_updates"}: - if isinstance(stats[key], AverageMeter): - wandb.log({prefix + key: stats[key].val}, step=step) - elif isinstance(stats[key], Number): - wandb.log({prefix + key: stats[key]}, step=step) - - -try: - from azureml.core import Run -except ImportError: - Run = None - - -class AzureMLProgressBarWrapper(BaseProgressBar): - """Log to Azure ML""" - - def __init__(self, wrapped_bar): - self.wrapped_bar = wrapped_bar - if Run is None: - logger.warning("azureml.core not found, pip install azureml-core") - return - self.run = Run.get_context() - - def __exit__(self, *exc): - if Run is not None: - self.run.complete() - return False - - def __iter__(self): - return iter(self.wrapped_bar) - - def log(self, stats, tag=None, step=None): - """Log intermediate stats to AzureML""" - self._log_to_azureml(stats, tag, step) - self.wrapped_bar.log(stats, tag=tag, step=step) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats""" - self._log_to_azureml(stats, tag, step) - self.wrapped_bar.print(stats, tag=tag, step=step) - - def update_config(self, config): - """Log latest configuration.""" - self.wrapped_bar.update_config(config) - - def _log_to_azureml(self, stats, tag=None, step=None): - if Run is None: - return - if step is None: - step = stats["num_updates"] - - prefix = "" if tag is None else tag + "/" - - for key in stats.keys() - {"num_updates"}: - name = prefix + key - if isinstance(stats[key], AverageMeter): - self.run.log_row(name=name, **{"step": step, key: stats[key].val}) - elif isinstance(stats[key], Number): - self.run.log_row(name=name, **{"step": step, key: stats[key]}) diff --git a/spaces/gradio/HuBERT/fairseq/modules/checkpoint_activations.py b/spaces/gradio/HuBERT/fairseq/modules/checkpoint_activations.py deleted file mode 100644 index b44fc346cec1ab24d8056075b3df14020a86214b..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/checkpoint_activations.py +++ /dev/null @@ -1,236 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import functools -from typing import Any, Dict, List, Tuple, Union - -import torch -import torch.utils.checkpoint as checkpoint -from fairseq import utils - - -def checkpoint_wrapper(m, offload_to_cpu=False): - """ - A friendlier wrapper for performing activation checkpointing. 
- - Compared to the PyTorch version, this version: - - wraps an nn.Module, so that all subsequent calls will use checkpointing - - handles keyword arguments in the forward - - handles non-Tensor outputs from the forward - - Usage:: - - checkpointed_module = checkpoint_wrapper(my_module, offload_to_cpu=True) - a, b = checkpointed_module(x, y=3, z=torch.Tensor([1])) - """ - # should I check whether original_forward has already been set? - assert not hasattr( - m, "precheckpoint_forward" - ), "checkpoint function has already been applied?" - m.precheckpoint_forward = m.forward - m.forward = functools.partial( - _checkpointed_forward, - m.precheckpoint_forward, # original_forward - offload_to_cpu, - ) - return m - - -def unwrap_checkpoint(m: torch.nn.Module): - """ - unwrap a module and its children from checkpoint_wrapper - """ - for module in m.modules(): - if hasattr(module, "precheckpoint_forward"): - module.forward = module.precheckpoint_forward - del module.precheckpoint_forward - return m - - -def _checkpointed_forward(original_forward, offload_to_cpu, *args, **kwargs): - # Autograd Functions in PyTorch work best with positional args, since - # the backward must return gradients (or None) for every input argument. - # We can flatten keyword arguments to make this easier. - kwarg_keys, flat_args = pack_kwargs(*args, **kwargs) - parent_ctx_dict = {"offload": offload_to_cpu} - output = CheckpointFunction.apply( - original_forward, parent_ctx_dict, kwarg_keys, *flat_args - ) - if isinstance(output, torch.Tensor): - return output - else: - packed_non_tensor_outputs = parent_ctx_dict["packed_non_tensor_outputs"] - if packed_non_tensor_outputs: - output = unpack_non_tensors(output, packed_non_tensor_outputs) - return output - - -def pack_kwargs(*args, **kwargs) -> Tuple[List[str], List[Any]]: - """ - Usage:: - - kwarg_keys, flat_args = pack_kwargs(1, 2, a=3, b=4) - args, kwargs = unpack_kwargs(kwarg_keys, flat_args) - assert args == [1, 2] - assert kwargs == {"a": 3, "b": 4} - """ - kwarg_keys = [] - flat_args = list(args) - for k, v in kwargs.items(): - kwarg_keys.append(k) - flat_args.append(v) - return kwarg_keys, flat_args - - -def unpack_kwargs( - kwarg_keys: List[str], flat_args: List[Any] -) -> Tuple[List[Any], Dict[str, Any]]: - if len(kwarg_keys) == 0: - return flat_args, {} - args = flat_args[: -len(kwarg_keys)] - kwargs = {k: v for k, v in zip(kwarg_keys, flat_args[-len(kwarg_keys) :])} - return args, kwargs - - -def split_non_tensors( - mixed: Union[torch.Tensor, Tuple[Any]] -) -> Tuple[Tuple[torch.Tensor], Dict[str, List[Any]]]: - """ - Usage:: - - x = torch.Tensor([1]) - y = torch.Tensor([2]) - tensors, packed_non_tensors = split_non_tensors((x, y, None, 3)) - recon = unpack_non_tensors(tensors, packed_non_tensors) - assert recon == (x, y, None, 3) - """ - if isinstance(mixed, torch.Tensor): - return (mixed,), None - tensors = [] - packed_non_tensors = {"is_tensor": [], "objects": []} - for o in mixed: - if isinstance(o, torch.Tensor): - packed_non_tensors["is_tensor"].append(True) - tensors.append(o) - else: - packed_non_tensors["is_tensor"].append(False) - packed_non_tensors["objects"].append(o) - return tuple(tensors), packed_non_tensors - - -def unpack_non_tensors( - tensors: Tuple[torch.Tensor], - packed_non_tensors: Dict[str, List[Any]], -) -> Tuple[Any]: - if packed_non_tensors is None: - return tensors - assert isinstance(packed_non_tensors, dict) - mixed = [] - is_tensor_list = packed_non_tensors["is_tensor"] - objects = packed_non_tensors["objects"] - assert 
len(tensors) + len(objects) == len(is_tensor_list) - obj_i = tnsr_i = 0 - for is_tensor in is_tensor_list: - if is_tensor: - mixed.append(tensors[tnsr_i]) - tnsr_i += 1 - else: - mixed.append(objects[obj_i]) - obj_i += 1 - return tuple(mixed) - - -class CheckpointFunction(torch.autograd.Function): - """Similar to the torch version, but support non-Tensor outputs. - - The caller is expected to provide a dict (*parent_ctx_dict*) that will hold - the non-Tensor outputs. These should be combined with the Tensor *outputs* - by calling ``unpack_non_tensors``. - """ - - @staticmethod - def forward(ctx, run_function, parent_ctx_dict, kwarg_keys, *args): - if torch.is_grad_enabled(): # grad may be disabled, e.g., during validation - checkpoint.check_backward_validity(args) - - ctx.run_function = run_function - ctx.kwarg_keys = kwarg_keys - ctx.fwd_rng_state = utils.get_rng_state() - - tensor_inputs, packed_non_tensor_inputs = split_non_tensors(args) - if parent_ctx_dict["offload"]: - ctx.fwd_device = tuple(x.device for x in tensor_inputs) - ctx.grad_requirements = tuple(x.requires_grad for x in tensor_inputs) - tensor_inputs = tuple(x.cpu() for x in tensor_inputs) - - else: - ctx.fwd_device, ctx.grad_requirements = None, None - - ctx.save_for_backward(*tensor_inputs) - ctx.packed_non_tensor_inputs = packed_non_tensor_inputs - - with torch.no_grad(): - unpacked_args, unpacked_kwargs = unpack_kwargs(kwarg_keys, args) - outputs = run_function(*unpacked_args, **unpacked_kwargs) - - if isinstance(outputs, torch.Tensor): - return outputs - else: - # Autograd Functions don't like non-Tensor outputs. We can split the - # non-Tensor and Tensor outputs, returning the former by reference - # through *parent_ctx_dict* and returning the latter directly. - outputs, packed_non_tensor_outputs = split_non_tensors(outputs) - parent_ctx_dict["packed_non_tensor_outputs"] = packed_non_tensor_outputs - return outputs - - @staticmethod - def backward(ctx, *args): - if not torch.autograd._is_checkpoint_valid(): - raise RuntimeError( - "Checkpointing is not compatible with .grad(), please use .backward() if possible" - ) - - tensor_inputs: Tuple = ctx.saved_tensors - tensor_inputs = checkpoint.detach_variable(tensor_inputs) - if ctx.fwd_device is not None: - tensor_inputs = [ - t.to(ctx.fwd_device[i]) for i, t in enumerate(tensor_inputs) - ] - for i, need_grad in enumerate(ctx.grad_requirements): - tensor_inputs[i].requires_grad = need_grad - inputs = unpack_non_tensors(tensor_inputs, ctx.packed_non_tensor_inputs) - - # Store the current states. - bwd_rng_state = utils.get_rng_state() - - # Set the states to what it used to be before the forward pass. - utils.set_rng_state(ctx.fwd_rng_state) - - with torch.enable_grad(): - unpacked_args, unpacked_kwargs = unpack_kwargs(ctx.kwarg_keys, inputs) - outputs = ctx.run_function(*unpacked_args, **unpacked_kwargs) - tensor_outputs, _ = split_non_tensors(outputs) - # Set the states back to what it was at the start of this function. 
- utils.set_rng_state(bwd_rng_state) - - # Run backward() with only Tensors that require grad - outputs_with_grad = [] - args_with_grad = [] - for i in range(len(tensor_outputs)): - if tensor_outputs[i].requires_grad: - outputs_with_grad.append(tensor_outputs[i]) - args_with_grad.append(args[i]) - if len(outputs_with_grad) == 0: - raise RuntimeError( - "None of the outputs have requires_grad=True, " - "this checkpoint() is not necessary" - ) - - torch.autograd.backward(outputs_with_grad, args_with_grad) - - grads = tuple( - inp.grad if isinstance(inp, torch.Tensor) else None for inp in inputs - ) - return (None, None, None) + grads diff --git a/spaces/gradio/image_classifier_interface_load/README.md b/spaces/gradio/image_classifier_interface_load/README.md deleted file mode 100644 index abedcbd8e0466a0166d4ada3d59bda7a411f6f0f..0000000000000000000000000000000000000000 --- a/spaces/gradio/image_classifier_interface_load/README.md +++ /dev/null @@ -1,12 +0,0 @@ - ---- -title: image_classifier_interface_load -emoji: 🔥 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 4.1.2 -app_file: run.py -pinned: false -hf_oauth: true ---- diff --git a/spaces/gradio/multiple-api-name-test/app.py b/spaces/gradio/multiple-api-name-test/app.py deleted file mode 100644 index 18610563b36df3f468f5841693befbeb619fce92..0000000000000000000000000000000000000000 --- a/spaces/gradio/multiple-api-name-test/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import gradio as gr - -# Used in gradio unit test - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - num = gr.Number() - minus_one_btn = gr.Button(value="minus one") - double_btn = gr.Button(value="double") - with gr.Column(): - minus_one = gr.Number() - double = gr.Number() - minus_one_btn.click(lambda s: s - 1, num, minus_one, api_name="minus_one") - double_btn.click(lambda s: s * 2, num, double, api_name="double") - -demo.launch() diff --git a/spaces/gylleus/icongen/torch_utils/training_stats.py b/spaces/gylleus/icongen/torch_utils/training_stats.py deleted file mode 100644 index 26f467f9eaa074ee13de1cf2625cd7da44880847..0000000000000000000000000000000000000000 --- a/spaces/gylleus/icongen/torch_utils/training_stats.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for reporting and collecting training statistics across -multiple processes and devices. The interface is designed to minimize -synchronization overhead as well as the amount of boilerplate in user -code.""" - -import re -import numpy as np -import torch -import dnnlib - -from . import misc - -#---------------------------------------------------------------------------- - -_num_moments = 3 # [num_scalars, sum_of_scalars, sum_of_squares] -_reduce_dtype = torch.float32 # Data type to use for initial per-tensor reduction. -_counter_dtype = torch.float64 # Data type to use for the internal counters. -_rank = 0 # Rank of the current process. -_sync_device = None # Device to use for multiprocess communication. None = single-process. -_sync_called = False # Has _sync() been called yet? 
-_counters = dict() # Running counters on each device, updated by report(): name => device => torch.Tensor -_cumulative = dict() # Cumulative counters on the CPU, updated by _sync(): name => torch.Tensor - -#---------------------------------------------------------------------------- - -def init_multiprocessing(rank, sync_device): - r"""Initializes `torch_utils.training_stats` for collecting statistics - across multiple processes. - - This function must be called after - `torch.distributed.init_process_group()` and before `Collector.update()`. - The call is not necessary if multi-process collection is not needed. - - Args: - rank: Rank of the current process. - sync_device: PyTorch device to use for inter-process - communication, or None to disable multi-process - collection. Typically `torch.device('cuda', rank)`. - """ - global _rank, _sync_device - assert not _sync_called - _rank = rank - _sync_device = sync_device - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def report(name, value): - r"""Broadcasts the given set of scalars to all interested instances of - `Collector`, across device and process boundaries. - - This function is expected to be extremely cheap and can be safely - called from anywhere in the training loop, loss function, or inside a - `torch.nn.Module`. - - Warning: The current implementation expects the set of unique names to - be consistent across processes. Please make sure that `report()` is - called at least once for each unique name by each process, and in the - same order. If a given process has no scalars to broadcast, it can do - `report(name, [])` (empty list). - - Args: - name: Arbitrary string specifying the name of the statistic. - Averages are accumulated separately for each unique name. - value: Arbitrary set of scalars. Can be a list, tuple, - NumPy array, PyTorch tensor, or Python scalar. - - Returns: - The same `value` that was passed in. - """ - if name not in _counters: - _counters[name] = dict() - - elems = torch.as_tensor(value) - if elems.numel() == 0: - return value - - elems = elems.detach().flatten().to(_reduce_dtype) - moments = torch.stack([ - torch.ones_like(elems).sum(), - elems.sum(), - elems.square().sum(), - ]) - assert moments.ndim == 1 and moments.shape[0] == _num_moments - moments = moments.to(_counter_dtype) - - device = moments.device - if device not in _counters[name]: - _counters[name][device] = torch.zeros_like(moments) - _counters[name][device].add_(moments) - return value - -#---------------------------------------------------------------------------- - -def report0(name, value): - r"""Broadcasts the given set of scalars by the first process (`rank = 0`), - but ignores any scalars provided by the other processes. - See `report()` for further details. - """ - report(name, value if _rank == 0 else []) - return value - -#---------------------------------------------------------------------------- - -class Collector: - r"""Collects the scalars broadcasted by `report()` and `report0()` and - computes their long-term averages (mean and standard deviation) over - user-defined periods of time. - - The averages are first collected into internal counters that are not - directly visible to the user. They are then copied to the user-visible - state as a result of calling `update()` and can then be queried using - `mean()`, `std()`, `as_dict()`, etc. 
Calling `update()` also resets the - internal counters for the next round, so that the user-visible state - effectively reflects averages collected between the last two calls to - `update()`. - - Args: - regex: Regular expression defining which statistics to - collect. The default is to collect everything. - keep_previous: Whether to retain the previous averages if no - scalars were collected on a given round - (default: True). - """ - def __init__(self, regex='.*', keep_previous=True): - self._regex = re.compile(regex) - self._keep_previous = keep_previous - self._cumulative = dict() - self._moments = dict() - self.update() - self._moments.clear() - - def names(self): - r"""Returns the names of all statistics broadcasted so far that - match the regular expression specified at construction time. - """ - return [name for name in _counters if self._regex.fullmatch(name)] - - def update(self): - r"""Copies current values of the internal counters to the - user-visible state and resets them for the next round. - - If `keep_previous=True` was specified at construction time, the - operation is skipped for statistics that have received no scalars - since the last update, retaining their previous averages. - - This method performs a number of GPU-to-CPU transfers and one - `torch.distributed.all_reduce()`. It is intended to be called - periodically in the main training loop, typically once every - N training steps. - """ - if not self._keep_previous: - self._moments.clear() - for name, cumulative in _sync(self.names()): - if name not in self._cumulative: - self._cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - delta = cumulative - self._cumulative[name] - self._cumulative[name].copy_(cumulative) - if float(delta[0]) != 0: - self._moments[name] = delta - - def _get_delta(self, name): - r"""Returns the raw moments that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - assert self._regex.fullmatch(name) - if name not in self._moments: - self._moments[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - return self._moments[name] - - def num(self, name): - r"""Returns the number of scalars that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - delta = self._get_delta(name) - return int(delta[0]) - - def mean(self, name): - r"""Returns the mean of the scalars that were accumulated for the - given statistic between the last two calls to `update()`, or NaN if - no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0: - return float('nan') - return float(delta[1] / delta[0]) - - def std(self, name): - r"""Returns the standard deviation of the scalars that were - accumulated for the given statistic between the last two calls to - `update()`, or NaN if no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0 or not np.isfinite(float(delta[1])): - return float('nan') - if int(delta[0]) == 1: - return float(0) - mean = float(delta[1] / delta[0]) - raw_var = float(delta[2] / delta[0]) - return np.sqrt(max(raw_var - np.square(mean), 0)) - - def as_dict(self): - r"""Returns the averages accumulated between the last two calls to - `update()` as an `dnnlib.EasyDict`. The contents are as follows: - - dnnlib.EasyDict( - NAME = dnnlib.EasyDict(num=FLOAT, mean=FLOAT, std=FLOAT), - ... 
- ) - """ - stats = dnnlib.EasyDict() - for name in self.names(): - stats[name] = dnnlib.EasyDict(num=self.num(name), mean=self.mean(name), std=self.std(name)) - return stats - - def __getitem__(self, name): - r"""Convenience getter. - `collector[name]` is a synonym for `collector.mean(name)`. - """ - return self.mean(name) - -#---------------------------------------------------------------------------- - -def _sync(names): - r"""Synchronize the global cumulative counters across devices and - processes. Called internally by `Collector.update()`. - """ - if len(names) == 0: - return [] - global _sync_called - _sync_called = True - - # Collect deltas within current rank. - deltas = [] - device = _sync_device if _sync_device is not None else torch.device('cpu') - for name in names: - delta = torch.zeros([_num_moments], dtype=_counter_dtype, device=device) - for counter in _counters[name].values(): - delta.add_(counter.to(device)) - counter.copy_(torch.zeros_like(counter)) - deltas.append(delta) - deltas = torch.stack(deltas) - - # Sum deltas across ranks. - if _sync_device is not None: - torch.distributed.all_reduce(deltas) - - # Update cumulative values. - deltas = deltas.cpu() - for idx, name in enumerate(names): - if name not in _cumulative: - _cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - _cumulative[name].add_(deltas[idx]) - - # Return name-value pairs. - return [(name, _cumulative[name]) for name in names] - -#---------------------------------------------------------------------------- diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/ops/filtered_lrelu.h b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/ops/filtered_lrelu.h deleted file mode 100644 index 524c804122a2582e20e2e4e9c49267e1a1b6db60..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/ops/filtered_lrelu.h +++ /dev/null @@ -1,90 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct filtered_lrelu_kernel_params -{ - // These parameters decide which kernel to use. - int up; // upsampling ratio (1, 2, 4) - int down; // downsampling ratio (1, 2, 4) - int2 fuShape; // [size, 1] | [size, size] - int2 fdShape; // [size, 1] | [size, size] - - int _dummy; // Alignment. - - // Rest of the parameters. - const void* x; // Input tensor. - void* y; // Output tensor. - const void* b; // Bias tensor. - unsigned char* s; // Sign tensor in/out. NULL if unused. - const float* fu; // Upsampling filter. - const float* fd; // Downsampling filter. - - int2 pad0; // Left/top padding. - float gain; // Additional gain factor. - float slope; // Leaky ReLU slope on negative side. - float clamp; // Clamp after nonlinearity. - int flip; // Filter kernel flip for gradient computation. - - int tilesXdim; // Original number of horizontal output tiles. - int tilesXrep; // Number of horizontal tiles per CTA. - int blockZofs; // Block z offset to support large minibatch, channel dimensions. 
- - int4 xShape; // [width, height, channel, batch] - int4 yShape; // [width, height, channel, batch] - int2 sShape; // [width, height] - width is in bytes. Contiguous. Zeros if unused. - int2 sOfs; // [ofs_x, ofs_y] - offset between upsampled data and sign tensor. - int swLimit; // Active width of sign tensor in bytes. - - longlong4 xStride; // Strides of all tensors except signs, same component order as shapes. - longlong4 yStride; // - int64_t bStride; // - longlong3 fuStride; // - longlong3 fdStride; // -}; - -struct filtered_lrelu_act_kernel_params -{ - void* x; // Input/output, modified in-place. - unsigned char* s; // Sign tensor in/out. NULL if unused. - - float gain; // Additional gain factor. - float slope; // Leaky ReLU slope on negative side. - float clamp; // Clamp after nonlinearity. - - int4 xShape; // [width, height, channel, batch] - longlong4 xStride; // Input/output tensor strides, same order as in shape. - int2 sShape; // [width, height] - width is in elements. Contiguous. Zeros if unused. - int2 sOfs; // [ofs_x, ofs_y] - offset between upsampled data and sign tensor. -}; - -//------------------------------------------------------------------------ -// CUDA kernel specialization. - -struct filtered_lrelu_kernel_spec -{ - void* setup; // Function for filter kernel setup. - void* exec; // Function for main operation. - int2 tileOut; // Width/height of launch tile. - int numWarps; // Number of warps per thread block, determines launch block size. - int xrep; // For processing multiple horizontal tiles per thread block. - int dynamicSharedKB; // How much dynamic shared memory the exec kernel wants. -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template filtered_lrelu_kernel_spec choose_filtered_lrelu_kernel(const filtered_lrelu_kernel_params& p, int sharedKB); -template void* choose_filtered_lrelu_act_kernel(void); -template cudaError_t copy_filters(cudaStream_t stream); - -//------------------------------------------------------------------------ \ No newline at end of file diff --git a/spaces/gyugnsu/DragGan-Inversion/torch_utils/ops/__init__.py b/spaces/gyugnsu/DragGan-Inversion/torch_utils/ops/__init__.py deleted file mode 100644 index 939e7c6c8f94c4ea1141885c3c3295fe083b06aa..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/torch_utils/ops/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -# empty diff --git a/spaces/haakohu/deep_privacy2/dp2/data/datasets/fdh.py b/spaces/haakohu/deep_privacy2/dp2/data/datasets/fdh.py deleted file mode 100644 index 0c5293b42874c644da4622687f407c069a0a8e07..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/dp2/data/datasets/fdh.py +++ /dev/null @@ -1,142 +0,0 @@ -import torch -import tops -import numpy as np -import io -import webdataset as wds -import os -import json -from pathlib import Path -from ..utils import png_decoder, mask_decoder, get_num_workers, collate_fn - - -def kp_decoder(x): - # Keypoints are between [0, 1] for webdataset - keypoints = torch.from_numpy(np.load(io.BytesIO(x))).float() - def check_outside(x): return (x < 0).logical_or(x > 1) - is_outside = check_outside(keypoints[:, 0]).logical_or( - check_outside(keypoints[:, 1]) - ) - keypoints[:, 2] = (keypoints[:, 2] > 0).logical_and(is_outside.logical_not()) - return keypoints - - -def vertices_decoder(x): - vertices = torch.from_numpy(np.load(io.BytesIO(x)).astype(np.int32)) - return vertices.squeeze()[None] - - -class InsertNewKeypoints: - - def __init__(self, keypoints_path: Path) -> None: - with open(keypoints_path, "r") as fp: - self.keypoints = json.load(fp) - - def __call__(self, sample): - key = sample["__key__"] - keypoints = torch.tensor(self.keypoints[key], dtype=torch.float32) - def check_outside(x): return (x < 0).logical_or(x > 1) - is_outside = check_outside(keypoints[:, 0]).logical_or( - check_outside(keypoints[:, 1]) - ) - keypoints[:, 2] = (keypoints[:, 2] > 0).logical_and(is_outside.logical_not()) - - sample["keypoints.npy"] = keypoints - return sample - - -def get_dataloader_fdh_wds( - path, - batch_size: int, - num_workers: int, - transform: torch.nn.Module, - gpu_transform: torch.nn.Module, - infinite: bool, - shuffle: bool, - partial_batches: bool, - load_embedding: bool, - sample_shuffle=10_000, - tar_shuffle=100, - read_condition=False, - channels_last=False, - load_new_keypoints=False, - keypoints_split=None, - ): - # Need to set this for split_by_node to work. 
- os.environ["RANK"] = str(tops.rank()) - os.environ["WORLD_SIZE"] = str(tops.world_size()) - if infinite: - pipeline = [wds.ResampledShards(str(path))] - else: - pipeline = [wds.SimpleShardList(str(path))] - if shuffle: - pipeline.append(wds.shuffle(tar_shuffle)) - pipeline.extend([ - wds.split_by_node, - wds.split_by_worker, - ]) - if shuffle: - pipeline.append(wds.shuffle(sample_shuffle)) - - decoder = [ - wds.handle_extension("image.png", png_decoder), - wds.handle_extension("mask.png", mask_decoder), - wds.handle_extension("maskrcnn_mask.png", mask_decoder), - wds.handle_extension("keypoints.npy", kp_decoder), - ] - - rename_keys = [ - ["img", "image.png"], ["mask", "mask.png"], - ["keypoints", "keypoints.npy"], ["maskrcnn_mask", "maskrcnn_mask.png"], - ["__key__", "__key__"] - ] - if load_embedding: - decoder.extend([ - wds.handle_extension("vertices.npy", vertices_decoder), - wds.handle_extension("E_mask.png", mask_decoder) - ]) - rename_keys.extend([ - ["vertices", "vertices.npy"], - ["E_mask", "e_mask.png"] - ]) - - if read_condition: - decoder.append( - wds.handle_extension("condition.png", png_decoder) - ) - rename_keys.append(["condition", "condition.png"]) - - pipeline.extend([ - wds.tarfile_to_samples(), - wds.decode(*decoder), - - ]) - if load_new_keypoints: - assert keypoints_split in ["train", "val"] - keypoint_url = "https://api.loke.aws.unit.no/dlr-gui-backend-resources-content/v2/contents/links/1eb88522-8b91-49c7-b56a-ed98a9c7888cef9c0429-a385-4248-abe3-8682de26d041f268aed1-7c88-4677-baad-7623c2ee330f" - file_name = "fdh_keypoints_val-050133b34d.json" - if keypoints_split == "train": - keypoint_url = "https://api.loke.aws.unit.no/dlr-gui-backend-resources-content/v2/contents/links/3e828b1c-d6c0-4622-90bc-1b2cce48ccfff14ab45d-0a5c-431d-be13-7e60580765bd7938601c-e72e-41d9-8836-fffc49e76f58" - file_name = "fdh_keypoints_train-2cff11f69a.json" - # Set check_hash=True if you suspect download is incorrect. 
- filepath = tops.download_file(keypoint_url, file_name=file_name, check_hash=False) - pipeline.append( - wds.map(InsertNewKeypoints(filepath)) - ) - pipeline.extend([ - wds.batched(batch_size, collation_fn=collate_fn, partial=partial_batches), - wds.rename_keys(*rename_keys), - ]) - - if transform is not None: - pipeline.append(wds.map(transform)) - pipeline = wds.DataPipeline(*pipeline) - if infinite: - pipeline = pipeline.repeat(nepochs=1000000) - - loader = wds.WebLoader( - pipeline, batch_size=None, shuffle=False, - num_workers=get_num_workers(num_workers), - persistent_workers=True, - ) - loader = tops.DataPrefetcher(loader, gpu_transform, channels_last=channels_last, to_float=False) - return loader diff --git a/spaces/haryoaw/id-recigen/README.md b/spaces/haryoaw/id-recigen/README.md deleted file mode 100644 index 3f7e08c69fc1375ecd775148cccd8dadebcfe18e..0000000000000000000000000000000000000000 --- a/spaces/haryoaw/id-recigen/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Id Recigen -emoji: 📚 -colorFrom: red -colorTo: purple -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/area_change.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/area_change.py deleted file mode 100644 index 0e066c8775fe6bf595f14d977b02b4c6227efa5e..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/area_change.py +++ /dev/null @@ -1,92 +0,0 @@ -import cv2 -import os -import plotly.express as px -import numpy as np -import pandas as pd -from plotly.subplots import make_subplots -import plotly.io as pio -pio.kaleido.scope.mathjax = None - - -def distribute_glacier(list_of_samples): - list_of_glaciers = {} - for glacier in [ 'COL', 'Mapple', 'Crane', 'Jorum','DBE','SI', 'JAC']: - list_of_glaciers[glacier] = [sample for sample in list_of_samples if glacier in sample] - return list_of_glaciers - - -if __name__ == '__main__': - generate_data = True - if generate_data: - # directories with zone label - train_dir = '/home/ho11laqe/PycharmProjects/data_raw/zones/train' - test_dir = '/home/ho11laqe/PycharmProjects/data_raw/zones/test' - - list_of_train_samples = [] - for sample in os.listdir(train_dir): - list_of_train_samples.append(os.path.join(train_dir, sample)) - - list_of_test_samples = [] - for sample in os.listdir(test_dir): - list_of_test_samples.append(os.path.join(test_dir, sample)) - - list_of_samples = list_of_train_samples + list_of_test_samples - - list_of_glacier = distribute_glacier(list_of_samples) - - fig = make_subplots(rows=len(list_of_glacier.keys()), cols=1) - nan = [] - rock = [] - ice = [] - ocean = [] - date = [] - glacier_name = [] - for i, glacier in enumerate(list_of_glacier.keys()): - - for sample in list_of_glacier[glacier]: - print(sample) - seg_mask = cv2.imread(sample, cv2.IMREAD_GRAYSCALE) - all_pixel = seg_mask.shape[0] * seg_mask.shape[1] - nan.append(np.count_nonzero(seg_mask == 0) / all_pixel * 100) - rock.append(np.count_nonzero(seg_mask == 64) / all_pixel * 100) - ice.append(np.count_nonzero(seg_mask == 127) / all_pixel * 100) - ocean.append(np.count_nonzero(seg_mask == 254) / all_pixel * 100) - - sample_split = sample.split('_') - date.append(sample_split[-6]) - glacier_name.append(glacier) - - df = pd.DataFrame(dict(Shadow=nan, Rock=rock, Glacier=ice, Ocean=ocean, date=date, 
glacier_name=glacier_name)) - df.to_csv('output/area.csv') - - else: - df = pd.read_csv('output/area.csv') - - df = df.drop_duplicates(subset=['date', 'glacier_name']) - area_plot = px.area(df, - x="date", - y=["Rock", "Shadow", "Glacier", "Ocean"], - color_discrete_map={"Shadow": 'black', "Ocean": 'blue', "Glacier": "aliceblue", "Rock": "gray"}, - template="plotly_white", - height=700, - width =600, - facet_row='glacier_name', - category_orders={'glacier': [ 'COL', 'Mapple', 'Crane', 'Jorum','DBE','SI', 'JAC']} - ) - area_plot.update_yaxes(type='linear', range=[0, 100], ticksuffix='%', title='area', side='right') - area_plot.for_each_annotation(lambda a: a.update(text=a.text.split("=")[1], textangle=0, x=0, xanchor='right')) - area_plot.update_layout(legend=dict(title='Area:', - orientation="h", - yanchor="bottom", - y=1.02, - xanchor="right", - x=1, - font=dict(size=12)), - margin=dict(l=70, r=0, t=0, b=0) - ) - area_plot.for_each_yaxis(lambda a: a.update(title='')) - area_plot.update_xaxes(title=' ',tickfont=dict(size=12)) - area_plot.update_layout(font=dict(family="Times New Roma", size=10, )) - area_plot.update_annotations(font=dict(size=12)) - area_plot.write_image("output/area.pdf", format='pdf') - # fig.show() diff --git a/spaces/huggan/sefa/models/model_zoo.py b/spaces/huggan/sefa/models/model_zoo.py deleted file mode 100644 index 85eccbc112c0f50566d772e51c6e633227cbcf40..0000000000000000000000000000000000000000 --- a/spaces/huggan/sefa/models/model_zoo.py +++ /dev/null @@ -1,307 +0,0 @@ -# python3.7 -"""Model zoo.""" - -# pylint: disable=line-too-long - -MODEL_ZOO = { - # PGGAN official. - 'pggan_celebahq1024': dict( - gan_type='pggan', - resolution=1024, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EW_3jQ6E7xlKvCSHYrbmkQQBAB8tgIv5W5evdT6-GuXiWw?e=gRifVa&download=1', - hf_hub_repo='huggan/pggan-celebahq-1024' - ), - 'pggan_bedroom256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EUZQWGz2GT5Bh_GJLalP63IBvCsXDTOxDFIC_ZBsmoEacA?e=VNXiDb&download=1', - ), - 'pggan_livingroom256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/Efzh6qQv6QtCm0YN1lulH-YByqdE3AqlI-E6US_hXMuiig?e=ppdyB2&download=1', - ), - 'pggan_diningroom256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EcLb3_hGUkdClompZo27xk0BNmotgbFqdIeu-ZOGJsBMRg?e=xjYpN3&download=1', - ), - 'pggan_kitchen256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/ESCyg6hpNn1LlHVX_un1wLsBZAORUNkW9MO2kU1X5kafAQ?e=09TbGC&download=1', - ), - 'pggan_church256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EQ8cKujs2TVGjCL_j6bsnk8BqD9REF2ME2lBnpbTPsqIvA?e=zH55fT&download=1', - ), - 'pggan_tower256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EeyBJvgRVGJClKr1KKYDF_cBT1FDepRU1-GLqYNh8W9-fQ?e=nrpa5N&download=1', - ), - 'pggan_bridge256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EZ2QScfPy19PiDERLJQ3gPMBP4WmvZHwhNFLzfaP2YD8hQ?e=bef1U9&download=1', - ), - 'pggan_restaurant256': dict( - gan_type='pggan', - resolution=256, - 
url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/ERvJ4pz8jgtMrcuJXUfcOQEBDugZ099_TetCQs-9-ILCVg?e=qYsVdQ&download=1', - ), - 'pggan_classroom256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EUU9SCOPUxhMoUS4Ceo9kl0BQkVK7d69lA-JeOP-zOWvXw?e=YIB4no&download=1', - ), - 'pggan_conferenceroom256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EX8AF0_6NoJAl5vKFewHWnsBk0r4PK4WsqsMrJyj84TrqQ?e=oNQIZS&download=1', - ), - 'pggan_person256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EWu4SqR42YpCoqsVJOcM2cMBcdfXA0j5wZ2hno9X0R9ydQ?e=KuDRns&download=1', - ), - 'pggan_cat256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EQdveyUNOMtAue52n6BxoHoB6Yup5-PTvBDmyfUn7Un4Hw?e=7acGbT&download=1', - ), - 'pggan_dog256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/ESaKyXA5fGlOvXJYDDFbT2kB9c0HlXh9n_wnyhiP05nhow?e=d4aKDV&download=1', - ), - 'pggan_bird256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/Ef2p4Pd3AKVCmSm00YikCIABhylh2dLPaFjPfPVn3RiTXA?e=9bRitp&download=1', - ), - 'pggan_horse256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EXwCPdv6XqJFtuvFFoswRScBmLJbhKzaC5D_iovl1GFOTw?e=WDdD77&download=1', - ), - 'pggan_sheep256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/ER6J5EKjAUNFtm9VwLf-uUsBZ5dnqxeKsPxY9ijiPtMhcQ?e=OKtfva&download=1', - ), - 'pggan_cow256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/ERZLxw7N7xJPm72FyePTbpcByzrr0pH-Fg7qyLt5tYGXwQ?e=ovIPCl&download=1', - ), - 'pggan_car256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EfGc2we47aFDtAY1548pRvsByIju-uXRbkZEFpJotuPKZw?e=DQqVj8&download=1', - ), - 'pggan_bicycle256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/Ed1dN_FgwmdBgeNWhaRUry8BgwT88-n2ppicSDPx-f7f_Q?e=bxTxnf&download=1', - ), - 'pggan_motorbike256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EV3yQdeJXIdPjZbMO0mp2-MBJbKuuBdypzBL4gnedO57Dw?e=tXdvtD&download=1', - ), - 'pggan_bus256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/Ed7-OYLnq0RCqRlM8qK8wZ8B87dz_NUxIKBrvyFUwRCEbg?e=VP5bmX&download=1', - ), - 'pggan_train256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EedE2cozKOVAkhvbdLd4SfwBknFW8vWZnKiqgeIBbAvCCA?e=BrLpTl&download=1', - ), - 'pggan_boat256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/Eb39waqQFr9Bp4wO0rC5NHwB0Vz2NGCuqbRPucguBIkDrg?e=lddSyL&download=1', - ), - 'pggan_airplane256': dict( - gan_type='pggan', - resolution=256, - 
url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/Ee6FzIx3KjNDhxrS5mDvpCEB3iQ7TgErmKhbwbV-eF07iw?e=xflPXa&download=1', - ), - 'pggan_bottle256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EWhoy2AFCTZGtEG1UoayWjcB9Kdc_wreJ8p4RlBB93nbNg?e=DMZceU&download=1', - ), - 'pggan_chair256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EbQRTfwdostBhXG30Uacn7ABsEUFa-tEW3oxiM5zDYQbRw?e=FkB7T0&download=1', - ), - 'pggan_pottedplant256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EWg7hnoGATBOuJvXWr4m7CQBJL9o7nqnD6nOMRhtH2SKXg?e=Zi3hjD&download=1', - ), - 'pggan_tvmonitor256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EVXwttoJVtBMuhHNDdK3cMwBdMiZARJV38PMTsL6whnFlA?e=RbG0ru&download=1', - ), - 'pggan_diningtable256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EXVzBkbmTCVImMtuHLCTBeMBXZmv0RWyx5KXQQAe7-7D5w?e=6RYSnm&download=1', - ), - 'pggan_sofa256': dict( - gan_type='pggan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EaADQYDXwY9NrzbiUFcRYRgBOu1GdJMG8YgNZZmbNjbn-Q?e=DqKrXG&download=1', - ), - - # StyleGAN official. - 'stylegan_ffhq1024': dict( - gan_type='stylegan', - resolution=1024, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EdfMxgb0hU9BoXwiR3dqYDEBowCSEF1IcsW3n4kwfoZ9OQ?e=VwIV58&download=1', - ), - 'stylegan_celebahq1024': dict( - gan_type='stylegan', - resolution=1024, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EcCdXHddE7FOvyfmqeOyc9ABqVuWh8PQYFnV6JM1CXvFig?e=1nUYZ5&download=1', - ), - 'stylegan_bedroom256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/Ea6RBPddjcRNoFMXm8AyEBcBUHdlRNtjtclNKFe89amjBw?e=Og8Vff&download=1', - ), - 'stylegan_cat256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EVjX8u9HuehLip3z0hRfIHcB7QtoFkTB7NiRDb8nrKOl2w?e=lHcp1B&download=1', - hf_hub_repo="huggan/stylegan_cat256" - ), - 'stylegan_car512': dict( - gan_type='stylegan', - resolution=512, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EcRJNNzzUzJGjI2X53S9HjkBhXkKT5JRd6Q3IIhCY1AyRw?e=FvMRNj&download=1', - hf_hub_repo="huggan/stylegan_car512" - ), - - # StyleGAN ours. 
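-    # ('ours' = checkpoints trained by the repository authors themselves, as opposed to the official releases above and the third-party models below)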
- 'stylegan_celeba_partial256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/ET2etKNzMS9JmHj5j60fqMcBRJfQfYNvqUrujaIXxCvKDQ?e=QReLE6&download=1', - ), - 'stylegan_ffhq256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/ES-NAUCC2qdHg87BftvlBiQBVpbJ8-005Q4TNr5KrOxQEw?e=00AnWt&download=1', - ), - 'stylegan_ffhq512': dict( - gan_type='stylegan', - resolution=512, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EZYrrwOiEgVOg-PfGv7QTegBzFQ9yq2v7o1WxNq5JJ9KNA?e=SZU8PI&download=1', - ), - 'stylegan_livingroom256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EfFCYLHjqbFDmjOvCCFJgDcBZ1QYgETfZJxp4ZTHjLxZBg?e=InVd0n&download=1', - ), - 'stylegan_diningroom256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/ERsUza_hSFRIm4iZCag7P0kBQ9EIdfQKByw4QYt_ay97lg?e=Cimh7S&download=1', - ), - 'stylegan_kitchen256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/ERcYvoingQNKix35lUs0vUkBQQkAZMp1rtDxjwNlOJAoaA?e=a1Tcwr&download=1', - ), - 'stylegan_apartment256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EfurPNSB2BRFtXdqGkmDD6YBwyKN8YK2v7nKwnJQdsbf6A?e=w3oYa4&download=1', - ), - 'stylegan_church256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/ETMgG1_d06tAlbUkJD1qA9IBaLZ9zJKPkG2kO-4jxhVV5w?e=Dbkb7o&download=1', - ), - 'stylegan_tower256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/Ebm9QMgqB2VDqyIE5rFhreEBgZ_RyKcRf8bQ333K453u3w?e=if8sDj&download=1', - ), - 'stylegan_bridge256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/Ed9QM6OP9sVHnazSp4cqPSEBb-ALfBPXRxP1hD7FsTYh8w?e=3vv06p&download=1', - ), - 'stylegan_restaurant256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/ESDhYr01WtlEvBNFrVpFezcB2l9lF1rBYuHFoeNpBr5B7A?e=uFWFNh&download=1', - ), - 'stylegan_classroom256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EbWnI3oto9NPk-lxwZlWqPQB2atWpGiTWMIT59MzF9ij9Q?e=KvcNBg&download=1', - ), - 'stylegan_conferenceroom256': dict( - gan_type='stylegan', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/Eb1gVi3pGa9PgJ4XYYu_6yABQZ0ZcGDak4FEHaTHaeYFzw?e=0BeE8t&download=1', - ), - - # StyleGAN third-party. 
- 'stylegan_animeface512': dict( - gan_type='stylegan', - resolution=512, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EWDWflY6lBpGgX0CGQpd2Z4B5wTEVamTOA9JRYne7zdCvA?e=tOzgYA&download=1', - hf_hub_repo='huggan/stylegan_animeface512' - ), - 'stylegan_animeportrait512': dict( - gan_type='stylegan', - resolution=512, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EXBvhTBi-v5NsnQtrxhFEKsBin4xg-Dud9Jr62AEwFTIxg?e=bMGK7r&download=1', - ), - 'stylegan_artface512': dict( - gan_type='stylegan', - resolution=512, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/Eca0OiGqhyZMmoPbKahSBWQBWvcAH4q2CE3zdZJflp2jkQ?e=h4rWAm&download=1', - ), - - # StyleGAN2 official. - 'stylegan2_ffhq1024': dict( - gan_type='stylegan2', - resolution=1024, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EX0DNWiBvl5FuOQTF4oMPBYBNSalcxTK0AbLwBn9Y3vfgg?e=Q0sZit&download=1', - ), - 'stylegan2_church256': dict( - gan_type='stylegan2', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EQzDtJUdQ4ROunMGn2sZouEBmNeFX4QWvxjermVE5cZvNA?e=tQ7r9r&download=1', - ), - 'stylegan2_cat256': dict( - gan_type='stylegan2', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EUKXeBwUUbZJr6kup7PW4ekBx2-vmTp8FjcGb10v8bgJxQ?e=nkerMF&download=1', - ), - 'stylegan2_horse256': dict( - gan_type='stylegan2', - resolution=256, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EconoT6tb69OuAIqfXRtGlsBZz4vBx01UmmFO-JAS356Jg?e=bcSCC4&download=1', - ), - 'stylegan2_car512': dict( - gan_type='stylegan2', - resolution=512, - url='https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EYSnUsxU8KJFuMHhZm-JLWoB0nHxdlbrLHNZ_Qkoe3b9LA?e=Ycjp5A&download=1' - ), -} - -# pylint: enable=line-too-long diff --git a/spaces/ikun12/ikun/app.py b/spaces/ikun12/ikun/app.py deleted file mode 100644 index 2439c5cec6b61e8a517f957daf710cbb6b5c3cf6..0000000000000000000000000000000000000000 --- a/spaces/ikun12/ikun/app.py +++ /dev/null @@ -1,62 +0,0 @@ -from upcunet_v3 import RealWaifuUpScaler -import gradio as gr -import time -import logging -import os -from PIL import ImageOps -import numpy as np -import math - - -def greet(input_img, input_model_name, input_tile_mode): - # if input_img.size[0] * input_img.size[1] > 256 * 256: - # y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1])) - # x = int(input_img.size[0]/input_img.size[1]*y) - # input_img = ImageOps.fit(input_img, (x, y)) - input_img = np.array(input_img) - if input_model_name not in model_cache: - t1 = time.time() - upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu") - t2 = time.time() - logger.info(f'load model time, {t2 - t1}') - model_cache[input_model_name] = upscaler - else: - upscaler = model_cache[input_model_name] - logger.info(f'load model from cache') - - start = time.time() - result = upscaler(input_img, tile_mode=input_tile_mode) - end = time.time() - logger.info(f'input_model_name, {input_model_name}') - logger.info(f'input_tile_mode, {input_tile_mode}') - logger.info(f'input shape, {input_img.shape}') - logger.info(f'output shape, {result.shape}') - logger.info(f'speed time, {end - start}') - return result - - -if __name__ == '__main__': - logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s") - logger 
= logging.getLogger() - - ModelPath = "weights_v3/" - model_cache = {} - - input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='选择model') - input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='选择tile_mode') - input_img = gr.inputs.Image(label='image', type='pil') - - inputs = [input_img, input_model_name, input_tile_mode] - outputs = "image" - iface = gr.Interface(fn=greet, - inputs=inputs, - outputs=outputs, - allow_screenshot=False, - allow_flagging='never', - examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]], - article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN)
              ' - '感谢b站开源的项目,图片过大会导致内存不足,所有我将图片裁剪小,想体验大图片的效果请自行前往上面的链接。
              ' - '修改bbb' - 'The large image will lead to memory limit exceeded. So I crop and resize image. ' - 'If you want to experience the large image, please go to the link above.') - iface.launch() diff --git a/spaces/ilmhona/api/Dockerfile b/spaces/ilmhona/api/Dockerfile deleted file mode 100644 index 62cc285df0a98cd4f45dd0a09fafe3c6f1105968..0000000000000000000000000000000000000000 --- a/spaces/ilmhona/api/Dockerfile +++ /dev/null @@ -1,30 +0,0 @@ -# Use the official FastAPI image -FROM tiangolo/uvicorn-gunicorn-fastapi:python3.9 - -# Set up a new user named "user" with user ID 1000 -RUN useradd -m -u 1000 user - -# Switch to the "user" user -USER user - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Install necessary packages and upgrade pip -USER root -COPY requirements-fastapi.txt . -RUN pip install --no-cache-dir --upgrade pip -RUN pip install --no-cache-dir --upgrade -r requirements-fastapi.txt - -# Switch back to the "user" user -USER user - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app - -# Specify the command to run the app -CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860", "--workers", "4"] diff --git a/spaces/innnky/nanami/inference_f0.py b/spaces/innnky/nanami/inference_f0.py deleted file mode 100644 index 1c5f2ed1f316c935ca18763d5281ab29398d8542..0000000000000000000000000000000000000000 --- a/spaces/innnky/nanami/inference_f0.py +++ /dev/null @@ -1,86 +0,0 @@ -import torch,pdb -import numpy as np -import soundfile as sf -from models import SynthesizerTrn256 -from scipy.io import wavfile -from fairseq import checkpoint_utils -import pyworld,librosa -import torch.nn.functional as F - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -model_path = "path_to_ContentVec_legacy500.pt" -print("load model(s) from {}".format(model_path)) -models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", -) -model = models[0] -model = model.to(device) -model = model.half() -model.eval() - -net_g = SynthesizerTrn256(513,40,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,4,2,2,2],512,[16,16,4,4,4],0) -weights=torch.load("qihai.pt") -net_g.load_state_dict(weights,strict=True) -net_g.eval().to(device) -net_g.half() - -def get_f0(x,f0_up_key=0): - f0_max = 1100.0 - f0_min = 50.0 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0, t = pyworld.dio( - x.astype(np.double), - fs=16000, - f0_ceil=800, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, 16000) - f0*=pow(2,f0_up_key/12) - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse - - - -wav_path="xxxxxxxx.wav" -f0_up_key=0 - -audio, sampling_rate = sf.read(wav_path) -if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) -if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - -pitch = get_f0(audio,f0_up_key) - -feats = torch.from_numpy(audio).float() -if feats.dim() == 2: # double channels - feats = feats.mean(-1) 
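-# ContentVec/HuBERT feature extraction expects a mono 16 kHz waveform, hence the resampling and channel-averaging above.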
-assert feats.dim() == 1, feats.dim() -feats = feats.view(1, -1) -padding_mask = torch.BoolTensor(feats.shape).fill_(False) -inputs = { - "source": feats.half().to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9, # layer 9 -} -with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) -feats=F.interpolate(feats.permute(0,2,1),scale_factor=2).permute(0,2,1) -p_len = min(feats.shape[1],10000,pitch.shape[0])#太大了爆显存 -feats = feats[:,:p_len, :] -pitch = pitch[:p_len] -p_len = torch.LongTensor([p_len]).to(device) -pitch = torch.LongTensor(pitch).unsqueeze(0).to(device) -with torch.no_grad(): - audio = net_g.infer(feats, p_len,pitch)[0][0, 0].data.cpu().float().numpy() - -wavfile.write("test.wav", 32000, audio) diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/100 Creampie Exclusive Compilation 2018 - [KTR].ss 64 Bit.md b/spaces/inplisQlawa/anything-midjourney-v4-1/100 Creampie Exclusive Compilation 2018 - [KTR].ss 64 Bit.md deleted file mode 100644 index 71b36a444b16b7dadf713d613daafd41dbd70b8e..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/100 Creampie Exclusive Compilation 2018 - [KTR].ss 64 Bit.md +++ /dev/null @@ -1,8 +0,0 @@ -
              -

              leson topillier school fling porn movie by taobao funny girl 10mp4
              amateur pussy japanese strip party and pov hd mature moms need more than boyfriend. 100 creampie exclusive compilation 2018 - [ktr].ss 64 bit.

              -

              thai super sex tube! free porn movies and well written reviews, complete sex guide and a new xxx website every day! your welcome! all models were 18 years of age or older at the time of depiction.. all,data on this site are provided by 3rd parties and are not vetted or certified by our team. advertise here:. [email protected] (only, but not exclusive!) what is … 10 people have tagged themselves or their gfs in this photo. thai super sex tube. popular · sexy · welcome · latest · under the lips · welcome to my dildo archives!

              -

              100 Creampie Exclusive Compilation 2018 - [KTR].ss 64 bit


              DOWNLOAD >>> https://urlin.us/2uEvkV



              -

              ploytec usb asio 2.8.45 serial'.. full version serial crack keygen downloads. usb to serial driver. seeds:1 leech:0 7.62 mb. filespecific.com is a photo sharing website and pictures uploaded are same as my facebook. please enjoy this collection and leave a comment. 100 creampie exclusive compilation 2018 - [ktr].ss 64 bit

              -

              no more ads and popups! free porn movies and well written reviews, complete sex guide and a new xxx website every day! your welcome! all models were 18 years of age or older at the time of depiction.. all,data on this site are provided by 3rd parties and are not vetted or certified by our team. advertise here:. [email protected] (only, but not exclusive!) what is. 10 people have tagged themselves or their gfs in this photo. thai super sex tube. popular · sexy · welcome · latest · under the lips · welcome to my dildo archives!. [email protected] (only, but not exclusive!) what is … 10 people have tagged themselves or their gfs in this photo. popular · sexy · welcome · latest · under the lips · welcome to my dildo archives!

              -
              -
              \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Baadshaho In Hindi Download ((FREE)) Free In Torrent.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Baadshaho In Hindi Download ((FREE)) Free In Torrent.md deleted file mode 100644 index 9152c98856c656e1d95185bfaec7d4693242fb7d..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Baadshaho In Hindi Download ((FREE)) Free In Torrent.md +++ /dev/null @@ -1,11 +0,0 @@ -

              Baadshaho in hindi download free in torrent


Download Zip https://urlin.us/2uEw02



              -
              -Baadshaho (2017) Hindi Watch online free full movie movierulz todaypk tamilmv tamilrockers. ... Baadshaho Download Torrent Files Quality HDCam. Hindi | Kannada | Telugu | Tamil | Malayalam | Marathi | Punjabi | Hindi | Mangala | ... -Baadshaho 2018 Full Movie -Baadshah | Hindi | movie | 2018 | full | HD | 1080p | New | action | Hindi | ... -Hindi Movie | Baadshah | Starring - Rajinikanth, Pritam, Rakul Preet Singh, Sajid, ...BADSHAHO | 2018 | action | Full Movie | Baadshaho Movie | Baadshah | ... -Baadshah | Hindi | movie | 2018 | full | HD | 1080p | New | action | Hindi... -Baadshah | Hindi | Movie | 8a78ff9644
              -
              -
              -

              diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Kscan3d [Full Version].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Kscan3d [Full Version].md deleted file mode 100644 index 81d81d8431be740ab45e558626bc62ccdd1fc35c..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Kscan3d [Full Version].md +++ /dev/null @@ -1,6 +0,0 @@ -

              kscan3d [Full Version]


              DOWNLOAD https://urlin.us/2uEwdQ



              - -3D scan in full color; Use the Kinect for Xbox One sensor to capture a scene; 3D print your scan using 3D Builder ... What's new in this version. 1fdad05405
              -
              -
              -

              diff --git a/spaces/inreVtussa/clothingai/Examples/Apocalypto 2006 In Hindi Dubbed.md b/spaces/inreVtussa/clothingai/Examples/Apocalypto 2006 In Hindi Dubbed.md deleted file mode 100644 index 725326b0c442ca9ae7e1f48f5d6c4b5dca681394..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Apocalypto 2006 In Hindi Dubbed.md +++ /dev/null @@ -1,10 +0,0 @@ -

              Apocalypto 2006 In Hindi Dubbed


              Download ✑ ✑ ✑ https://tiurll.com/2uCiy1



              -
              -June 3, 2021 is the best website/platform for Bollywood and Hollywood HD movies. We provide Google Direct download links from Google Drive for fast and safe... You can download HD movies online via browser anywhere in the world -We learned last week that Disney+ will not be launching Bollywood movies on the platform. -Today, finally, a service representative confirmed the news. -We apologize to all Bollywood fans who will now have to find another option to watch their favorite movies. -Despite the fact that many users have long come to terms with this situation, many still cannot accept it. 8a78ff9644
              -
              -
              -

              diff --git a/spaces/inreVtussa/clothingai/Examples/Bygate M 1987 Speaking Oxford University Press.md b/spaces/inreVtussa/clothingai/Examples/Bygate M 1987 Speaking Oxford University Press.md deleted file mode 100644 index b0ccb8fa7551d88c1bf19203e3c71490e6c77a76..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Bygate M 1987 Speaking Oxford University Press.md +++ /dev/null @@ -1,32 +0,0 @@ - -

              Bygate M 1987 Speaking Oxford University Press: A Classic Book on Oral Communication Skills

              - -

              Speaking is one of the four language skills that every learner of English needs to master. However, speaking is not just a matter of producing sounds and words. It also involves understanding the context, the purpose, the audience, and the conventions of different types of speech acts. How can learners improve their speaking skills and become more confident and fluent speakers?

              -

              Bygate M 1987 Speaking Oxford University Press


              Download File ››› https://tiurll.com/2uCjJS



              - -

              One of the most influential books on this topic is Bygate M 1987 Speaking Oxford University Press. This book, written by Martin Bygate, a professor of applied linguistics at Lancaster University, provides a comprehensive and practical guide to the theory and practice of speaking in English. It covers various aspects of speaking, such as:

              - -
                -
              • The nature and functions of spoken language
              • -
              • The characteristics and strategies of successful speakers
              • -
              • The role of interaction and feedback in speaking
              • -
              • The development and assessment of speaking proficiency
              • -
              • The design and implementation of speaking tasks and activities
              • -
              - -

              The book is based on extensive research and draws on examples from various contexts and genres of spoken language. It also offers useful tips and suggestions for teachers and learners on how to enhance their speaking skills and overcome common problems and challenges. Bygate M 1987 Speaking Oxford University Press is a classic book that has influenced many researchers and practitioners in the field of oral communication. It is still relevant and valuable today for anyone who wants to improve their speaking skills in English.

              - -

              How to Use Bygate M 1987 Speaking Oxford University Press for Learning and Teaching

              - -

              Bygate M 1987 Speaking Oxford University Press is not only a theoretical book, but also a practical one. It offers many examples and exercises that can help learners and teachers to apply the concepts and principles of speaking in English. Here are some ways to use this book for learning and teaching:

              - -
                -
              • Learners can use this book as a self-study resource to improve their speaking skills. They can read the chapters that interest them and do the exercises that suit their level and needs. They can also record themselves speaking and compare their performance with the criteria and feedback provided in the book.
              • -
              • Teachers can use this book as a reference and a source of ideas for designing and conducting speaking lessons and activities. They can adapt the tasks and activities in the book to their own context and objectives. They can also use the book to assess their students' speaking skills and provide them with constructive feedback.
              • -
              • Learners and teachers can use this book as a basis for discussion and reflection on their own speaking experiences and challenges. They can share their opinions, insights, and questions about the topics and issues raised in the book. They can also learn from each other's perspectives and strategies for improving their speaking skills.
              • -
              - -

              Bygate M 1987 Speaking Oxford University Press is a book that can benefit anyone who wants to speak English more effectively and confidently. It is a book that can inspire learners and teachers to explore the fascinating and complex world of spoken language.

              -

              -
              -
              \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Call Of Duty Modern Warfare 2 Download [UPD] Crack Multiplayer.md b/spaces/inreVtussa/clothingai/Examples/Call Of Duty Modern Warfare 2 Download [UPD] Crack Multiplayer.md deleted file mode 100644 index 0cf666adab0c95244389b953a70d1c5bf672fa0d..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Call Of Duty Modern Warfare 2 Download [UPD] Crack Multiplayer.md +++ /dev/null @@ -1,102 +0,0 @@ - -

              Call Of Duty Modern Warfare 2 Download Crack Multiplayer

              -

              If you are a fan of the Call of Duty series, you probably know that the latest installment, Call of Duty: Modern Warfare II, is one of the most anticipated games of 2022. This game promises to deliver an epic and immersive single-player campaign, as well as a thrilling and competitive multiplayer mode. However, if you don't want to pay for the game or wait for its official release, you might be wondering how to download Call of Duty Modern Warfare 2 crack multiplayer and play it online for free.

              -

              In this article, we will show you how to do that in a few simple steps. But before we begin, we want to warn you that downloading and playing cracked games is illegal and risky. You might face legal consequences, get banned from online services, or expose your computer to malware and viruses. Therefore, we do not condone or encourage piracy in any way. This article is for educational purposes only.

              -

              Call Of Duty Modern Warfare 2 Download Crack Multiplayer


              Download >> https://tiurll.com/2uCk9w



              -

              What is Call of Duty Modern Warfare 2 crack multiplayer?

              -

              A crack is a modified version of a game that bypasses its copy protection and allows it to run without a valid license or activation. A multiplayer crack is a special type of crack that enables online gameplay on unofficial servers or networks. Usually, cracked games cannot be played online because they are blocked by the game developers or publishers.

              -

              Call of Duty Modern Warfare 2 crack multiplayer is a crack that allows you to play the game online with other players who have the same crack. It works by emulating the Steam platform and connecting you to alternative servers that host the game. However, this also means that you cannot play with players who have the legitimate version of the game or access the official features and updates.

              -

              How to download Call of Duty Modern Warfare 2 crack multiplayer?

              -

              To download Call of Duty Modern Warfare 2 crack multiplayer, you will need to follow these steps:

              -
                -
              1. Download the game from a reliable source. You can find many websites that offer the game for free, but be careful not to download fake or malicious files. You can use some of the links provided in the search results above as reference samples.
              2. -
              3. Extract the game files using a program like WinRAR or 7-Zip. You should get a folder named Call of Duty HQ.
              4. -
              5. Download the crack from a trusted source. You can also use some of the links provided in the search results above as reference samples. The crack should contain files like Greenluma.dll, GreenLuma.ini, GreenLuma-2020.exe, Koal.exe, etc.
              6. -
              7. Copy and paste the crack files into the root folder of Steam (where Steam.exe is located). If you don't have Steam installed, you can download it from here: https://store.steampowered.com/about/. You might need to replace some existing files.
              8. -
              9. Move the Call of Duty HQ folder to steamapps\common (where your Steam games are located).
              10. -
              11. Download Warzone 2.0 from Steam and run it before accepting the license agreement. Then exit the game by holding Alt + F4.
              12. -
              13. Run Koal.exe and select Install platform integration.
              14. -
              15. Run GreenLuma-2020.exe from the Steam root folder. It will ask you to use Save AppList. Press Yes.
              16. -
              17. Run D-L-L-I-n-j-e-c-t-o-r.exe from the Steam root folder. In the console window, enter the number 20 and press Enter. Then copy and paste all these IDs: https://pastebin.com/raw/9fZJq6jB
              18. -
              19. Start Steam and launch Call of Duty: Modern Warfare II from your library.
              20. -
              21. Go to the campaign window and enjoy the game. You can also play multiplayer mode with other players who have the same crack.
              22. -
              -

              Tips and tricks for playing Call of Duty Modern Warfare 2 crack multiplayer

              -

              Here are some tips and tricks for playing Call of Duty Modern Warfare 2 crack multiplayer:

              -
                -
              • Make sure you have a stable internet connection and a decent PC that meets the system requirements of the game.
              • -
              • Disable your antivirus software or firewall before running the game or the crack. They might interfere with the game files or block your online access.
              • -
              • Create a new Steam account or use an alternative one to avoid getting banned from your main account or losing your progress.
              • -
              • Do not update or verify the game files through Steam. It might break the crack or revert it to its original state.
              • -
              • Do not use cheats or hacks while playing online. It might ruin your gaming experience or get you kicked out of servers.
              • -
              • Be respectful and friendly to other players online. Do not spam, troll, or harass them.
              • -
              -

              Conclusion

              -

              In this article, we have shown you how to download Call of Duty Modern Warfare 2 crack multiplayer and play it online for free. However, we remind you that this is an illegal and risky activity that we do not support or recommend. If you want to enjoy the full features and benefits of the game, you should buy it from its official website or store: https://www.callofduty.com/modernwarfareii

              -

              We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!

              -

              What are the benefits of playing Call of Duty Modern Warfare 2 crack multiplayer?

              -

              Playing Call of Duty Modern Warfare 2 crack multiplayer has some benefits that might appeal to some gamers. Here are some of them:

              -
                -
              • You can save money by not buying the game or paying for a subscription service.
              • -
              • You can play the game before its official release date and enjoy its features and content ahead of others.
              • -
              • You can experience the game in a different way by joining alternative servers or communities that might have different rules or mods.
              • -
              • You can challenge yourself by playing against skilled or experienced players who also use the crack.
              • -
              • You can have fun and enjoy the game without worrying about achievements, rankings, or rewards.
              • -
              -

              What are the drawbacks of playing Call of Duty Modern Warfare 2 crack multiplayer?

              -

              Playing Call of Duty Modern Warfare 2 crack multiplayer also has some drawbacks that might discourage some gamers. Here are some of them:

              -

              -
                -
              • You might face legal issues or penalties for violating the intellectual property rights of the game developers or publishers.
              • -
              • You might get banned from Steam or other online services for using a cracked game or a fake account.
              • -
              • You might expose your computer to malware or viruses that might harm your system or steal your data.
              • -
              • You might encounter technical problems or errors that might affect your gameplay or performance.
              • -
              • You might miss out on the official updates, patches, or DLCs that might improve or expand the game.
              • -
              • You might have a limited or poor online experience due to low server quality, lack of players, or cheating.
              • -
              • You might lose your progress or data if the crack stops working or gets detected.
              • -
              -

              How to play Call of Duty Modern Warfare 2 crack multiplayer safely and responsibly?

              -

              If you decide to play Call of Duty Modern Warfare 2 crack multiplayer, you should do it safely and responsibly. Here are some tips to help you:

              -
                -
              • Use a VPN service to hide your IP address and location from potential trackers or hackers.
              • -
              • Scan your downloaded files with a reliable antivirus software before running them.
              • -
              • Backup your important files and data before installing or playing the game.
              • -
              • Use a separate computer or device for playing the game if possible.
              • -
              • Do not share your personal information or credentials with anyone online.
              • -
              • Do not download or install any suspicious files or programs from unknown sources.
              • -
              • Do not use any cheats or hacks while playing online.
              • -
              • Do not abuse or harass other players online.
              • -
              • Support the game developers and publishers by buying the game if you like it.
              • -
              -

              Conclusion

              -

              In this article, we have shown you how to download Call of Duty Modern Warfare 2 crack multiplayer and play it online for free. We have also discussed the benefits and drawbacks of playing cracked games, as well as some tips to play them safely and responsibly. However, we remind you that this is an illegal and risky activity that we do not support or recommend. If you want to enjoy the full features and benefits of the game, you should buy it from its official website or store: https://www.callofduty.com/modernwarfareii

              -

              We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!

              -

              What are the features of Call of Duty Modern Warfare 2 crack multiplayer?

              -

              Call of Duty Modern Warfare 2 crack multiplayer offers you a chance to experience the features of the game that make it one of the best first-person shooters of all time. Here are some of them:

              -
                -
              • A gripping and cinematic single-player campaign that follows the exploits of Task Force 141 as they fight against a new threat from Russia and its allies.
              • -
              • A robust and diverse multiplayer mode that features 16 maps, 15 game modes, 70 weapons, 10 perks, and customizable killstreaks.
              • -
              • A cooperative mode called Special Ops that lets you team up with a friend or a random player to complete various missions and challenges.
              • -
              • A new feature called Gunsmith that allows you to customize your weapons with attachments, camos, charms, stickers, and more.
              • -
              • A new feature called Crossplay that enables you to play with or against players on different platforms (PC, PS4, Xbox One).
              • -
              • A new feature called Warzone that introduces a massive battle royale mode with up to 150 players, vehicles, loot, contracts, and more.
              • -
              -

              How to improve your skills in Call of Duty Modern Warfare 2 crack multiplayer?

              -

              If you want to improve your skills in Call of Duty Modern Warfare 2 crack multiplayer, you will need to practice and learn from your mistakes. Here are some tips to help you:

              -
                -
              • Choose a weapon that suits your playstyle and preference. Experiment with different attachments and loadouts to find the best combination for you.
              • -
              • Learn the maps and their layouts. Know where the spawn points, objectives, choke points, cover spots, and high-traffic areas are.
              • -
              • Use your minimap and listen to your surroundings. Pay attention to enemy movements, gunfire, footsteps, callouts, and other cues.
              • -
              • Communicate and coordinate with your teammates. Use voice chat or text chat to share information, strategies, and requests.
              • -
              • Adapt and adjust to different situations and opponents. Switch your tactics, weapons, perks, or killstreaks if necessary.
              • -
              • Play smart and strategically. Use cover, flank routes, grenades, flashbangs, smoke screens, and other tools to gain an advantage over your enemies.
              • -
              • Have fun and enjoy the game. Don't get frustrated or angry if you lose or die. Learn from your mistakes and try again.
              • -
              -

              Conclusion

              -

              In this article, we have shown you how to download Call of Duty Modern Warfare 2 crack multiplayer and play it online for free. We have also discussed the benefits and drawbacks of playing cracked games, as well as some tips to play them safely and responsibly. Moreover, we have highlighted the features and tips of playing Call of Duty Modern Warfare 2 crack multiplayer that make it one of the best first-person shooters of all time. However, we remind you that this is an illegal and risky activity that we do not support or recommend. If you want to enjoy the full features and benefits of the game, you should buy it from its official website or store: https://www.callofduty.com/modernwarfareii

              -

              We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!

              -

              Conclusion

              -

              In this article, we have shown you how to download Call of Duty Modern Warfare 2 crack multiplayer and play it online for free. We have also discussed the benefits and drawbacks of playing cracked games, as well as some tips to play them safely and responsibly. Moreover, we have highlighted the features and tips of playing Call of Duty Modern Warfare 2 crack multiplayer that make it one of the best first-person shooters of all time. However, we remind you that this is an illegal and risky activity that we do not support or recommend. If you want to enjoy the full features and benefits of the game, you should buy it from its official website or store: https://www.callofduty.com/modernwarfareii

              -

              We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!

              -
              -
              \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Crack Mot De Passe Site Internet [WORK].md b/spaces/inreVtussa/clothingai/Examples/Crack Mot De Passe Site Internet [WORK].md deleted file mode 100644 index 6b4e0811c6837b395c6eaf3eed73f692e69fa3d1..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Crack Mot De Passe Site Internet [WORK].md +++ /dev/null @@ -1,6 +0,0 @@ -

              Crack Mot De Passe Site Internet


              DOWNLOAD ✸✸✸ https://tiurll.com/2uCiLO



              - -Notifies you if you're using easy-to-crack passwords; Alerts you if have duplicate passwords that could leave you vulnerable; Uses a secure Password Generator ... 4d29de3e1b
              -
              -
              -

              diff --git a/spaces/jbilcke-hf/VideoChain-UI/src/app/layout.tsx b/spaces/jbilcke-hf/VideoChain-UI/src/app/layout.tsx deleted file mode 100644 index ddee01cb98c9dd284ac9bce880ce2768ffa51b9f..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoChain-UI/src/app/layout.tsx +++ /dev/null @@ -1,24 +0,0 @@ -import './globals.css' -import type { Metadata } from 'next' -import { Inter } from 'next/font/google' - -const inter = Inter({ subsets: ['latin'] }) - -export const metadata: Metadata = { - title: 'VideoChain UI', - description: 'Generate AI videos using this Hugging Face Space!', -} - -export default function RootLayout({ - children, -}: { - children: React.ReactNode -}) { - return ( - - - {children} - - - ) -} diff --git a/spaces/jekyl/JosefJilek-loliDiffusion/app.py b/spaces/jekyl/JosefJilek-loliDiffusion/app.py deleted file mode 100644 index c6fe0376ea04cdfb76e4a1e5e544b6015b99c517..0000000000000000000000000000000000000000 --- a/spaces/jekyl/JosefJilek-loliDiffusion/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/JosefJilek/loliDiffusion").launch() \ No newline at end of file diff --git a/spaces/jgurzoni/image_background_swapper/models/ade20k/base.py b/spaces/jgurzoni/image_background_swapper/models/ade20k/base.py deleted file mode 100644 index 8cdbe2d3e7dbadf4ed5e5a7cf2d248761ef25d9c..0000000000000000000000000000000000000000 --- a/spaces/jgurzoni/image_background_swapper/models/ade20k/base.py +++ /dev/null @@ -1,627 +0,0 @@ -"""Modified from https://github.com/CSAILVision/semantic-segmentation-pytorch""" - -import os - -import pandas as pd -import torch -import torch.nn as nn -import torch.nn.functional as F -from scipy.io import loadmat -from torch.nn.modules import BatchNorm2d - -from . import resnet -from . import mobilenet - - -NUM_CLASS = 150 -base_path = os.path.dirname(os.path.abspath(__file__)) # current file path -colors_path = os.path.join(base_path, 'color150.mat') -classes_path = os.path.join(base_path, 'object150_info.csv') - -segm_options = dict(colors=loadmat(colors_path)['colors'], - classes=pd.read_csv(classes_path),) - - -class NormalizeTensor: - def __init__(self, mean, std, inplace=False): - """Normalize a tensor image with mean and standard deviation. - .. note:: - This transform acts out of place by default, i.e., it does not mutates the input tensor. - See :class:`~torchvision.transforms.Normalize` for more details. - Args: - tensor (Tensor): Tensor image of size (C, H, W) to be normalized. - mean (sequence): Sequence of means for each channel. - std (sequence): Sequence of standard deviations for each channel. - inplace(bool,optional): Bool to make this operation inplace. - Returns: - Tensor: Normalized Tensor image. - """ - - self.mean = mean - self.std = std - self.inplace = inplace - - def __call__(self, tensor): - if not self.inplace: - tensor = tensor.clone() - - dtype = tensor.dtype - mean = torch.as_tensor(self.mean, dtype=dtype, device=tensor.device) - std = torch.as_tensor(self.std, dtype=dtype, device=tensor.device) - tensor.sub_(mean[None, :, None, None]).div_(std[None, :, None, None]) - return tensor - - -# Model Builder -class ModelBuilder: - # custom weights initialization - @staticmethod - def weights_init(m): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - nn.init.kaiming_normal_(m.weight.data) - elif classname.find('BatchNorm') != -1: - m.weight.data.fill_(1.) 
- m.bias.data.fill_(1e-4) - - @staticmethod - def build_encoder(arch='resnet50dilated', fc_dim=512, weights=''): - pretrained = True if len(weights) == 0 else False - arch = arch.lower() - if arch == 'mobilenetv2dilated': - orig_mobilenet = mobilenet.__dict__['mobilenetv2'](pretrained=pretrained) - net_encoder = MobileNetV2Dilated(orig_mobilenet, dilate_scale=8) - elif arch == 'resnet18': - orig_resnet = resnet.__dict__['resnet18'](pretrained=pretrained) - net_encoder = Resnet(orig_resnet) - elif arch == 'resnet18dilated': - orig_resnet = resnet.__dict__['resnet18'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, dilate_scale=8) - elif arch == 'resnet50dilated': - orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained) - net_encoder = ResnetDilated(orig_resnet, dilate_scale=8) - elif arch == 'resnet50': - orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained) - net_encoder = Resnet(orig_resnet) - else: - raise Exception('Architecture undefined!') - - # encoders are usually pretrained - # net_encoder.apply(ModelBuilder.weights_init) - if len(weights) > 0: - print('Loading weights for net_encoder') - net_encoder.load_state_dict( - torch.load(weights, map_location=lambda storage, loc: storage), strict=False) - return net_encoder - - @staticmethod - def build_decoder(arch='ppm_deepsup', - fc_dim=512, num_class=NUM_CLASS, - weights='', use_softmax=False, drop_last_conv=False): - arch = arch.lower() - if arch == 'ppm_deepsup': - net_decoder = PPMDeepsup( - num_class=num_class, - fc_dim=fc_dim, - use_softmax=use_softmax, - drop_last_conv=drop_last_conv) - elif arch == 'c1_deepsup': - net_decoder = C1DeepSup( - num_class=num_class, - fc_dim=fc_dim, - use_softmax=use_softmax, - drop_last_conv=drop_last_conv) - else: - raise Exception('Architecture undefined!') - - net_decoder.apply(ModelBuilder.weights_init) - if len(weights) > 0: - print('Loading weights for net_decoder') - net_decoder.load_state_dict( - torch.load(weights, map_location=lambda storage, loc: storage), strict=False) - return net_decoder - - @staticmethod - def get_decoder(weights_path, arch_encoder, arch_decoder, fc_dim, drop_last_conv, *arts, **kwargs): - path = os.path.join(weights_path, 'ade20k', f'ade20k-{arch_encoder}-{arch_decoder}/decoder_epoch_20.pth') - return ModelBuilder.build_decoder(arch=arch_decoder, fc_dim=fc_dim, weights=path, use_softmax=True, drop_last_conv=drop_last_conv) - - @staticmethod - def get_encoder(weights_path, arch_encoder, arch_decoder, fc_dim, segmentation, - *arts, **kwargs): - if segmentation: - path = os.path.join(weights_path, 'ade20k', f'ade20k-{arch_encoder}-{arch_decoder}/encoder_epoch_20.pth') - else: - path = '' - return ModelBuilder.build_encoder(arch=arch_encoder, fc_dim=fc_dim, weights=path) - - -def conv3x3_bn_relu(in_planes, out_planes, stride=1): - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False), - BatchNorm2d(out_planes), - nn.ReLU(inplace=True), - ) - - -class SegmentationModule(nn.Module): - def __init__(self, - weights_path, - num_classes=150, - arch_encoder="resnet50dilated", - drop_last_conv=False, - net_enc=None, # None for Default encoder - net_dec=None, # None for Default decoder - encode=None, # {None, 'binary', 'color', 'sky'} - use_default_normalization=False, - return_feature_maps=False, - return_feature_maps_level=3, # {0, 1, 2, 3} - return_feature_maps_only=True, - **kwargs, - ): - super().__init__() - self.weights_path = weights_path - self.drop_last_conv = drop_last_conv - 
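-        # Each supported encoder is paired with a fixed decoder head and feature width: resnet50dilated -> ppm_deepsup (fc_dim=2048), mobilenetv2dilated -> c1_deepsup (fc_dim=320).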
self.arch_encoder = arch_encoder - if self.arch_encoder == "resnet50dilated": - self.arch_decoder = "ppm_deepsup" - self.fc_dim = 2048 - elif self.arch_encoder == "mobilenetv2dilated": - self.arch_decoder = "c1_deepsup" - self.fc_dim = 320 - else: - raise NotImplementedError(f"No such arch_encoder={self.arch_encoder}") - model_builder_kwargs = dict(arch_encoder=self.arch_encoder, - arch_decoder=self.arch_decoder, - fc_dim=self.fc_dim, - drop_last_conv=drop_last_conv, - weights_path=self.weights_path) - - self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - self.encoder = ModelBuilder.get_encoder(**model_builder_kwargs) if net_enc is None else net_enc - self.decoder = ModelBuilder.get_decoder(**model_builder_kwargs) if net_dec is None else net_dec - self.use_default_normalization = use_default_normalization - self.default_normalization = NormalizeTensor(mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225]) - - self.encode = encode - - self.return_feature_maps = return_feature_maps - - assert 0 <= return_feature_maps_level <= 3 - self.return_feature_maps_level = return_feature_maps_level - - def normalize_input(self, tensor): - if tensor.min() < 0 or tensor.max() > 1: - raise ValueError("Tensor should be 0..1 before using normalize_input") - return self.default_normalization(tensor) - - @property - def feature_maps_channels(self): - return 256 * 2**(self.return_feature_maps_level) # 256, 512, 1024, 2048 - - def forward(self, img_data, segSize=None): - if segSize is None: - raise NotImplementedError("Please pass segSize param. By default: (300, 300)") - - fmaps = self.encoder(img_data, return_feature_maps=True) - pred = self.decoder(fmaps, segSize=segSize) - - if self.return_feature_maps: - return pred, fmaps - # print("BINARY", img_data.shape, pred.shape) - return pred - - def multi_mask_from_multiclass(self, pred, classes): - def isin(ar1, ar2): - return (ar1[..., None] == ar2).any(-1).float() - return isin(pred, torch.LongTensor(classes).to(self.device)) - - @staticmethod - def multi_mask_from_multiclass_probs(scores, classes): - res = None - for c in classes: - if res is None: - res = scores[:, c] - else: - res += scores[:, c] - return res - - def predict(self, tensor, imgSizes=(-1,), # (300, 375, 450, 525, 600) - segSize=None): - """Entry-point for segmentation. Use this methods instead of forward - Arguments: - tensor {torch.Tensor} -- BCHW - Keyword Arguments: - imgSizes {tuple or list} -- imgSizes for segmentation input. 
- default: (300, 450) - original implementation: (300, 375, 450, 525, 600) - - """ - if segSize is None: - segSize = tensor.shape[-2:] - segSize = (tensor.shape[2], tensor.shape[3]) - with torch.no_grad(): - if self.use_default_normalization: - tensor = self.normalize_input(tensor) - scores = torch.zeros(1, NUM_CLASS, segSize[0], segSize[1]).to(self.device) - features = torch.zeros(1, self.feature_maps_channels, segSize[0], segSize[1]).to(self.device) - - result = [] - for img_size in imgSizes: - if img_size != -1: - img_data = F.interpolate(tensor.clone(), size=img_size) - else: - img_data = tensor.clone() - - if self.return_feature_maps: - pred_current, fmaps = self.forward(img_data, segSize=segSize) - else: - pred_current = self.forward(img_data, segSize=segSize) - - - result.append(pred_current) - scores = scores + pred_current / len(imgSizes) - - # Disclaimer: We use and aggregate only last fmaps: fmaps[3] - if self.return_feature_maps: - features = features + F.interpolate(fmaps[self.return_feature_maps_level], size=segSize) / len(imgSizes) - - _, pred = torch.max(scores, dim=1) - - if self.return_feature_maps: - return features - - return pred, result - - def get_edges(self, t): - edge = torch.cuda.ByteTensor(t.size()).zero_() - edge[:, :, :, 1:] = edge[:, :, :, 1:] | (t[:, :, :, 1:] != t[:, :, :, :-1]) - edge[:, :, :, :-1] = edge[:, :, :, :-1] | (t[:, :, :, 1:] != t[:, :, :, :-1]) - edge[:, :, 1:, :] = edge[:, :, 1:, :] | (t[:, :, 1:, :] != t[:, :, :-1, :]) - edge[:, :, :-1, :] = edge[:, :, :-1, :] | (t[:, :, 1:, :] != t[:, :, :-1, :]) - - if True: - return edge.half() - return edge.float() - - -# pyramid pooling, deep supervision -class PPMDeepsup(nn.Module): - def __init__(self, num_class=NUM_CLASS, fc_dim=4096, - use_softmax=False, pool_scales=(1, 2, 3, 6), - drop_last_conv=False): - super().__init__() - self.use_softmax = use_softmax - self.drop_last_conv = drop_last_conv - - self.ppm = [] - for scale in pool_scales: - self.ppm.append(nn.Sequential( - nn.AdaptiveAvgPool2d(scale), - nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False), - BatchNorm2d(512), - nn.ReLU(inplace=True) - )) - self.ppm = nn.ModuleList(self.ppm) - self.cbr_deepsup = conv3x3_bn_relu(fc_dim // 2, fc_dim // 4, 1) - - self.conv_last = nn.Sequential( - nn.Conv2d(fc_dim + len(pool_scales) * 512, 512, - kernel_size=3, padding=1, bias=False), - BatchNorm2d(512), - nn.ReLU(inplace=True), - nn.Dropout2d(0.1), - nn.Conv2d(512, num_class, kernel_size=1) - ) - self.conv_last_deepsup = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - self.dropout_deepsup = nn.Dropout2d(0.1) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - input_size = conv5.size() - ppm_out = [conv5] - for pool_scale in self.ppm: - ppm_out.append(nn.functional.interpolate( - pool_scale(conv5), - (input_size[2], input_size[3]), - mode='bilinear', align_corners=False)) - ppm_out = torch.cat(ppm_out, 1) - - if self.drop_last_conv: - return ppm_out - else: - x = self.conv_last(ppm_out) - - if self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - x = nn.functional.softmax(x, dim=1) - return x - - # deep sup - conv4 = conv_out[-2] - _ = self.cbr_deepsup(conv4) - _ = self.dropout_deepsup(_) - _ = self.conv_last_deepsup(_) - - x = nn.functional.log_softmax(x, dim=1) - _ = nn.functional.log_softmax(_, dim=1) - - return (x, _) - - -class Resnet(nn.Module): - def __init__(self, orig_resnet): - super(Resnet, self).__init__() - - # take pretrained resnet, except 
AvgPool and FC - self.conv1 = orig_resnet.conv1 - self.bn1 = orig_resnet.bn1 - self.relu1 = orig_resnet.relu1 - self.conv2 = orig_resnet.conv2 - self.bn2 = orig_resnet.bn2 - self.relu2 = orig_resnet.relu2 - self.conv3 = orig_resnet.conv3 - self.bn3 = orig_resnet.bn3 - self.relu3 = orig_resnet.relu3 - self.maxpool = orig_resnet.maxpool - self.layer1 = orig_resnet.layer1 - self.layer2 = orig_resnet.layer2 - self.layer3 = orig_resnet.layer3 - self.layer4 = orig_resnet.layer4 - - def forward(self, x, return_feature_maps=False): - conv_out = [] - - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.maxpool(x) - - x = self.layer1(x); conv_out.append(x); - x = self.layer2(x); conv_out.append(x); - x = self.layer3(x); conv_out.append(x); - x = self.layer4(x); conv_out.append(x); - - if return_feature_maps: - return conv_out - return [x] - -# Resnet Dilated -class ResnetDilated(nn.Module): - def __init__(self, orig_resnet, dilate_scale=8): - super().__init__() - from functools import partial - - if dilate_scale == 8: - orig_resnet.layer3.apply( - partial(self._nostride_dilate, dilate=2)) - orig_resnet.layer4.apply( - partial(self._nostride_dilate, dilate=4)) - elif dilate_scale == 16: - orig_resnet.layer4.apply( - partial(self._nostride_dilate, dilate=2)) - - # take pretrained resnet, except AvgPool and FC - self.conv1 = orig_resnet.conv1 - self.bn1 = orig_resnet.bn1 - self.relu1 = orig_resnet.relu1 - self.conv2 = orig_resnet.conv2 - self.bn2 = orig_resnet.bn2 - self.relu2 = orig_resnet.relu2 - self.conv3 = orig_resnet.conv3 - self.bn3 = orig_resnet.bn3 - self.relu3 = orig_resnet.relu3 - self.maxpool = orig_resnet.maxpool - self.layer1 = orig_resnet.layer1 - self.layer2 = orig_resnet.layer2 - self.layer3 = orig_resnet.layer3 - self.layer4 = orig_resnet.layer4 - - def _nostride_dilate(self, m, dilate): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - # the convolution with stride - if m.stride == (2, 2): - m.stride = (1, 1) - if m.kernel_size == (3, 3): - m.dilation = (dilate // 2, dilate // 2) - m.padding = (dilate // 2, dilate // 2) - # other convoluions - else: - if m.kernel_size == (3, 3): - m.dilation = (dilate, dilate) - m.padding = (dilate, dilate) - - def forward(self, x, return_feature_maps=False): - conv_out = [] - - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.maxpool(x) - - x = self.layer1(x) - conv_out.append(x) - x = self.layer2(x) - conv_out.append(x) - x = self.layer3(x) - conv_out.append(x) - x = self.layer4(x) - conv_out.append(x) - - if return_feature_maps: - return conv_out - return [x] - -class MobileNetV2Dilated(nn.Module): - def __init__(self, orig_net, dilate_scale=8): - super(MobileNetV2Dilated, self).__init__() - from functools import partial - - # take pretrained mobilenet features - self.features = orig_net.features[:-1] - - self.total_idx = len(self.features) - self.down_idx = [2, 4, 7, 14] - - if dilate_scale == 8: - for i in range(self.down_idx[-2], self.down_idx[-1]): - self.features[i].apply( - partial(self._nostride_dilate, dilate=2) - ) - for i in range(self.down_idx[-1], self.total_idx): - self.features[i].apply( - partial(self._nostride_dilate, dilate=4) - ) - elif dilate_scale == 16: - for i in range(self.down_idx[-1], self.total_idx): - self.features[i].apply( - partial(self._nostride_dilate, dilate=2) - ) - - def _nostride_dilate(self, m, dilate): - classname = 
m.__class__.__name__ - if classname.find('Conv') != -1: - # the convolution with stride - if m.stride == (2, 2): - m.stride = (1, 1) - if m.kernel_size == (3, 3): - m.dilation = (dilate//2, dilate//2) - m.padding = (dilate//2, dilate//2) - # other convoluions - else: - if m.kernel_size == (3, 3): - m.dilation = (dilate, dilate) - m.padding = (dilate, dilate) - - def forward(self, x, return_feature_maps=False): - if return_feature_maps: - conv_out = [] - for i in range(self.total_idx): - x = self.features[i](x) - if i in self.down_idx: - conv_out.append(x) - conv_out.append(x) - return conv_out - - else: - return [self.features(x)] - - -# last conv, deep supervision -class C1DeepSup(nn.Module): - def __init__(self, num_class=150, fc_dim=2048, use_softmax=False, drop_last_conv=False): - super(C1DeepSup, self).__init__() - self.use_softmax = use_softmax - self.drop_last_conv = drop_last_conv - - self.cbr = conv3x3_bn_relu(fc_dim, fc_dim // 4, 1) - self.cbr_deepsup = conv3x3_bn_relu(fc_dim // 2, fc_dim // 4, 1) - - # last conv - self.conv_last = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - self.conv_last_deepsup = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - x = self.cbr(conv5) - - if self.drop_last_conv: - return x - else: - x = self.conv_last(x) - - if self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - x = nn.functional.softmax(x, dim=1) - return x - - # deep sup - conv4 = conv_out[-2] - _ = self.cbr_deepsup(conv4) - _ = self.conv_last_deepsup(_) - - x = nn.functional.log_softmax(x, dim=1) - _ = nn.functional.log_softmax(_, dim=1) - - return (x, _) - - -# last conv -class C1(nn.Module): - def __init__(self, num_class=150, fc_dim=2048, use_softmax=False): - super(C1, self).__init__() - self.use_softmax = use_softmax - - self.cbr = conv3x3_bn_relu(fc_dim, fc_dim // 4, 1) - - # last conv - self.conv_last = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - x = self.cbr(conv5) - x = self.conv_last(x) - - if self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, mode='bilinear', align_corners=False) - x = nn.functional.softmax(x, dim=1) - else: - x = nn.functional.log_softmax(x, dim=1) - - return x - - -# pyramid pooling -class PPM(nn.Module): - def __init__(self, num_class=150, fc_dim=4096, - use_softmax=False, pool_scales=(1, 2, 3, 6)): - super(PPM, self).__init__() - self.use_softmax = use_softmax - - self.ppm = [] - for scale in pool_scales: - self.ppm.append(nn.Sequential( - nn.AdaptiveAvgPool2d(scale), - nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False), - BatchNorm2d(512), - nn.ReLU(inplace=True) - )) - self.ppm = nn.ModuleList(self.ppm) - - self.conv_last = nn.Sequential( - nn.Conv2d(fc_dim+len(pool_scales)*512, 512, - kernel_size=3, padding=1, bias=False), - BatchNorm2d(512), - nn.ReLU(inplace=True), - nn.Dropout2d(0.1), - nn.Conv2d(512, num_class, kernel_size=1) - ) - - def forward(self, conv_out, segSize=None): - conv5 = conv_out[-1] - - input_size = conv5.size() - ppm_out = [conv5] - for pool_scale in self.ppm: - ppm_out.append(nn.functional.interpolate( - pool_scale(conv5), - (input_size[2], input_size[3]), - mode='bilinear', align_corners=False)) - ppm_out = torch.cat(ppm_out, 1) - - x = self.conv_last(ppm_out) - - if self.use_softmax: # is True during inference - x = nn.functional.interpolate( - x, size=segSize, 
mode='bilinear', align_corners=False) - x = nn.functional.softmax(x, dim=1) - else: - x = nn.functional.log_softmax(x, dim=1) - return x diff --git a/spaces/jgurzoni/image_background_swapper/models/ade20k/segm_lib/nn/parallel/data_parallel.py b/spaces/jgurzoni/image_background_swapper/models/ade20k/segm_lib/nn/parallel/data_parallel.py deleted file mode 100644 index 376fc038919aa2a5bd696141e7bb6025d4981306..0000000000000000000000000000000000000000 --- a/spaces/jgurzoni/image_background_swapper/models/ade20k/segm_lib/nn/parallel/data_parallel.py +++ /dev/null @@ -1,112 +0,0 @@ -# -*- coding: utf8 -*- - -import torch.cuda as cuda -import torch.nn as nn -import torch -import collections -from torch.nn.parallel._functions import Gather - - -__all__ = ['UserScatteredDataParallel', 'user_scattered_collate', 'async_copy_to'] - - -def async_copy_to(obj, dev, main_stream=None): - if torch.is_tensor(obj): - v = obj.cuda(dev, non_blocking=True) - if main_stream is not None: - v.data.record_stream(main_stream) - return v - elif isinstance(obj, collections.Mapping): - return {k: async_copy_to(o, dev, main_stream) for k, o in obj.items()} - elif isinstance(obj, collections.Sequence): - return [async_copy_to(o, dev, main_stream) for o in obj] - else: - return obj - - -def dict_gather(outputs, target_device, dim=0): - """ - Gathers variables from different GPUs on a specified device - (-1 means the CPU), with dictionary support. - """ - def gather_map(outputs): - out = outputs[0] - if torch.is_tensor(out): - # MJY(20180330) HACK:: force nr_dims > 0 - if out.dim() == 0: - outputs = [o.unsqueeze(0) for o in outputs] - return Gather.apply(target_device, dim, *outputs) - elif out is None: - return None - elif isinstance(out, collections.Mapping): - return {k: gather_map([o[k] for o in outputs]) for k in out} - elif isinstance(out, collections.Sequence): - return type(out)(map(gather_map, zip(*outputs))) - return gather_map(outputs) - - -class DictGatherDataParallel(nn.DataParallel): - def gather(self, outputs, output_device): - return dict_gather(outputs, output_device, dim=self.dim) - - -class UserScatteredDataParallel(DictGatherDataParallel): - def scatter(self, inputs, kwargs, device_ids): - assert len(inputs) == 1 - inputs = inputs[0] - inputs = _async_copy_stream(inputs, device_ids) - inputs = [[i] for i in inputs] - assert len(kwargs) == 0 - kwargs = [{} for _ in range(len(inputs))] - - return inputs, kwargs - - -def user_scattered_collate(batch): - return batch - - -def _async_copy(inputs, device_ids): - nr_devs = len(device_ids) - assert type(inputs) in (tuple, list) - assert len(inputs) == nr_devs - - outputs = [] - for i, dev in zip(inputs, device_ids): - with cuda.device(dev): - outputs.append(async_copy_to(i, dev)) - - return tuple(outputs) - - -def _async_copy_stream(inputs, device_ids): - nr_devs = len(device_ids) - assert type(inputs) in (tuple, list) - assert len(inputs) == nr_devs - - outputs = [] - streams = [_get_stream(d) for d in device_ids] - for i, dev, stream in zip(inputs, device_ids, streams): - with cuda.device(dev): - main_stream = cuda.current_stream() - with cuda.stream(stream): - outputs.append(async_copy_to(i, dev, main_stream=main_stream)) - main_stream.wait_stream(stream) - - return outputs - - -"""Adapted from: torch/nn/parallel/_functions.py""" -# background streams used for copying -_streams = None - - -def _get_stream(device): - """Gets a background stream for copying between CPU and GPU""" - global _streams - if device == -1: - return None - if _streams is None: - 
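# lazily allocate one slot per visible CUDA device; the stream for a given device is created on first use below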
_streams = [None] * cuda.device_count() - if _streams[device] is None: _streams[device] = cuda.Stream(device) - return _streams[device] diff --git a/spaces/jiawei011/dreamgaussian/mesh_renderer.py b/spaces/jiawei011/dreamgaussian/mesh_renderer.py deleted file mode 100644 index 53b6c681fbf3a0856d63aea15bcfd05d967580d9..0000000000000000000000000000000000000000 --- a/spaces/jiawei011/dreamgaussian/mesh_renderer.py +++ /dev/null @@ -1,154 +0,0 @@ -import os -import math -import cv2 -import trimesh -import numpy as np - -import torch -import torch.nn as nn -import torch.nn.functional as F - -import nvdiffrast.torch as dr -from mesh import Mesh, safe_normalize - -def scale_img_nhwc(x, size, mag='bilinear', min='bilinear'): - assert (x.shape[1] >= size[0] and x.shape[2] >= size[1]) or (x.shape[1] < size[0] and x.shape[2] < size[1]), "Trying to magnify image in one dimension and minify in the other" - y = x.permute(0, 3, 1, 2) # NHWC -> NCHW - if x.shape[1] > size[0] and x.shape[2] > size[1]: # Minification, previous size was bigger - y = torch.nn.functional.interpolate(y, size, mode=min) - else: # Magnification - if mag == 'bilinear' or mag == 'bicubic': - y = torch.nn.functional.interpolate(y, size, mode=mag, align_corners=True) - else: - y = torch.nn.functional.interpolate(y, size, mode=mag) - return y.permute(0, 2, 3, 1).contiguous() # NCHW -> NHWC - -def scale_img_hwc(x, size, mag='bilinear', min='bilinear'): - return scale_img_nhwc(x[None, ...], size, mag, min)[0] - -def scale_img_nhw(x, size, mag='bilinear', min='bilinear'): - return scale_img_nhwc(x[..., None], size, mag, min)[..., 0] - -def scale_img_hw(x, size, mag='bilinear', min='bilinear'): - return scale_img_nhwc(x[None, ..., None], size, mag, min)[0, ..., 0] - -def trunc_rev_sigmoid(x, eps=1e-6): - x = x.clamp(eps, 1 - eps) - return torch.log(x / (1 - x)) - -def make_divisible(x, m=8): - return int(math.ceil(x / m) * m) - -class Renderer(nn.Module): - def __init__(self, opt): - - super().__init__() - - self.opt = opt - - self.mesh = Mesh.load(self.opt.mesh, resize=False) - - if not self.opt.force_cuda_rast and (not self.opt.gui or os.name == 'nt'): - self.glctx = dr.RasterizeGLContext() - else: - self.glctx = dr.RasterizeCudaContext() - - # extract trainable parameters - self.v_offsets = nn.Parameter(torch.zeros_like(self.mesh.v)) - self.raw_albedo = nn.Parameter(trunc_rev_sigmoid(self.mesh.albedo)) - - - def get_params(self): - - params = [ - {'params': self.raw_albedo, 'lr': self.opt.texture_lr}, - ] - - if self.opt.train_geo: - params.append({'params': self.v_offsets, 'lr': self.opt.geom_lr}) - - return params - - @torch.no_grad() - def export_mesh(self, save_path): - self.mesh.v = (self.mesh.v + self.v_offsets).detach() - self.mesh.albedo = torch.sigmoid(self.raw_albedo.detach()) - self.mesh.write(save_path) - - - def render(self, pose, proj, h0, w0, ssaa=1, bg_color=1, texture_filter='linear-mipmap-linear'): - - # do super-sampling - if ssaa != 1: - h = make_divisible(h0 * ssaa, 8) - w = make_divisible(w0 * ssaa, 8) - else: - h, w = h0, w0 - - results = {} - - # get v - if self.opt.train_geo: - v = self.mesh.v + self.v_offsets # [N, 3] - else: - v = self.mesh.v - - pose = torch.from_numpy(pose.astype(np.float32)).to(v.device) - proj = torch.from_numpy(proj.astype(np.float32)).to(v.device) - - # get v_clip and render rgb - v_cam = torch.matmul(F.pad(v, pad=(0, 1), mode='constant', value=1.0), torch.inverse(pose).T).float().unsqueeze(0) - v_clip = v_cam @ proj.T - - rast, rast_db = dr.rasterize(self.glctx, v_clip, self.mesh.f, 
(h, w)) - - alpha = (rast[0, ..., 3:] > 0).float() - depth, _ = dr.interpolate(-v_cam[..., [2]], rast, self.mesh.f) # [1, H, W, 1] - depth = depth.squeeze(0) # [H, W, 1] - - texc, texc_db = dr.interpolate(self.mesh.vt.unsqueeze(0).contiguous(), rast, self.mesh.ft, rast_db=rast_db, diff_attrs='all') - albedo = dr.texture(self.raw_albedo.unsqueeze(0), texc, uv_da=texc_db, filter_mode=texture_filter) # [1, H, W, 3] - albedo = torch.sigmoid(albedo) - # get vn and render normal - if self.opt.train_geo: - i0, i1, i2 = self.mesh.f[:, 0].long(), self.mesh.f[:, 1].long(), self.mesh.f[:, 2].long() - v0, v1, v2 = v[i0, :], v[i1, :], v[i2, :] - - face_normals = torch.cross(v1 - v0, v2 - v0) - face_normals = safe_normalize(face_normals) - - vn = torch.zeros_like(v) - vn.scatter_add_(0, i0[:, None].repeat(1,3), face_normals) - vn.scatter_add_(0, i1[:, None].repeat(1,3), face_normals) - vn.scatter_add_(0, i2[:, None].repeat(1,3), face_normals) - - vn = torch.where(torch.sum(vn * vn, -1, keepdim=True) > 1e-20, vn, torch.tensor([0.0, 0.0, 1.0], dtype=torch.float32, device=vn.device)) - else: - vn = self.mesh.vn - - normal, _ = dr.interpolate(vn.unsqueeze(0).contiguous(), rast, self.mesh.fn) - normal = safe_normalize(normal[0]) - - # rotated normal (where [0, 0, 1] always faces camera) - rot_normal = normal @ pose[:3, :3] - viewcos = rot_normal[..., [2]] - - # antialias - albedo = dr.antialias(albedo, rast, v_clip, self.mesh.f).squeeze(0) # [H, W, 3] - albedo = alpha * albedo + (1 - alpha) * bg_color - - # ssaa - if ssaa != 1: - albedo = scale_img_hwc(albedo, (h0, w0)) - alpha = scale_img_hwc(alpha, (h0, w0)) - depth = scale_img_hwc(depth, (h0, w0)) - normal = scale_img_hwc(normal, (h0, w0)) - viewcos = scale_img_hwc(viewcos, (h0, w0)) - - results['image'] = albedo.clamp(0, 1) - results['alpha'] = alpha - results['depth'] = depth - results['normal'] = (normal + 1) / 2 - results['viewcos'] = viewcos - - return results \ No newline at end of file diff --git a/spaces/jiejiejie0420/bingo/src/pages/api/sydney.ts b/spaces/jiejiejie0420/bingo/src/pages/api/sydney.ts deleted file mode 100644 index 0e7bbf23d77c2e1a6635185a060eeee58b8c8e66..0000000000000000000000000000000000000000 --- a/spaces/jiejiejie0420/bingo/src/pages/api/sydney.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { NextApiRequest, NextApiResponse } from 'next' -import { WebSocket, debug } from '@/lib/isomorphic' -import { BingWebBot } from '@/lib/bots/bing' -import { websocketUtils } from '@/lib/bots/bing/utils' -import { WatchDog, createHeaders } from '@/lib/utils' - - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const conversationContext = req.body - const headers = createHeaders(req.cookies) - debug(headers) - res.setHeader('Content-Type', 'text/stream; charset=UTF-8') - - const ws = new WebSocket('wss://sydney.bing.com/sydney/ChatHub', { - headers: { - ...headers, - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - pragma: 'no-cache', - } - }) - - const closeDog = new WatchDog() - const timeoutDog = new WatchDog() - ws.onmessage = (event) => { - timeoutDog.watch(() => { - ws.send(websocketUtils.packMessage({ type: 6 })) - }, 1500) - closeDog.watch(() => { - ws.close() - }, 10000) - res.write(event.data) - if (/\{"type":([367])\}/.test(String(event.data))) { - const type = parseInt(RegExp.$1, 10) - debug('connection type', type) - if (type === 3) { - ws.close() - } else { - 
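// type 3 closes the socket (handled above); type 6/7 control frames are echoed back to the server here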
ws.send(websocketUtils.packMessage({ type })) - } - } - } - - ws.onclose = () => { - timeoutDog.reset() - closeDog.reset() - debug('connection close') - res.end() - } - - await new Promise((resolve) => ws.onopen = resolve) - ws.send(websocketUtils.packMessage({ protocol: 'json', version: 1 })) - ws.send(websocketUtils.packMessage({ type: 6 })) - ws.send(websocketUtils.packMessage(BingWebBot.buildChatRequest(conversationContext!))) - req.socket.once('close', () => { - ws.close() - if (!res.closed) { - res.end() - } - }) -} diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/IO/test_PKCS8.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/IO/test_PKCS8.py deleted file mode 100644 index cf91d69cf4c69faedb623f11c62a09e7c61000f8..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/IO/test_PKCS8.py +++ /dev/null @@ -1,425 +0,0 @@ -# -# SelfTest/IO/test_PKCS8.py: Self-test for the PKCS8 module -# -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
-# =================================================================== - -"""Self-tests for Crypto.IO.PKCS8 module""" - -import unittest -from binascii import unhexlify - -from Crypto.Util.py3compat import * -from Crypto.IO import PKCS8 - -from Crypto.Util.asn1 import DerNull - -oid_key = '1.2.840.113549.1.1.1' - -# Original RSA key (in DER format) -# hexdump -v -e '32/1 "%02x" "\n"' key.der -clear_key=""" -308201ab020100025a00b94a7f7075ab9e79e8196f47be707781e80dd965cf16 -0c951a870b71783b6aaabbd550c0e65e5a3dfe15b8620009f6d7e5efec42a3f0 -6fe20faeebb0c356e79cdec6db4dd427e82d8ae4a5b90996227b8ba54ccfc4d2 -5c08050203010001025a00afa09c70d528299b7552fe766b5d20f9a221d66938 -c3b68371d48515359863ff96f0978d700e08cd6fd3d8a3f97066fc2e0d5f78eb -3a50b8e17ba297b24d1b8e9cdfd18d608668198d724ad15863ef0329195dee89 -3f039395022d0ebe0518df702a8b25954301ec60a97efdcec8eaa4f2e76ca7e8 -8dfbc3f7e0bb83f9a0e8dc47c0f8c746e9df6b022d0c9195de13f09b7be1fdd7 -1f56ae7d973e08bd9fd2c3dfd8936bb05be9cc67bd32d663c7f00d70932a0be3 -c24f022d0ac334eb6cabf1933633db007b763227b0d9971a9ea36aca8b669ec9 -4fcf16352f6b3dcae28e4bd6137db4ddd3022d0400a09f15ee7b351a2481cb03 -09920905c236d09c87afd3022f3afc2a19e3b746672b635238956ee7e6dd62d5 -022d0cd88ed14fcfbda5bbf0257f700147137bbab9c797af7df866704b889aa3 -7e2e93df3ff1a0fd3490111dcdbc4c -""" - -# Same key as above, wrapped in PKCS#8 but w/o password -# -# openssl pkcs8 -topk8 -inform DER -nocrypt -in key.der -outform DER -out keyp8.der -# hexdump -v -e '32/1 "%02x" "\n"' keyp8.der -wrapped_clear_key=""" -308201c5020100300d06092a864886f70d0101010500048201af308201ab0201 -00025a00b94a7f7075ab9e79e8196f47be707781e80dd965cf160c951a870b71 -783b6aaabbd550c0e65e5a3dfe15b8620009f6d7e5efec42a3f06fe20faeebb0 -c356e79cdec6db4dd427e82d8ae4a5b90996227b8ba54ccfc4d25c0805020301 -0001025a00afa09c70d528299b7552fe766b5d20f9a221d66938c3b68371d485 -15359863ff96f0978d700e08cd6fd3d8a3f97066fc2e0d5f78eb3a50b8e17ba2 -97b24d1b8e9cdfd18d608668198d724ad15863ef0329195dee893f039395022d -0ebe0518df702a8b25954301ec60a97efdcec8eaa4f2e76ca7e88dfbc3f7e0bb -83f9a0e8dc47c0f8c746e9df6b022d0c9195de13f09b7be1fdd71f56ae7d973e -08bd9fd2c3dfd8936bb05be9cc67bd32d663c7f00d70932a0be3c24f022d0ac3 -34eb6cabf1933633db007b763227b0d9971a9ea36aca8b669ec94fcf16352f6b -3dcae28e4bd6137db4ddd3022d0400a09f15ee7b351a2481cb0309920905c236 -d09c87afd3022f3afc2a19e3b746672b635238956ee7e6dd62d5022d0cd88ed1 -4fcfbda5bbf0257f700147137bbab9c797af7df866704b889aa37e2e93df3ff1 -a0fd3490111dcdbc4c -""" - -### -# -# The key above will now be encrypted with different algorithms. -# The password is always 'TestTest'. 
-# -# Each item in the wrapped_enc_keys list contains: -# * wrap algorithm -# * iteration count -# * Salt -# * IV -# * Expected result -### -wrapped_enc_keys = [] - -# -# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der -outform DER -out keyenc.der -v2 des3 -# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der -# -wrapped_enc_keys.append(( -'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC', -2048, -"47EA7227D8B22E2F", # IV -"E3F7A838AB911A4D", # Salt -""" -30820216304006092a864886f70d01050d3033301b06092a864886f70d01050c -300e0408e3f7a838ab911a4d02020800301406082a864886f70d0307040847ea -7227d8b22e2f048201d0ea388b374d2d0e4ceb7a5139f850fdff274884a6e6c0 -64326e09d00dbba9018834edb5a51a6ae3d1806e6e91eebf33788ce71fee0637 -a2ebf58859dd32afc644110c390274a6128b50c39b8d907823810ec471bada86 -6f5b75d8ea04ad310fad2e73621696db8e426cd511ee93ec1714a1a7db45e036 -4bf20d178d1f16bbb250b32c2d200093169d588de65f7d99aad9ddd0104b44f1 -326962e1520dfac3c2a800e8a14f678dff2b3d0bb23f69da635bf2a643ac934e -219a447d2f4460b67149e860e54f365da130763deefa649c72b0dcd48966a2d3 -4a477444782e3e66df5a582b07bbb19778a79bd355074ce331f4a82eb966b0c4 -52a09eab6116f2722064d314ae433b3d6e81d2436e93fdf446112663cde93b87 -9c8be44beb45f18e2c78fee9b016033f01ecda51b9b142091fa69f65ab784d2c -5ad8d34be6f7f1464adfc1e0ef3f7848f40d3bdea4412758f2fcb655c93d8f4d -f6fa48fc5aa4b75dd1c017ab79ac9d737233a6d668f5364ccf47786debd37334 -9c10c9e6efbe78430a61f71c89948aa32cdc3cc7338cf994147819ce7ab23450 -c8f7d9b94c3bb377d17a3fa204b601526317824b142ff6bc843fa7815ece89c0 -839573f234dac8d80cc571a045353d61db904a4398d8ef3df5ac -""" -)) - -# -# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der -outform DER -out keyenc.der -# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der -# -wrapped_enc_keys.append(( -'skip encryption', # pbeWithMD5AndDES-CBC, only decoding is supported --1, -"", -"", -""" -308201f1301b06092a864886f70d010503300e0408f9b990c89af1d41b020208 -00048201d0c6267fe8592903891933d559e71a7ca68b2e39150f19daca0f7921 -52f97e249d72f670d5140e9150433310ed7c7ee51927693fd39884cb9551cea5 -a7b746f7edf199f8787d4787a35dad930d7db057b2118851211b645ac8b90fa6 -b0e7d49ac8567cbd5fff226e87aa9129a0f52c45e9307752e8575c3b0ff756b7 -31fda6942d15ecb6b27ea19370ccc79773f47891e80d22b440d81259c4c28eac -e0ca839524116bcf52d8c566e49a95ddb0e5493437279a770a39fd333f3fca91 -55884fad0ba5aaf273121f893059d37dd417da7dcfd0d6fa7494968f13b2cc95 -65633f2c891340193e5ec00e4ee0b0e90b3b93da362a4906360845771ade1754 -9df79140be5993f3424c012598eadd3e7c7c0b4db2c72cf103d7943a5cf61420 -93370b9702386c3dd4eb0a47f34b579624a46a108b2d13921fa1b367495fe345 -6aa128aa70f8ca80ae13eb301e96c380724ce67c54380bbea2316c1faf4d058e -b4ca2e23442047606b9bc4b3bf65b432cb271bea4eb35dd3eb360d3be8612a87 -a50e96a2264490aeabdc07c6e78e5dbf4fe3388726d0e2a228346bf3c2907d68 -2a6276b22ae883fb30fa611f4e4193e7a08480fcd7db48308bacbd72bf4807aa -11fd394859f97d22982f7fe890b2e2a0f7e7ffb693 -""" -)) - -# -# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der -# -outform DER -out keyenc.der -v1 PBE-SHA1-RC2-64 -# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der -# -wrapped_enc_keys.append(( -'skip encryption', # pbeWithSHA1AndRC2-CBC, only decoding is supported --1, -"", -"", -""" -308201f1301b06092a864886f70d01050b300e04083ee943bdae185008020208 -00048201d0e4614d9371d3ff10ceabc2f6a7a13a0f449f9a714144e46518ea55 -e3e6f0cde24031d01ef1f37ec40081449ef01914faf45983dde0d2bc496712de -8dd15a5527dff4721d9016c13f34fb93e3ce68577e30146266d71b539f854e56 -753a192cf126ed4812734d86f81884374f1100772f78d0646e9946407637c565 
-d070acab413c55952f7237437f2e48cae7fa0ff8d370de2bf446dd08049a3663 -d9c813ac197468c02e2b687e7ca994cf7f03f01b6eca87dbfed94502c2094157 -ea39f73fe4e591df1a68b04d19d9adab90bb9898467c1464ad20bf2b8fb9a5ff -d3ec91847d1c67fd768a4b9cfb46572eccc83806601372b6fad0243f58f623b7 -1c5809dea0feb8278fe27e5560eed8448dc93f5612f546e5dd7c5f6404365eb2 -5bf3396814367ae8b15c5c432b57eaed1f882c05c7f6517ee9e42b87b7b8d071 -9d6125d1b52f7b2cca1f6bd5f584334bf90bce1a7d938274cafe27b68e629698 -b16e27ae528db28593af9adcfccbebb3b9e1f2af5cd5531b51968389caa6c091 -e7de1f1b96f0d258e54e540d961a7c0ef51fda45d6da5fddd33e9bbfd3a5f8d7 -d7ab2e971de495cddbc86d38444fee9f0ac097b00adaf7802dabe0cff5b43b45 -4f26b7b547016f89be52676866189911c53e2f2477""" -)) - -# -# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der -# -outform DER -out keyenc.der -v1 PBE-MD5-RC2-64 -# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der -# -wrapped_enc_keys.append(( -'skip encryption', # pbeWithMD5AndRC2-CBC, only decoding is supported --1, -"", -"", -""" -308201f1301b06092a864886f70d010506300e0408f5cd2fee56d9b4b8020208 -00048201d086454942d6166a19d6b108465bd111e7080911f573d54b1369c676 -df28600e84936bfec04f91023ff16499e2e07178c340904f12ffa6886ab66228 -32bf43c2bff5a0ed14e765918cf5fc543ad49566246f7eb3fc044fa5a9c25f40 -8fc8c8296b91658d3bb1067c0aba008c4fefd9e2bcdbbbd63fdc8085482bccf4 -f150cec9a084259ad441a017e5d81a1034ef2484696a7a50863836d0eeda45cd -8cee8ecabfed703f8d9d4bbdf3a767d32a0ccdc38550ee2928d7fe3fa27eda5b -5c7899e75ad55d076d2c2d3c37d6da3d95236081f9671dab9a99afdb1cbc890e -332d1a91105d9a8ce08b6027aa07367bd1daec3059cb51f5d896124da16971e4 -0ca4bcadb06c854bdf39f42dd24174011414e51626d198775eff3449a982df7b -ace874e77e045eb6d7c3faef0750792b29a068a6291f7275df1123fac5789c51 -27ace42836d81633faf9daf38f6787fff0394ea484bbcd465b57d4dbee3cf8df -b77d1db287b3a6264c466805be5a4fe85cfbca180699859280f2dd8e2c2c10b5 -7a7d2ac670c6039d41952fbb0e4f99b560ebe1d020e1b96d02403283819c00cc -529c51f0b0101555e4c58002ba3c6e3c12e3fde1aec94382792e96d9666a2b33 -3dc397b22ecab67ee38a552fec29a1d4ff8719c748""" -)) - -# -# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der -# -outform DER -out keyenc.der -v1 PBE-SHA1-DES -# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der -# -wrapped_enc_keys.append(( -'skip encryption', # pbeWithSHA1AndDES-CBC, only decoding is supported --1, -"", -"", -""" -308201f1301b06092a864886f70d01050a300e04089bacc9cf1e8f734e020208 -00048201d03e502f3ceafe8fd19ab2939576bfdded26d719b2441db1459688f5 -9673218b41ec1f739edf1e460bd927bc28470c87b2d4fc8ea02ba17b47a63c49 -c5c1bee40529dadfd3ef8b4472c730bc136678c78abfb34670ec9d7dcd17ee3f -892f93f2629e6e0f4b24ecb9f954069bf722f466dece3913bb6abbd2c471d9a5 -c5eea89b14aaccda43d30b0dd0f6eb6e9850d9747aa8aa8414c383ad01c374ee -26d3552abec9ba22669cc9622ccf2921e3d0c8ecd1a70e861956de0bec6104b5 -b649ac994970c83f8a9e84b14a7dff7843d4ca3dd4af87cea43b5657e15ae0b5 -a940ce5047f006ab3596506600724764f23757205fe374fee04911336d655acc -03e159ec27789191d1517c4f3f9122f5242d44d25eab8f0658cafb928566ca0e -8f6589aa0c0ab13ca7a618008ae3eafd4671ee8fe0b562e70b3623b0e2a16eee -97fd388087d2e03530c9fe7db6e52eccc7c48fd701ede35e08922861a9508d12 -bc8bbf24f0c6bee6e63dbcb489b603d4c4a78ce45bf2eab1d5d10456c42a65a8 -3a606f4e4b9b46eb13b57f2624b651859d3d2d5192b45dbd5a2ead14ff20ca76 -48f321309aa56d8c0c4a192b580821cc6c70c75e6f19d1c5414da898ec4dd39d -b0eb93d6ba387a80702dfd2db610757ba340f63230 -""" -)) - -# -# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der -# -outform DER -out keyenc.der -v2 aes128 -# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der -# 
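# (decryption of these PBKDF2 vectors is exercised below in test3 via PKCS8.unwrap(..., b("TestTest"));
#  wrapping is re-checked in test4 with a deterministic Rng that replays the recorded IV and Salt bytes)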
-wrapped_enc_keys.append(( -'PBKDF2WithHMAC-SHA1AndAES128-CBC', -2048, -"4F66EE5D3BCD531FE6EBF4B4E73016B8", # IV -"479F25156176C53A", # Salt -""" -3082021f304906092a864886f70d01050d303c301b06092a864886f70d01050c -300e0408479f25156176c53a02020800301d060960864801650304010204104f -66ee5d3bcd531fe6ebf4b4e73016b8048201d0e33cfa560423f589d097d21533 -3b880a5ebac5b2ac58b4e73b0d787aee7764f034fe34ca1d1bd845c0a7c3316f -afbfb2129e03dcaf5a5031394206492828dacef1e04639bee5935e0f46114202 -10bc6c37182f4889be11c5d0486c398f4be952e5740f65de9d8edeb275e2b406 -e19bc29ad5ebb97fa536344fc3d84c7e755696f12b810898de4e6f069b8a81c8 -0aab0d45d7d062303aaa4a10c2ce84fdb5a03114039cfe138e38bb15b2ced717 -93549cdad85e730b14d9e2198b663dfdc8d04a4349eb3de59b076ad40b116d4a -25ed917c576bc7c883c95ef0f1180e28fc9981bea069594c309f1aa1b253ceab -a2f0313bb1372bcb51a745056be93d77a1f235a762a45e8856512d436b2ca0f7 -dd60fbed394ba28978d2a2b984b028529d0a58d93aba46c6bbd4ac1e4013cbaa -63b00988bc5f11ccc40141c346762d2b28f64435d4be98ec17c1884985e3807e -e550db606600993efccf6de0dfc2d2d70b5336a3b018fa415d6bdd59f5777118 -16806b7bc17c4c7e20ad7176ebfa5a1aa3f6bc10f04b77afd443944642ac9cca -d740e082b4a3bbb8bafdd34a0b3c5f2f3c2aceccccdccd092b78994b845bfa61 -706c3b9df5165ed1dbcbf1244fe41fc9bf993f52f7658e2f87e1baaeacb0f562 -9d905c -""" -)) - -# -# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der -# -outform DER -out keyenc.der -v2 aes192 -# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der -# -wrapped_enc_keys.append(( -'PBKDF2WithHMAC-SHA1AndAES192-CBC', -2048, -"5CFC2A4FF7B63201A4A8A5B021148186", # IV -"D718541C264944CE", # Salt -""" -3082021f304906092a864886f70d01050d303c301b06092a864886f70d01050c -300e0408d718541c264944ce02020800301d060960864801650304011604105c -fc2a4ff7b63201a4a8a5b021148186048201d08e74aaa21b8bcfb15b9790fe95 -b0e09ddb0f189b6fb1682fdb9f122b804650ddec3c67a1df093a828b3e5fbcc6 -286abbcc5354c482fd796d972e919ca8a5eba1eaa2293af1d648013ddad72106 -75622264dfba55dafdda39e338f058f1bdb9846041ffff803797d3fdf3693135 -8a192729ea8346a7e5e58e925a2e2e4af0818581859e8215d87370eb4194a5ff -bae900857d4c591dbc651a241865a817eaede9987c9f9ae4f95c0bf930eea88c -4d7596e535ffb7ca369988aba75027a96b9d0bc9c8b0b75f359067fd145a378b -02aaa15e9db7a23176224da48a83249005460cc6e429168657f2efa8b1af7537 -d7d7042f2d683e8271b21d591090963eeb57aea6172f88da139e1614d6a7d1a2 -1002d5a7a93d6d21156e2b4777f6fc069287a85a1538c46b7722ccde591ab55c -630e1ceeb1ac42d1b41f3f654e9da86b5efced43775ea68b2594e50e4005e052 -0fe753c0898120c2c07265367ff157f6538a1e4080d6f9d1ca9eb51939c9574e -f2e4e1e87c1434affd5808563cddd376776dbbf790c6a40028f311a8b58dafa2 -0970ed34acd6e3e89d063987893b2b9570ddb8cc032b05a723bba9444933ebf3 -c624204be72f4190e0245197d0cb772bec933fd8442445f9a28bd042d5a3a1e9 -9a8a07 -""" -)) - -# -# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der -# -outform DER -out keyenc.der -v2 aes192 -# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der -# -wrapped_enc_keys.append(( -'PBKDF2WithHMAC-SHA1AndAES256-CBC', -2048, -"323351F94462AC563E053A056252C2C4", # IV -"02A6CD0D12E727B5", # Salt -""" -3082021f304906092a864886f70d01050d303c301b06092a864886f70d01050c -300e040802a6cd0d12e727b502020800301d060960864801650304012a041032 -3351f94462ac563e053a056252c2c4048201d07f4ef1c7be21aae738a20c5632 -b8bdbbb9083b6e7f68822267b1f481fd27fdafd61a90660de6e4058790e4c912 -bf3f319a7c37e6eb3d956daaa143865020d554bf6215e8d7492359aaeef45d6e -d85a686ed26c0bf7c18d071d827a86f0b73e1db0c0e7f3d42201544093302a90 -551ad530692468c47ac15c69500b8ca67d4a17b64d15cecc035ae50b768a36cf 
-07c395afa091e9e6f86f665455fbdc1b21ad79c0908b73da5de75a9b43508d5d -44dc97a870cd3cd9f01ca24452e9b11c1b4982946702cfcbfda5b2fcc0203fb5 -0b52a115760bd635c94d4c95ac2c640ee9a04ffaf6ccff5a8d953dd5d88ca478 -c377811c521f2191639c643d657a9e364af88bb7c14a356c2b0b4870a23c2f54 -d41f8157afff731471dccc6058b15e1151bcf84b39b5e622a3a1d65859c912a5 -591b85e034a1f6af664f030a6bfc8c3d20c70f32b54bcf4da9c2da83cef49cf8 -e9a74f0e5d358fe50b88acdce6a9db9a7ad61536212fc5f877ebfc7957b8bda4 -b1582a0f10d515a20ee06cf768db9c977aa6fbdca7540d611ff953012d009dac -e8abd059f8e8ffea637c9c7721f817aaf0bb23403e26a0ef0ff0e2037da67d41 -af728481f53443551a9bff4cea023164e9622b5441a309e1f4bff98e5bf76677 -8d7cd9 -""" -)) - -def txt2bin(inputs): - s = b('').join([b(x) for x in inputs if not (x in '\n\r\t ')]) - return unhexlify(s) - -class Rng: - def __init__(self, output): - self.output=output - self.idx=0 - def __call__(self, n): - output = self.output[self.idx:self.idx+n] - self.idx += n - return output - -class PKCS8_Decrypt(unittest.TestCase): - - def setUp(self): - self.oid_key = oid_key - self.clear_key = txt2bin(clear_key) - self.wrapped_clear_key = txt2bin(wrapped_clear_key) - self.wrapped_enc_keys = [] - for t in wrapped_enc_keys: - self.wrapped_enc_keys.append(( - t[0], - t[1], - txt2bin(t[2]), - txt2bin(t[3]), - txt2bin(t[4]) - )) - - ### NO ENCRYTION - - def test1(self): - """Verify unwrapping w/o encryption""" - res1, res2, res3 = PKCS8.unwrap(self.wrapped_clear_key) - self.assertEqual(res1, self.oid_key) - self.assertEqual(res2, self.clear_key) - - def test2(self): - """Verify wrapping w/o encryption""" - wrapped = PKCS8.wrap(self.clear_key, self.oid_key) - res1, res2, res3 = PKCS8.unwrap(wrapped) - self.assertEqual(res1, self.oid_key) - self.assertEqual(res2, self.clear_key) - - ## ENCRYPTION - - def test3(self): - """Verify unwrapping with encryption""" - - for t in self.wrapped_enc_keys: - res1, res2, res3 = PKCS8.unwrap(t[4], b("TestTest")) - self.assertEqual(res1, self.oid_key) - self.assertEqual(res2, self.clear_key) - - def test4(self): - """Verify wrapping with encryption""" - - for t in self.wrapped_enc_keys: - if t[0] == 'skip encryption': - continue - rng = Rng(t[2]+t[3]) - params = { 'iteration_count':t[1] } - wrapped = PKCS8.wrap( - self.clear_key, - self.oid_key, - b("TestTest"), - protection=t[0], - prot_params=params, - key_params=DerNull(), - randfunc=rng) - self.assertEqual(wrapped, t[4]) - -def get_tests(config={}): - from Crypto.SelfTest.st_common import list_test_cases - listTests = [] - listTests += list_test_cases(PKCS8_Decrypt) - return listTests - -if __name__ == '__main__': - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') - diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GimpGradientFile.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GimpGradientFile.py deleted file mode 100644 index 8e801be0b8a3c373e3cbd274a10f0da57edb5e70..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GimpGradientFile.py +++ /dev/null @@ -1,137 +0,0 @@ -# -# Python Imaging Library -# $Id$ -# -# stuff to read (and render) GIMP gradient files -# -# History: -# 97-08-23 fl Created -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. 
-# - -""" -Stuff to translate curve segments to palette values (derived from -the corresponding code in GIMP, written by Federico Mena Quintero. -See the GIMP distribution for more information.) -""" - - -from math import log, pi, sin, sqrt - -from ._binary import o8 - -EPSILON = 1e-10 -"""""" # Enable auto-doc for data member - - -def linear(middle, pos): - if pos <= middle: - if middle < EPSILON: - return 0.0 - else: - return 0.5 * pos / middle - else: - pos = pos - middle - middle = 1.0 - middle - if middle < EPSILON: - return 1.0 - else: - return 0.5 + 0.5 * pos / middle - - -def curved(middle, pos): - return pos ** (log(0.5) / log(max(middle, EPSILON))) - - -def sine(middle, pos): - return (sin((-pi / 2.0) + pi * linear(middle, pos)) + 1.0) / 2.0 - - -def sphere_increasing(middle, pos): - return sqrt(1.0 - (linear(middle, pos) - 1.0) ** 2) - - -def sphere_decreasing(middle, pos): - return 1.0 - sqrt(1.0 - linear(middle, pos) ** 2) - - -SEGMENTS = [linear, curved, sine, sphere_increasing, sphere_decreasing] -"""""" # Enable auto-doc for data member - - -class GradientFile: - gradient = None - - def getpalette(self, entries=256): - palette = [] - - ix = 0 - x0, x1, xm, rgb0, rgb1, segment = self.gradient[ix] - - for i in range(entries): - x = i / (entries - 1) - - while x1 < x: - ix += 1 - x0, x1, xm, rgb0, rgb1, segment = self.gradient[ix] - - w = x1 - x0 - - if w < EPSILON: - scale = segment(0.5, 0.5) - else: - scale = segment((xm - x0) / w, (x - x0) / w) - - # expand to RGBA - r = o8(int(255 * ((rgb1[0] - rgb0[0]) * scale + rgb0[0]) + 0.5)) - g = o8(int(255 * ((rgb1[1] - rgb0[1]) * scale + rgb0[1]) + 0.5)) - b = o8(int(255 * ((rgb1[2] - rgb0[2]) * scale + rgb0[2]) + 0.5)) - a = o8(int(255 * ((rgb1[3] - rgb0[3]) * scale + rgb0[3]) + 0.5)) - - # add to palette - palette.append(r + g + b + a) - - return b"".join(palette), "RGBA" - - -class GimpGradientFile(GradientFile): - """File handler for GIMP's gradient format.""" - - def __init__(self, fp): - if fp.readline()[:13] != b"GIMP Gradient": - msg = "not a GIMP gradient file" - raise SyntaxError(msg) - - line = fp.readline() - - # GIMP 1.2 gradient files don't contain a name, but GIMP 1.3 files do - if line.startswith(b"Name: "): - line = fp.readline().strip() - - count = int(line) - - gradient = [] - - for i in range(count): - s = fp.readline().split() - w = [float(x) for x in s[:11]] - - x0, x1 = w[0], w[2] - xm = w[1] - rgb0 = w[3:7] - rgb1 = w[7:11] - - segment = SEGMENTS[int(s[11])] - cspace = int(s[12]) - - if cspace != 0: - msg = "cannot handle HSV colour space" - raise OSError(msg) - - gradient.append((x0, x1, xm, rgb0, rgb1, segment)) - - self.gradient = gradient diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/Jpeg2KImagePlugin.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/Jpeg2KImagePlugin.py deleted file mode 100644 index 9309768bacffcf071dcc3db764285db911d38323..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/Jpeg2KImagePlugin.py +++ /dev/null @@ -1,399 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# JPEG2000 file handling -# -# History: -# 2014-03-12 ajh Created -# 2021-06-30 rogermb Extract dpi information from the 'resc' header box -# -# Copyright (c) 2014 Coriolis Systems Limited -# Copyright (c) 2014 Alastair Houghton -# -# See the README file for information on usage and redistribution. -# -import io -import os -import struct - -from . 
import Image, ImageFile, _binary - - -class BoxReader: - """ - A small helper class to read fields stored in JPEG2000 header boxes - and to easily step into and read sub-boxes. - """ - - def __init__(self, fp, length=-1): - self.fp = fp - self.has_length = length >= 0 - self.length = length - self.remaining_in_box = -1 - - def _can_read(self, num_bytes): - if self.has_length and self.fp.tell() + num_bytes > self.length: - # Outside box: ensure we don't read past the known file length - return False - if self.remaining_in_box >= 0: - # Inside box contents: ensure read does not go past box boundaries - return num_bytes <= self.remaining_in_box - else: - return True # No length known, just read - - def _read_bytes(self, num_bytes): - if not self._can_read(num_bytes): - msg = "Not enough data in header" - raise SyntaxError(msg) - - data = self.fp.read(num_bytes) - if len(data) < num_bytes: - msg = f"Expected to read {num_bytes} bytes but only got {len(data)}." - raise OSError(msg) - - if self.remaining_in_box > 0: - self.remaining_in_box -= num_bytes - return data - - def read_fields(self, field_format): - size = struct.calcsize(field_format) - data = self._read_bytes(size) - return struct.unpack(field_format, data) - - def read_boxes(self): - size = self.remaining_in_box - data = self._read_bytes(size) - return BoxReader(io.BytesIO(data), size) - - def has_next_box(self): - if self.has_length: - return self.fp.tell() + self.remaining_in_box < self.length - else: - return True - - def next_box_type(self): - # Skip the rest of the box if it has not been read - if self.remaining_in_box > 0: - self.fp.seek(self.remaining_in_box, os.SEEK_CUR) - self.remaining_in_box = -1 - - # Read the length and type of the next box - lbox, tbox = self.read_fields(">I4s") - if lbox == 1: - lbox = self.read_fields(">Q")[0] - hlen = 16 - else: - hlen = 8 - - if lbox < hlen or not self._can_read(lbox - hlen): - msg = "Invalid header length" - raise SyntaxError(msg) - - self.remaining_in_box = lbox - hlen - return tbox - - -def _parse_codestream(fp): - """Parse the JPEG 2000 codestream to extract the size and component - count from the SIZ marker segment, returning a PIL (size, mode) tuple.""" - - hdr = fp.read(2) - lsiz = _binary.i16be(hdr) - siz = hdr + fp.read(lsiz - 2) - lsiz, rsiz, xsiz, ysiz, xosiz, yosiz, _, _, _, _, csiz = struct.unpack_from( - ">HHIIIIIIIIH", siz - ) - ssiz = [None] * csiz - xrsiz = [None] * csiz - yrsiz = [None] * csiz - for i in range(csiz): - ssiz[i], xrsiz[i], yrsiz[i] = struct.unpack_from(">BBB", siz, 36 + 3 * i) - - size = (xsiz - xosiz, ysiz - yosiz) - if csiz == 1: - if (yrsiz[0] & 0x7F) > 8: - mode = "I;16" - else: - mode = "L" - elif csiz == 2: - mode = "LA" - elif csiz == 3: - mode = "RGB" - elif csiz == 4: - mode = "RGBA" - else: - mode = None - - return size, mode - - -def _res_to_dpi(num, denom, exp): - """Convert JPEG2000's (numerator, denominator, exponent-base-10) resolution, - calculated as (num / denom) * 10^exp and stored in dots per meter, - to floating-point dots per inch.""" - if denom != 0: - return (254 * num * (10**exp)) / (10000 * denom) - - -def _parse_jp2_header(fp): - """Parse the JP2 header box to extract size, component count, - color space information, and optionally DPI information, - returning a (size, mode, mimetype, dpi) tuple.""" - - # Find the JP2 header box - reader = BoxReader(fp) - header = None - mimetype = None - while reader.has_next_box(): - tbox = reader.next_box_type() - - if tbox == b"jp2h": - header = reader.read_boxes() - break - elif tbox 
== b"ftyp": - if reader.read_fields(">4s")[0] == b"jpx ": - mimetype = "image/jpx" - - size = None - mode = None - bpc = None - nc = None - dpi = None # 2-tuple of DPI info, or None - - while header.has_next_box(): - tbox = header.next_box_type() - - if tbox == b"ihdr": - height, width, nc, bpc = header.read_fields(">IIHB") - size = (width, height) - if nc == 1 and (bpc & 0x7F) > 8: - mode = "I;16" - elif nc == 1: - mode = "L" - elif nc == 2: - mode = "LA" - elif nc == 3: - mode = "RGB" - elif nc == 4: - mode = "RGBA" - elif tbox == b"res ": - res = header.read_boxes() - while res.has_next_box(): - tres = res.next_box_type() - if tres == b"resc": - vrcn, vrcd, hrcn, hrcd, vrce, hrce = res.read_fields(">HHHHBB") - hres = _res_to_dpi(hrcn, hrcd, hrce) - vres = _res_to_dpi(vrcn, vrcd, vrce) - if hres is not None and vres is not None: - dpi = (hres, vres) - break - - if size is None or mode is None: - msg = "Malformed JP2 header" - raise SyntaxError(msg) - - return size, mode, mimetype, dpi - - -## -# Image plugin for JPEG2000 images. - - -class Jpeg2KImageFile(ImageFile.ImageFile): - format = "JPEG2000" - format_description = "JPEG 2000 (ISO 15444)" - - def _open(self): - sig = self.fp.read(4) - if sig == b"\xff\x4f\xff\x51": - self.codec = "j2k" - self._size, self.mode = _parse_codestream(self.fp) - else: - sig = sig + self.fp.read(8) - - if sig == b"\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a": - self.codec = "jp2" - header = _parse_jp2_header(self.fp) - self._size, self.mode, self.custom_mimetype, dpi = header - if dpi is not None: - self.info["dpi"] = dpi - if self.fp.read(12).endswith(b"jp2c\xff\x4f\xff\x51"): - self._parse_comment() - else: - msg = "not a JPEG 2000 file" - raise SyntaxError(msg) - - if self.size is None or self.mode is None: - msg = "unable to determine size/mode" - raise SyntaxError(msg) - - self._reduce = 0 - self.layers = 0 - - fd = -1 - length = -1 - - try: - fd = self.fp.fileno() - length = os.fstat(fd).st_size - except Exception: - fd = -1 - try: - pos = self.fp.tell() - self.fp.seek(0, io.SEEK_END) - length = self.fp.tell() - self.fp.seek(pos) - except Exception: - length = -1 - - self.tile = [ - ( - "jpeg2k", - (0, 0) + self.size, - 0, - (self.codec, self._reduce, self.layers, fd, length), - ) - ] - - def _parse_comment(self): - hdr = self.fp.read(2) - length = _binary.i16be(hdr) - self.fp.seek(length - 2, os.SEEK_CUR) - - while True: - marker = self.fp.read(2) - if not marker: - break - typ = marker[1] - if typ in (0x90, 0xD9): - # Start of tile or end of codestream - break - hdr = self.fp.read(2) - length = _binary.i16be(hdr) - if typ == 0x64: - # Comment - self.info["comment"] = self.fp.read(length - 2)[2:] - break - else: - self.fp.seek(length - 2, os.SEEK_CUR) - - @property - def reduce(self): - # https://github.com/python-pillow/Pillow/issues/4343 found that the - # new Image 'reduce' method was shadowed by this plugin's 'reduce' - # property. 
This attempts to allow for both scenarios - return self._reduce or super().reduce - - @reduce.setter - def reduce(self, value): - self._reduce = value - - def load(self): - if self.tile and self._reduce: - power = 1 << self._reduce - adjust = power >> 1 - self._size = ( - int((self.size[0] + adjust) / power), - int((self.size[1] + adjust) / power), - ) - - # Update the reduce and layers settings - t = self.tile[0] - t3 = (t[3][0], self._reduce, self.layers, t[3][3], t[3][4]) - self.tile = [(t[0], (0, 0) + self.size, t[2], t3)] - - return ImageFile.ImageFile.load(self) - - -def _accept(prefix): - return ( - prefix[:4] == b"\xff\x4f\xff\x51" - or prefix[:12] == b"\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a" - ) - - -# ------------------------------------------------------------ -# Save support - - -def _save(im, fp, filename): - # Get the keyword arguments - info = im.encoderinfo - - if filename.endswith(".j2k") or info.get("no_jp2", False): - kind = "j2k" - else: - kind = "jp2" - - offset = info.get("offset", None) - tile_offset = info.get("tile_offset", None) - tile_size = info.get("tile_size", None) - quality_mode = info.get("quality_mode", "rates") - quality_layers = info.get("quality_layers", None) - if quality_layers is not None and not ( - isinstance(quality_layers, (list, tuple)) - and all( - [ - isinstance(quality_layer, (int, float)) - for quality_layer in quality_layers - ] - ) - ): - msg = "quality_layers must be a sequence of numbers" - raise ValueError(msg) - - num_resolutions = info.get("num_resolutions", 0) - cblk_size = info.get("codeblock_size", None) - precinct_size = info.get("precinct_size", None) - irreversible = info.get("irreversible", False) - progression = info.get("progression", "LRCP") - cinema_mode = info.get("cinema_mode", "no") - mct = info.get("mct", 0) - signed = info.get("signed", False) - comment = info.get("comment") - if isinstance(comment, str): - comment = comment.encode() - plt = info.get("plt", False) - - fd = -1 - if hasattr(fp, "fileno"): - try: - fd = fp.fileno() - except Exception: - fd = -1 - - im.encoderconfig = ( - offset, - tile_offset, - tile_size, - quality_mode, - quality_layers, - num_resolutions, - cblk_size, - precinct_size, - irreversible, - progression, - cinema_mode, - mct, - signed, - fd, - comment, - plt, - ) - - ImageFile._save(im, fp, [("jpeg2k", (0, 0) + im.size, 0, kind)]) - - -# ------------------------------------------------------------ -# Registry stuff - - -Image.register_open(Jpeg2KImageFile.format, Jpeg2KImageFile, _accept) -Image.register_save(Jpeg2KImageFile.format, _save) - -Image.register_extensions( - Jpeg2KImageFile.format, [".jp2", ".j2k", ".jpc", ".jpf", ".jpx", ".j2c"] -) - -Image.register_mime(Jpeg2KImageFile.format, "image/jp2") diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/encodings/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/encodings/__init__.py deleted file mode 100644 index 156cb232a7aa80eee1526c7598f72043de10473f..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/encodings/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""Empty __init__.py file to signal Python this directory is a package.""" diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/query/tree/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/query/tree/__init__.py deleted file mode 100644 index 
f269b72b009c4da94d70b83a9b6b9f03af0345da..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/query/tree/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -"""Query classes for tree indices.""" - -from gpt_index.indices.query.tree.embedding_query import GPTTreeIndexEmbeddingQuery -from gpt_index.indices.query.tree.leaf_query import GPTTreeIndexLeafQuery -from gpt_index.indices.query.tree.retrieve_query import GPTTreeIndexRetQuery - -__all__ = [ - "GPTTreeIndexLeafQuery", - "GPTTreeIndexRetQuery", - "GPTTreeIndexEmbeddingQuery", -] diff --git a/spaces/justYu2001/furniture-detection/utils/plots.py b/spaces/justYu2001/furniture-detection/utils/plots.py deleted file mode 100644 index fdd8d0e853deb228badeeed52fbbe5fb8eb10632..0000000000000000000000000000000000000000 --- a/spaces/justYu2001/furniture-detection/utils/plots.py +++ /dev/null @@ -1,489 +0,0 @@ -# Plotting utils - -import glob -import math -import os -import random -from copy import copy -from pathlib import Path - -import cv2 -import matplotlib -import matplotlib.pyplot as plt -import numpy as np -import pandas as pd -import seaborn as sns -import torch -import yaml -from PIL import Image, ImageDraw, ImageFont -from scipy.signal import butter, filtfilt - -from utils.general import xywh2xyxy, xyxy2xywh -from utils.metrics import fitness - -# Settings -matplotlib.rc('font', **{'size': 11}) -matplotlib.use('Agg') # for writing to files only - - -def color_list(): - # Return first 10 plt colors as (r,g,b) https://stackoverflow.com/questions/51350872/python-from-color-name-to-rgb - def hex2rgb(h): - return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4)) - - return [hex2rgb(h) for h in matplotlib.colors.TABLEAU_COLORS.values()] # or BASE_ (8), CSS4_ (148), XKCD_ (949) - - -def hist2d(x, y, n=100): - # 2d histogram used in labels.png and evolve.png - xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n) - hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges)) - xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1) - yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1) - return np.log(hist[xidx, yidx]) - - -def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5): - # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy - def butter_lowpass(cutoff, fs, order): - nyq = 0.5 * fs - normal_cutoff = cutoff / nyq - return butter(order, normal_cutoff, btype='low', analog=False) - - b, a = butter_lowpass(cutoff, fs, order=order) - return filtfilt(b, a, data) # forward-backward filter - - -def plot_one_box(x, img, color=None, label=None, line_thickness=3): - # Plots one bounding box on image img - tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line/font thickness - color = color or [random.randint(0, 255) for _ in range(3)] - c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3])) - cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA) - if label: - tf = max(tl - 1, 1) # font thickness - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3 - cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA) # filled - cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA) - - -def plot_one_box_PIL(box, img, color=None, label=None, line_thickness=None): - img = Image.fromarray(img) - draw = ImageDraw.Draw(img) - line_thickness = 
line_thickness or max(int(min(img.size) / 200), 2) - draw.rectangle(box, width=line_thickness, outline=tuple(color)) # plot - if label: - fontsize = max(round(max(img.size) / 40), 12) - font = ImageFont.truetype("Arial.ttf", fontsize) - txt_width, txt_height = font.getsize(label) - draw.rectangle([box[0], box[1] - txt_height + 4, box[0] + txt_width, box[1]], fill=tuple(color)) - draw.text((box[0], box[1] - txt_height + 1), label, fill=(255, 255, 255), font=font) - return np.asarray(img) - - -def plot_wh_methods(): # from utils.plots import *; plot_wh_methods() - # Compares the two methods for width-height anchor multiplication - # https://github.com/ultralytics/yolov3/issues/168 - x = np.arange(-4.0, 4.0, .1) - ya = np.exp(x) - yb = torch.sigmoid(torch.from_numpy(x)).numpy() * 2 - - fig = plt.figure(figsize=(6, 3), tight_layout=True) - plt.plot(x, ya, '.-', label='YOLOv3') - plt.plot(x, yb ** 2, '.-', label='YOLOR ^2') - plt.plot(x, yb ** 1.6, '.-', label='YOLOR ^1.6') - plt.xlim(left=-4, right=4) - plt.ylim(bottom=0, top=6) - plt.xlabel('input') - plt.ylabel('output') - plt.grid() - plt.legend() - fig.savefig('comparison.png', dpi=200) - - -def output_to_target(output): - # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] - targets = [] - for i, o in enumerate(output): - for *box, conf, cls in o.cpu().numpy(): - targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf]) - return np.array(targets) - - -def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=640, max_subplots=16): - # Plot image grid with labels - - if isinstance(images, torch.Tensor): - images = images.cpu().float().numpy() - if isinstance(targets, torch.Tensor): - targets = targets.cpu().numpy() - - # un-normalise - if np.max(images[0]) <= 1: - images *= 255 - - tl = 3 # line thickness - tf = max(tl - 1, 1) # font thickness - bs, _, h, w = images.shape # batch size, _, height, width - bs = min(bs, max_subplots) # limit plot images - ns = np.ceil(bs ** 0.5) # number of subplots (square) - - # Check if we should resize - scale_factor = max_size / max(h, w) - if scale_factor < 1: - h = math.ceil(scale_factor * h) - w = math.ceil(scale_factor * w) - - colors = color_list() # list of colors - mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init - for i, img in enumerate(images): - if i == max_subplots: # if last batch has fewer images than we expect - break - - block_x = int(w * (i // ns)) - block_y = int(h * (i % ns)) - - img = img.transpose(1, 2, 0) - if scale_factor < 1: - img = cv2.resize(img, (w, h)) - - mosaic[block_y:block_y + h, block_x:block_x + w, :] = img - if len(targets) > 0: - image_targets = targets[targets[:, 0] == i] - boxes = xywh2xyxy(image_targets[:, 2:6]).T - classes = image_targets[:, 1].astype('int') - labels = image_targets.shape[1] == 6 # labels if no conf column - conf = None if labels else image_targets[:, 6] # check for confidence presence (label vs pred) - - if boxes.shape[1]: - if boxes.max() <= 1.01: # if normalized with tolerance 0.01 - boxes[[0, 2]] *= w # scale to pixels - boxes[[1, 3]] *= h - elif scale_factor < 1: # absolute coords need scale if image scales - boxes *= scale_factor - boxes[[0, 2]] += block_x - boxes[[1, 3]] += block_y - for j, box in enumerate(boxes.T): - cls = int(classes[j]) - color = colors[cls % len(colors)] - cls = names[cls] if names else cls - if labels or conf[j] > 0.25: # 0.25 conf thresh - label = '%s' % cls if labels else '%s %.1f' % (cls, conf[j]) - plot_one_box(box, 
mosaic, label=label, color=color, line_thickness=tl) - - # Draw image filename labels - if paths: - label = Path(paths[i]).name[:40] # trim to 40 char - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - cv2.putText(mosaic, label, (block_x + 5, block_y + t_size[1] + 5), 0, tl / 3, [220, 220, 220], thickness=tf, - lineType=cv2.LINE_AA) - - # Image border - cv2.rectangle(mosaic, (block_x, block_y), (block_x + w, block_y + h), (255, 255, 255), thickness=3) - - if fname: - r = min(1280. / max(h, w) / ns, 1.0) # ratio to limit image size - mosaic = cv2.resize(mosaic, (int(ns * w * r), int(ns * h * r)), interpolation=cv2.INTER_AREA) - # cv2.imwrite(fname, cv2.cvtColor(mosaic, cv2.COLOR_BGR2RGB)) # cv2 save - Image.fromarray(mosaic).save(fname) # PIL save - return mosaic - - -def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''): - # Plot LR simulating training for full epochs - optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals - y = [] - for _ in range(epochs): - scheduler.step() - y.append(optimizer.param_groups[0]['lr']) - plt.plot(y, '.-', label='LR') - plt.xlabel('epoch') - plt.ylabel('LR') - plt.grid() - plt.xlim(0, epochs) - plt.ylim(0) - plt.savefig(Path(save_dir) / 'LR.png', dpi=200) - plt.close() - - -def plot_test_txt(): # from utils.plots import *; plot_test() - # Plot test.txt histograms - x = np.loadtxt('test.txt', dtype=np.float32) - box = xyxy2xywh(x[:, :4]) - cx, cy = box[:, 0], box[:, 1] - - fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True) - ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0) - ax.set_aspect('equal') - plt.savefig('hist2d.png', dpi=300) - - fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True) - ax[0].hist(cx, bins=600) - ax[1].hist(cy, bins=600) - plt.savefig('hist1d.png', dpi=200) - - -def plot_targets_txt(): # from utils.plots import *; plot_targets_txt() - # Plot targets.txt histograms - x = np.loadtxt('targets.txt', dtype=np.float32).T - s = ['x targets', 'y targets', 'width targets', 'height targets'] - fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True) - ax = ax.ravel() - for i in range(4): - ax[i].hist(x[i], bins=100, label='%.3g +/- %.3g' % (x[i].mean(), x[i].std())) - ax[i].legend() - ax[i].set_title(s[i]) - plt.savefig('targets.jpg', dpi=200) - - -def plot_study_txt(path='', x=None): # from utils.plots import *; plot_study_txt() - # Plot study.txt generated by test.py - fig, ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True) - # ax = ax.ravel() - - fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True) - # for f in [Path(path) / f'study_coco_{x}.txt' for x in ['yolor-p6', 'yolor-w6', 'yolor-e6', 'yolor-d6']]: - for f in sorted(Path(path).glob('study*.txt')): - y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T - x = np.arange(y.shape[1]) if x is None else np.array(x) - s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_inference (ms/img)', 't_NMS (ms/img)', 't_total (ms/img)'] - # for i in range(7): - # ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8) - # ax[i].set_title(s[i]) - - j = y[3].argmax() + 1 - ax2.plot(y[6, 1:j], y[3, 1:j] * 1E2, '.-', linewidth=2, markersize=8, - label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO')) - - ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5], - 'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet') - - ax2.grid(alpha=0.2) - ax2.set_yticks(np.arange(20, 60, 5)) - ax2.set_xlim(0, 57) - ax2.set_ylim(30, 55) - 
ax2.set_xlabel('GPU Speed (ms/img)') - ax2.set_ylabel('COCO AP val') - ax2.legend(loc='lower right') - plt.savefig(str(Path(path).name) + '.png', dpi=300) - - -def plot_labels(labels, names=(), save_dir=Path(''), loggers=None): - # plot dataset labels - print('Plotting labels... ') - c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes - nc = int(c.max() + 1) # number of classes - colors = color_list() - x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height']) - - # seaborn correlogram - sns.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9)) - plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200) - plt.close() - - # matplotlib labels - matplotlib.use('svg') # faster - ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel() - ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8) - ax[0].set_ylabel('instances') - if 0 < len(names) < 30: - ax[0].set_xticks(range(len(names))) - ax[0].set_xticklabels(names, rotation=90, fontsize=10) - else: - ax[0].set_xlabel('classes') - sns.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9) - sns.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9) - - # rectangles - labels[:, 1:3] = 0.5 # center - labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000 - img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255) - for cls, *box in labels[:1000]: - ImageDraw.Draw(img).rectangle(box, width=1, outline=colors[int(cls) % 10]) # plot - ax[1].imshow(img) - ax[1].axis('off') - - for a in [0, 1, 2, 3]: - for s in ['top', 'right', 'left', 'bottom']: - ax[a].spines[s].set_visible(False) - - plt.savefig(save_dir / 'labels.jpg', dpi=200) - matplotlib.use('Agg') - plt.close() - - # loggers - for k, v in loggers.items() or {}: - if k == 'wandb' and v: - v.log({"Labels": [v.Image(str(x), caption=x.name) for x in save_dir.glob('*labels*.jpg')]}, commit=False) - - -def plot_evolution(yaml_file='data/hyp.finetune.yaml'): # from utils.plots import *; plot_evolution() - # Plot hyperparameter evolution results in evolve.txt - with open(yaml_file) as f: - hyp = yaml.load(f, Loader=yaml.SafeLoader) - x = np.loadtxt('evolve.txt', ndmin=2) - f = fitness(x) - # weights = (f - f.min()) ** 2 # for weighted results - plt.figure(figsize=(10, 12), tight_layout=True) - matplotlib.rc('font', **{'size': 8}) - for i, (k, v) in enumerate(hyp.items()): - y = x[:, i + 7] - # mu = (y * weights).sum() / weights.sum() # best weighted result - mu = y[f.argmax()] # best single result - plt.subplot(6, 5, i + 1) - plt.scatter(y, f, c=hist2d(y, f, 20), cmap='viridis', alpha=.8, edgecolors='none') - plt.plot(mu, f.max(), 'k+', markersize=15) - plt.title('%s = %.3g' % (k, mu), fontdict={'size': 9}) # limit to 40 characters - if i % 5 != 0: - plt.yticks([]) - print('%15s: %.3g' % (k, mu)) - plt.savefig('evolve.png', dpi=200) - print('\nPlot saved as evolve.png') - - -def profile_idetection(start=0, stop=0, labels=(), save_dir=''): - # Plot iDetection '*.txt' per-image logs. 
from utils.plots import *; profile_idetection() - ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel() - s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS'] - files = list(Path(save_dir).glob('frames*.txt')) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows - n = results.shape[1] # number of rows - x = np.arange(start, min(stop, n) if stop else n) - results = results[:, x] - t = (results[0] - results[0].min()) # set t0=0s - results[0] = x - for i, a in enumerate(ax): - if i < len(results): - label = labels[fi] if len(labels) else f.stem.replace('frames_', '') - a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5) - a.set_title(s[i]) - a.set_xlabel('time (s)') - # if fi == len(files) - 1: - # a.set_ylim(bottom=0) - for side in ['top', 'right']: - a.spines[side].set_visible(False) - else: - a.remove() - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - - ax[1].legend() - plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200) - - -def plot_results_overlay(start=0, stop=0): # from utils.plots import *; plot_results_overlay() - # Plot training 'results*.txt', overlaying train and val losses - s = ['train', 'train', 'train', 'Precision', 'mAP@0.5', 'val', 'val', 'val', 'Recall', 'mAP@0.5:0.95'] # legends - t = ['Box', 'Objectness', 'Classification', 'P-R', 'mAP-F1'] # titles - for f in sorted(glob.glob('results*.txt') + glob.glob('../../Downloads/results*.txt')): - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - fig, ax = plt.subplots(1, 5, figsize=(14, 3.5), tight_layout=True) - ax = ax.ravel() - for i in range(5): - for j in [i, i + 5]: - y = results[j, x] - ax[i].plot(x, y, marker='.', label=s[j]) - # y_smooth = butter_lowpass_filtfilt(y) - # ax[i].plot(x, np.gradient(y_smooth), marker='.', label=s[j]) - - ax[i].set_title(t[i]) - ax[i].legend() - ax[i].set_ylabel(f) if i == 0 else None # add filename - fig.savefig(f.replace('.txt', '.png'), dpi=200) - - -def plot_results(start=0, stop=0, bucket='', id=(), labels=(), save_dir=''): - # Plot training 'results*.txt'. from utils.plots import *; plot_results(save_dir='runs/train/exp') - fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True) - ax = ax.ravel() - s = ['Box', 'Objectness', 'Classification', 'Precision', 'Recall', - 'val Box', 'val Objectness', 'val Classification', 'mAP@0.5', 'mAP@0.5:0.95'] - if bucket: - # files = ['https://storage.googleapis.com/%s/results%g.txt' % (bucket, x) for x in id] - files = ['results%g.txt' % x for x in id] - c = ('gsutil cp ' + '%s ' * len(files) + '.') % tuple('gs://%s/results%g.txt' % (bucket, x) for x in id) - os.system(c) - else: - files = list(Path(save_dir).glob('results*.txt')) - assert len(files), 'No results.txt files found in %s, nothing to plot.' 
% os.path.abspath(save_dir) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - for i in range(10): - y = results[i, x] - if i in [0, 1, 2, 5, 6, 7]: - y[y == 0] = np.nan # don't show zero loss values - # y /= y[0] # normalize - label = labels[fi] if len(labels) else f.stem - ax[i].plot(x, y, marker='.', label=label, linewidth=2, markersize=8) - ax[i].set_title(s[i]) - # if i in [5, 6, 7]: # share train and val loss y axes - # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5]) - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - - ax[1].legend() - fig.savefig(Path(save_dir) / 'results.png', dpi=200) - - -def output_to_keypoint(output): - # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] - targets = [] - for i, o in enumerate(output): - kpts = o[:,6:] - o = o[:,:6] - for index, (*box, conf, cls) in enumerate(o.detach().cpu().numpy()): - targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf, *list(kpts.detach().cpu().numpy()[index])]) - return np.array(targets) - - -def plot_skeleton_kpts(im, kpts, steps, orig_shape=None): - #Plot the skeleton and keypointsfor coco datatset - palette = np.array([[255, 128, 0], [255, 153, 51], [255, 178, 102], - [230, 230, 0], [255, 153, 255], [153, 204, 255], - [255, 102, 255], [255, 51, 255], [102, 178, 255], - [51, 153, 255], [255, 153, 153], [255, 102, 102], - [255, 51, 51], [153, 255, 153], [102, 255, 102], - [51, 255, 51], [0, 255, 0], [0, 0, 255], [255, 0, 0], - [255, 255, 255]]) - - skeleton = [[16, 14], [14, 12], [17, 15], [15, 13], [12, 13], [6, 12], - [7, 13], [6, 7], [6, 8], [7, 9], [8, 10], [9, 11], [2, 3], - [1, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5, 7]] - - pose_limb_color = palette[[9, 9, 9, 9, 7, 7, 7, 0, 0, 0, 0, 0, 16, 16, 16, 16, 16, 16, 16]] - pose_kpt_color = palette[[16, 16, 16, 16, 16, 0, 0, 0, 0, 0, 0, 9, 9, 9, 9, 9, 9]] - radius = 5 - num_kpts = len(kpts) // steps - - for kid in range(num_kpts): - r, g, b = pose_kpt_color[kid] - x_coord, y_coord = kpts[steps * kid], kpts[steps * kid + 1] - if not (x_coord % 640 == 0 or y_coord % 640 == 0): - if steps == 3: - conf = kpts[steps * kid + 2] - if conf < 0.5: - continue - cv2.circle(im, (int(x_coord), int(y_coord)), radius, (int(r), int(g), int(b)), -1) - - for sk_id, sk in enumerate(skeleton): - r, g, b = pose_limb_color[sk_id] - pos1 = (int(kpts[(sk[0]-1)*steps]), int(kpts[(sk[0]-1)*steps+1])) - pos2 = (int(kpts[(sk[1]-1)*steps]), int(kpts[(sk[1]-1)*steps+1])) - if steps == 3: - conf1 = kpts[(sk[0]-1)*steps+2] - conf2 = kpts[(sk[1]-1)*steps+2] - if conf1<0.5 or conf2<0.5: - continue - if pos1[0]%640 == 0 or pos1[1]%640==0 or pos1[0]<0 or pos1[1]<0: - continue - if pos2[0] % 640 == 0 or pos2[1] % 640 == 0 or pos2[0]<0 or pos2[1]<0: - continue - cv2.line(im, pos1, pos2, (int(r), int(g), int(b)), thickness=2) diff --git a/spaces/jvde/sovits-webui/text/symbols.py b/spaces/jvde/sovits-webui/text/symbols.py deleted file mode 100644 index ce7d043ce7c06e63fc60950127b978ad06abbe5d..0000000000000000000000000000000000000000 --- a/spaces/jvde/sovits-webui/text/symbols.py +++ /dev/null @@ -1,15 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. 
-''' - -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ ' - - - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") diff --git a/spaces/k2-fsa/generate-subtitles-for-videos/README.md b/spaces/k2-fsa/generate-subtitles-for-videos/README.md deleted file mode 100644 index 5611253c8dde3b784dae6af8db026a7151b361c8..0000000000000000000000000000000000000000 --- a/spaces/k2-fsa/generate-subtitles-for-videos/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Generate subtitles -emoji: 🌖 -colorFrom: yellow -colorTo: green -sdk: gradio -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kadirnar/yolox/configs/__init__.py b/spaces/kadirnar/yolox/configs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kaicheng/ChatGPT_ad/chatgpt - windows.bat b/spaces/kaicheng/ChatGPT_ad/chatgpt - windows.bat deleted file mode 100644 index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000 --- a/spaces/kaicheng/ChatGPT_ad/chatgpt - windows.bat +++ /dev/null @@ -1,14 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" - -REM The web page can be accessed with delayed start http://127.0.0.1:7860/ -ping -n 5 127.0.0.1>nul - -REM access chargpt via your default browser -start "" "http://127.0.0.1:7860/" - - -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). \ No newline at end of file diff --git a/spaces/kasun/git-large/README.md b/spaces/kasun/git-large/README.md deleted file mode 100644 index e40611d5e0aea2dbb6895a89b58ab004e28dcc6a..0000000000000000000000000000000000000000 --- a/spaces/kasun/git-large/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: git-large -emoji: 🌍 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -duplicated_from: kasun/blip-larg ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/kdrkdrkdr/ProsekaTTS/mel_processing.py b/spaces/kdrkdrkdr/ProsekaTTS/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/ProsekaTTS/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value 
is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/util/html.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/util/html.py deleted file mode 100644 index cc3262a1eafda34842e4dbad47bb6ba72f0c5a68..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/util/html.py +++ /dev/null @@ -1,86 +0,0 @@ -import dominate -from dominate.tags import meta, h3, table, tr, td, p, a, img, br -import os - - -class HTML: - """This HTML class allows us to save images and write texts into a single HTML file. - - It consists of functions such as (add a text header to the HTML file), - (add a row of images to the HTML file), and (save the HTML to the disk). - It is based on Python library 'dominate', a Python library for creating and manipulating HTML documents using a DOM API. - """ - - def __init__(self, web_dir, title, refresh=0): - """Initialize the HTML classes - - Parameters: - web_dir (str) -- a directory that stores the webpage. 
HTML file will be created at /index.html; images will be saved at 0: - with self.doc.head: - meta(http_equiv="refresh", content=str(refresh)) - - def get_image_dir(self): - """Return the directory that stores images""" - return self.img_dir - - def add_header(self, text): - """Insert a header to the HTML file - - Parameters: - text (str) -- the header text - """ - with self.doc: - h3(text) - - def add_images(self, ims, txts, links, width=400): - """add images to the HTML file - - Parameters: - ims (str list) -- a list of image paths - txts (str list) -- a list of image names shown on the website - links (str list) -- a list of hyperref links; when you click an image, it will redirect you to a new page - """ - self.t = table(border=1, style="table-layout: fixed;") # Insert a table - self.doc.add(self.t) - with self.t: - with tr(): - for im, txt, link in zip(ims, txts, links): - with td(style="word-wrap: break-word;", halign="center", valign="top"): - with p(): - with a(href=os.path.join('images', link)): - img(style="width:%dpx" % width, src=os.path.join('images', im)) - br() - p(txt) - - def save(self): - """save the current content to the HMTL file""" - html_file = '%s/index.html' % self.web_dir - f = open(html_file, 'wt') - f.write(self.doc.render()) - f.close() - - -if __name__ == '__main__': # we show an example usage here. - html = HTML('web/', 'test_html') - html.add_header('hello world') - - ims, txts, links = [], [], [] - for n in range(4): - ims.append('image_%d.png' % n) - txts.append('text_%d' % n) - links.append('image_%d.png' % n) - html.add_images(ims, txts, links) - html.save() diff --git a/spaces/kevinwang676/VALLE/modules/scaling.py b/spaces/kevinwang676/VALLE/modules/scaling.py deleted file mode 100644 index 824a2077cedb787dd05bbad5ba6fe65099e11fcf..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VALLE/modules/scaling.py +++ /dev/null @@ -1,1401 +0,0 @@ -# Copyright 2022 Xiaomi Corp. (authors: Daniel Povey) -# -# See ../../../../LICENSE for clarification regarding multiple authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
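# A minimal usage sketch of the building blocks defined below in this module
# (ScaledLinear, DoubleSwish, BasicNorm), assuming the module is importable as
# `scaling`; the layer sizes are illustrative only, not values from this repo.
#
#   import torch
#   from scaling import ScaledLinear, BasicNorm, DoubleSwish
#
#   ff = torch.nn.Sequential(
#       ScaledLinear(256, 1024, initial_scale=0.25),  # nn.Linear with scaled-down initial weights
#       DoubleSwish(),                                # x * sigmoid(x - 1) activation
#       ScaledLinear(1024, 256, initial_scale=0.25),
#       BasicNorm(256),                               # LayerNorm-like norm with a learnable "eps"
#   )
#   y = ff(torch.randn(8, 256))                       # -> shape (8, 256)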
- - -import collections -import logging -import random -import math -from functools import reduce -from itertools import repeat -from typing import Optional, Tuple, Union - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import Tensor -from torch.nn import Embedding as ScaledEmbedding - -from utils import Transpose - - -class ActivationBalancerFunction(torch.autograd.Function): - @staticmethod - def forward( - ctx, - x: Tensor, - scale_factor: Tensor, - sign_factor: Optional[Tensor], - channel_dim: int, - ) -> Tensor: - if channel_dim < 0: - channel_dim += x.ndim - ctx.channel_dim = channel_dim - xgt0 = x > 0 - if sign_factor is None: - ctx.save_for_backward(xgt0, scale_factor) - else: - ctx.save_for_backward(xgt0, scale_factor, sign_factor) - return x - - @staticmethod - def backward(ctx, x_grad: Tensor) -> Tuple[Tensor, None, None, None]: - if len(ctx.saved_tensors) == 3: - xgt0, scale_factor, sign_factor = ctx.saved_tensors - for _ in range(ctx.channel_dim, x_grad.ndim - 1): - scale_factor = scale_factor.unsqueeze(-1) - sign_factor = sign_factor.unsqueeze(-1) - factor = sign_factor + scale_factor * (xgt0.to(x_grad.dtype) - 0.5) - else: - xgt0, scale_factor = ctx.saved_tensors - for _ in range(ctx.channel_dim, x_grad.ndim - 1): - scale_factor = scale_factor.unsqueeze(-1) - factor = scale_factor * (xgt0.to(x_grad.dtype) - 0.5) - neg_delta_grad = x_grad.abs() * factor - return ( - x_grad - neg_delta_grad, - None, - None, - None, - ) - - -def _compute_scale_factor( - x: Tensor, - channel_dim: int, - min_abs: float, - max_abs: float, - gain_factor: float, - max_factor: float, -) -> Tensor: - if channel_dim < 0: - channel_dim += x.ndim - sum_dims = [d for d in range(x.ndim) if d != channel_dim] - x_abs_mean = torch.mean(x.abs(), dim=sum_dims).to(torch.float32) - - if min_abs == 0.0: - below_threshold = 0.0 - else: - # below_threshold is 0 if x_abs_mean > min_abs, can be at most max_factor if - # x_abs)_mean , min_abs. - below_threshold = ( - (min_abs - x_abs_mean) * (gain_factor / min_abs) - ).clamp(min=0, max=max_factor) - - above_threshold = ((x_abs_mean - max_abs) * (gain_factor / max_abs)).clamp( - min=0, max=max_factor - ) - - return below_threshold - above_threshold - - -def _compute_sign_factor( - x: Tensor, - channel_dim: int, - min_positive: float, - max_positive: float, - gain_factor: float, - max_factor: float, -) -> Tensor: - if channel_dim < 0: - channel_dim += x.ndim - sum_dims = [d for d in range(x.ndim) if d != channel_dim] - proportion_positive = torch.mean((x > 0).to(torch.float32), dim=sum_dims) - if min_positive == 0.0: - factor1 = 0.0 - else: - # 0 if proportion_positive >= min_positive, else can be - # as large as max_factor. - factor1 = ( - (min_positive - proportion_positive) * (gain_factor / min_positive) - ).clamp_(min=0, max=max_factor) - - if max_positive == 1.0: - factor2 = 0.0 - else: - # 0 if self.proportion_positive <= max_positive, else can be - # as large as -max_factor. 
- factor2 = ( - (proportion_positive - max_positive) - * (gain_factor / (1.0 - max_positive)) - ).clamp_(min=0, max=max_factor) - sign_factor = factor1 - factor2 - # require min_positive != 0 or max_positive != 1: - assert not isinstance(sign_factor, float) - return sign_factor - - -class ActivationScaleBalancerFunction(torch.autograd.Function): - """ - This object is used in class ActivationBalancer when the user specified - min_positive=0, max_positive=1, so there are no constraints on the signs - of the activations and only the absolute value has a constraint. - """ - - @staticmethod - def forward( - ctx, - x: Tensor, - sign_factor: Tensor, - scale_factor: Tensor, - channel_dim: int, - ) -> Tensor: - if channel_dim < 0: - channel_dim += x.ndim - ctx.channel_dim = channel_dim - xgt0 = x > 0 - ctx.save_for_backward(xgt0, sign_factor, scale_factor) - return x - - @staticmethod - def backward(ctx, x_grad: Tensor) -> Tuple[Tensor, None, None, None]: - xgt0, sign_factor, scale_factor = ctx.saved_tensors - for _ in range(ctx.channel_dim, x_grad.ndim - 1): - sign_factor = sign_factor.unsqueeze(-1) - scale_factor = scale_factor.unsqueeze(-1) - - factor = sign_factor + scale_factor * (xgt0.to(x_grad.dtype) - 0.5) - neg_delta_grad = x_grad.abs() * factor - return ( - x_grad - neg_delta_grad, - None, - None, - None, - ) - - -class RandomClampFunction(torch.autograd.Function): - @staticmethod - def forward( - ctx, - x: Tensor, - min: Optional[float], - max: Optional[float], - prob: float, - reflect: float, - ) -> Tensor: - x_clamped = torch.clamp(x, min=min, max=max) - mask = torch.rand_like(x) < prob - ans = torch.where(mask, x_clamped, x) - if x.requires_grad: - ctx.save_for_backward(ans == x) - ctx.reflect = reflect - if reflect != 0.0: - ans = ans * (1.0 + reflect) - (x * reflect) - return ans - - @staticmethod - def backward( - ctx, ans_grad: Tensor - ) -> Tuple[Tensor, None, None, None, None]: - (is_same,) = ctx.saved_tensors - x_grad = ans_grad * is_same.to(ans_grad.dtype) - reflect = ctx.reflect - if reflect != 0.0: - x_grad = x_grad * (1.0 + reflect) - (ans_grad * reflect) - return x_grad, None, None, None, None - - -def random_clamp( - x: Tensor, - min: Optional[float] = None, - max: Optional[float] = None, - prob: float = 0.5, - reflect: float = 0.0, -): - return RandomClampFunction.apply(x, min, max, prob, reflect) - - -def random_cast_to_half(x: Tensor, min_abs: float = 5.0e-06) -> Tensor: - """ - A randomized way of casting a floating point value to half precision. - """ - if x.dtype == torch.float16: - return x - x_abs = x.abs() - is_too_small = x_abs < min_abs - # for elements where is_too_small is true, random_val will contain +-min_abs with - # probability (x.abs() / min_abs), and 0.0 otherwise. [so this preserves expectations, - # for those elements]. - random_val = min_abs * x.sign() * (torch.rand_like(x) * min_abs < x_abs) - return torch.where(is_too_small, random_val, x).to(torch.float16) - - -class RandomGradFunction(torch.autograd.Function): - """ - Does nothing in forward pass; in backward pass, gets rid of very small grads using - randomized approach that preserves expectations (intended to reduce roundoff). 
- """ - - @staticmethod - def forward(ctx, x: Tensor, min_abs: float) -> Tensor: - ctx.min_abs = min_abs - return x - - @staticmethod - def backward(ctx, ans_grad: Tensor) -> Tuple[Tensor, None]: - if ans_grad.dtype == torch.float16: - return ( - random_cast_to_half( - ans_grad.to(torch.float32), min_abs=ctx.min_abs - ), - None, - ) - else: - return ans_grad, None - - -class RandomGrad(torch.nn.Module): - """ - Gets rid of very small gradients using an expectation-preserving method, intended to increase - accuracy of training when using amp (automatic mixed precision) - """ - - def __init__(self, min_abs: float = 5.0e-06): - super(RandomGrad, self).__init__() - self.min_abs = min_abs - - def forward(self, x: Tensor): - if ( - torch.jit.is_scripting() - or not self.training - or torch.jit.is_tracing() - ): - return x - else: - return RandomGradFunction.apply(x, self.min_abs) - - -class SoftmaxFunction(torch.autograd.Function): - """ - Tries to handle half-precision derivatives in a randomized way that should - be more accurate for training than the default behavior. - """ - - @staticmethod - def forward(ctx, x: Tensor, dim: int): - ans = x.softmax(dim=dim) - # if x dtype is float16, x.softmax() returns a float32 because - # (presumably) that op does not support float16, and autocast - # is enabled. - if torch.is_autocast_enabled(): - ans = ans.to(torch.float16) - ctx.save_for_backward(ans) - ctx.x_dtype = x.dtype - ctx.dim = dim - return ans - - @staticmethod - def backward(ctx, ans_grad: Tensor): - (ans,) = ctx.saved_tensors - with torch.cuda.amp.autocast(enabled=False): - ans_grad = ans_grad.to(torch.float32) - ans = ans.to(torch.float32) - x_grad = ans_grad * ans - x_grad = x_grad - ans * x_grad.sum(dim=ctx.dim, keepdim=True) - return x_grad, None - - -def softmax(x: Tensor, dim: int): - if torch.jit.is_scripting() or torch.jit.is_tracing(): - return x.softmax(dim) - - return SoftmaxFunction.apply(x, dim) - - -class MaxEigLimiterFunction(torch.autograd.Function): - @staticmethod - def forward( - ctx, - x: Tensor, - coeffs: Tensor, - direction: Tensor, - channel_dim: int, - grad_scale: float, - ) -> Tensor: - ctx.channel_dim = channel_dim - ctx.grad_scale = grad_scale - ctx.save_for_backward(x.detach(), coeffs.detach(), direction.detach()) - return x - - @staticmethod - def backward(ctx, x_grad, *args): - with torch.enable_grad(): - (x_orig, coeffs, new_direction) = ctx.saved_tensors - x_orig.requires_grad = True - num_channels = x_orig.shape[ctx.channel_dim] - x = x_orig.transpose(ctx.channel_dim, -1).reshape(-1, num_channels) - new_direction.requires_grad = False - x = x - x.mean(dim=0) - x_var = (x ** 2).mean() - x_residual = x - coeffs * new_direction - x_residual_var = (x_residual ** 2).mean() - # `variance_proportion` is the proportion of the variance accounted for - # by the top eigen-direction. This is to be minimized. - variance_proportion = (x_var - x_residual_var) / (x_var + 1.0e-20) - variance_proportion.backward() - x_orig_grad = x_orig.grad - x_extra_grad = ( - x_orig.grad - * ctx.grad_scale - * x_grad.norm() - / (x_orig_grad.norm() + 1.0e-20) - ) - return x_grad + x_extra_grad.detach(), None, None, None, None - - -class BasicNorm(torch.nn.Module): - """ - This is intended to be a simpler, and hopefully cheaper, replacement for - LayerNorm. The observation this is based on, is that Transformer-type - networks, especially with pre-norm, sometimes seem to set one of the - feature dimensions to a large constant value (e.g. 
50), which "defeats" - the LayerNorm because the output magnitude is then not strongly dependent - on the other (useful) features. Presumably the weight and bias of the - LayerNorm are required to allow it to do this. - - So the idea is to introduce this large constant value as an explicit - parameter, that takes the role of the "eps" in LayerNorm, so the network - doesn't have to do this trick. We make the "eps" learnable. - - Args: - num_channels: the number of channels, e.g. 512. - channel_dim: the axis/dimension corresponding to the channel, - interprted as an offset from the input's ndim if negative. - shis is NOT the num_channels; it should typically be one of - {-2, -1, 0, 1, 2, 3}. - eps: the initial "epsilon" that we add as ballast in: - scale = ((input_vec**2).mean() + epsilon)**-0.5 - Note: our epsilon is actually large, but we keep the name - to indicate the connection with conventional LayerNorm. - learn_eps: if true, we learn epsilon; if false, we keep it - at the initial value. - eps_min: float - eps_max: float - """ - - def __init__( - self, - num_channels: int, - channel_dim: int = -1, # CAUTION: see documentation. - eps: float = 0.25, - learn_eps: bool = True, - eps_min: float = -3.0, - eps_max: float = 3.0, - ) -> None: - super(BasicNorm, self).__init__() - self.num_channels = num_channels - self.channel_dim = channel_dim - if learn_eps: - self.eps = nn.Parameter(torch.tensor(eps).log().detach()) - else: - self.register_buffer("eps", torch.tensor(eps).log().detach()) - self.eps_min = eps_min - self.eps_max = eps_max - - def forward(self, x: Tensor) -> Tensor: - assert x.shape[self.channel_dim] == self.num_channels - eps = self.eps - if self.training and random.random() < 0.25: - # with probability 0.25, in training mode, clamp eps between the min - # and max; this will encourage it to learn parameters within the - # allowed range by making parameters that are outside the allowed - # range noisy. - - # gradients to allow the parameter to get back into the allowed - # region if it happens to exit it. - eps = eps.clamp(min=self.eps_min, max=self.eps_max) - scales = ( - torch.mean(x ** 2, dim=self.channel_dim, keepdim=True) + eps.exp() - ) ** -0.5 - return x * scales - - -def ScaledLinear(*args, initial_scale: float = 1.0, **kwargs) -> nn.Linear: - """ - Behaves like a constructor of a modified version of nn.Linear - that gives an easy way to set the default initial parameter scale. - - Args: - Accepts the standard args and kwargs that nn.Linear accepts - e.g. in_features, out_features, bias=False. - - initial_scale: you can override this if you want to increase - or decrease the initial magnitude of the module's output - (affects the initialization of weight_scale and bias_scale). - Another option, if you want to do something like this, is - to re-initialize the parameters. - """ - ans = nn.Linear(*args, **kwargs) - with torch.no_grad(): - ans.weight[:] *= initial_scale - if ans.bias is not None: - torch.nn.init.uniform_( - ans.bias, -0.1 * initial_scale, 0.1 * initial_scale - ) - return ans - - -def ScaledConv1d( - *args, - initial_scale: float = 1.0, - kernel_size: int = 3, - padding: str = "same", - **kwargs, -) -> nn.Conv1d: - """ - Behaves like a constructor of a modified version of nn.Conv1d - that gives an easy way to set the default initial parameter scale. - - Args: - Accepts the standard args and kwargs that nn.Linear accepts - e.g. in_features, out_features, bias=False. 
- - initial_scale: you can override this if you want to increase - or decrease the initial magnitude of the module's output - (affects the initialization of weight_scale and bias_scale). - Another option, if you want to do something like this, is - to re-initialize the parameters. - """ - ans = nn.Conv1d(*args, kernel_size=kernel_size, padding=padding, **kwargs) - with torch.no_grad(): - ans.weight[:] *= initial_scale - if ans.bias is not None: - torch.nn.init.uniform_( - ans.bias, -0.1 * initial_scale, 0.1 * initial_scale - ) - return ans - - -def TransposeScaledConv1d( - *args, - initial_scale: float = 1.0, - kernel_size: int = 3, - padding: str = "same", - **kwargs, -) -> nn.Sequential: - """ - Transpose -> ScaledConv1d - """ - return nn.Sequential( - Transpose(), - ScaledConv1d( - *args, - initial_scale=initial_scale, - kernel_size=kernel_size, - padding=padding, - **kwargs, - ), - ) - - -def ScaledConv1dTranspose( - *args, - initial_scale: float = 1.0, - kernel_size: int = 3, - padding: str = "same", - **kwargs, -) -> nn.Sequential: - """ - Transpose -> ScaledConv1d - """ - return nn.Sequential( - ScaledConv1d( - *args, - initial_scale=initial_scale, - kernel_size=kernel_size, - padding=padding, - **kwargs, - ), - Transpose(), - ) - - -def TransposeConv1d( - *args, kernel_size: int = 3, padding: str = "same", **kwargs -) -> nn.Sequential: - """ - Transpose -> Conv1d - """ - return nn.Sequential( - Transpose(), - nn.Conv1d(*args, kernel_size=kernel_size, padding=padding, **kwargs), - ) - - -def Conv1dTranspose( - *args, kernel_size: int = 3, padding: str = "same", **kwargs -) -> nn.Sequential: - """ - ScaledConv1d -> Transpose - """ - return nn.Sequential( - nn.Conv1d(*args, kernel_size=kernel_size, padding=padding, **kwargs), - Transpose(), - ) - - -class SRLinear(nn.Linear): - """https://arxiv.org/abs/2303.06296 - Stabilizing Transformer Training by Preventing Attention Entropy Collapse - """ - - def __init__(self, in_features, out_features, bias=True, **kwargs): - super().__init__(in_features, out_features, bias=bias, **kwargs) - self.register_buffer( - "u", nn.functional.normalize(torch.randn(in_features), dim=0) - ) - with torch.no_grad(): - sigma = self.get_sigma() - self.register_buffer("spectral_norm", sigma) - self.sigma = nn.Parameter(torch.ones(1)) - - def get_sigma(self): - with torch.no_grad(): - u = self.u - v = self.weight.mv(u) - v = nn.functional.normalize(v, dim=0) - u = self.weight.T.mv(v) - u = nn.functional.normalize(u, dim=0) - self.u.data.copy_(u) - return torch.einsum("c,cd,d->", v, self.weight, u) - - def get_weight(self): - sigma = self.get_sigma() - if self.training: - self.spectral_norm.data.copy_(sigma) - weight = (self.sigma / sigma) * self.weight - return weight - - def forward(self, x): - return nn.functional.linear(x, self.get_weight(), self.bias) - - -class SRConv1d(SRLinear): - def __init__( - self, - in_features, - out_features, - kernel_size, - stride: int = 1, - padding: str = "same", - bias: bool = True, - **kwargs, - ): - in_features = in_features * kernel_size - super().__init__(in_features, out_features, bias=bias, **kwargs) - nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5)) - self.kernel_size = kernel_size - self.stride = stride - self.padding = padding - - def forward(self, x): - in_features = self.in_features // self.kernel_size - weight = self.get_weight().view( - self.out_features, in_features, self.kernel_size - ) - return nn.functional.conv1d( - x, weight, bias=self.bias, stride=self.stride, padding=self.padding - ) - - -def 
TransposeSRConv1d( - *args, kernel_size: int = 3, padding: str = "same", **kwargs -) -> nn.Sequential: - """ - Transpose -> SRConv1d - """ - return nn.Sequential( - Transpose(), - SRConv1d(*args, kernel_size=kernel_size, padding=padding, **kwargs), - ) - - -def SRConv1dTranspose( - *args, kernel_size: int = 3, padding: str = "same", **kwargs -) -> nn.Sequential: - """ - SRConv1d -> Transpose - """ - return nn.Sequential( - SRConv1d(*args, kernel_size=kernel_size, padding=padding, **kwargs), - Transpose(), - ) - - -class ActivationBalancer(torch.nn.Module): - """ - Modifies the backpropped derivatives of a function to try to encourage, for - each channel, that it is positive at least a proportion `threshold` of the - time. It does this by multiplying negative derivative values by up to - (1+max_factor), and positive derivative values by up to (1-max_factor), - interpolated from 1 at the threshold to those extremal values when none - of the inputs are positive. - - Args: - num_channels: the number of channels - channel_dim: the dimension/axis corresponding to the channel, e.g. - -1, 0, 1, 2; will be interpreted as an offset from x.ndim if negative. - min_positive: the minimum, per channel, of the proportion of the time - that (x > 0), below which we start to modify the derivatives. - max_positive: the maximum, per channel, of the proportion of the time - that (x > 0), above which we start to modify the derivatives. - max_factor: the maximum factor by which we modify the derivatives for - either the sign constraint or the magnitude constraint; - e.g. with max_factor=0.02, the the derivatives would be multiplied by - values in the range [0.98..1.02]. - sign_gain_factor: determines the 'gain' with which we increase the - change in gradient once the constraints on min_positive and max_positive - are violated. - scale_gain_factor: determines the 'gain' with which we increase the - change in gradient once the constraints on min_abs and max_abs - are violated. - min_abs: the minimum average-absolute-value difference from the mean - value per channel, which we allow, before we start to modify - the derivatives to prevent this. - max_abs: the maximum average-absolute-value difference from the mean - value per channel, which we allow, before we start to modify - the derivatives to prevent this. - min_prob: determines the minimum probability with which we modify the - gradients for the {min,max}_positive and {min,max}_abs constraints, - on each forward(). This is done randomly to prevent all layers - from doing it at the same time. Early in training we may use - higher probabilities than this; it will decay to this value. - """ - - def __init__( - self, - num_channels: int, - channel_dim: int, - min_positive: float = 0.05, - max_positive: float = 0.95, - max_factor: float = 0.04, - sign_gain_factor: float = 0.01, - scale_gain_factor: float = 0.02, - min_abs: float = 0.2, - max_abs: float = 100.0, - min_prob: float = 0.1, - ): - super(ActivationBalancer, self).__init__() - self.num_channels = num_channels - self.channel_dim = channel_dim - self.min_positive = min_positive - self.max_positive = max_positive - self.max_factor = max_factor - self.min_abs = min_abs - self.max_abs = max_abs - self.min_prob = min_prob - self.sign_gain_factor = sign_gain_factor - self.scale_gain_factor = scale_gain_factor - - # count measures how many times the forward() function has been called. 
- # We occasionally sync this to a tensor called `count`, that exists to - # make sure it is synced to disk when we load and save the model. - self.cpu_count = 0 - self.register_buffer("count", torch.tensor(0, dtype=torch.int64)) - - def forward(self, x: Tensor) -> Tensor: - if ( - torch.jit.is_scripting() - or not x.requires_grad - or torch.jit.is_tracing() - ): - return _no_op(x) - - count = self.cpu_count - self.cpu_count += 1 - - if random.random() < 0.01: - # Occasionally sync self.cpu_count with self.count. - # count affects the decay of 'prob'. don't do this on every iter, - # because syncing with the GPU is slow. - self.cpu_count = max(self.cpu_count, self.count.item()) - self.count.fill_(self.cpu_count) - - # the prob of doing some work exponentially decreases from 0.5 till it hits - # a floor at min_prob (==0.1, by default) - prob = max(self.min_prob, 0.5 ** (1 + (count / 4000.0))) - - if random.random() < prob: - sign_gain_factor = 0.5 - if self.min_positive != 0.0 or self.max_positive != 1.0: - sign_factor = _compute_sign_factor( - x, - self.channel_dim, - self.min_positive, - self.max_positive, - gain_factor=self.sign_gain_factor / prob, - max_factor=self.max_factor, - ) - else: - sign_factor = None - - scale_factor = _compute_scale_factor( - x.detach(), - self.channel_dim, - min_abs=self.min_abs, - max_abs=self.max_abs, - gain_factor=self.scale_gain_factor / prob, - max_factor=self.max_factor, - ) - return ActivationBalancerFunction.apply( - x, - scale_factor, - sign_factor, - self.channel_dim, - ) - else: - return _no_op(x) - - -def penalize_abs_values_gt(x: Tensor, limit: float, penalty: float) -> Tensor: - """ - Returns x unmodified, but in backprop will put a penalty for the excess of - the absolute values of elements of x over the limit "limit". E.g. if - limit == 10.0, then if x has any values over 10 it will get a penalty. - - Caution: the value of this penalty will be affected by grad scaling used - in automatic mixed precision training. For this reasons we use this, - it shouldn't really matter, or may even be helpful; we just use this - to disallow really implausible values of scores to be given to softmax. - """ - x_sign = x.sign() - over_limit = (x.abs() - limit) > 0 - # The following is a memory efficient way to penalize the absolute values of - # x that's over the limit. (The memory efficiency comes when you think - # about which items torch needs to cache for the autograd, and which ones it - # can throw away). The numerical value of aux_loss as computed here will - # actually be larger than it should be, by limit * over_limit.sum(), but it - # has the same derivative as the real aux_loss which is penalty * (x.abs() - - # limit).relu(). - aux_loss = penalty * ((x_sign * over_limit).to(torch.int8) * x) - # note: we don't do sum() here on aux)_loss, but it's as if we had done - # sum() due to how with_loss() works. - x = with_loss(x, aux_loss) - # you must use x for something, or this will be ineffective. - return x - - -def _diag(x: Tensor): # like .diag(), but works for tensors with 3 dims. - if x.ndim == 2: - return x.diag() - else: - (batch, dim, dim) = x.shape - x = x.reshape(batch, dim * dim) - x = x[:, :: dim + 1] - assert x.shape == (batch, dim) - return x - - -def _whitening_metric(x: Tensor, num_groups: int): - """ - Computes the "whitening metric", a value which will be 1.0 if all the eigenvalues of - of the centered feature covariance are the same within each group's covariance matrix - and also between groups. 
- Args: - x: a Tensor of shape (*, num_channels) - num_groups: the number of groups of channels, a number >=1 that divides num_channels - Returns: - Returns a scalar Tensor that will be 1.0 if the data is "perfectly white" and - greater than 1.0 otherwise. - """ - assert x.dtype != torch.float16 - x = x.reshape(-1, x.shape[-1]) - (num_frames, num_channels) = x.shape - assert num_channels % num_groups == 0 - channels_per_group = num_channels // num_groups - x = x.reshape(num_frames, num_groups, channels_per_group).transpose(0, 1) - # x now has shape (num_groups, num_frames, channels_per_group) - # subtract the mean so we use the centered, not uncentered, covariance. - # My experience has been that when we "mess with the gradients" like this, - # it's better not do anything that tries to move the mean around, because - # that can easily cause instability. - x = x - x.mean(dim=1, keepdim=True) - # x_covar: (num_groups, channels_per_group, channels_per_group) - x_covar = torch.matmul(x.transpose(1, 2), x) - x_covar_mean_diag = _diag(x_covar).mean() - # the following expression is what we'd get if we took the matrix product - # of each covariance and measured the mean of its trace, i.e. - # the same as _diag(torch.matmul(x_covar, x_covar)).mean(). - x_covarsq_mean_diag = (x_covar ** 2).sum() / ( - num_groups * channels_per_group - ) - # this metric will be >= 1.0; the larger it is, the less 'white' the data was. - metric = x_covarsq_mean_diag / (x_covar_mean_diag ** 2 + 1.0e-20) - return metric - - -class WhiteningPenaltyFunction(torch.autograd.Function): - @staticmethod - def forward( - ctx, - x: Tensor, - num_groups: int, - whitening_limit: float, - grad_scale: float, - ) -> Tensor: - ctx.save_for_backward(x) - ctx.num_groups = num_groups - ctx.whitening_limit = whitening_limit - ctx.grad_scale = grad_scale - return x - - @staticmethod - def backward(ctx, x_grad: Tensor): - (x_orig,) = ctx.saved_tensors - with torch.enable_grad(): - with torch.cuda.amp.autocast(enabled=False): - x_detached = x_orig.to(torch.float32).detach() - x_detached.requires_grad = True - - metric = _whitening_metric(x_detached, ctx.num_groups) - - if random.random() < 0.005 or __name__ == "__main__": - logging.info( - f"Whitening: num_groups={ctx.num_groups}, num_channels={x_orig.shape[-1]}, " - f"metric={metric.item():.2f} vs. limit={ctx.whitening_limit}" - ) - - (metric - ctx.whitening_limit).relu().backward() - penalty_grad = x_detached.grad - scale = ctx.grad_scale * ( - x_grad.to(torch.float32).norm() - / (penalty_grad.norm() + 1.0e-20) - ) - penalty_grad = penalty_grad * scale - return x_grad + penalty_grad.to(x_grad.dtype), None, None, None - - -class Whiten(nn.Module): - def __init__( - self, - num_groups: int, - whitening_limit: float, - prob: Union[float, Tuple[float, float]], - grad_scale: float, - ): - """ - Args: - num_groups: the number of groups to divide the channel dim into before - whitening. We will attempt to make the feature covariance - within each group, after mean subtraction, as "white" as possible, - while having the same trace across all groups. - whitening_limit: a value greater than 1.0, that dictates how much - freedom we have to violate the constraints. 1.0 would mean perfectly - white, with exactly the same trace across groups; larger values - give more freedom. E.g. 2.0. - prob: the probability with which we apply the gradient modification - (also affects the grad scale). 
May be supplied as a float, - or as a pair (min_prob, max_prob) - - grad_scale: determines the scale on the gradient term from this object, - relative to the rest of the gradient on the attention weights. - E.g. 0.02 (you may want to use smaller values than this if prob is large) - """ - super(Whiten, self).__init__() - assert num_groups >= 1 - assert whitening_limit >= 1 - assert grad_scale >= 0 - self.num_groups = num_groups - self.whitening_limit = whitening_limit - if isinstance(prob, float): - assert 0 < prob <= 1 - self.prob = prob - else: - (self.min_prob, self.max_prob) = prob - assert 0 < self.min_prob < self.max_prob <= 1 - self.prob = self.max_prob - - self.grad_scale = grad_scale - - def forward(self, x: Tensor) -> Tensor: - """ - In the forward pass, this function just returns the input unmodified. - In the backward pass, it will modify the gradients to ensure that the - distribution in each group has close to (lambda times I) as the covariance - after mean subtraction, with the same lambda across groups. - For whitening_limit > 1, there will be more freedom to violate this - constraint. - - Args: - x: the input of shape (*, num_channels) - - Returns: - x, unmodified. You should make sure - you use the returned value, or the graph will be freed - and nothing will happen in backprop. - """ - if ( - not x.requires_grad - or random.random() > self.prob - or self.grad_scale == 0 - ): - return _no_op(x) - else: - if hasattr(self, "min_prob") and random.random() < 0.25: - # occasionally switch between min_prob and max_prob, based on whether - # we are above or below the threshold. - if ( - _whitening_metric(x.to(torch.float32), self.num_groups) - > self.whitening_limit - ): - # there would be a change to the grad. - self.prob = self.max_prob - else: - self.prob = self.min_prob - - return WhiteningPenaltyFunction.apply( - x, self.num_groups, self.whitening_limit, self.grad_scale - ) - - -class WithLoss(torch.autograd.Function): - @staticmethod - def forward(ctx, x: Tensor, y: Tensor): - ctx.y_shape = y.shape - return x - - @staticmethod - def backward(ctx, ans_grad: Tensor): - return ans_grad, torch.ones( - ctx.y_shape, dtype=ans_grad.dtype, device=ans_grad.device - ) - - -def with_loss(x, y): - if torch.jit.is_scripting() or torch.jit.is_tracing(): - return x - # returns x but adds y.sum() to the loss function. - return WithLoss.apply(x, y) - - -def _no_op(x: Tensor) -> Tensor: - if torch.jit.is_scripting() or torch.jit.is_tracing(): - return x - else: - # a no-op function that will have a node in the autograd graph, - # to avoid certain bugs relating to backward hooks - return x.chunk(1, dim=-1)[0] - - -class Identity(torch.nn.Module): - def __init__(self): - super(Identity, self).__init__() - - def forward(self, x): - return _no_op(x) - - -class MaxEig(torch.nn.Module): - """ - Modifies the backpropped derivatives of a function to try to discourage - that any given direction in activation space accounts for more than - a specified proportion of the covariance (e.g. 0.2). - - - Args: - num_channels: the number of channels - channel_dim: the dimension/axis corresponding to the channel, e.g. - -1, 0, 1, 2; will be interpreted as an offset from x.ndim if negative. - max_var_per_eig: the maximum proportion of the variance of the - features/channels, after mean subtraction, that can come from - any given eigenvalue. 
- min_prob: the minimum probability with which we apply this during any invocation - of forward(), assuming last time we applied the constraint it was - not active; supplied for speed. - scale: determines the scale with which we modify the gradients, relative - to the existing / unmodified gradients - """ - - def __init__( - self, - num_channels: int, - channel_dim: int, - max_var_per_eig: float = 0.2, - min_prob: float = 0.01, - scale: float = 0.01, - ): - super(MaxEig, self).__init__() - self.num_channels = num_channels - self.channel_dim = channel_dim - self.scale = scale - assert max_var_per_eig == 0.0 or max_var_per_eig > 1.0 / num_channels - self.max_var_per_eig = max_var_per_eig - - # we figure out the dominant direction using the power method: starting with - # a random vector, keep multiplying by the covariance and renormalizing. - with torch.no_grad(): - # arbitrary.. would use randn() but want to leave the rest of the model's - # random parameters unchanged for comparison - direction = torch.arange(num_channels).to(torch.float) - direction = direction / direction.norm() - self.register_buffer("max_eig_direction", direction) - - self.min_prob = min_prob - # cur_prob is the current probability we'll use to apply the ActivationBalancer. - # We'll regress this towards prob, each tiem we try to apply it and it is not - # active. - self.cur_prob = 1.0 - - def forward(self, x: Tensor) -> Tensor: - if ( - torch.jit.is_scripting() - or self.max_var_per_eig <= 0 - or random.random() > self.cur_prob - or torch.jit.is_tracing() - ): - return _no_op(x) - - with torch.cuda.amp.autocast(enabled=False): - eps = 1.0e-20 - orig_x = x - x = x.to(torch.float32) - with torch.no_grad(): - x = x.transpose(self.channel_dim, -1).reshape( - -1, self.num_channels - ) - x = x - x.mean(dim=0) - new_direction, coeffs = self._find_direction_coeffs( - x, self.max_eig_direction - ) - x_var = (x ** 2).mean() - x_residual = x - coeffs * new_direction - x_residual_var = (x_residual ** 2).mean() - - # `variance_proportion` is the proportion of the variance accounted for - # by the top eigen-direction. - variance_proportion = (x_var - x_residual_var) / ( - x_var + 1.0e-20 - ) - - # ensure new direction is nonzero even if x == 0, by including `direction`. - self._set_direction( - 0.1 * self.max_eig_direction + new_direction - ) - - if random.random() < 0.01 or __name__ == "__main__": - logging.info( - f"variance_proportion = {variance_proportion.item()}, shape={tuple(orig_x.shape)}, cur_prob={self.cur_prob}" - ) - - if variance_proportion >= self.max_var_per_eig: - # The constraint is active. Note, we should quite rarely - # reach here, only near the beginning of training if we are - # starting to diverge, should this constraint be active. - cur_prob = self.cur_prob - self.cur_prob = ( - 1.0 # next time, do the update with probability 1.0. - ) - return MaxEigLimiterFunction.apply( - orig_x, coeffs, new_direction, self.channel_dim, self.scale - ) - else: - # let self.cur_prob exponentially approach self.min_prob, as - # long as the constraint is inactive. 
- self.cur_prob = 0.75 * self.cur_prob + 0.25 * self.min_prob - return orig_x - - def _set_direction(self, direction: Tensor): - """ - Sets self.max_eig_direction to a normalized version of `direction` - """ - direction = direction.detach() - direction = direction / direction.norm() - direction_sum = direction.sum().item() - if direction_sum - direction_sum == 0: # no inf/nan - self.max_eig_direction[:] = direction - else: - logging.info( - f"Warning: sum of direction in MaxEig is {direction_sum}, " - "num_channels={self.num_channels}, channel_dim={self.channel_dim}" - ) - - def _find_direction_coeffs( - self, x: Tensor, prev_direction: Tensor - ) -> Tuple[Tensor, Tensor, Tensor]: - """ - Figure out (an approximation to) the proportion of the variance of a set of - feature vectors that can be attributed to the top eigen-direction. - Args: - x: a Tensor of shape (num_frames, num_channels), with num_frames > 1. - prev_direction: a Tensor of shape (num_channels,), that is our previous estimate - of the top eigen-direction, or a random direction if this is the first - iteration. Does not have to be normalized, but should be nonzero. - - Returns: (cur_direction, coeffs), where: - cur_direction: a Tensor of shape (num_channels,) that is the current - estimate of the top eigen-direction. - coeffs: a Tensor of shape (num_frames, 1) that minimizes, or - approximately minimizes, (x - coeffs * cur_direction).norm() - """ - (num_frames, num_channels) = x.shape - assert num_channels > 1 and num_frames > 1 - assert prev_direction.shape == (num_channels,) - # `coeffs` are the coefficients of `prev_direction` in x. - # actually represent the coeffs up to a constant positive factor. - coeffs = (x * prev_direction).sum(dim=1, keepdim=True) + 1.0e-10 - cur_direction = (x * coeffs).sum(dim=0) / ( - (coeffs ** 2).sum() + 1.0e-20 - ) - return cur_direction, coeffs - - -class DoubleSwishFunction(torch.autograd.Function): - """ - double_swish(x) = x * torch.sigmoid(x-1) - This is a definition, originally motivated by its close numerical - similarity to swish(swish(x)), where swish(x) = x * sigmoid(x). - - Memory-efficient derivative computation: - double_swish(x) = x * s, where s(x) = torch.sigmoid(x-1) - double_swish'(x) = d/dx double_swish(x) = x * s'(x) + x' * s(x) = x * s'(x) + s(x). - Now, s'(x) = s(x) * (1-s(x)). - double_swish'(x) = x * s'(x) + s(x). - = x * s(x) * (1-s(x)) + s(x). - = double_swish(x) * (1-s(x)) + s(x) - ... so we just need to remember s(x) but not x itself. - """ - - @staticmethod - def forward(ctx, x: Tensor) -> Tensor: - requires_grad = x.requires_grad - x_dtype = x.dtype - if x.dtype == torch.float16: - x = x.to(torch.float32) - - s = torch.sigmoid(x - 1.0) - y = x * s - - if requires_grad: - deriv = y * (1 - s) + s - # notes on derivative of x * sigmoid(x - 1): - # https://www.wolframalpha.com/input?i=d%2Fdx+%28x+*+sigmoid%28x-1%29%29 - # min \simeq -0.043638. Take floor as -0.043637 so it's a lower bund - # max \simeq 1.1990. Take ceil to be 1.2 so it's an upper bound. - # the combination of "+ torch.rand_like(deriv)" and casting to torch.uint8 (which - # floors), should be expectation-preserving. - floor = -0.043637 - ceil = 1.2 - d_scaled = (deriv - floor) * ( - 255.0 / (ceil - floor) - ) + torch.rand_like(deriv) - if __name__ == "__main__": - # for self-testing only. 
- assert d_scaled.min() >= 0.0 - assert d_scaled.max() < 256.0 - d_int = d_scaled.to(torch.uint8) - ctx.save_for_backward(d_int) - if x.dtype == torch.float16 or torch.is_autocast_enabled(): - y = y.to(torch.float16) - return y - - @staticmethod - def backward(ctx, y_grad: Tensor) -> Tensor: - (d,) = ctx.saved_tensors - # the same constants as used in forward pass. - floor = -0.043637 - ceil = 1.2 - d = d * ((ceil - floor) / 255.0) + floor - return y_grad * d - - -class DoubleSwish(torch.nn.Module): - def forward(self, x: Tensor) -> Tensor: - """Return double-swish activation function which is an approximation to Swish(Swish(x)), - that we approximate closely with x * sigmoid(x-1). - """ - if torch.jit.is_scripting() or torch.jit.is_tracing(): - return x * torch.sigmoid(x - 1.0) - return DoubleSwishFunction.apply(x) - - -def BalancedDoubleSwish( - d_model, channel_dim=-1, max_abs=10.0, min_prob=0.25 -) -> nn.Sequential: - """ - ActivationBalancer -> DoubleSwish - """ - balancer = ActivationBalancer( - d_model, channel_dim=channel_dim, max_abs=max_abs, min_prob=min_prob - ) - return nn.Sequential( - balancer, - DoubleSwish(), - ) - - -def _test_max_eig(): - for proportion in [0.1, 0.5, 10.0]: - logging.info(f"proportion = {proportion}") - x = torch.randn(100, 128) - direction = torch.randn(128) - coeffs = torch.randn(100, 1) - x += proportion * direction * coeffs - - x.requires_grad = True - - num_channels = 128 - m = MaxEig( - num_channels, 1, 0.5, scale=0.1 # channel_dim # max_var_per_eig - ) # grad_scale - - for _ in range(4): - y = m(x) - - y_grad = torch.randn_like(x) - y.backward(gradient=y_grad) - - if proportion < 0.2: - assert torch.allclose(x.grad, y_grad, atol=1.0e-02) - elif proportion > 1.0: - assert not torch.allclose(x.grad, y_grad) - - -def _test_whiten(): - for proportion in [0.1, 0.5, 10.0]: - logging.info(f"_test_whiten(): proportion = {proportion}") - x = torch.randn(100, 128) - direction = torch.randn(128) - coeffs = torch.randn(100, 1) - x += proportion * direction * coeffs - - x.requires_grad = True - - num_channels = 128 - m = Whiten( - 1, 5.0, prob=1.0, grad_scale=0.1 # num_groups # whitening_limit, - ) # grad_scale - - for _ in range(4): - y = m(x) - - y_grad = torch.randn_like(x) - y.backward(gradient=y_grad) - - if proportion < 0.2: - assert torch.allclose(x.grad, y_grad) - elif proportion > 1.0: - assert not torch.allclose(x.grad, y_grad) - - -def _test_activation_balancer_sign(): - probs = torch.arange(0, 1, 0.01) - N = 1000 - x = 1.0 * ( - (2.0 * (torch.rand(probs.numel(), N) < probs.unsqueeze(-1))) - 1.0 - ) - x = x.detach() - x.requires_grad = True - m = ActivationBalancer( - probs.numel(), - channel_dim=0, - min_positive=0.05, - max_positive=0.95, - max_factor=0.2, - min_abs=0.0, - ) - - y_grad = torch.sign(torch.randn(probs.numel(), N)) - - y = m(x) - y.backward(gradient=y_grad) - print("_test_activation_balancer_sign: x = ", x) - print("_test_activation_balancer_sign: y grad = ", y_grad) - print("_test_activation_balancer_sign: x grad = ", x.grad) - - -def _test_activation_balancer_magnitude(): - magnitudes = torch.arange(0, 1, 0.01) - N = 1000 - x = torch.sign(torch.randn(magnitudes.numel(), N)) * magnitudes.unsqueeze( - -1 - ) - x = x.detach() - x.requires_grad = True - m = ActivationBalancer( - magnitudes.numel(), - channel_dim=0, - min_positive=0.0, - max_positive=1.0, - max_factor=0.2, - min_abs=0.2, - max_abs=0.8, - min_prob=1.0, - ) - - y_grad = torch.sign(torch.randn(magnitudes.numel(), N)) - - y = m(x) - y.backward(gradient=y_grad) - 
print("_test_activation_balancer_magnitude: x = ", x) - print("_test_activation_balancer_magnitude: y grad = ", y_grad) - print("_test_activation_balancer_magnitude: x grad = ", x.grad) - - -def _test_basic_norm(): - num_channels = 128 - m = BasicNorm(num_channels=num_channels, channel_dim=1) - - x = torch.randn(500, num_channels) - - y = m(x) - - assert y.shape == x.shape - x_rms = (x ** 2).mean().sqrt() - y_rms = (y ** 2).mean().sqrt() - print("x rms = ", x_rms) - print("y rms = ", y_rms) - assert y_rms < x_rms - assert y_rms > 0.5 * x_rms - - -def _test_double_swish_deriv(): - x = torch.randn(10, 12, dtype=torch.double) * 3.0 - x.requires_grad = True - m = DoubleSwish() - - tol = (1.2 - (-0.043637)) / 255.0 - torch.autograd.gradcheck(m, x, atol=tol) - - # for self-test. - x = torch.randn(1000, 1000, dtype=torch.double) * 3.0 - x.requires_grad = True - y = m(x) - - -def _test_softmax(): - a = torch.randn(2, 10, dtype=torch.float64) - b = a.clone() - a.requires_grad = True - b.requires_grad = True - a.softmax(dim=1)[:, 0].sum().backward() - print("a grad = ", a.grad) - softmax(b, dim=1)[:, 0].sum().backward() - print("b grad = ", b.grad) - assert torch.allclose(a.grad, b.grad) - - -if __name__ == "__main__": - logging.getLogger().setLevel(logging.INFO) - torch.set_num_threads(1) - torch.set_num_interop_threads(1) - _test_softmax() - _test_whiten() - _test_max_eig() - _test_activation_balancer_sign() - _test_activation_balancer_magnitude() - _test_basic_norm() - _test_double_swish_deriv() diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/vocoder_train.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/vocoder_train.py deleted file mode 100644 index f618ee00d8f774ecf821b9714932acc7e99aa5d5..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/vocoder_train.py +++ /dev/null @@ -1,92 +0,0 @@ -from utils.argutils import print_args -from vocoder.wavernn.train import train -from vocoder.hifigan.train import train as train_hifigan -from vocoder.fregan.train import train as train_fregan -from utils.util import AttrDict -from pathlib import Path -import argparse -import json -import torch -import torch.multiprocessing as mp - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="Trains the vocoder from the synthesizer audios and the GTA synthesized mels, " - "or ground truth mels.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - - parser.add_argument("run_id", type=str, help= \ - "Name for this model instance. If a model state from the same run ID was previously " - "saved, the training will restart from there. Pass -f to overwrite saved states and " - "restart from scratch.") - parser.add_argument("datasets_root", type=str, help= \ - "Path to the directory containing your SV2TTS directory. Specifying --syn_dir or --voc_dir " - "will take priority over this argument.") - parser.add_argument("vocoder_type", type=str, default="wavernn", help= \ - "Choose the vocoder type for train. Defaults to wavernn" - "Now, Support and for choose") - parser.add_argument("--syn_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the synthesizer directory that contains the ground truth mel spectrograms, " - "the wavs and the embeds. Defaults to /SV2TTS/synthesizer/.") - parser.add_argument("--voc_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the vocoder directory that contains the GTA synthesized mel spectrograms. " - "Defaults to /SV2TTS/vocoder/. 
Unused if --ground_truth is passed.") - parser.add_argument("-m", "--models_dir", type=str, default="vocoder/saved_models/", help=\ - "Path to the directory that will contain the saved model weights, as well as backups " - "of those weights and wavs generated during training.") - parser.add_argument("-g", "--ground_truth", action="store_true", help= \ - "Train on ground truth spectrograms (/SV2TTS/synthesizer/mels).") - parser.add_argument("-s", "--save_every", type=int, default=1000, help= \ - "Number of steps between updates of the model on the disk. Set to 0 to never save the " - "model.") - parser.add_argument("-b", "--backup_every", type=int, default=25000, help= \ - "Number of steps between backups of the model. Set to 0 to never make backups of the " - "model.") - parser.add_argument("-f", "--force_restart", action="store_true", help= \ - "Do not load any saved model and restart from scratch.") - parser.add_argument("--config", type=str, default="vocoder/hifigan/config_16k_.json") - args = parser.parse_args() - - if not hasattr(args, "syn_dir"): - args.syn_dir = Path(args.datasets_root, "SV2TTS", "synthesizer") - args.syn_dir = Path(args.syn_dir) - if not hasattr(args, "voc_dir"): - args.voc_dir = Path(args.datasets_root, "SV2TTS", "vocoder") - args.voc_dir = Path(args.voc_dir) - del args.datasets_root - args.models_dir = Path(args.models_dir) - args.models_dir.mkdir(exist_ok=True) - - print_args(args, parser) - - # Process the arguments - if args.vocoder_type == "wavernn": - # Run the training wavernn - delattr(args, 'vocoder_type') - delattr(args, 'config') - train(**vars(args)) - elif args.vocoder_type == "hifigan": - with open(args.config) as f: - json_config = json.load(f) - h = AttrDict(json_config) - if h.num_gpus > 1: - h.num_gpus = torch.cuda.device_count() - h.batch_size = int(h.batch_size / h.num_gpus) - print('Batch size per GPU :', h.batch_size) - mp.spawn(train_hifigan, nprocs=h.num_gpus, args=(args, h,)) - else: - train_hifigan(0, args, h) - elif args.vocoder_type == "fregan": - with open('vocoder/fregan/config.json') as f: - json_config = json.load(f) - h = AttrDict(json_config) - if h.num_gpus > 1: - h.num_gpus = torch.cuda.device_count() - h.batch_size = int(h.batch_size / h.num_gpus) - print('Batch size per GPU :', h.batch_size) - mp.spawn(train_fregan, nprocs=h.num_gpus, args=(args, h,)) - else: - train_fregan(0, args, h) - - \ No newline at end of file diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/ops/__init__.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/ops/__init__.py deleted file mode 100644 index bec51c75b9363a9a19e9fb5c35f4e7dbd6f7751c..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/ops/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .encoding import Encoding -from .wrappers import Upsample, resize - -__all__ = ['Upsample', 'resize', 'Encoding'] diff --git a/spaces/kokofixcomputers/chat-ui/PRIVACY.md b/spaces/kokofixcomputers/chat-ui/PRIVACY.md deleted file mode 100644 index 5c122ac9f239bdac85b573eec7f204a62f9d74fd..0000000000000000000000000000000000000000 --- a/spaces/kokofixcomputers/chat-ui/PRIVACY.md +++ /dev/null @@ -1,38 +0,0 @@ -## Privacy - -> Last updated: May 15, 2023 - -Starting with `v0.2` of HuggingChat, users are authenticated through their HF user account. - -By default, your conversations are shared with the model's authors (for the `v0.2` model, to Open Assistant) to improve their training data and model over time. 
Model authors are the custodians of the data collected by their model, even if it's hosted on our platform. - -If you disable data sharing in your settings, your conversations will not be used for any downstream usage (including for research or model training purposes), and they will only be stored to let you access past conversations. You can click on the Delete icon to delete any past conversation at any moment. - -🗓 Please also consult huggingface.co's main privacy policy at https://huggingface.co/privacy. To exercise any of your legal privacy rights, please send an email to privacy@huggingface.co. - -## About available LLMs - -The goal of this app is to showcase that it is now (May 2023) possible to build an open source alternative to ChatGPT. 💪 - -For now, it's running OpenAssistant's [latest LLaMA based model](https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor) (which is one of the current best open source chat models), but the plan in the longer-term is to expose all good-quality chat models from the Hub. - -We are not affiliated with Open Assistant, but if you want to contribute to the training data for the next generation of open models, please consider contributing to https://open-assistant.io/ ❤️ - -## Technical details - -This app is running in a [Space](https://huggingface.co/docs/hub/spaces-overview), which entails that the code for this UI is publicly visible [inside the Space repo](https://huggingface.co/spaces/huggingchat/chat-ui/tree/main). - -**Further development takes place on the [huggingface/chat-ui GitHub repo](https://github.com/huggingface/chat-ui).** - -The inference backend is running the optimized [text-generation-inference](https://github.com/huggingface/text-generation-inference) on HuggingFace's Inference API infrastructure. - -It is therefore possible to deploy a copy of this app to a Space and customize it (swap model, add some UI elements, or store user messages according to your own Terms and conditions) - -We welcome any feedback on this app: please participate to the public discussion at https://huggingface.co/spaces/huggingchat/chat-ui/discussions - - - -## Coming soon - -- User setting to share conversations with model authors (done ✅) -- LLM watermarking diff --git a/spaces/kukuhtw/AutoGPT/autogpt/configurator.py b/spaces/kukuhtw/AutoGPT/autogpt/configurator.py deleted file mode 100644 index 1dc3be124f638b8859eb459bcb2d46696f62e2b7..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/autogpt/configurator.py +++ /dev/null @@ -1,134 +0,0 @@ -"""Configurator module.""" -import click -from colorama import Back, Fore, Style - -from autogpt import utils -from autogpt.config import Config -from autogpt.logs import logger -from autogpt.memory import get_supported_memory_backends - -CFG = Config() - - -def create_config( - continuous: bool, - continuous_limit: int, - ai_settings_file: str, - skip_reprompt: bool, - speak: bool, - debug: bool, - gpt3only: bool, - gpt4only: bool, - memory_type: str, - browser_name: str, - allow_downloads: bool, - skip_news: bool, -) -> None: - """Updates the config object with the given arguments. 
- - Args: - continuous (bool): Whether to run in continuous mode - continuous_limit (int): The number of times to run in continuous mode - ai_settings_file (str): The path to the ai_settings.yaml file - skip_reprompt (bool): Whether to skip the re-prompting messages at the beginning of the script - speak (bool): Whether to enable speak mode - debug (bool): Whether to enable debug mode - gpt3only (bool): Whether to enable GPT3.5 only mode - gpt4only (bool): Whether to enable GPT4 only mode - memory_type (str): The type of memory backend to use - browser_name (str): The name of the browser to use when using selenium to scrape the web - allow_downloads (bool): Whether to allow Auto-GPT to download files natively - skips_news (bool): Whether to suppress the output of latest news on startup - """ - CFG.set_debug_mode(False) - CFG.set_continuous_mode(False) - CFG.set_speak_mode(False) - - if debug: - logger.typewriter_log("Debug Mode: ", Fore.GREEN, "ENABLED") - CFG.set_debug_mode(True) - - if continuous: - logger.typewriter_log("Continuous Mode: ", Fore.RED, "ENABLED") - logger.typewriter_log( - "WARNING: ", - Fore.RED, - "Continuous mode is not recommended. It is potentially dangerous and may" - " cause your AI to run forever or carry out actions you would not usually" - " authorise. Use at your own risk.", - ) - CFG.set_continuous_mode(True) - - if continuous_limit: - logger.typewriter_log( - "Continuous Limit: ", Fore.GREEN, f"{continuous_limit}" - ) - CFG.set_continuous_limit(continuous_limit) - - # Check if continuous limit is used without continuous mode - if continuous_limit and not continuous: - raise click.UsageError("--continuous-limit can only be used with --continuous") - - if speak: - logger.typewriter_log("Speak Mode: ", Fore.GREEN, "ENABLED") - CFG.set_speak_mode(True) - - if gpt3only: - logger.typewriter_log("GPT3.5 Only Mode: ", Fore.GREEN, "ENABLED") - CFG.set_smart_llm_model(CFG.fast_llm_model) - - if gpt4only: - logger.typewriter_log("GPT4 Only Mode: ", Fore.GREEN, "ENABLED") - CFG.set_fast_llm_model(CFG.smart_llm_model) - - if memory_type: - supported_memory = get_supported_memory_backends() - chosen = memory_type - if chosen not in supported_memory: - logger.typewriter_log( - "ONLY THE FOLLOWING MEMORY BACKENDS ARE SUPPORTED: ", - Fore.RED, - f"{supported_memory}", - ) - logger.typewriter_log("Defaulting to: ", Fore.YELLOW, CFG.memory_backend) - else: - CFG.memory_backend = chosen - - if skip_reprompt: - logger.typewriter_log("Skip Re-prompt: ", Fore.GREEN, "ENABLED") - CFG.skip_reprompt = True - - if ai_settings_file: - file = ai_settings_file - - # Validate file - (validated, message) = utils.validate_yaml_file(file) - if not validated: - logger.typewriter_log("FAILED FILE VALIDATION", Fore.RED, message) - logger.double_check() - exit(1) - - logger.typewriter_log("Using AI Settings File:", Fore.GREEN, file) - CFG.ai_settings_file = file - CFG.skip_reprompt = True - - if allow_downloads: - logger.typewriter_log("Native Downloading:", Fore.GREEN, "ENABLED") - logger.typewriter_log( - "WARNING: ", - Fore.YELLOW, - f"{Back.LIGHTYELLOW_EX}Auto-GPT will now be able to download and save files to your machine.{Back.RESET} " - + "It is recommended that you monitor any files it downloads carefully.", - ) - logger.typewriter_log( - "WARNING: ", - Fore.YELLOW, - f"{Back.RED + Style.BRIGHT}ALWAYS REMEMBER TO NEVER OPEN FILES YOU AREN'T SURE OF!{Style.RESET_ALL}", - ) - CFG.allow_downloads = True - - if skip_news: - CFG.skip_news = True - - if browser_name: - CFG.selenium_web_browser 
= browser_name diff --git a/spaces/kukuhtw/AutoGPT/ui/utils.py b/spaces/kukuhtw/AutoGPT/ui/utils.py deleted file mode 100644 index 71703e2009afac0582300f5d99a91ddec4119e04..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/ui/utils.py +++ /dev/null @@ -1,31 +0,0 @@ -import os -import re - -def format_directory(directory): - output = [] - def helper(directory, level, output): - files = os.listdir(directory) - for i, item in enumerate(files): - is_folder = os.path.isdir(os.path.join(directory, item)) - joiner = "├── " if i < len(files) - 1 else "└── " - item_html = item + "/" if is_folder else f"{item}" - output.append("│ " * level + joiner + item_html) - if is_folder: - helper(os.path.join(directory, item), level + 1, output) - output.append(os.path.basename(directory) + "/") - helper(directory, 1, output) - return "\n".join(output) - -DOWNLOAD_OUTPUTS_JS = """ -() => { - const a = document.createElement('a'); - a.href = 'file=outputs.zip'; - a.download = 'outputs.zip'; - document.body.appendChild(a); - a.click(); - document.body.removeChild(a); -}""" - -def remove_color(text): - ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])') - return ansi_escape.sub('', text) \ No newline at end of file diff --git a/spaces/kxqt/Expedit-SAM/scripts/amg.py b/spaces/kxqt/Expedit-SAM/scripts/amg.py deleted file mode 100644 index ab2662037e7a6160bfd766bc414233133e357ae4..0000000000000000000000000000000000000000 --- a/spaces/kxqt/Expedit-SAM/scripts/amg.py +++ /dev/null @@ -1,335 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import cv2 # type: ignore - -from segment_anything import SamAutomaticMaskGenerator, sam_model_registry - -import argparse -import json -import os -from typing import Any, Dict, List - -import numpy as np -import matplotlib.pyplot as plt -import time - -parser = argparse.ArgumentParser( - description=( - "Runs automatic mask generation on an input image or directory of images, " - "and outputs masks as either PNGs or COCO-style RLEs. Requires open-cv, " - "as well as pycocotools if saving in RLE format." - ) -) - -parser.add_argument( - "--input", - type=str, - required=True, - help="Path to either a single input image or folder of images.", -) - -parser.add_argument( - "--output", - type=str, - required=True, - help=( - "Path to the directory where masks will be output. Output will be either a folder " - "of PNGs per image or a single json with COCO-style masks." - ), -) - -parser.add_argument( - "--model-type", - type=str, - default="default", - help="The type of model to load, in ['default', 'vit_l', 'vit_b']", -) - -parser.add_argument( - "--checkpoint", - type=str, - required=True, - help="The path to the SAM checkpoint to use for mask generation.", -) - -parser.add_argument("--device", type=str, default="cuda", help="The device to run generation on.") - -parser.add_argument( - "--convert-to-rle", - action="store_true", - help=( - "Save masks as COCO RLEs in a single json instead of as a folder of PNGs. " - "Requires pycocotools." 
- ), -) - -amg_settings = parser.add_argument_group("AMG Settings") - -amg_settings.add_argument( - "--points-per-side", - type=int, - default=None, - help="Generate masks by sampling a grid over the image with this many points to a side.", -) - -amg_settings.add_argument( - "--points-per-batch", - type=int, - default=None, - help="How many input points to process simultaneously in one batch.", -) - -amg_settings.add_argument( - "--pred-iou-thresh", - type=float, - default=None, - help="Exclude masks with a predicted score from the model that is lower than this threshold.", -) - -amg_settings.add_argument( - "--stability-score-thresh", - type=float, - default=None, - help="Exclude masks with a stability score lower than this threshold.", -) - -amg_settings.add_argument( - "--stability-score-offset", - type=float, - default=None, - help="Larger values perturb the mask more when measuring stability score.", -) - -amg_settings.add_argument( - "--box-nms-thresh", - type=float, - default=None, - help="The overlap threshold for excluding a duplicate mask.", -) - -amg_settings.add_argument( - "--crop-n-layers", - type=int, - default=None, - help=( - "If >0, mask generation is run on smaller crops of the image to generate more masks. " - "The value sets how many different scales to crop at." - ), -) - -amg_settings.add_argument( - "--crop-nms-thresh", - type=float, - default=None, - help="The overlap threshold for excluding duplicate masks across different crops.", -) - -amg_settings.add_argument( - "--crop-overlap-ratio", - type=int, - default=None, - help="Larger numbers mean image crops will overlap more.", -) - -amg_settings.add_argument( - "--crop-n-points-downscale-factor", - type=int, - default=None, - help="The number of points-per-side in each layer of crop is reduced by this factor.", -) - -amg_settings.add_argument( - "--min-mask-region-area", - type=int, - default=None, - help=( - "Disconnected mask regions or holes with area smaller than this value " - "in pixels are removed by postprocessing." 
- ), -) - -# add hourglass settings -amg_settings.add_argument( - "--use_hourglass", - action="store_true", - help="Use hourglass method to expedite mask generation.", -) - -amg_settings.add_argument( - "--hourglass_clustering_location", - type=int, - default=6, - help="location of clustering, ranging from [0, num of layers of transformer)" -) - -amg_settings.add_argument( - "--hourglass_num_cluster", - type=int, - default=100, - help="num of clusters, no more than total number of features" -) - -amg_settings.add_argument( - "--hourglass_cluster_iters", - type=int, - default=5, - help="num of iterations in clustering" -) - -amg_settings.add_argument( - "--hourglass_temperture", - type=float, - default=5e-3, - help="temperture in clustering and reconstruction" -) - -amg_settings.add_argument( - "--hourglass_cluster_window_size", - type=int, - default=5, - help="window size in clustering" -) - -amg_settings.add_argument( - "--hourglass_reconstruction_k", - type=int, - default=20, - help="k in token reconstruction layer of hourglass vit" -) - -def write_masks_to_folder(masks: List[Dict[str, Any]], path: str) -> None: - header = "id,area,bbox_x0,bbox_y0,bbox_w,bbox_h,point_input_x,point_input_y,predicted_iou,stability_score,crop_box_x0,crop_box_y0,crop_box_w,crop_box_h" # noqa - metadata = [header] - for i, mask_data in enumerate(masks): - mask = mask_data["segmentation"] - filename = f"{i}.png" - cv2.imwrite(os.path.join(path, filename), mask * 255) - mask_metadata = [ - str(i), - str(mask_data["area"]), - *[str(x) for x in mask_data["bbox"]], - *[str(x) for x in mask_data["point_coords"][0]], - str(mask_data["predicted_iou"]), - str(mask_data["stability_score"]), - *[str(x) for x in mask_data["crop_box"]], - ] - row = ",".join(mask_metadata) - metadata.append(row) - metadata_path = os.path.join(path, "metadata.csv") - with open(metadata_path, "w") as f: - f.write("\n".join(metadata)) - - return - - -def get_amg_kwargs(args): - amg_kwargs = { - "points_per_side": args.points_per_side, - "points_per_batch": args.points_per_batch, - "pred_iou_thresh": args.pred_iou_thresh, - "stability_score_thresh": args.stability_score_thresh, - "stability_score_offset": args.stability_score_offset, - "box_nms_thresh": args.box_nms_thresh, - "crop_n_layers": args.crop_n_layers, - "crop_nms_thresh": args.crop_nms_thresh, - "crop_overlap_ratio": args.crop_overlap_ratio, - "crop_n_points_downscale_factor": args.crop_n_points_downscale_factor, - "min_mask_region_area": args.min_mask_region_area, - } - amg_kwargs = {k: v for k, v in amg_kwargs.items() if v is not None} - return amg_kwargs - - -def get_hourglass_kwargs(args): - hourglass_kwargs = { - "use_hourglass": args.use_hourglass, - "hourglass_clustering_location": args.hourglass_clustering_location, - "hourglass_num_cluster": args.hourglass_num_cluster, - "hourglass_cluster_iters": args.hourglass_cluster_iters, - "hourglass_temperture": args.hourglass_temperture, - "hourglass_cluster_window_size": args.hourglass_cluster_window_size, - "hourglass_reconstruction_k": args.hourglass_reconstruction_k, - } - hourglass_kwargs = {k: v for k, v in hourglass_kwargs.items() if v is not None} - return hourglass_kwargs - - -def show_anns(anns): - if len(anns) == 0: - return - sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True) - ax = plt.gca() - ax.set_autoscale_on(False) - for ann in sorted_anns: - m = ann['segmentation'] - img = np.ones((m.shape[0], m.shape[1], 3)) - color_mask = np.random.random((1, 3)).tolist()[0] - for i in range(3): - img[:,:,i] = 
color_mask[i] - ax.imshow(np.dstack((img, m*0.35))) - - -def main(args: argparse.Namespace) -> None: - print("Loading model...") - hourglass_kwargs = get_hourglass_kwargs(args) - sam = sam_model_registry[args.model_type](checkpoint=args.checkpoint, **hourglass_kwargs) - _ = sam.to(device=args.device) - output_mode = "coco_rle" if args.convert_to_rle else "binary_mask" - amg_kwargs = get_amg_kwargs(args) - generator = SamAutomaticMaskGenerator(sam, output_mode=output_mode, **amg_kwargs) - - if not os.path.isdir(args.input): - targets = [args.input] - else: - targets = [ - f for f in os.listdir(args.input) if not os.path.isdir(os.path.join(args.input, f)) - ] - targets = [os.path.join(args.input, f) for f in targets] - - os.makedirs(args.output, exist_ok=True) - - plt.figure(figsize=(20,20)) - - total_time = 0 - warmup = 0 - for i, t in enumerate(targets): - print(f"Processing '{t}'...") - image = cv2.imread(t) - if image is None: - print(f"Could not load '{t}' as an image, skipping...") - continue - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - - start = time.perf_counter() - masks = generator.generate(image) - eta = time.perf_counter() - start - if i > warmup: - total_time += eta - - base = os.path.basename(t) - base = os.path.splitext(base)[0] - save_base = os.path.join(args.output, base) - if output_mode == "binary_mask": - os.makedirs(save_base, exist_ok=True) - write_masks_to_folder(masks, save_base) - else: - save_file = save_base + ".json" - with open(save_file, "w") as f: - json.dump(masks, f) - - plt.clf() - plt.imshow(image) - show_anns(masks) - plt.axis('off') - plt.savefig(os.path.join(save_base, base + '.png'), bbox_inches='tight', pad_inches=0) - print("Done!") - print(f"Average time per image: {total_time / (len(targets) - warmup)} seconds") - - -if __name__ == "__main__": - args = parser.parse_args() - main(args) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/http_parser.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/http_parser.py deleted file mode 100644 index 5a66ce4b9eec19777800ddc3c0f5e66b2270f9d3..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/http_parser.py +++ /dev/null @@ -1,969 +0,0 @@ -import abc -import asyncio -import collections -import re -import string -import zlib -from contextlib import suppress -from enum import IntEnum -from typing import ( - Any, - Generic, - List, - NamedTuple, - Optional, - Pattern, - Set, - Tuple, - Type, - TypeVar, - Union, - cast, -) - -from multidict import CIMultiDict, CIMultiDictProxy, istr -from yarl import URL - -from . 
import hdrs -from .base_protocol import BaseProtocol -from .helpers import NO_EXTENSIONS, BaseTimerContext -from .http_exceptions import ( - BadHttpMessage, - BadStatusLine, - ContentEncodingError, - ContentLengthError, - InvalidHeader, - LineTooLong, - TransferEncodingError, -) -from .http_writer import HttpVersion, HttpVersion10 -from .log import internal_logger -from .streams import EMPTY_PAYLOAD, StreamReader -from .typedefs import Final, RawHeaders - -try: - import brotli - - HAS_BROTLI = True -except ImportError: # pragma: no cover - HAS_BROTLI = False - - -__all__ = ( - "HeadersParser", - "HttpParser", - "HttpRequestParser", - "HttpResponseParser", - "RawRequestMessage", - "RawResponseMessage", -) - -ASCIISET: Final[Set[str]] = set(string.printable) - -# See https://tools.ietf.org/html/rfc7230#section-3.1.1 -# and https://tools.ietf.org/html/rfc7230#appendix-B -# -# method = token -# tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." / -# "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA -# token = 1*tchar -METHRE: Final[Pattern[str]] = re.compile(r"[!#$%&'*+\-.^_`|~0-9A-Za-z]+") -VERSRE: Final[Pattern[str]] = re.compile(r"HTTP/(\d+).(\d+)") -HDRRE: Final[Pattern[bytes]] = re.compile(rb"[\x00-\x1F\x7F()<>@,;:\[\]={} \t\\\\\"]") - - -class RawRequestMessage(NamedTuple): - method: str - path: str - version: HttpVersion - headers: "CIMultiDictProxy[str]" - raw_headers: RawHeaders - should_close: bool - compression: Optional[str] - upgrade: bool - chunked: bool - url: URL - - -RawResponseMessage = collections.namedtuple( - "RawResponseMessage", - [ - "version", - "code", - "reason", - "headers", - "raw_headers", - "should_close", - "compression", - "upgrade", - "chunked", - ], -) - - -_MsgT = TypeVar("_MsgT", RawRequestMessage, RawResponseMessage) - - -class ParseState(IntEnum): - - PARSE_NONE = 0 - PARSE_LENGTH = 1 - PARSE_CHUNKED = 2 - PARSE_UNTIL_EOF = 3 - - -class ChunkState(IntEnum): - PARSE_CHUNKED_SIZE = 0 - PARSE_CHUNKED_CHUNK = 1 - PARSE_CHUNKED_CHUNK_EOF = 2 - PARSE_MAYBE_TRAILERS = 3 - PARSE_TRAILERS = 4 - - -class HeadersParser: - def __init__( - self, - max_line_size: int = 8190, - max_headers: int = 32768, - max_field_size: int = 8190, - ) -> None: - self.max_line_size = max_line_size - self.max_headers = max_headers - self.max_field_size = max_field_size - - def parse_headers( - self, lines: List[bytes] - ) -> Tuple["CIMultiDictProxy[str]", RawHeaders]: - headers: CIMultiDict[str] = CIMultiDict() - raw_headers = [] - - lines_idx = 1 - line = lines[1] - line_count = len(lines) - - while line: - # Parse initial header name : value pair. 
- try: - bname, bvalue = line.split(b":", 1) - except ValueError: - raise InvalidHeader(line) from None - - bname = bname.strip(b" \t") - bvalue = bvalue.lstrip() - if HDRRE.search(bname): - raise InvalidHeader(bname) - if len(bname) > self.max_field_size: - raise LineTooLong( - "request header name {}".format( - bname.decode("utf8", "xmlcharrefreplace") - ), - str(self.max_field_size), - str(len(bname)), - ) - - header_length = len(bvalue) - - # next line - lines_idx += 1 - line = lines[lines_idx] - - # consume continuation lines - continuation = line and line[0] in (32, 9) # (' ', '\t') - - if continuation: - bvalue_lst = [bvalue] - while continuation: - header_length += len(line) - if header_length > self.max_field_size: - raise LineTooLong( - "request header field {}".format( - bname.decode("utf8", "xmlcharrefreplace") - ), - str(self.max_field_size), - str(header_length), - ) - bvalue_lst.append(line) - - # next line - lines_idx += 1 - if lines_idx < line_count: - line = lines[lines_idx] - if line: - continuation = line[0] in (32, 9) # (' ', '\t') - else: - line = b"" - break - bvalue = b"".join(bvalue_lst) - else: - if header_length > self.max_field_size: - raise LineTooLong( - "request header field {}".format( - bname.decode("utf8", "xmlcharrefreplace") - ), - str(self.max_field_size), - str(header_length), - ) - - bvalue = bvalue.strip() - name = bname.decode("utf-8", "surrogateescape") - value = bvalue.decode("utf-8", "surrogateescape") - - headers.add(name, value) - raw_headers.append((bname, bvalue)) - - return (CIMultiDictProxy(headers), tuple(raw_headers)) - - -class HttpParser(abc.ABC, Generic[_MsgT]): - def __init__( - self, - protocol: Optional[BaseProtocol] = None, - loop: Optional[asyncio.AbstractEventLoop] = None, - limit: int = 2**16, - max_line_size: int = 8190, - max_headers: int = 32768, - max_field_size: int = 8190, - timer: Optional[BaseTimerContext] = None, - code: Optional[int] = None, - method: Optional[str] = None, - readall: bool = False, - payload_exception: Optional[Type[BaseException]] = None, - response_with_body: bool = True, - read_until_eof: bool = False, - auto_decompress: bool = True, - ) -> None: - self.protocol = protocol - self.loop = loop - self.max_line_size = max_line_size - self.max_headers = max_headers - self.max_field_size = max_field_size - self.timer = timer - self.code = code - self.method = method - self.readall = readall - self.payload_exception = payload_exception - self.response_with_body = response_with_body - self.read_until_eof = read_until_eof - - self._lines: List[bytes] = [] - self._tail = b"" - self._upgraded = False - self._payload = None - self._payload_parser: Optional[HttpPayloadParser] = None - self._auto_decompress = auto_decompress - self._limit = limit - self._headers_parser = HeadersParser(max_line_size, max_headers, max_field_size) - - @abc.abstractmethod - def parse_message(self, lines: List[bytes]) -> _MsgT: - pass - - def feed_eof(self) -> Optional[_MsgT]: - if self._payload_parser is not None: - self._payload_parser.feed_eof() - self._payload_parser = None - else: - # try to extract partial message - if self._tail: - self._lines.append(self._tail) - - if self._lines: - if self._lines[-1] != "\r\n": - self._lines.append(b"") - with suppress(Exception): - return self.parse_message(self._lines) - return None - - def feed_data( - self, - data: bytes, - SEP: bytes = b"\r\n", - EMPTY: bytes = b"", - CONTENT_LENGTH: istr = hdrs.CONTENT_LENGTH, - METH_CONNECT: str = hdrs.METH_CONNECT, - SEC_WEBSOCKET_KEY1: istr = 
hdrs.SEC_WEBSOCKET_KEY1, - ) -> Tuple[List[Tuple[_MsgT, StreamReader]], bool, bytes]: - - messages = [] - - if self._tail: - data, self._tail = self._tail + data, b"" - - data_len = len(data) - start_pos = 0 - loop = self.loop - - while start_pos < data_len: - - # read HTTP message (request/response line + headers), \r\n\r\n - # and split by lines - if self._payload_parser is None and not self._upgraded: - pos = data.find(SEP, start_pos) - # consume \r\n - if pos == start_pos and not self._lines: - start_pos = pos + 2 - continue - - if pos >= start_pos: - # line found - self._lines.append(data[start_pos:pos]) - start_pos = pos + 2 - - # \r\n\r\n found - if self._lines[-1] == EMPTY: - try: - msg: _MsgT = self.parse_message(self._lines) - finally: - self._lines.clear() - - def get_content_length() -> Optional[int]: - # payload length - length_hdr = msg.headers.get(CONTENT_LENGTH) - if length_hdr is None: - return None - - try: - length = int(length_hdr) - except ValueError: - raise InvalidHeader(CONTENT_LENGTH) - - if length < 0: - raise InvalidHeader(CONTENT_LENGTH) - - return length - - length = get_content_length() - # do not support old websocket spec - if SEC_WEBSOCKET_KEY1 in msg.headers: - raise InvalidHeader(SEC_WEBSOCKET_KEY1) - - self._upgraded = msg.upgrade - - method = getattr(msg, "method", self.method) - - assert self.protocol is not None - # calculate payload - if ( - (length is not None and length > 0) - or msg.chunked - and not msg.upgrade - ): - payload = StreamReader( - self.protocol, - timer=self.timer, - loop=loop, - limit=self._limit, - ) - payload_parser = HttpPayloadParser( - payload, - length=length, - chunked=msg.chunked, - method=method, - compression=msg.compression, - code=self.code, - readall=self.readall, - response_with_body=self.response_with_body, - auto_decompress=self._auto_decompress, - ) - if not payload_parser.done: - self._payload_parser = payload_parser - elif method == METH_CONNECT: - assert isinstance(msg, RawRequestMessage) - payload = StreamReader( - self.protocol, - timer=self.timer, - loop=loop, - limit=self._limit, - ) - self._upgraded = True - self._payload_parser = HttpPayloadParser( - payload, - method=msg.method, - compression=msg.compression, - readall=True, - auto_decompress=self._auto_decompress, - ) - else: - if ( - getattr(msg, "code", 100) >= 199 - and length is None - and self.read_until_eof - ): - payload = StreamReader( - self.protocol, - timer=self.timer, - loop=loop, - limit=self._limit, - ) - payload_parser = HttpPayloadParser( - payload, - length=length, - chunked=msg.chunked, - method=method, - compression=msg.compression, - code=self.code, - readall=True, - response_with_body=self.response_with_body, - auto_decompress=self._auto_decompress, - ) - if not payload_parser.done: - self._payload_parser = payload_parser - else: - payload = EMPTY_PAYLOAD - - messages.append((msg, payload)) - else: - self._tail = data[start_pos:] - data = EMPTY - break - - # no parser, just store - elif self._payload_parser is None and self._upgraded: - assert not self._lines - break - - # feed payload - elif data and start_pos < data_len: - assert not self._lines - assert self._payload_parser is not None - try: - eof, data = self._payload_parser.feed_data(data[start_pos:]) - except BaseException as exc: - if self.payload_exception is not None: - self._payload_parser.payload.set_exception( - self.payload_exception(str(exc)) - ) - else: - self._payload_parser.payload.set_exception(exc) - - eof = True - data = b"" - - if eof: - start_pos = 0 - data_len 
= len(data) - self._payload_parser = None - continue - else: - break - - if data and start_pos < data_len: - data = data[start_pos:] - else: - data = EMPTY - - return messages, self._upgraded, data - - def parse_headers( - self, lines: List[bytes] - ) -> Tuple[ - "CIMultiDictProxy[str]", RawHeaders, Optional[bool], Optional[str], bool, bool - ]: - """Parses RFC 5322 headers from a stream. - - Line continuations are supported. Returns list of header name - and value pairs. Header name is in upper case. - """ - headers, raw_headers = self._headers_parser.parse_headers(lines) - close_conn = None - encoding = None - upgrade = False - chunked = False - - # keep-alive - conn = headers.get(hdrs.CONNECTION) - if conn: - v = conn.lower() - if v == "close": - close_conn = True - elif v == "keep-alive": - close_conn = False - elif v == "upgrade": - upgrade = True - - # encoding - enc = headers.get(hdrs.CONTENT_ENCODING) - if enc: - enc = enc.lower() - if enc in ("gzip", "deflate", "br"): - encoding = enc - - # chunking - te = headers.get(hdrs.TRANSFER_ENCODING) - if te is not None: - if "chunked" == te.lower(): - chunked = True - else: - raise BadHttpMessage("Request has invalid `Transfer-Encoding`") - - if hdrs.CONTENT_LENGTH in headers: - raise BadHttpMessage( - "Content-Length can't be present with Transfer-Encoding", - ) - - return (headers, raw_headers, close_conn, encoding, upgrade, chunked) - - def set_upgraded(self, val: bool) -> None: - """Set connection upgraded (to websocket) mode. - - :param bool val: new state. - """ - self._upgraded = val - - -class HttpRequestParser(HttpParser[RawRequestMessage]): - """Read request status line. - - Exception .http_exceptions.BadStatusLine - could be raised in case of any errors in status line. - Returns RawRequestMessage. 
- """ - - def parse_message(self, lines: List[bytes]) -> RawRequestMessage: - # request line - line = lines[0].decode("utf-8", "surrogateescape") - try: - method, path, version = line.split(None, 2) - except ValueError: - raise BadStatusLine(line) from None - - if len(path) > self.max_line_size: - raise LineTooLong( - "Status line is too long", str(self.max_line_size), str(len(path)) - ) - - # method - if not METHRE.match(method): - raise BadStatusLine(method) - - # version - try: - if version.startswith("HTTP/"): - n1, n2 = version[5:].split(".", 1) - version_o = HttpVersion(int(n1), int(n2)) - else: - raise BadStatusLine(version) - except Exception: - raise BadStatusLine(version) - - if method == "CONNECT": - # authority-form, - # https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.3 - url = URL.build(authority=path, encoded=True) - elif path.startswith("/"): - # origin-form, - # https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.1 - path_part, _hash_separator, url_fragment = path.partition("#") - path_part, _question_mark_separator, qs_part = path_part.partition("?") - - # NOTE: `yarl.URL.build()` is used to mimic what the Cython-based - # NOTE: parser does, otherwise it results into the same - # NOTE: HTTP Request-Line input producing different - # NOTE: `yarl.URL()` objects - url = URL.build( - path=path_part, - query_string=qs_part, - fragment=url_fragment, - encoded=True, - ) - else: - # absolute-form for proxy maybe, - # https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.2 - url = URL(path, encoded=True) - - # read headers - ( - headers, - raw_headers, - close, - compression, - upgrade, - chunked, - ) = self.parse_headers(lines) - - if close is None: # then the headers weren't set in the request - if version_o <= HttpVersion10: # HTTP 1.0 must asks to not close - close = True - else: # HTTP 1.1 must ask to close. - close = False - - return RawRequestMessage( - method, - path, - version_o, - headers, - raw_headers, - close, - compression, - upgrade, - chunked, - url, - ) - - -class HttpResponseParser(HttpParser[RawResponseMessage]): - """Read response status line and headers. - - BadStatusLine could be raised in case of any errors in status line. - Returns RawResponseMessage. 
- """ - - def parse_message(self, lines: List[bytes]) -> RawResponseMessage: - line = lines[0].decode("utf-8", "surrogateescape") - try: - version, status = line.split(None, 1) - except ValueError: - raise BadStatusLine(line) from None - - try: - status, reason = status.split(None, 1) - except ValueError: - reason = "" - - if len(reason) > self.max_line_size: - raise LineTooLong( - "Status line is too long", str(self.max_line_size), str(len(reason)) - ) - - # version - match = VERSRE.match(version) - if match is None: - raise BadStatusLine(line) - version_o = HttpVersion(int(match.group(1)), int(match.group(2))) - - # The status code is a three-digit number - try: - status_i = int(status) - except ValueError: - raise BadStatusLine(line) from None - - if status_i > 999: - raise BadStatusLine(line) - - # read headers - ( - headers, - raw_headers, - close, - compression, - upgrade, - chunked, - ) = self.parse_headers(lines) - - if close is None: - close = version_o <= HttpVersion10 - - return RawResponseMessage( - version_o, - status_i, - reason.strip(), - headers, - raw_headers, - close, - compression, - upgrade, - chunked, - ) - - -class HttpPayloadParser: - def __init__( - self, - payload: StreamReader, - length: Optional[int] = None, - chunked: bool = False, - compression: Optional[str] = None, - code: Optional[int] = None, - method: Optional[str] = None, - readall: bool = False, - response_with_body: bool = True, - auto_decompress: bool = True, - ) -> None: - self._length = 0 - self._type = ParseState.PARSE_NONE - self._chunk = ChunkState.PARSE_CHUNKED_SIZE - self._chunk_size = 0 - self._chunk_tail = b"" - self._auto_decompress = auto_decompress - self.done = False - - # payload decompression wrapper - if response_with_body and compression and self._auto_decompress: - real_payload: Union[StreamReader, DeflateBuffer] = DeflateBuffer( - payload, compression - ) - else: - real_payload = payload - - # payload parser - if not response_with_body: - # don't parse payload if it's not expected to be received - self._type = ParseState.PARSE_NONE - real_payload.feed_eof() - self.done = True - - elif chunked: - self._type = ParseState.PARSE_CHUNKED - elif length is not None: - self._type = ParseState.PARSE_LENGTH - self._length = length - if self._length == 0: - real_payload.feed_eof() - self.done = True - else: - if readall and code != 204: - self._type = ParseState.PARSE_UNTIL_EOF - elif method in ("PUT", "POST"): - internal_logger.warning( # pragma: no cover - "Content-Length or Transfer-Encoding header is required" - ) - self._type = ParseState.PARSE_NONE - real_payload.feed_eof() - self.done = True - - self.payload = real_payload - - def feed_eof(self) -> None: - if self._type == ParseState.PARSE_UNTIL_EOF: - self.payload.feed_eof() - elif self._type == ParseState.PARSE_LENGTH: - raise ContentLengthError( - "Not enough data for satisfy content length header." - ) - elif self._type == ParseState.PARSE_CHUNKED: - raise TransferEncodingError( - "Not enough data for satisfy transfer length header." 
- ) - - def feed_data( - self, chunk: bytes, SEP: bytes = b"\r\n", CHUNK_EXT: bytes = b";" - ) -> Tuple[bool, bytes]: - # Read specified amount of bytes - if self._type == ParseState.PARSE_LENGTH: - required = self._length - chunk_len = len(chunk) - - if required >= chunk_len: - self._length = required - chunk_len - self.payload.feed_data(chunk, chunk_len) - if self._length == 0: - self.payload.feed_eof() - return True, b"" - else: - self._length = 0 - self.payload.feed_data(chunk[:required], required) - self.payload.feed_eof() - return True, chunk[required:] - - # Chunked transfer encoding parser - elif self._type == ParseState.PARSE_CHUNKED: - if self._chunk_tail: - chunk = self._chunk_tail + chunk - self._chunk_tail = b"" - - while chunk: - - # read next chunk size - if self._chunk == ChunkState.PARSE_CHUNKED_SIZE: - pos = chunk.find(SEP) - if pos >= 0: - i = chunk.find(CHUNK_EXT, 0, pos) - if i >= 0: - size_b = chunk[:i] # strip chunk-extensions - else: - size_b = chunk[:pos] - - try: - size = int(bytes(size_b), 16) - except ValueError: - exc = TransferEncodingError( - chunk[:pos].decode("ascii", "surrogateescape") - ) - self.payload.set_exception(exc) - raise exc from None - - chunk = chunk[pos + 2 :] - if size == 0: # eof marker - self._chunk = ChunkState.PARSE_MAYBE_TRAILERS - else: - self._chunk = ChunkState.PARSE_CHUNKED_CHUNK - self._chunk_size = size - self.payload.begin_http_chunk_receiving() - else: - self._chunk_tail = chunk - return False, b"" - - # read chunk and feed buffer - if self._chunk == ChunkState.PARSE_CHUNKED_CHUNK: - required = self._chunk_size - chunk_len = len(chunk) - - if required > chunk_len: - self._chunk_size = required - chunk_len - self.payload.feed_data(chunk, chunk_len) - return False, b"" - else: - self._chunk_size = 0 - self.payload.feed_data(chunk[:required], required) - chunk = chunk[required:] - self._chunk = ChunkState.PARSE_CHUNKED_CHUNK_EOF - self.payload.end_http_chunk_receiving() - - # toss the CRLF at the end of the chunk - if self._chunk == ChunkState.PARSE_CHUNKED_CHUNK_EOF: - if chunk[:2] == SEP: - chunk = chunk[2:] - self._chunk = ChunkState.PARSE_CHUNKED_SIZE - else: - self._chunk_tail = chunk - return False, b"" - - # if stream does not contain trailer, after 0\r\n - # we should get another \r\n otherwise - # trailers needs to be skiped until \r\n\r\n - if self._chunk == ChunkState.PARSE_MAYBE_TRAILERS: - head = chunk[:2] - if head == SEP: - # end of stream - self.payload.feed_eof() - return True, chunk[2:] - # Both CR and LF, or only LF may not be received yet. It is - # expected that CRLF or LF will be shown at the very first - # byte next time, otherwise trailers should come. The last - # CRLF which marks the end of response might not be - # contained in the same TCP segment which delivered the - # size indicator. 
- if not head: - return False, b"" - if head == SEP[:1]: - self._chunk_tail = head - return False, b"" - self._chunk = ChunkState.PARSE_TRAILERS - - # read and discard trailer up to the CRLF terminator - if self._chunk == ChunkState.PARSE_TRAILERS: - pos = chunk.find(SEP) - if pos >= 0: - chunk = chunk[pos + 2 :] - self._chunk = ChunkState.PARSE_MAYBE_TRAILERS - else: - self._chunk_tail = chunk - return False, b"" - - # Read all bytes until eof - elif self._type == ParseState.PARSE_UNTIL_EOF: - self.payload.feed_data(chunk, len(chunk)) - - return False, b"" - - -class DeflateBuffer: - """DeflateStream decompress stream and feed data into specified stream.""" - - decompressor: Any - - def __init__(self, out: StreamReader, encoding: Optional[str]) -> None: - self.out = out - self.size = 0 - self.encoding = encoding - self._started_decoding = False - - if encoding == "br": - if not HAS_BROTLI: # pragma: no cover - raise ContentEncodingError( - "Can not decode content-encoding: brotli (br). " - "Please install `Brotli`" - ) - - class BrotliDecoder: - # Supports both 'brotlipy' and 'Brotli' packages - # since they share an import name. The top branches - # are for 'brotlipy' and bottom branches for 'Brotli' - def __init__(self) -> None: - self._obj = brotli.Decompressor() - - def decompress(self, data: bytes) -> bytes: - if hasattr(self._obj, "decompress"): - return cast(bytes, self._obj.decompress(data)) - return cast(bytes, self._obj.process(data)) - - def flush(self) -> bytes: - if hasattr(self._obj, "flush"): - return cast(bytes, self._obj.flush()) - return b"" - - self.decompressor = BrotliDecoder() - else: - zlib_mode = 16 + zlib.MAX_WBITS if encoding == "gzip" else zlib.MAX_WBITS - self.decompressor = zlib.decompressobj(wbits=zlib_mode) - - def set_exception(self, exc: BaseException) -> None: - self.out.set_exception(exc) - - def feed_data(self, chunk: bytes, size: int) -> None: - if not size: - return - - self.size += size - - # RFC1950 - # bits 0..3 = CM = 0b1000 = 8 = "deflate" - # bits 4..7 = CINFO = 1..7 = windows size. - if ( - not self._started_decoding - and self.encoding == "deflate" - and chunk[0] & 0xF != 8 - ): - # Change the decoder to decompress incorrectly compressed data - # Actually we should issue a warning about non-RFC-compliant data. 
- self.decompressor = zlib.decompressobj(wbits=-zlib.MAX_WBITS) - - try: - chunk = self.decompressor.decompress(chunk) - except Exception: - raise ContentEncodingError( - "Can not decode content-encoding: %s" % self.encoding - ) - - self._started_decoding = True - - if chunk: - self.out.feed_data(chunk, len(chunk)) - - def feed_eof(self) -> None: - chunk = self.decompressor.flush() - - if chunk or self.size > 0: - self.out.feed_data(chunk, len(chunk)) - if self.encoding == "deflate" and not self.decompressor.eof: - raise ContentEncodingError("deflate") - - self.out.feed_eof() - - def begin_http_chunk_receiving(self) -> None: - self.out.begin_http_chunk_receiving() - - def end_http_chunk_receiving(self) -> None: - self.out.end_http_chunk_receiving() - - -HttpRequestParserPy = HttpRequestParser -HttpResponseParserPy = HttpResponseParser -RawRequestMessagePy = RawRequestMessage -RawResponseMessagePy = RawResponseMessage - -try: - if not NO_EXTENSIONS: - from ._http_parser import ( # type: ignore[import,no-redef] - HttpRequestParser, - HttpResponseParser, - RawRequestMessage, - RawResponseMessage, - ) - - HttpRequestParserC = HttpRequestParser - HttpResponseParserC = HttpResponseParser - RawRequestMessageC = RawRequestMessage - RawResponseMessageC = RawResponseMessage -except ImportError: # pragma: no cover - pass diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/ADCD 1.13 Torrent.md b/spaces/lincquiQcaudo/Top-20-Diffusion/ADCD 1.13 Torrent.md deleted file mode 100644 index 76f217bc573834fd1ddb50ca8311a98714fba2ba..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/ADCD 1.13 Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

              ADCD 1.13 Torrent


Download Zip: https://bytlly.com/2uGvCa



              -
              -
              -
              -

              diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/IWisoft Flash SWF To Video Converter 3.4.md b/spaces/lincquiQcaudo/Top-20-Diffusion/IWisoft Flash SWF To Video Converter 3.4.md deleted file mode 100644 index dda93fe4d3c15d2db6d916807e02b36cb6a2f585..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/IWisoft Flash SWF To Video Converter 3.4.md +++ /dev/null @@ -1,24 +0,0 @@ - -

              How to Convert Flash SWF to Video with iWisoft Flash SWF to Video Converter 3.4

              -

If you have Flash SWF files that you want to convert to video formats such as AVI, MPEG, WMV, MOV, or MP4, you need a capable, easy-to-use tool. iWisoft Flash SWF to Video Converter 3.4 is one of the best options for SWF-to-video conversion: it converts Macromedia Flash SWF files to video, audio, and image files in most popular formats with excellent quality and speed.

              -

              iWisoft Flash SWF to Video Converter 3.4


Download: https://bytlly.com/2uGwmM



              -

              In this article, we will show you how to use iWisoft Flash SWF to Video Converter 3.4 to convert Flash SWF to video in a few simple steps.

              -

              Step 1: Download and install iWisoft Flash SWF to Video Converter 3.4

              -

              You can download iWisoft Flash SWF to Video Converter 3.4 from the official website: https://www.flash-swf-converter.com/. It is a free trial version that allows you to convert up to 30 seconds of each SWF file. If you want to convert longer SWF files, you need to buy the full version for $49.

              -

              After downloading the setup file, run it and follow the instructions to install the software on your computer.

              -

              Step 2: Add SWF files to the converter

              -

              Launch iWisoft Flash SWF to Video Converter 3.4 and click the "Add" button on the toolbar to browse for and select the SWF files that you want to convert. You can also drag and drop SWF files from Windows Explorer into the converter. You can add multiple SWF files and convert them in batch mode.

              -

              You can preview the SWF files in the built-in Flash player and take snapshots of any frame. You can also trim and crop the SWF files by clicking the "Edit" button on the toolbar.

              -

              Step 3: Choose output format and settings

              -

              Click the "Profile" drop-down list and choose the output format that you want, such as AVI, MPEG, WMV, MOV, MP4, etc. You can also choose a preset profile for specific devices, such as iPod, iPhone, PSP, Zune, etc.

              -

              -

              Click the "Settings" button next to the "Profile" list to customize the output video and audio parameters, such as type, size, bit rate, frame rate, aspect ratio, sample frequency rate, channel mode and volume.

              -

              You can also add watermarks or adjust the background color of the output video by clicking the "Effect" button on the toolbar.

              -

              Step 4: Start converting SWF to video

              -

              Click the "Browse" button at the bottom of the interface and choose a folder where you want to save the converted video files. Then click the "Start" button on the toolbar to begin converting SWF to video.

              -

              The conversion process will be shown in a progress bar. You can pause or stop it at any time. When the conversion is done, you can open the output folder and enjoy your videos.

              -

              Conclusion

              -

              iWisoft Flash SWF to Video Converter 3.4 is a powerful and easy-to-use tool that can convert Flash SWF to video, audio, and picture files in most popular formats with high quality and fast speed. It supports batch conversion and offers many useful features, such as editing, watermarking, cropping, and trimming. It is compatible with all Windows systems and supports all kinds of Flash movies, including ActionScript, movie clips, and sound.

              -

              If you are looking for a reliable and efficient way to convert Flash SWF to video formats, you should give iWisoft Flash SWF to Video Converter 3.4 a try.

              -
              -
              \ No newline at end of file diff --git a/spaces/ljjggr/bingo/src/lib/isomorphic/index.ts b/spaces/ljjggr/bingo/src/lib/isomorphic/index.ts deleted file mode 100644 index 738dc92f74079ab762d584fb7422a8c8c3b61547..0000000000000000000000000000000000000000 --- a/spaces/ljjggr/bingo/src/lib/isomorphic/index.ts +++ /dev/null @@ -1,17 +0,0 @@ -'use client' - -import Default from './browser' - -let exportsModel: any = {} - -if (process.browser) { - Object.assign(exportsModel, require('./browser').default) -} else { - Object.assign(exportsModel, require('./node').default) -} - -export default exportsModel! as typeof Default - -export const fetch: typeof Default.fetch = exportsModel!.fetch -export const WebSocket: typeof Default.WebSocket = exportsModel!.WebSocket -export const debug: typeof Default.debug = exportsModel!.debug diff --git a/spaces/lkeab/transfiner/configs/common/models/panoptic_fpn.py b/spaces/lkeab/transfiner/configs/common/models/panoptic_fpn.py deleted file mode 100644 index 88f55d2ce9db62e61445d6a3700067d9d864ecae..0000000000000000000000000000000000000000 --- a/spaces/lkeab/transfiner/configs/common/models/panoptic_fpn.py +++ /dev/null @@ -1,20 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling import PanopticFPN -from detectron2.modeling.meta_arch.semantic_seg import SemSegFPNHead - -from .mask_rcnn_fpn import model - -model._target_ = PanopticFPN -model.sem_seg_head = L(SemSegFPNHead)( - input_shape={ - f: L(ShapeSpec)(stride=s, channels="${....backbone.out_channels}") - for f, s in zip(["p2", "p3", "p4", "p5"], [4, 8, 16, 32]) - }, - ignore_value=255, - num_classes=54, # COCO stuff + 1 - conv_dims=128, - common_stride=4, - loss_weight=0.5, - norm="GN", -) diff --git a/spaces/lusea/rvc-Qinggan/config.py b/spaces/lusea/rvc-Qinggan/config.py deleted file mode 100644 index 040a64d2c5ce4d7802bdf7f69321483b81008f08..0000000000000000000000000000000000000000 --- a/spaces/lusea/rvc-Qinggan/config.py +++ /dev/null @@ -1,106 +0,0 @@ -import argparse -import torch -from multiprocessing import cpu_count - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.colab, - self.noparallel, - self.noautoopen, - self.api - ) = self.arg_parse() - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument( - "--pycmd", type=str, default="python", help="Python command" - ) - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - parser.add_argument("--api", action="store_true", help="Launch with api") - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - cmd_opts.api - ) - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in 
self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16系/10系显卡和P40强制单精度") - self.is_half = False - - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - elif torch.backends.mps.is_available(): - print("没有发现支持的N卡, 使用MPS进行推理") - self.device = "mps" - self.is_half = False - else: - print("没有发现支持的N卡, 使用CPU进行推理") - self.device = "cpu" - self.is_half = False - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max diff --git a/spaces/ma-xu/LIVE/thrust/thrust/detail/allocator/destroy_range.h b/spaces/ma-xu/LIVE/thrust/thrust/detail/allocator/destroy_range.h deleted file mode 100644 index bf00037cecb06d17aef1125138fdfcbbcc242655..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/detail/allocator/destroy_range.h +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -namespace thrust -{ -namespace detail -{ - -template -__host__ __device__ - inline void destroy_range(Allocator &a, Pointer p, Size n); - -} // end detail -} // end thrust - -#include - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/detail/cstdint.h b/spaces/ma-xu/LIVE/thrust/thrust/detail/cstdint.h deleted file mode 100644 index 248390a528d5885a2a6f00e6a34cec5185cfbdcf..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/detail/cstdint.h +++ /dev/null @@ -1,79 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#if (THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_GCC) || (THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_CLANG) -#include -#endif - -namespace thrust -{ -namespace detail -{ - -#if (THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_MSVC) - -#if (_MSC_VER < 1300) - typedef signed char int8_t; - typedef signed short int16_t; - typedef signed int int32_t; - typedef unsigned char uint8_t; - typedef unsigned short uint16_t; - typedef unsigned int uint32_t; -#else - typedef signed __int8 int8_t; - typedef signed __int16 int16_t; - typedef signed __int32 int32_t; - typedef unsigned __int8 uint8_t; - typedef unsigned __int16 uint16_t; - typedef unsigned __int32 uint32_t; -#endif -typedef signed __int64 int64_t; -typedef unsigned __int64 uint64_t; - -#else - -typedef ::int8_t int8_t; -typedef ::int16_t int16_t; -typedef ::int32_t int32_t; -typedef ::int64_t int64_t; -typedef ::uint8_t uint8_t; -typedef ::uint16_t uint16_t; -typedef ::uint32_t uint32_t; -typedef ::uint64_t uint64_t; - -#endif - - -// an oracle to tell us how to define intptr_t -template struct divine_intptr_t; -template struct divine_uintptr_t; - -// 32b platforms -template<> struct divine_intptr_t<4> { typedef thrust::detail::int32_t type; }; -template<> struct divine_uintptr_t<4> { typedef thrust::detail::uint32_t type; }; - -// 64b platforms -template<> struct divine_intptr_t<8> { typedef thrust::detail::int64_t type; }; -template<> struct divine_uintptr_t<8> { typedef thrust::detail::uint64_t type; }; - -typedef divine_intptr_t<>::type intptr_t; -typedef divine_uintptr_t<>::type uintptr_t; - -} // end detail -} // end thrust - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/async/reduce.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/async/reduce.h deleted file mode 100644 index 906928b27f3107a72c68b57a6c532abe8e2af254..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/async/reduce.h +++ /dev/null @@ -1,350 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. 
IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ - -// TODO: Optimize for thrust::plus - -// TODO: Move into system::cuda - -#pragma once - -#include -#include - -#if THRUST_CPP_DIALECT >= 2014 - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC - -#include - -#include -#include -#include -#include -#include -#include - -#include - -namespace thrust -{ - -namespace system { namespace cuda { namespace detail -{ - -template < - typename DerivedPolicy -, typename ForwardIt, typename Size, typename T, typename BinaryOp -> -auto async_reduce_n( - execution_policy& policy -, ForwardIt first -, Size n -, T init -, BinaryOp op -) -> unique_eager_future> -{ - using U = remove_cvref_t; - - auto const device_alloc = get_async_device_allocator(policy); - - using pointer - = typename thrust::detail::allocator_traits:: - template rebind_traits::pointer; - - unique_eager_future_promise_pair fp; - - // Determine temporary device storage requirements. - - size_t tmp_size = 0; - thrust::cuda_cub::throw_on_error( - cub::DeviceReduce::Reduce( - nullptr - , tmp_size - , first - , static_cast(nullptr) - , n - , op - , init - , nullptr // Null stream, just for sizing. - , THRUST_DEBUG_SYNC_FLAG - ) - , "after reduction sizing" - ); - - // Allocate temporary storage. - - auto content = uninitialized_allocate_unique_n( - device_alloc, sizeof(U) + tmp_size - ); - - // The array was dynamically allocated, so we assume that it's suitably - // aligned for any type of data. `malloc`/`cudaMalloc`/`new`/`std::allocator` - // make this guarantee. - auto const content_ptr = content.get(); - U* const ret_ptr = thrust::detail::aligned_reinterpret_cast( - raw_pointer_cast(content_ptr) - ); - void* const tmp_ptr = static_cast( - raw_pointer_cast(content_ptr + sizeof(U)) - ); - - // Set up stream with dependencies. - - cudaStream_t const user_raw_stream = thrust::cuda_cub::stream(policy); - - if (thrust::cuda_cub::default_stream() != user_raw_stream) - { - fp = make_dependent_future( - [] (decltype(content) const& c) - { - return pointer( - thrust::detail::aligned_reinterpret_cast( - raw_pointer_cast(c.get()) - ) - ); - } - , std::tuple_cat( - std::make_tuple( - std::move(content) - , unique_stream(nonowning, user_raw_stream) - ) - , extract_dependencies( - std::move(thrust::detail::derived_cast(policy)) - ) - ) - ); - } - else - { - fp = make_dependent_future( - [] (decltype(content) const& c) - { - return pointer( - thrust::detail::aligned_reinterpret_cast( - raw_pointer_cast(c.get()) - ) - ); - } - , std::tuple_cat( - std::make_tuple( - std::move(content) - ) - , extract_dependencies( - std::move(thrust::detail::derived_cast(policy)) - ) - ) - ); - } - - // Run reduction. 
- - thrust::cuda_cub::throw_on_error( - cub::DeviceReduce::Reduce( - tmp_ptr - , tmp_size - , first - , ret_ptr - , n - , op - , init - , fp.future.stream().native_handle() - , THRUST_DEBUG_SYNC_FLAG - ) - , "after reduction launch" - ); - - return std::move(fp.future); -} - -}}} // namespace system::cuda::detail - -namespace cuda_cub -{ - -// ADL entry point. -template < - typename DerivedPolicy -, typename ForwardIt, typename Sentinel, typename T, typename BinaryOp -> -auto async_reduce( - execution_policy& policy -, ForwardIt first -, Sentinel last -, T init -, BinaryOp op -) -THRUST_RETURNS( - thrust::system::cuda::detail::async_reduce_n( - policy, first, distance(first, last), init, op - ) -) - -} // cuda_cub - -/////////////////////////////////////////////////////////////////////////////// - -namespace system { namespace cuda { namespace detail -{ - -template < - typename DerivedPolicy -, typename ForwardIt, typename Size, typename OutputIt -, typename T, typename BinaryOp -> -auto async_reduce_into_n( - execution_policy& policy -, ForwardIt first -, Size n -, OutputIt output -, T init -, BinaryOp op -) -> unique_eager_event -{ - using U = remove_cvref_t; - - auto const device_alloc = get_async_device_allocator(policy); - - unique_eager_event e; - - // Determine temporary device storage requirements. - - size_t tmp_size = 0; - thrust::cuda_cub::throw_on_error( - cub::DeviceReduce::Reduce( - nullptr - , tmp_size - , first - , static_cast(nullptr) - , n - , op - , init - , nullptr // Null stream, just for sizing. - , THRUST_DEBUG_SYNC_FLAG - ) - , "after reduction sizing" - ); - - // Allocate temporary storage. - - auto content = uninitialized_allocate_unique_n( - device_alloc, tmp_size - ); - - // The array was dynamically allocated, so we assume that it's suitably - // aligned for any type of data. `malloc`/`cudaMalloc`/`new`/`std::allocator` - // make this guarantee. - auto const content_ptr = content.get(); - - void* const tmp_ptr = static_cast( - raw_pointer_cast(content_ptr) - ); - - // Set up stream with dependencies. - - cudaStream_t const user_raw_stream = thrust::cuda_cub::stream(policy); - - if (thrust::cuda_cub::default_stream() != user_raw_stream) - { - e = make_dependent_event( - std::tuple_cat( - std::make_tuple( - std::move(content) - , unique_stream(nonowning, user_raw_stream) - ) - , extract_dependencies( - std::move(thrust::detail::derived_cast(policy)) - ) - ) - ); - } - else - { - e = make_dependent_event( - std::tuple_cat( - std::make_tuple( - std::move(content) - ) - , extract_dependencies( - std::move(thrust::detail::derived_cast(policy)) - ) - ) - ); - } - - // Run reduction. - - thrust::cuda_cub::throw_on_error( - cub::DeviceReduce::Reduce( - tmp_ptr - , tmp_size - , first - , output - , n - , op - , init - , e.stream().native_handle() - , THRUST_DEBUG_SYNC_FLAG - ) - , "after reduction launch" - ); - - return e; -} - -}}} // namespace system::cuda::detail - -namespace cuda_cub -{ - -// ADL entry point. 
-template < - typename DerivedPolicy -, typename ForwardIt, typename Sentinel, typename OutputIt -, typename T, typename BinaryOp -> -auto async_reduce_into( - execution_policy& policy -, ForwardIt first -, Sentinel last -, OutputIt output -, T init -, BinaryOp op -) -THRUST_RETURNS( - thrust::system::cuda::detail::async_reduce_into_n( - policy, first, distance(first, last), output, init, op - ) -) - -} // cuda_cub - -} // end namespace thrust - -#endif // THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC - -#endif - diff --git a/spaces/macaodha/batdetect2/bat_detect/utils/wavfile.py b/spaces/macaodha/batdetect2/bat_detect/utils/wavfile.py deleted file mode 100644 index a6715b0b1d4bc36d54733608301ee85e8770adc4..0000000000000000000000000000000000000000 --- a/spaces/macaodha/batdetect2/bat_detect/utils/wavfile.py +++ /dev/null @@ -1,291 +0,0 @@ -""" -Module to read / write wav files using numpy arrays - -Functions ---------- -`read`: Return the sample rate (in samples/sec) and data from a WAV file. - -`write`: Write a numpy array as a WAV file. - -""" -from __future__ import division, print_function, absolute_import - -import sys -import numpy -import struct -import warnings -import os - - -class WavFileWarning(UserWarning): - pass - -_big_endian = False - -WAVE_FORMAT_PCM = 0x0001 -WAVE_FORMAT_IEEE_FLOAT = 0x0003 -WAVE_FORMAT_EXTENSIBLE = 0xfffe -KNOWN_WAVE_FORMATS = (WAVE_FORMAT_PCM, WAVE_FORMAT_IEEE_FLOAT) - -# assumes file pointer is immediately -# after the 'fmt ' id - - -def _read_fmt_chunk(fid): - if _big_endian: - fmt = '>' - else: - fmt = '<' - res = struct.unpack(fmt+'iHHIIHH',fid.read(20)) - size, comp, noc, rate, sbytes, ba, bits = res - if comp not in KNOWN_WAVE_FORMATS or size > 16: - comp = WAVE_FORMAT_PCM - warnings.warn("Unknown wave file format", WavFileWarning) - if size > 16: - fid.read(size - 16) - - return size, comp, noc, rate, sbytes, ba, bits - - -# assumes file pointer is immediately -# after the 'data' id -def _read_data_chunk(fid, comp, noc, bits, mmap=False): - if _big_endian: - fmt = '>i' - else: - fmt = ' 1: - data = data.reshape(-1,noc) - return data - - -def _skip_unknown_chunk(fid): - if _big_endian: - fmt = '>i' - else: - fmt = '' or (data.dtype.byteorder == '=' and sys.byteorder == 'big'): - data = data.byteswap() - _array_tofile(fid, data) - - # Determine file size and place it in correct - # position at start of the file (replacing the 4 bytes of zeros) - size = fid.tell() - fid.seek(4) - fid.write(struct.pack('= 3: - def _array_tofile(fid, data): - # ravel gives a c-contiguous buffer - fid.write(data.ravel().view('b').data) -else: - def _array_tofile(fid, data): - fid.write(data.tostring()) diff --git a/spaces/magulux/openai-reverse-proxy-3/Dockerfile b/spaces/magulux/openai-reverse-proxy-3/Dockerfile deleted file mode 100644 index 5830056c19152565b822cb28a4cd554711cfad51..0000000000000000000000000000000000000000 --- a/spaces/magulux/openai-reverse-proxy-3/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18 - -WORKDIR /app - -RUN npm install express express-http-proxy - -COPY . . 
- -EXPOSE 8000 - -CMD [ "node", "server.js" ] \ No newline at end of file diff --git a/spaces/manhkhanhUIT/BOPBTL/Global/models/__init__.py b/spaces/manhkhanhUIT/BOPBTL/Global/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/mareloraby/topic2poem/README.md b/spaces/mareloraby/topic2poem/README.md deleted file mode 100644 index e18868226023ba177a9a60738e9883287ba376df..0000000000000000000000000000000000000000 --- a/spaces/mareloraby/topic2poem/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Topic2poem -emoji: 💻 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.2 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/matthoffner/monacopilot/app/layout.tsx b/spaces/matthoffner/monacopilot/app/layout.tsx deleted file mode 100644 index f4d723604c03e5ad6de2c8a71b964a4b81ee41e7..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/monacopilot/app/layout.tsx +++ /dev/null @@ -1,16 +0,0 @@ -export const metadata = { - title: 'monacopilot', - description: 'monaco-editor and copilot', -} - -export default function RootLayout({ - children, -}: { - children: React.ReactNode -}) { - return ( - - {children} - - ) -} diff --git a/spaces/merve/data-leak/source/uncertainty-calibration/init.js b/spaces/merve/data-leak/source/uncertainty-calibration/init.js deleted file mode 100644 index d23a4fecea1bfa4fae6557043d8053dc3acc29ce..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/source/uncertainty-calibration/init.js +++ /dev/null @@ -1,36 +0,0 @@ -window.thresholds = [0, 0.2, 0.4, 0.6, 0.8, 1]; -window.emojis = ['☀️','🌧️']; -window.constant_score = 0.5; - -window.ttSel = d3.select('body').selectAppend('div.tooltip.tooltip-hidden') - - -window.init = function(){ - - var graphSel = d3.select('#graph') - var width = height = graphSel.node().offsetWidth - if (innerWidth <= 925){ - width = innerWidth - height = innerHeight*.65 - window.isMobile = true - } - fig_height = height/2 - fig_width = width - - - window.util = window.initUtil() - window.weatherGraph = window.drawWeatherGraph(graphSel, fig_height, fig_width); - window.calibrationCurve = window.drawCalibrationCurve(graphSel, fig_height, fig_width); - // window.calibrationSlider = window.drawCalibrationSlider(weatherGraph, calibrationCurve, fig_width/2) - // window.modelRemapper = window.drawModelRemapping(fig_width/2); - - - window.slides = window.drawSlides() - weatherGraph.renderThresholds() - -} - -window.init() - - - diff --git a/spaces/merve/fill-in-the-blank/README.md b/spaces/merve/fill-in-the-blank/README.md deleted file mode 100644 index ee91c5d4a6560acfe4003da561f54b05d20da76d..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: fill-in-the-blank -emoji: 🪄 -colorFrom: green -colorTo: purple -sdk: static -pinned: false -license: apache-2.0 -app_file: public/fill-in-the-blank/index.html ---- diff --git a/spaces/merve/fill-in-the-blank/source/third_party/seedrandom.min.js b/spaces/merve/fill-in-the-blank/source/third_party/seedrandom.min.js deleted file mode 100644 index 44073008bfb9d3ef533091d4b72db165c8071e84..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/third_party/seedrandom.min.js +++ /dev/null @@ -1,2 +0,0 @@ -// https://github.com/davidbau/seedrandom 
Copyright 2019 David Bau -!function(a,b){var l,c=eval("this"),d=256,g="random",h=b.pow(d,6),i=b.pow(2,52),j=2*i,k=d-1;function m(r,t,e){var u=[],f=q(function n(r,t){var e,o=[],i=typeof r;if(t&&"object"==i)for(e in r)try{o.push(n(r[e],t-1))}catch(n){}return o.length?o:"string"==i?r:r+"\0"}((t=1==t?{entropy:!0}:t||{}).entropy?[r,s(a)]:null==r?function(){try{var n;return l&&(n=l.randomBytes)?n=n(d):(n=new Uint8Array(d),(c.crypto||c.msCrypto).getRandomValues(n)),s(n)}catch(n){var r=c.navigator,t=r&&r.plugins;return[+new Date,c,t,c.screen,s(a)]}}():r,3),u),p=new n(u),m=function(){for(var n=p.g(6),r=h,t=0;n>>=1;return(n+t)/r};return m.int32=function(){return 0|p.g(4)},m.quick=function(){return p.g(4)/4294967296},m.double=m,q(s(p.S),a),(t.pass||e||function(n,r,t,e){return e&&(e.S&&o(e,p),n.state=function(){return o(p,{})}),t?(b[g]=n,r):n})(m,f,"global"in t?t.global:this==b,t.state)}function n(n){var r,t=n.length,u=this,e=0,o=u.i=u.j=0,i=u.S=[];for(t||(n=[t++]);e i == state.selectedTopIndex - ); - bottomExplainerButtonSel.classed( - "explainer-active-button", - (d, i) => i == state.selectedBottomIndex - ); -} - -// Preamble text -c.svg - .append("text.top-explainer-text") - .at({ - textAnchor: "left", - dominantBaseline: "top", - dy: ".33em", - }) - .translate([0, buttonHeight / 2]) - .text("All shapes are basically..."); - -c.svg - .append("text.bottom-explainer-text") - .at({ - textAnchor: "left", - dominantBaseline: "top", - dy: ".33em", - }) - .translate([0, buttonHeight * 1.5 + buttonBuffer]) - .text("Everything else should be labeled..."); - -// Buttons -var topExplainerButtonSel = c.svg - .appendMany("g.explainer-button", ["pointiness", "shape_name", "size"]) - .at({}) - .translate((d, i) => [topRightShift + i * (buttonWidth + buttonBuffer), 0]) - .on("click", function (d, i) { - updateState( - d, - state.selected.isRounding, - (topIndex = i), - (bottomIndex = state.selectedBottomIndex) - ); - setActiveButton(); - moveShapes(); - }); - -topExplainerButtonSel.append("rect").at({ - height: buttonHeight, - width: buttonWidth, - class: "explainer-rect", -}); - -topExplainerButtonSel - .append("text") - .at({ - textAnchor: "middle", - dy: ".33em", - x: buttonWidth / 2, - y: buttonHeight / 2, - class: "dropdown", - }) - .text((d, i) => toShortValueStringDict[d]); - -var bottomExplainerButtonSel = c.svg - .appendMany("g.explainer-button", ["true", "false"]) - .at({}) - .translate((d, i) => [ - bottomRightShift + i * (buttonWidth + buttonBuffer), - buttonHeight + buttonBuffer, - ]) - .on("click", function (d, i) { - updateState( - state.selected.categoryName, - d, - (topIndex = state.selectedTopIndex), - (bottomIndex = i) - ); - setActiveButton(); - moveShapes(); - }); - -bottomExplainerButtonSel.append("rect").at({ - height: buttonHeight, - width: buttonWidth, - class: "explainer-rect", -}); - -bottomExplainerButtonSel - .append("text") - .at({ - textAnchor: "middle", - dy: ".33em", - x: buttonWidth / 2, - y: buttonHeight / 2, - class: "dropdown", - }) - .text((d, i) => toDropdownValueRoundingStringDict[d]); - -var horizontalHeight = divHeight * (5 / 8); -var horizontalBuffer = 50; - -p = d3.line()([ - [horizontalBuffer, horizontalHeight], - [divWidth - horizontalBuffer, horizontalHeight], -]); - -var horizontal = c.svg - .append("path") - .at({ - d: p, - stroke: "black", - strokeWidth: 1, - }) - .translate([0, 0]) - .style("stroke-dasharray", "5, 5"); - - -c.svg - .append("text.label-correct") - .at({ - x: -400, - y: 90, - }) - .text("correctly classified") - .attr("transform", "rotate(-90)"); - 
-c.svg - .append("text.label-correct") - .at({ - x: -630, - y: 90, - }) - .text("incorrectly classified") - .attr("transform", "rotate(-90)"); - - -// Manually make some small adjustments to where particular shapes are placed -function getFineAdjustment(shape) { - if ( - shape.shape_name == "rt_rect" && - shape.correctness == "incorrect" && - shape.gt == "shaded" - ) { - return 4; - } - if ( - shape.shape_name == "rect" && - shape.correctness == "incorrect" && - shape.gt == "unshaded" - ) { - return -10; - } - if ( - shape.shape_name == "triangle" && - shape.correctness == "incorrect" && - shape.gt == "unshaded" - ) { - return 0; - } - if ( - shape.shape_name == "rt_circle" && - shape.correctness == "incorrect" && - shape.size == "small" - ) { - return -20; - } - if ( - shape.shape_name == "rt_triangle" && - shape.correctness == "incorrect" && - shape.size == "small" - ) { - return -20; - } - return 0; -} - -function getFinalCategory(labelName, isRounding) { - if (isRounding == true) { - return labelName.replace("rt_", ""); - } else { - if (labelName.includes("rt_")) { - return "other"; - } else { - return labelName; - } - } -} - -var startingCorrectHeight = horizontalHeight - 50; -var startingIncorrectHeight = horizontalHeight + 50; -var maxHeight = 180; -var xRowAdjustment = 50; -var heightBuffer = 10; - -function getPathHeight(inputPath) { - var placeholder = c.svg.append("path").at({ - d: scaleShapePath(inputPath, shapeScale), - }); - var height = placeholder.node().getBBox().height; - placeholder.remove(); - return height + heightBuffer; -} - -// Figure out where to put the shapes for all possible placements -function generatePlacements() { - for (selectionCriteria of data) { - // starting X positions - var nCategories = selectionCriteria.categories.length; - var centerX = []; - for (var i = 0; i < nCategories; i++) { - var startingX = divWidth * ((i + 1) / (nCategories + 1)); - centerX.push(startingX); - // Track where each label should be placed using a dictionary in the data - selectionCriteria["textPlacements"][ - selectionCriteria.categories[i] - ] = startingX; - } - - // For keeping of track of how we place items as we go - var locationParams = {}; - for (categoryIdx in selectionCriteria.categories) { - var categoryName = selectionCriteria.categories[categoryIdx]; - locationParams[categoryName] = { - correctX: centerX[categoryIdx], - incorrectX: centerX[categoryIdx], - lastCorrectY: startingCorrectHeight, - lastIncorrectY: startingIncorrectHeight, - }; - } - - for (shape of shapeParams) { - shapeCategory = getFinalCategory( - shape[selectionCriteria.categoryName], - selectionCriteria.isRounding - ); - var shapeHeight = getPathHeight(shape.path); - var shapeX, - shapeY = 0; - if (shape.correctness == "correct") { - shapeY = locationParams[shapeCategory]["lastCorrectY"]; - shapeX = locationParams[shapeCategory]["correctX"]; - // Check if we've reached the maximum height - if ( - startingCorrectHeight - - locationParams[shapeCategory]["lastCorrectY"] >= - maxHeight - ) { - // Reset height to baseline - locationParams[shapeCategory]["lastCorrectY"] = - startingCorrectHeight; - // Move next row over - locationParams[shapeCategory]["correctX"] = - locationParams[shapeCategory]["correctX"] + - xRowAdjustment; - } else { - locationParams[shapeCategory]["lastCorrectY"] += - -1 * shapeHeight; - } - } else { - shapeY = locationParams[shapeCategory]["lastIncorrectY"]; - shapeX = locationParams[shapeCategory]["incorrectX"]; - - if ( - locationParams[shapeCategory]["lastIncorrectY"] - - 
startingIncorrectHeight >= - maxHeight - ) { - // Reset height to baseline - locationParams[shapeCategory]["lastIncorrectY"] = - startingIncorrectHeight; - // Move next row over - locationParams[shapeCategory]["incorrectX"] = - locationParams[shapeCategory]["incorrectX"] + - xRowAdjustment; - } else { - locationParams[shapeCategory]["lastIncorrectY"] += - shapeHeight; - } - } - shapeY = shapeY + getFineAdjustment(shape); - shape[selectionCriteria.name + "_X"] = shapeX; - shape[selectionCriteria.name + "_Y"] = shapeY; - } - } -} - -generatePlacements(); - -function getLocation(shape) { - return [ - shape[state.selected.name + "_X"], - shape[state.selected.name + "_Y"], - ]; -} - -function scaleShapePath(shapePath, factor = 0.5) { - var newShapePath = ""; - for (var token of shapePath.split(" ")) { - if (parseInt(token)) { - newShapePath = newShapePath + parseInt(token) * factor; - } else { - newShapePath = newShapePath + token; - } - newShapePath = newShapePath + " "; - } - return newShapePath; -} - -// Add the shapes -var explainerShapeSel = c.svg - .appendMany("path.shape", shapeParams) - .at({ - d: (d) => scaleShapePath(d.path, shapeScale), - class: (d) => "gt-" + d.gt + " " + d.correctness, - }) - .translate(function (d) { - return getLocation(d); - }); - -explainerShapeSel.classed("is-classified", true); - -function getColor(d) { - var scaleRowValue = d3.scaleLinear().domain([0.3, 1.0]).range([0, 1]); - return d3.interpolateRdYlGn(scaleRowValue(d)); -} - -// Retrieve the results, for coloring the label boxes -function getResults() { - return calculateResults( - (property = state.selected.categoryName), - (useGuess = state.selected.isRounding) - ); -} - -function getCategoryAccuracy(results, category) { - for (var key of results) { - if (key.rawCategoryName == category) { - return key.accuracy; - } - } -} - -// Rename "large" and "rect" -function toExplainerDisplayString(categoryName) { - if (categoryName == "large") { - return "big"; - } - if (categoryName == "rect") { - return "rectangle"; - } - return categoryName; -} - -function getExplainerTextColor(d, i) { - console.log(d == "large"); - if (d == "large" && state.selected.isRounding == false) { - return "#ffccd8"; - } else { - return "#000000"; - } -} - -function updateText() { - var explainerResults = getResults(); - - d3.selectAll(".explainer-label-text").html(""); - d3.selectAll(".explainer-label-rect").remove(); - - var rectHeight = 30; - var rectWidth = 80; - var textRect = c.svg - .appendMany("rect.column-text-rect", state.selected.categories) - .at({ - fill: (d) => getColor(getCategoryAccuracy(explainerResults, d)), - height: rectHeight, - width: rectWidth, - class: "explainer-label-rect", - }) - .translate((d) => [ - state.selected.textPlacements[d] - rectWidth / 2, - horizontalHeight - rectHeight / 2, - ]); - - var text = c.svg - .appendMany("text.column-text", state.selected.categories) - .at({ - textAnchor: "middle", - dominantBaseline: "central", - class: "explainer-label-text", - }) - .st({ - fill: getExplainerTextColor, - }) - .text((d) => toExplainerDisplayString(d)) - .translate((d) => [state.selected.textPlacements[d], horizontalHeight]); -} - -function moveShapes() { - explainerShapeSel - .transition() - .duration(500) - .translate((d) => getLocation(d)); - updateText(); -} - -setActiveButton(); -updateText(); \ No newline at end of file diff --git a/spaces/merve/uncertainty-calibration/source/fill-in-the-blank/post.js b/spaces/merve/uncertainty-calibration/source/fill-in-the-blank/post.js deleted file mode 100644 
index e546aef207dab4014e05732814a1f4b2ff78896a..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/source/fill-in-the-blank/post.js +++ /dev/null @@ -1,44 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - -async function post(route, obj){ - var body = JSON.stringify(obj) - var cacheKey = body + route - // if (route == 'embed_zari_cda') return - // if (route != 'embed_group_top') return - // route = 'embed_group' - - if (!window.postCache) postCache = {} - if (postCache[cacheKey]) return postCache[cacheKey] - - - if (cacheKey2filename[cacheKey]){ - var res = await fetch('data/' + cacheKey2filename[cacheKey]) - } else { - // var root = 'http://' + location.hostname + ':5004/' - var root = 'https://helloworld-66dm2fxl4a-uk.a.run.app/' - var res = await fetch(root + route, {method: 'POST', body}) - } - - - var rv = await res.json() - postCache[cacheKey] = rv - - return rv -} - -// copy(postCache) -// data/post-cache.json \ No newline at end of file diff --git a/spaces/mmecheri/Rakuten_Streamlit/conclusion.py b/spaces/mmecheri/Rakuten_Streamlit/conclusion.py deleted file mode 100644 index e495137af52e29e2bdcd8424b134d5d3a4cffb2a..0000000000000000000000000000000000000000 --- a/spaces/mmecheri/Rakuten_Streamlit/conclusion.py +++ /dev/null @@ -1,15 +0,0 @@ - -import streamlit as st - - -def app(): - - st.title("Conclusion et pistes d'amélioration") - body_page(text_page ='./page_descriptions/conclusion_txt.md') -def body_page(text_page): - - '''The text page. Read from .md file ''' - with open(text_page, 'r', encoding='utf-8') as txtpage: - txtpage = txtpage.read().split('---Insersetion---') - - st.markdown(txtpage[0], unsafe_allow_html=True) diff --git a/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/modeling/heads/pixel_decoder.py b/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/modeling/heads/pixel_decoder.py deleted file mode 100644 index 6b10089331785e937b79cf82af6d8fba55519082..0000000000000000000000000000000000000000 --- a/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/modeling/heads/pixel_decoder.py +++ /dev/null @@ -1,308 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. All Rights Reserved - -import logging -from typing import Callable, Dict, List, Optional, Tuple, Union - -import fvcore.nn.weight_init as weight_init -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import Conv2d, ShapeSpec, get_norm -from detectron2.modeling import SEM_SEG_HEADS_REGISTRY - -from ..transformer.position_encoding import PositionEmbeddingSine -from ..transformer.transformer import TransformerEncoder, TransformerEncoderLayer - - -def build_pixel_decoder(cfg, input_shape): - """ - Build a pixel decoder from `cfg.MODEL.MASK_FORMER.PIXEL_DECODER_NAME`. 
- """ - name = cfg.MODEL.SEM_SEG_HEAD.PIXEL_DECODER_NAME - model = SEM_SEG_HEADS_REGISTRY.get(name)(cfg, input_shape) - forward_features = getattr(model, "forward_features", None) - if not callable(forward_features): - raise ValueError( - "Only SEM_SEG_HEADS with forward_features method can be used as pixel decoder. " - f"Please implement forward_features for {name} to only return mask features." - ) - return model - - -@SEM_SEG_HEADS_REGISTRY.register() -class BasePixelDecoder(nn.Module): - @configurable - def __init__( - self, - input_shape: Dict[str, ShapeSpec], - *, - conv_dim: int, - mask_dim: int, - norm: Optional[Union[str, Callable]] = None, - ): - """ - NOTE: this interface is experimental. - Args: - input_shape: shapes (channels and stride) of the input features - conv_dims: number of output channels for the intermediate conv layers. - mask_dim: number of output channels for the final conv layer. - norm (str or callable): normalization for all conv layers - """ - super().__init__() - - input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride) - self.in_features = [k for k, v in input_shape] # starting from "res2" to "res5" - feature_channels = [v.channels for k, v in input_shape] - - lateral_convs = [] - output_convs = [] - - use_bias = norm == "" - for idx, in_channels in enumerate(feature_channels): - if idx == len(self.in_features) - 1: - output_norm = get_norm(norm, conv_dim) - output_conv = Conv2d( - in_channels, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=use_bias, - norm=output_norm, - activation=F.relu, - ) - weight_init.c2_xavier_fill(output_conv) - self.add_module("layer_{}".format(idx + 1), output_conv) - - lateral_convs.append(None) - output_convs.append(output_conv) - else: - lateral_norm = get_norm(norm, conv_dim) - output_norm = get_norm(norm, conv_dim) - - lateral_conv = Conv2d( - in_channels, - conv_dim, - kernel_size=1, - bias=use_bias, - norm=lateral_norm, - ) - output_conv = Conv2d( - conv_dim, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=use_bias, - norm=output_norm, - activation=F.relu, - ) - weight_init.c2_xavier_fill(lateral_conv) - weight_init.c2_xavier_fill(output_conv) - self.add_module("adapter_{}".format(idx + 1), lateral_conv) - self.add_module("layer_{}".format(idx + 1), output_conv) - - lateral_convs.append(lateral_conv) - output_convs.append(output_conv) - # Place convs into top-down order (from low to high resolution) - # to make the top-down computation in forward clearer. 
- self.lateral_convs = lateral_convs[::-1] - self.output_convs = output_convs[::-1] - - self.mask_dim = mask_dim - self.mask_features = Conv2d( - conv_dim, - mask_dim, - kernel_size=3, - stride=1, - padding=1, - ) - weight_init.c2_xavier_fill(self.mask_features) - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - ret = {} - ret["input_shape"] = { - k: v - for k, v in input_shape.items() - if k in cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES - } - ret["conv_dim"] = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM - ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM - ret["norm"] = cfg.MODEL.SEM_SEG_HEAD.NORM - return ret - - def forward_features(self, features): - # Reverse feature maps into top-down order (from low to high resolution) - for idx, f in enumerate(self.in_features[::-1]): - x = features[f] - lateral_conv = self.lateral_convs[idx] - output_conv = self.output_convs[idx] - if lateral_conv is None: - y = output_conv(x) - else: - cur_fpn = lateral_conv(x) - # Following FPN implementation, we use nearest upsampling here - y = cur_fpn + F.interpolate(y, size=cur_fpn.shape[-2:], mode="nearest") - y = output_conv(y) - return self.mask_features(y), None - - def forward(self, features, targets=None): - logger = logging.getLogger(__name__) - logger.warning( - "Calling forward() may cause unpredicted behavior of PixelDecoder module." - ) - return self.forward_features(features) - - -class TransformerEncoderOnly(nn.Module): - def __init__( - self, - d_model=512, - nhead=8, - num_encoder_layers=6, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - - encoder_layer = TransformerEncoderLayer( - d_model, nhead, dim_feedforward, dropout, activation, normalize_before - ) - encoder_norm = nn.LayerNorm(d_model) if normalize_before else None - self.encoder = TransformerEncoder( - encoder_layer, num_encoder_layers, encoder_norm - ) - - self._reset_parameters() - - self.d_model = d_model - self.nhead = nhead - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward(self, src, mask, pos_embed): - # flatten NxCxHxW to HWxNxC - bs, c, h, w = src.shape - src = src.flatten(2).permute(2, 0, 1) - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - if mask is not None: - mask = mask.flatten(1) - - memory = self.encoder(src, src_key_padding_mask=mask, pos=pos_embed) - return memory.permute(1, 2, 0).view(bs, c, h, w) - - -@SEM_SEG_HEADS_REGISTRY.register() -class TransformerEncoderPixelDecoder(BasePixelDecoder): - @configurable - def __init__( - self, - input_shape: Dict[str, ShapeSpec], - *, - transformer_dropout: float, - transformer_nheads: int, - transformer_dim_feedforward: int, - transformer_enc_layers: int, - transformer_pre_norm: bool, - conv_dim: int, - mask_dim: int, - norm: Optional[Union[str, Callable]] = None, - ): - """ - NOTE: this interface is experimental. - Args: - input_shape: shapes (channels and stride) of the input features - transformer_dropout: dropout probability in transformer - transformer_nheads: number of heads in transformer - transformer_dim_feedforward: dimension of feedforward network - transformer_enc_layers: number of transformer encoder layers - transformer_pre_norm: whether to use pre-layernorm or not - conv_dims: number of output channels for the intermediate conv layers. - mask_dim: number of output channels for the final conv layer. 
- norm (str or callable): normalization for all conv layers - """ - super().__init__(input_shape, conv_dim=conv_dim, mask_dim=mask_dim, norm=norm) - - input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride) - self.in_features = [k for k, v in input_shape] # starting from "res2" to "res5" - feature_strides = [v.stride for k, v in input_shape] - feature_channels = [v.channels for k, v in input_shape] - - in_channels = feature_channels[len(self.in_features) - 1] - self.input_proj = Conv2d(in_channels, conv_dim, kernel_size=1) - weight_init.c2_xavier_fill(self.input_proj) - self.transformer = TransformerEncoderOnly( - d_model=conv_dim, - dropout=transformer_dropout, - nhead=transformer_nheads, - dim_feedforward=transformer_dim_feedforward, - num_encoder_layers=transformer_enc_layers, - normalize_before=transformer_pre_norm, - ) - N_steps = conv_dim // 2 - self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True) - - # update layer - use_bias = norm == "" - output_norm = get_norm(norm, conv_dim) - output_conv = Conv2d( - conv_dim, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=use_bias, - norm=output_norm, - activation=F.relu, - ) - weight_init.c2_xavier_fill(output_conv) - delattr(self, "layer_{}".format(len(self.in_features))) - self.add_module("layer_{}".format(len(self.in_features)), output_conv) - self.output_convs[0] = output_conv - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - ret = super().from_config(cfg, input_shape) - ret["transformer_dropout"] = cfg.MODEL.MASK_FORMER.DROPOUT - ret["transformer_nheads"] = cfg.MODEL.MASK_FORMER.NHEADS - ret["transformer_dim_feedforward"] = cfg.MODEL.MASK_FORMER.DIM_FEEDFORWARD - ret[ - "transformer_enc_layers" - ] = cfg.MODEL.SEM_SEG_HEAD.TRANSFORMER_ENC_LAYERS # a separate config - ret["transformer_pre_norm"] = cfg.MODEL.MASK_FORMER.PRE_NORM - return ret - - def forward_features(self, features): - # Reverse feature maps into top-down order (from low to high resolution) - for idx, f in enumerate(self.in_features[::-1]): - x = features[f] - lateral_conv = self.lateral_convs[idx] - output_conv = self.output_convs[idx] - if lateral_conv is None: - transformer = self.input_proj(x) - pos = self.pe_layer(x) - transformer = self.transformer(transformer, None, pos) - y = output_conv(transformer) - # save intermediate feature as input to Transformer decoder - transformer_encoder_features = transformer - else: - cur_fpn = lateral_conv(x) - # Following FPN implementation, we use nearest upsampling here - y = cur_fpn + F.interpolate(y, size=cur_fpn.shape[-2:], mode="nearest") - y = output_conv(y) - return self.mask_features(y), transformer_encoder_features - - def forward(self, features, targets=None): - logger = logging.getLogger(__name__) - logger.warning( - "Calling forward() may cause unpredicted behavior of PixelDecoder module." - ) - return self.forward_features(features) diff --git a/spaces/mohsenfayyaz/DecompX/DecompX/src/modeling_roberta.py b/spaces/mohsenfayyaz/DecompX/DecompX/src/modeling_roberta.py deleted file mode 100644 index 6b3ff6e2d51719103588ced036429451339b2e72..0000000000000000000000000000000000000000 --- a/spaces/mohsenfayyaz/DecompX/DecompX/src/modeling_roberta.py +++ /dev/null @@ -1,2142 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""PyTorch RoBERTa model.""" - -import math -from typing import List, Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from packaging import version -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from .decompx_utils import DecompXConfig, DecompXOutput - -from transformers.activations import ACT2FN, gelu -from transformers.modeling_outputs import ( - BaseModelOutputWithPastAndCrossAttentions, - BaseModelOutputWithPoolingAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - MaskedLMOutput, - MultipleChoiceModelOutput, - QuestionAnsweringModelOutput, - SequenceClassifierOutput, - TokenClassifierOutput, -) -from transformers.modeling_utils import ( - PreTrainedModel, - apply_chunking_to_forward, - find_pruneable_heads_and_indices, - prune_linear_layer, -) -from transformers.utils import ( - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from transformers.models.roberta.configuration_roberta import RobertaConfig - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "roberta-base" -_CONFIG_FOR_DOC = "RobertaConfig" -_TOKENIZER_FOR_DOC = "RobertaTokenizer" - -ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "roberta-base", - "roberta-large", - "roberta-large-mnli", - "distilroberta-base", - "roberta-base-openai-detector", - "roberta-large-openai-detector", - # See all RoBERTa models at https://huggingface.co/models?filter=roberta -] - - -def output_builder(input_vector, output_mode): - if output_mode is None: - return None - elif output_mode == "vector": - return (input_vector,) - elif output_mode == "norm": - return (torch.norm(input_vector, dim=-1),) - elif output_mode == "both": - return ((torch.norm(input_vector, dim=-1), input_vector),) - elif output_mode == "distance_based": - recomposed_vectors = torch.sum(input_vector, dim=-2, keepdim=True) - importance_matrix = -torch.nn.functional.pairwise_distance(input_vector, recomposed_vectors, p=1) - norm_y = torch.norm(recomposed_vectors, dim=-1, p=1) - maxed = torch.maximum(torch.zeros(1, device=norm_y.device), norm_y + importance_matrix) - return (maxed / (torch.sum(maxed, dim=-2, keepdim=True) + 1e-12),) - - -class RobertaEmbeddings(nn.Module): - """ - Same as BertEmbeddings with a tiny tweak for positional embeddings indexing. 
- """ - - # Copied from transformers.models.bert.modeling_bert.BertEmbeddings.__init__ - def __init__(self, config): - super().__init__() - self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) - self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) - self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") - self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) - if version.parse(torch.__version__) > version.parse("1.6.0"): - self.register_buffer( - "token_type_ids", - torch.zeros(self.position_ids.size(), dtype=torch.long), - persistent=False, - ) - - # End copy - self.padding_idx = config.pad_token_id - self.position_embeddings = nn.Embedding( - config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx - ) - - def forward( - self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0 - ): - if position_ids is None: - if input_ids is not None: - # Create the position ids from the input token ids. Any padded tokens remain padded. - position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) - else: - position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds) - - if input_ids is not None: - input_shape = input_ids.size() - else: - input_shape = inputs_embeds.size()[:-1] - - seq_length = input_shape[1] - - # Setting the token_type_ids to the registered buffer in constructor where it is all zeros, which usually occurs - # when its auto-generated, registered buffer helps users when tracing the model without passing token_type_ids, solves - # issue #5664 - if token_type_ids is None: - if hasattr(self, "token_type_ids"): - buffered_token_type_ids = self.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device) - - if inputs_embeds is None: - inputs_embeds = self.word_embeddings(input_ids) - token_type_embeddings = self.token_type_embeddings(token_type_ids) - - embeddings = inputs_embeds + token_type_embeddings - if self.position_embedding_type == "absolute": - position_embeddings = self.position_embeddings(position_ids) - embeddings += position_embeddings - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - else: - return inputs_embeds - - def create_position_ids_from_inputs_embeds(self, inputs_embeds): - """ - We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids. 
- - Args: - inputs_embeds: torch.Tensor - - Returns: torch.Tensor - """ - input_shape = inputs_embeds.size()[:-1] - sequence_length = input_shape[1] - - position_ids = torch.arange( - self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device - ) - return position_ids.unsqueeze(0).expand(input_shape) - - -# Copied from transformers.models.bert.modeling_bert.BertSelfAttention with Bert->Roberta -class RobertaSelfAttention(nn.Module): - def __init__(self, config, position_embedding_type=None): - super().__init__() - if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): - raise ValueError( - f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention " - f"heads ({config.num_attention_heads})" - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size) - self.key = nn.Linear(config.hidden_size, self.all_head_size) - self.value = nn.Linear(config.hidden_size, self.all_head_size) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.position_embedding_type = position_embedding_type or getattr( - config, "position_embedding_type", "absolute" - ) - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - self.max_position_embeddings = config.max_position_embeddings - self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size) - - self.is_decoder = config.is_decoder - - def transpose_for_scores(self, x): - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - x = x.view(new_x_shape) - return x.permute(0, 2, 1, 3) - - def transpose_for_scores_for_decomposed(self, x): - # x: (B, N, N, H*V) - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - # x: (B, N, N, H, V) - x = x.view(new_x_shape) - # x: (B, H, N, N, V) - return x.permute(0, 3, 1, 2, 4) - - def forward( - self, - hidden_states: torch.Tensor, - attribution_vectors: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - decompx_ready: Optional[bool] = None, # added by Fayyaz / Modarressi - ) -> Tuple[torch.Tensor]: - mixed_query_layer = self.query(hidden_states) - - # If this is instantiated as a cross-attention module, the keys - # and values come from an encoder; the attention mask needs to be - # such that the encoder's padding tokens are not attended to. 
- is_cross_attention = encoder_hidden_states is not None - decomposed_value_layer = None - - if is_cross_attention and past_key_value is not None: - # reuse k,v, cross_attentions - key_layer = past_key_value[0] - value_layer = past_key_value[1] - attention_mask = encoder_attention_mask - elif is_cross_attention: - key_layer = self.transpose_for_scores(self.key(encoder_hidden_states)) - value_layer = self.transpose_for_scores(self.value(encoder_hidden_states)) - attention_mask = encoder_attention_mask - elif past_key_value is not None: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - key_layer = torch.cat([past_key_value[0], key_layer], dim=2) - value_layer = torch.cat([past_key_value[1], value_layer], dim=2) - else: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - if attribution_vectors is not None: - decomposed_value_layer = torch.einsum("bijd,vd->bijv", attribution_vectors, self.value.weight) - decomposed_value_layer = self.transpose_for_scores_for_decomposed(decomposed_value_layer) - - query_layer = self.transpose_for_scores(mixed_query_layer) - - if self.is_decoder: - # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states. - # Further calls to cross_attention layer can then reuse all cross-attention - # key/value_states (first "if" case) - # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of - # all previous decoder key/value_states. Further calls to uni-directional self-attention - # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case) - # if encoder bi-directional self-attention `past_key_value` is always `None` - past_key_value = (key_layer, value_layer) - - # Take the dot product between "query" and "key" to get the raw attention scores. - attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - seq_length = hidden_states.size()[1] - position_ids_l = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(-1, 1) - position_ids_r = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(1, -1) - distance = position_ids_l - position_ids_r - positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1) - positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility - - if self.position_embedding_type == "relative_key": - relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores - elif self.position_embedding_type == "relative_key_query": - relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in RobertaModel forward() function) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. 
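- # (The scores above already include the 1/sqrt(attention_head_size) scaling and the
- # additive attention mask, so masked positions carry large negative values.)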
- attention_probs = nn.functional.softmax(attention_scores, dim=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. - attention_probs = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - - context_layer = torch.matmul(attention_probs, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(new_context_layer_shape) - - # added by Fayyaz / Modarressi - # ------------------------------- - if decompx_ready: - outputs = (context_layer, attention_probs, value_layer, decomposed_value_layer) - return outputs - # ------------------------------- - - outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - - if self.is_decoder: - outputs = outputs + (past_key_value,) - return outputs - - -# Copied from transformers.models.bert.modeling_bert.BertSelfOutput -class RobertaSelfOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor, - decompx_ready=False): # added by Fayyaz / Modarressi - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - # hidden_states = self.LayerNorm(hidden_states + input_tensor) - pre_ln_states = hidden_states + input_tensor # added by Fayyaz / Modarressi - post_ln_states = self.LayerNorm(pre_ln_states) # added by Fayyaz / Modarressi - # added by Fayyaz / Modarressi - if decompx_ready: - return post_ln_states, pre_ln_states - else: - return post_ln_states - - -# Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Roberta -class RobertaAttention(nn.Module): - def __init__(self, config, position_embedding_type=None): - super().__init__() - self.self = RobertaSelfAttention(config, position_embedding_type=position_embedding_type) - self.output = RobertaSelfOutput(config) - self.pruned_heads = set() - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads - ) - - # Prune linear layers - self.self.query = prune_linear_layer(self.self.query, index) - self.self.key = prune_linear_layer(self.self.key, index) - self.self.value = prune_linear_layer(self.self.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.self.num_attention_heads = self.self.num_attention_heads - len(heads) - self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads - self.pruned_heads = self.pruned_heads.union(heads) - - def forward( - self, - hidden_states: torch.Tensor, - attribution_vectors: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = 
False, - decompx_ready: Optional[bool] = None, # added by Fayyaz / Modarressi - ) -> Tuple[torch.Tensor]: - self_outputs = self.self( - hidden_states, - attribution_vectors, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - decompx_ready=decompx_ready, # added by Fayyaz / Modarressi - ) - attention_output = self.output( - self_outputs[0], - hidden_states, - decompx_ready=decompx_ready, # added by Goro Kobayashi (Edited by Fayyaz / Modarressi) - ) - - # Added by Fayyaz / Modarressi - # ------------------------------- - if decompx_ready: - _, attention_probs, value_layer, decomposed_value_layer = self_outputs - attention_output, pre_ln_states = attention_output - outputs = (attention_output, attention_probs,) + ( - value_layer, decomposed_value_layer, pre_ln_states) # add attentions and norms if we output them - return outputs - # ------------------------------- - - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - return outputs - - -# Copied from transformers.models.bert.modeling_bert.BertIntermediate -class RobertaIntermediate(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - self.intermediate_act_fn = config.hidden_act - - def forward(self, hidden_states: torch.Tensor, decompx_ready: Optional[bool] = False) -> torch.Tensor: - pre_act_hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(pre_act_hidden_states) - if decompx_ready: - return hidden_states, pre_act_hidden_states - return hidden_states, None - - -# Copied from transformers.models.bert.modeling_bert.BertOutput -class RobertaOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor, decompx_ready: Optional[bool] = False): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - # hidden_states = self.LayerNorm(hidden_states + input_tensor) - # return hidden_states - # Added by Fayyaz / Modarressi - # ------------------------------- - pre_ln_states = hidden_states + input_tensor - hidden_states = self.LayerNorm(pre_ln_states) - if decompx_ready: - return hidden_states, pre_ln_states - return hidden_states, None - # ------------------------------- - - -# Copied from transformers.models.bert.modeling_bert.BertLayer with Bert->Roberta -class RobertaLayer(nn.Module): - def __init__(self, config): - super().__init__() - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - self.attention = RobertaAttention(config) - self.is_decoder = config.is_decoder - self.add_cross_attention = config.add_cross_attention - if self.add_cross_attention: - if not self.is_decoder: - raise ValueError(f"{self} should be used as a decoder model if cross attention is added") - self.crossattention = RobertaAttention(config, position_embedding_type="absolute") - self.intermediate = RobertaIntermediate(config) - self.output = RobertaOutput(config) - self.similarity_fn = torch.nn.CosineSimilarity(dim=-1) - - self.num_attention_heads = config.num_attention_heads - 
self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - def bias_decomposer(self, bias, attribution_vectors, bias_decomp_type="absdot"): - # Decomposes the input bias based on similarity to the attribution vectors - # Args: - # bias: a bias vector (all_head_size) - # attribution_vectors: the attribution vectors from token j to i (b, i, j, all_head_size) :: (batch, seq_length, seq_length, all_head_size) - - if bias_decomp_type == "absdot": - weights = torch.abs(torch.einsum("bskd,d->bsk", attribution_vectors, bias)) - elif bias_decomp_type == "abssim": - weights = torch.abs(torch.nn.functional.cosine_similarity(attribution_vectors, bias, dim=-1)) - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * weights - elif bias_decomp_type == "norm": - weights = torch.norm(attribution_vectors, dim=-1) - elif bias_decomp_type == "equal": - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * 1.0 - elif bias_decomp_type == "cls": - weights = torch.zeros(attribution_vectors.shape[:-1], device=attribution_vectors.device) - weights[:, :, 0] = 1.0 - elif bias_decomp_type == "dot": - weights = torch.einsum("bskd,d->bsk", attribution_vectors, bias) - elif bias_decomp_type == "biastoken": - attrib_shape = attribution_vectors.shape - if attrib_shape[1] == attrib_shape[2]: - attribution_vectors = torch.concat([attribution_vectors, - torch.zeros((attrib_shape[0], attrib_shape[1], 1, attrib_shape[3]), - device=attribution_vectors.device)], dim=-2) - attribution_vectors[:, :, -1] = attribution_vectors[:, :, -1] + bias - return attribution_vectors - - weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12) - weighted_bias = torch.matmul(weights.unsqueeze(dim=-1), bias.unsqueeze(dim=0)) - return attribution_vectors + weighted_bias - - def ln_decomposer(self, attribution_vectors, pre_ln_states, gamma, beta, eps, include_biases=True, - bias_decomp_type="absdot"): - mean = pre_ln_states.mean(-1, keepdim=True) # (batch, seq_len, 1) m(y=Σy_j) - var = (pre_ln_states - mean).pow(2).mean(-1, keepdim=True).unsqueeze(dim=2) # (batch, seq_len, 1, 1) s(y) - - each_mean = attribution_vectors.mean(-1, keepdim=True) # (batch, seq_len, seq_len, 1) m(y_j) - - normalized_layer = torch.div(attribution_vectors - each_mean, - (var + eps) ** (1 / 2)) # (batch, seq_len, seq_len, all_head_size) - - post_ln_layer = torch.einsum('bskd,d->bskd', normalized_layer, - gamma) # (batch, seq_len, seq_len, all_head_size) - - if include_biases: - return self.bias_decomposer(beta, post_ln_layer, bias_decomp_type=bias_decomp_type) - else: - return post_ln_layer - - def gelu_linear_approximation(self, intermediate_hidden_states, intermediate_output): - def phi(x): - return (1 + torch.erf(x / math.sqrt(2))) / 2. - - def normal_pdf(x): - return torch.exp(-(x ** 2) / 2) / math.sqrt(2. 
* math.pi) - - def gelu_deriv(x): - return phi(x) + x * normal_pdf(x) - - m = gelu_deriv(intermediate_hidden_states) - b = intermediate_output - m * intermediate_hidden_states - return m, b - - def gelu_decomposition(self, attribution_vectors, intermediate_hidden_states, intermediate_output, - bias_decomp_type): - m, b = self.gelu_linear_approximation(intermediate_hidden_states, intermediate_output) - mx = attribution_vectors * m.unsqueeze(dim=-2) - - if bias_decomp_type == "absdot": - weights = torch.abs(torch.einsum("bskl,bsl->bsk", mx, b)) - elif bias_decomp_type == "abssim": - weights = torch.abs(torch.nn.functional.cosine_similarity(mx, b)) - weights = (torch.norm(mx, dim=-1) != 0) * weights - elif bias_decomp_type == "norm": - weights = torch.norm(mx, dim=-1) - elif bias_decomp_type == "equal": - weights = (torch.norm(mx, dim=-1) != 0) * 1.0 - elif bias_decomp_type == "cls": - weights = torch.zeros(mx.shape[:-1], device=mx.device) - weights[:, :, 0] = 1.0 - - weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12) - weighted_bias = torch.einsum("bsl,bsk->bskl", b, weights) - return mx + weighted_bias - - def gelu_zo_decomposition(self, attribution_vectors, intermediate_hidden_states, intermediate_output): - m = intermediate_output / (intermediate_hidden_states + 1e-12) - mx = attribution_vectors * m.unsqueeze(dim=-2) - return mx - - def ffn_decomposer(self, attribution_vectors, intermediate_hidden_states, intermediate_output, include_biases=True, - approximation_type="GeLU_LA", bias_decomp_type="absdot"): - post_first_layer = torch.einsum("ld,bskd->bskl", self.intermediate.dense.weight, attribution_vectors) - if include_biases: - post_first_layer = self.bias_decomposer(self.intermediate.dense.bias, post_first_layer, - bias_decomp_type=bias_decomp_type) - - if approximation_type == "ReLU": - mask_for_gelu_approx = (intermediate_hidden_states > 0) - post_act_first_layer = torch.einsum("bskl, bsl->bskl", post_first_layer, mask_for_gelu_approx) - post_act_first_layer = post_first_layer * mask_for_gelu_approx.unsqueeze(dim=-2) - elif approximation_type == "GeLU_LA": - post_act_first_layer = self.gelu_decomposition(post_first_layer, intermediate_hidden_states, - intermediate_output, bias_decomp_type=bias_decomp_type) - elif approximation_type == "GeLU_ZO": - post_act_first_layer = self.gelu_zo_decomposition(post_first_layer, intermediate_hidden_states, - intermediate_output) - - post_second_layer = torch.einsum("bskl, dl->bskd", post_act_first_layer, self.output.dense.weight) - if include_biases: - post_second_layer = self.bias_decomposer(self.output.dense.bias, post_second_layer, - bias_decomp_type=bias_decomp_type) - - return post_second_layer - - def ffn_decomposer_fast(self, attribution_vectors, intermediate_hidden_states, intermediate_output, - include_biases=True, approximation_type="GeLU_LA", bias_decomp_type="absdot"): - if approximation_type == "ReLU": - theta = (intermediate_hidden_states > 0) - elif approximation_type == "GeLU_ZO": - theta = intermediate_output / (intermediate_hidden_states + 1e-12) - - scaled_W1 = torch.einsum("bsl,ld->bsld", theta, self.intermediate.dense.weight) - W_equiv = torch.einsum("bsld, zl->bszd", scaled_W1, self.output.dense.weight) - - post_ffn_layer = torch.einsum("bszd,bskd->bskz", W_equiv, attribution_vectors) - - if include_biases: - scaled_b1 = torch.einsum("bsl,l->bsl", theta, self.intermediate.dense.bias) - b_equiv = torch.einsum("bsl, dl->bsd", scaled_b1, self.output.dense.weight) - b_equiv = b_equiv + self.output.dense.bias - - if 
bias_decomp_type == "absdot": - weights = torch.abs(torch.einsum("bskd,bsd->bsk", post_ffn_layer, b_equiv)) - elif bias_decomp_type == "abssim": - weights = torch.abs(torch.nn.functional.cosine_similarity(post_ffn_layer, b_equiv)) - weights = (torch.norm(post_ffn_layer, dim=-1) != 0) * weights - elif bias_decomp_type == "norm": - weights = torch.norm(post_ffn_layer, dim=-1) - elif bias_decomp_type == "equal": - weights = (torch.norm(post_ffn_layer, dim=-1) != 0) * 1.0 - - weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12) - weighted_bias = torch.einsum("bsd,bsk->bskd", b_equiv, weights) - - post_ffn_layer = post_ffn_layer + weighted_bias - - return post_ffn_layer - - def forward( - self, - hidden_states: torch.Tensor, - attribution_vectors: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - decompx_config: Optional[DecompXConfig] = None, # added by Fayyaz / Modarressi - ) -> Tuple[torch.Tensor]: - decompx_ready = decompx_config is not None - # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 - # self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None - # self_attention_outputs = self.attention( - # hidden_states, - # attribution_vectors, - # attention_mask, - # head_mask, - # output_attentions=output_attentions, - # past_key_value=self_attn_past_key_value, - # decompx_ready=decompx_ready, - # ) - self_attention_outputs = self.attention( - hidden_states, - attribution_vectors, - attention_mask, - head_mask, - output_attentions=output_attentions, - decompx_ready=decompx_ready, - ) # changed by Goro Kobayashi - attention_output = self_attention_outputs[0] - - # if decoder, the last output is tuple of self-attn cache - if self.is_decoder: - outputs = self_attention_outputs[1:-1] - present_key_value = self_attention_outputs[-1] - else: - outputs = self_attention_outputs[1:] # add self attentions if we output attention weights - - cross_attn_present_key_value = None - if self.is_decoder and encoder_hidden_states is not None: - if not hasattr(self, "crossattention"): - raise ValueError( - f"If `encoder_hidden_states` are passed, {self} has to be instantiated with cross-attention layers by setting `config.add_cross_attention=True`" - ) - - # cross_attn cached key/values tuple is at positions 3,4 of past_key_value tuple - cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None - cross_attention_outputs = self.crossattention( - attention_output, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - cross_attn_past_key_value, - output_attentions, - ) - attention_output = cross_attention_outputs[0] - outputs = outputs + cross_attention_outputs[1:-1] # add cross attentions if we output attention weights - - # add cross-attn cache to positions 3,4 of present_key_value tuple - cross_attn_present_key_value = cross_attention_outputs[-1] - present_key_value = present_key_value + cross_attn_present_key_value - - # layer_output = apply_chunking_to_forward( - # self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output - # ) - - # Added by Fayyaz / Modarressi - # ------------------------------- - bias_decomp_type = 
"biastoken" if decompx_config.include_bias_token else decompx_config.bias_decomp_type - intermediate_output, pre_act_hidden_states = self.intermediate(attention_output, decompx_ready=decompx_ready) - layer_output, pre_ln2_states = self.output(intermediate_output, attention_output, decompx_ready=decompx_ready) - if decompx_ready: - attention_probs, value_layer, decomposed_value_layer, pre_ln_states = outputs - - headmixing_weight = self.attention.output.dense.weight.view(self.all_head_size, self.num_attention_heads, - self.attention_head_size) - - if decomposed_value_layer is None or decompx_config.aggregation != "vector": - transformed_layer = torch.einsum('bhsv,dhv->bhsd', value_layer, headmixing_weight) # V * W^o (z=(qk)v) - # Make weighted vectors αf(x) from transformed vectors (transformed_layer) - # and attention weights (attentions): - # (batch, num_heads, seq_length, seq_length, all_head_size) - weighted_layer = torch.einsum('bhks,bhsd->bhksd', attention_probs, - transformed_layer) # attention_probs(Q*K^t) * V * W^o - # Sum each weighted vectors αf(x) over all heads: - # (batch, seq_length, seq_length, all_head_size) - summed_weighted_layer = weighted_layer.sum(dim=1) # sum over heads - - # Make residual matrix (batch, seq_length, seq_length, all_head_size) - hidden_shape = hidden_states.size() # (batch, seq_length, all_head_size) - device = hidden_states.device - residual = torch.einsum('sk,bsd->bskd', torch.eye(hidden_shape[1]).to(device), - hidden_states) # diagonal representations (hidden states) - - # Make matrix of summed weighted vector + residual vectors - residual_weighted_layer = summed_weighted_layer + residual - accumulated_bias = self.attention.output.dense.bias - else: - transformed_layer = torch.einsum('bhsqv,dhv->bhsqd', decomposed_value_layer, headmixing_weight) - - weighted_layer = torch.einsum('bhks,bhsqd->bhkqd', attention_probs, - transformed_layer) # attention_probs(Q*K^t) * V * W^o - - summed_weighted_layer = weighted_layer.sum(dim=1) # sum over heads - - residual_weighted_layer = summed_weighted_layer + attribution_vectors - accumulated_bias = torch.matmul(self.attention.output.dense.weight, - self.attention.self.value.bias) + self.attention.output.dense.bias - - if decompx_config.include_biases: - residual_weighted_layer = self.bias_decomposer(accumulated_bias, residual_weighted_layer, - bias_decomp_type) - - if decompx_config.include_LN1: - post_ln_layer = self.ln_decomposer( - attribution_vectors=residual_weighted_layer, - pre_ln_states=pre_ln_states, - gamma=self.attention.output.LayerNorm.weight.data, - beta=self.attention.output.LayerNorm.bias.data, - eps=self.attention.output.LayerNorm.eps, - include_biases=decompx_config.include_biases, - bias_decomp_type=bias_decomp_type - ) - else: - post_ln_layer = residual_weighted_layer - - if decompx_config.include_FFN: - post_ffn_layer = self.ffn_decomposer_fast if decompx_config.FFN_fast_mode else self.ffn_decomposer( - attribution_vectors=post_ln_layer, - intermediate_hidden_states=pre_act_hidden_states, - intermediate_output=intermediate_output, - approximation_type=decompx_config.FFN_approx_type, - include_biases=decompx_config.include_biases, - bias_decomp_type=bias_decomp_type - ) - pre_ln2_layer = post_ln_layer + post_ffn_layer - else: - pre_ln2_layer = post_ln_layer - post_ffn_layer = None - - if decompx_config.include_LN2: - post_ln2_layer = self.ln_decomposer( - attribution_vectors=pre_ln2_layer, - pre_ln_states=pre_ln2_states, - gamma=self.output.LayerNorm.weight.data, - 
beta=self.output.LayerNorm.bias.data, - eps=self.output.LayerNorm.eps, - include_biases=decompx_config.include_biases, - bias_decomp_type=bias_decomp_type - ) - else: - post_ln2_layer = pre_ln2_layer - - new_outputs = DecompXOutput( - attention=output_builder(summed_weighted_layer, decompx_config.output_attention), - res1=output_builder(residual_weighted_layer, decompx_config.output_res1), - LN1=output_builder(post_ln_layer, decompx_config.output_res2), - FFN=output_builder(post_ffn_layer, decompx_config.output_FFN), - res2=output_builder(pre_ln2_layer, decompx_config.output_res2), - encoder=output_builder(post_ln2_layer, "both") - ) - return (layer_output,) + (new_outputs,) - # ------------------------------- - outputs = (layer_output,) + outputs - - # if decoder, return the attn key/values as the last output - if self.is_decoder: - outputs = outputs + (present_key_value,) - - return outputs - - -# Copied from transformers.models.bert.modeling_bert.BertEncoder with Bert->Roberta -class RobertaEncoder(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.layer = nn.ModuleList([RobertaLayer(config) for _ in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = False, - output_hidden_states: Optional[bool] = False, - return_dict: Optional[bool] = True, - decompx_config: Optional[DecompXConfig] = None, # added by Fayyaz / Modarressi - ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None - - next_decoder_cache = () if use_cache else None - - aggregated_encoder_norms = None # added by Fayyaz / Modarressi - aggregated_encoder_vectors = None # added by Fayyaz / Modarressi - - # -- added by Fayyaz / Modarressi - if decompx_config and decompx_config.output_all_layers: - all_decompx_outputs = DecompXOutput( - attention=() if decompx_config.output_attention else None, - res1=() if decompx_config.output_res1 else None, - LN1=() if decompx_config.output_LN1 else None, - FFN=() if decompx_config.output_LN1 else None, - res2=() if decompx_config.output_res2 else None, - encoder=() if decompx_config.output_encoder else None, - aggregated=() if decompx_config.output_aggregated and decompx_config.aggregation else None, - ) - else: - all_decompx_outputs = None - # -- added by Fayyaz / Modarressi - - for i, layer_module in enumerate(self.layer): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - past_key_value = past_key_values[i] if past_key_values is not None else None - - if self.gradient_checkpointing and self.training: - - if use_cache: - logger.warning( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
- ) - use_cache = False - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, past_key_value, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - ) - else: - layer_outputs = layer_module( - hidden_states, - aggregated_encoder_vectors, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - decompx_config # added by Fayyaz / Modarressi - ) - - hidden_states = layer_outputs[0] - if use_cache: - next_decoder_cache += (layer_outputs[-1],) - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - if self.config.add_cross_attention: - all_cross_attentions = all_cross_attentions + (layer_outputs[2],) - - # added by Fayyaz / Modarressi - if decompx_config: - decompx_output = layer_outputs[1] - if decompx_config.aggregation == "rollout": - if decompx_config.include_classifier_w_pooler: - raise Exception("Classifier and pooler could be included in vector aggregation mode") - - encoder_norms = decompx_output.encoder[0][0] - - if aggregated_encoder_norms is None: - aggregated_encoder_norms = encoder_norms * torch.exp(attention_mask).view( - (-1, attention_mask.shape[-1], 1)) - else: - aggregated_encoder_norms = torch.einsum("ijk,ikm->ijm", encoder_norms, aggregated_encoder_norms) - - if decompx_config.output_aggregated == "norm": - decompx_output.aggregated = (aggregated_encoder_norms,) - elif decompx_config.output_aggregated is not None: - raise Exception( - "Rollout aggregated values are only available in norms. Set output_aggregated to 'norm'.") - - - elif decompx_config.aggregation == "vector": - aggregated_encoder_vectors = decompx_output.encoder[0][1] - - if decompx_config.include_classifier_w_pooler: - decompx_output.aggregated = (aggregated_encoder_vectors,) - else: - decompx_output.aggregated = output_builder(aggregated_encoder_vectors, - decompx_config.output_aggregated) - - decompx_output.encoder = output_builder(decompx_output.encoder[0][1], decompx_config.output_encoder) - - if decompx_config.output_all_layers: - all_decompx_outputs.attention = all_decompx_outputs.attention + decompx_output.attention if decompx_config.output_attention else None - all_decompx_outputs.res1 = all_decompx_outputs.res1 + decompx_output.res1 if decompx_config.output_res1 else None - all_decompx_outputs.LN1 = all_decompx_outputs.LN1 + decompx_output.LN1 if decompx_config.output_LN1 else None - all_decompx_outputs.FFN = all_decompx_outputs.FFN + decompx_output.FFN if decompx_config.output_FFN else None - all_decompx_outputs.res2 = all_decompx_outputs.res2 + decompx_output.res2 if decompx_config.output_res2 else None - all_decompx_outputs.encoder = all_decompx_outputs.encoder + decompx_output.encoder if decompx_config.output_encoder else None - - if decompx_config.include_classifier_w_pooler and decompx_config.aggregation == "vector": - all_decompx_outputs.aggregated = all_decompx_outputs.aggregated + output_builder( - aggregated_encoder_vectors, - decompx_config.output_aggregated) if decompx_config.output_aggregated else None - else: - all_decompx_outputs.aggregated = all_decompx_outputs.aggregated + decompx_output.aggregated if decompx_config.output_aggregated else None - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not 
return_dict: - return tuple( - v - for v in [ - hidden_states, - next_decoder_cache, - all_hidden_states, - all_self_attentions, - all_cross_attentions, - decompx_output if decompx_config else None, - all_decompx_outputs - ] - if v is not None - ) - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=next_decoder_cache, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, - ) - - -# Copied from transformers.models.bert.modeling_bert.BertPooler -class RobertaPooler(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.activation = nn.Tanh() - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. - first_token_tensor = hidden_states[:, 0] - pre_pooled_output = self.dense(first_token_tensor) - pooled_output = self.activation(pre_pooled_output) - return pooled_output - - -class RobertaPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = RobertaConfig - base_model_prefix = "roberta" - supports_gradient_checkpointing = True - - # Copied from transformers.models.bert.modeling_bert.BertPreTrainedModel._init_weights - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, nn.Linear): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, RobertaEncoder): - module.gradient_checkpointing = value - - def update_keys_to_ignore(self, config, del_keys_to_ignore): - """Remove some keys from ignore list""" - if not config.tie_word_embeddings: - # must make a new list, or the class variable gets modified! - self._keys_to_ignore_on_save = [k for k in self._keys_to_ignore_on_save if k not in del_keys_to_ignore] - self._keys_to_ignore_on_load_missing = [ - k for k in self._keys_to_ignore_on_load_missing if k not in del_keys_to_ignore - ] - - -ROBERTA_START_DOCSTRING = r""" - - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`RobertaConfig`]): Model configuration class with all the parameters of the - model. Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. 
-""" - -ROBERTA_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`RobertaTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare RoBERTa Model transformer outputting raw hidden-states without any specific head on top.", - ROBERTA_START_DOCSTRING, -) -class RobertaModel(RobertaPreTrainedModel): - """ - - The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of - cross-attention is added between the self-attention layers, following the architecture described in *Attention is - all you need*_ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz - Kaiser and Illia Polosukhin. - - To behave as an decoder the model needs to be initialized with the `is_decoder` argument of the configuration set - to `True`. To be used in a Seq2Seq model, the model needs to initialized with both `is_decoder` argument and - `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass. - - .. 
_*Attention is all you need*: https://arxiv.org/abs/1706.03762 - - """ - - _keys_to_ignore_on_load_missing = [r"position_ids"] - - # Copied from transformers.models.bert.modeling_bert.BertModel.__init__ with Bert->Roberta - def __init__(self, config, add_pooling_layer=True): - super().__init__(config) - self.config = config - - self.embeddings = RobertaEmbeddings(config) - self.encoder = RobertaEncoder(config) - - self.pooler = RobertaPooler(config) if add_pooling_layer else None - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value): - self.embeddings.word_embeddings = value - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPoolingAndCrossAttentions, - config_class=_CONFIG_FOR_DOC, - ) - # Copied from transformers.models.bert.modeling_bert.BertModel.forward - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - decompx_config: Optional[DecompXConfig] = None, # added by Fayyaz / Modarressi - ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]: - r""" - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. 
- use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if self.config.is_decoder: - use_cache = use_cache if use_cache is not None else self.config.use_cache - else: - use_cache = False - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - batch_size, seq_length = input_shape - device = input_ids.device if input_ids is not None else inputs_embeds.device - - # past_key_values_length - past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0 - - if attention_mask is None: - attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device) - - if token_type_ids is None: - if hasattr(self.embeddings, "token_type_ids"): - buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. 
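- # `get_extended_attention_mask` broadcasts the 2D mask to (batch, 1, 1, seq_length) and
- # converts it to additive form: 0.0 for visible tokens and a large negative value for
- # padding, so it can simply be added to the raw attention scores.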
- extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if self.config.is_decoder and encoder_hidden_states is not None: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - if encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_extended_attention_mask = None - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - embedding_output = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - past_key_values_length=past_key_values_length, - ) - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - decompx_config=decompx_config, # added by Fayyaz / Modarressi - ) - sequence_output = encoder_outputs[0] - pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - past_key_values=encoder_outputs.past_key_values, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - cross_attentions=encoder_outputs.cross_attentions, - ) - - -@add_start_docstrings( - """RoBERTa Model with a `language modeling` head on top for CLM fine-tuning.""", ROBERTA_START_DOCSTRING -) -class RobertaForCausalLM(RobertaPreTrainedModel): - _keys_to_ignore_on_save = [r"lm_head.decoder.weight", r"lm_head.decoder.bias"] - _keys_to_ignore_on_load_missing = [r"position_ids", r"lm_head.decoder.weight", r"lm_head.decoder.bias"] - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config): - super().__init__(config) - - if not config.is_decoder: - logger.warning("If you want to use `RobertaLMHeadModel` as a standalone, add `is_decoder=True.`") - - self.roberta = RobertaModel(config, add_pooling_layer=False) - self.lm_head = RobertaLMHead(config) - - # The LM head weights require special treatment only when they are tied with the word embeddings - self.update_keys_to_ignore(config, ["lm_head.decoder.weight"]) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.lm_head.decoder - - def set_output_embeddings(self, new_embeddings): - self.lm_head.decoder = new_embeddings - - @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - 
@replace_return_docstrings(output_type=CausalLMOutputWithCrossAttentions, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - past_key_values: Tuple[Tuple[torch.FloatTensor]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, CausalLMOutputWithCrossAttentions]: - r""" - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in - `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are - ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). 
- - Returns: - - Example: - - ```python - >>> from transformers import RobertaTokenizer, RobertaForCausalLM, RobertaConfig - >>> import torch - - >>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base") - >>> config = RobertaConfig.from_pretrained("roberta-base") - >>> config.is_decoder = True - >>> model = RobertaForCausalLM.from_pretrained("roberta-base", config=config) - - >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") - >>> outputs = model(**inputs) - - >>> prediction_logits = outputs.logits - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - if labels is not None: - use_cache = False - - outputs = self.roberta( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - prediction_scores = self.lm_head(sequence_output) - - lm_loss = None - if labels is not None: - # we are doing next-token prediction; shift prediction scores and input ids by one - shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous() - labels = labels[:, 1:].contiguous() - loss_fct = CrossEntropyLoss() - lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((lm_loss,) + output) if lm_loss is not None else output - - return CausalLMOutputWithCrossAttentions( - loss=lm_loss, - logits=prediction_scores, - past_key_values=outputs.past_key_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - ) - - def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, **model_kwargs): - input_shape = input_ids.shape - # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly - if attention_mask is None: - attention_mask = input_ids.new_ones(input_shape) - - # cut decoder_input_ids if past is used - if past is not None: - input_ids = input_ids[:, -1:] - - return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past} - - def _reorder_cache(self, past, beam_idx): - reordered_past = () - for layer_past in past: - reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),) - return reordered_past - - -@add_start_docstrings("""RoBERTa Model with a `language modeling` head on top.""", ROBERTA_START_DOCSTRING) -class RobertaForMaskedLM(RobertaPreTrainedModel): - _keys_to_ignore_on_save = [r"lm_head.decoder.weight", r"lm_head.decoder.bias"] - _keys_to_ignore_on_load_missing = [r"position_ids", r"lm_head.decoder.weight", r"lm_head.decoder.bias"] - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config): - super().__init__(config) - - if config.is_decoder: - logger.warning( - "If you want to use `RobertaForMaskedLM` make sure `config.is_decoder=False` for " - "bi-directional self-attention." 
- ) - - self.roberta = RobertaModel(config, add_pooling_layer=False) - self.lm_head = RobertaLMHead(config) - - # The LM head weights require special treatment only when they are tied with the word embeddings - self.update_keys_to_ignore(config, ["lm_head.decoder.weight"]) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.lm_head.decoder - - def set_output_embeddings(self, new_embeddings): - self.lm_head.decoder = new_embeddings - - @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=MaskedLMOutput, - config_class=_CONFIG_FOR_DOC, - mask="", - expected_output="' Paris'", - expected_loss=0.1, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, MaskedLMOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., - config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the - loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - kwargs (`Dict[str, any]`, optional, defaults to *{}*): - Used to hide legacy arguments that have been deprecated. 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.roberta( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = outputs[0] - prediction_scores = self.lm_head(sequence_output) - - masked_lm_loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output - - return MaskedLMOutput( - loss=masked_lm_loss, - logits=prediction_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -class RobertaLMHead(nn.Module): - """Roberta Head for masked language modeling.""" - - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - - self.decoder = nn.Linear(config.hidden_size, config.vocab_size) - self.bias = nn.Parameter(torch.zeros(config.vocab_size)) - self.decoder.bias = self.bias - - def forward(self, features, **kwargs): - x = self.dense(features) - x = gelu(x) - x = self.layer_norm(x) - - # project back to size of vocabulary with bias - x = self.decoder(x) - - return x - - def _tie_weights(self): - # To tie those two weights if they get disconnected (on TPU or when the bias is resized) - self.bias = self.decoder.bias - - -@add_start_docstrings( - """ - RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the - pooled output) e.g. for GLUE tasks. 
- """, - ROBERTA_START_DOCSTRING, -) -class RobertaForSequenceClassification(RobertaPreTrainedModel): - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.config = config - - self.roberta = RobertaModel(config, add_pooling_layer=False) - self.classifier = RobertaClassificationHead(config) - - # Initialize weights and apply final processing - self.post_init() - - def tanh_linear_approximation(self, pre_act_pooled, post_act_pooled): - def tanh_deriv(x): - return 1 - torch.tanh(x) ** 2.0 - - m = tanh_deriv(pre_act_pooled) - b = post_act_pooled - m * pre_act_pooled - return m, b - - def tanh_la_decomposition(self, attribution_vectors, pre_act_pooled, post_act_pooled, bias_decomp_type): - m, b = self.tanh_linear_approximation(pre_act_pooled, post_act_pooled) - mx = attribution_vectors * m.unsqueeze(dim=-2) - - if bias_decomp_type == "absdot": - weights = torch.abs(torch.einsum("bkd,bd->bk", mx, b)) - elif bias_decomp_type == "abssim": - weights = torch.abs(torch.nn.functional.cosine_similarity(mx, b, dim=-1)) - weights = (torch.norm(mx, dim=-1) != 0) * weights - elif bias_decomp_type == "norm": - weights = torch.norm(mx, dim=-1) - elif bias_decomp_type == "equal": - weights = (torch.norm(mx, dim=-1) != 0) * 1.0 - elif bias_decomp_type == "cls": - weights = torch.zeros(mx.shape[:-1], device=mx.device) - weights[:, 0] = 1.0 - weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12) - weighted_bias = torch.einsum("bd,bk->bkd", b, weights) - return mx + weighted_bias - - def tanh_zo_decomposition(self, attribution_vectors, pre_act_pooled, post_act_pooled): - m = post_act_pooled / (pre_act_pooled + 1e-12) - mx = attribution_vectors * m.unsqueeze(dim=-2) - return mx - - def pooler_decomposer(self, attribution_vectors, pre_act_pooled, post_act_pooled, include_biases=True, - bias_decomp_type="absdot", tanh_approx_type="LA"): - post_pool = torch.einsum("ld,bsd->bsl", self.classifier.dense.weight, attribution_vectors) - if include_biases: - post_pool = self.bias_decomposer(self.classifier.dense.bias, post_pool, bias_decomp_type=bias_decomp_type) - - if tanh_approx_type == "LA": - post_act_pool = self.tanh_la_decomposition(post_pool, pre_act_pooled, post_act_pooled, - bias_decomp_type=bias_decomp_type) - else: - post_act_pool = self.tanh_zo_decomposition(post_pool, pre_act_pooled, post_act_pooled) - - return post_act_pool - - def bias_decomposer(self, bias, attribution_vectors, bias_decomp_type="absdot"): - # Decomposes the input bias based on similarity to the attribution vectors - # Args: - # bias: a bias vector (all_head_size) - # attribution_vectors: the attribution vectors from token j to i (b, i, j, all_head_size) :: (batch, seq_length, seq_length, all_head_size) - if bias_decomp_type == "absdot": - weights = torch.abs(torch.einsum("bkd,d->bk", attribution_vectors, bias)) - elif bias_decomp_type == "abssim": - weights = torch.abs(torch.nn.functional.cosine_similarity(attribution_vectors, bias, dim=-1)) - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * weights - elif bias_decomp_type == "norm": - weights = torch.norm(attribution_vectors, dim=-1) - elif bias_decomp_type == "equal": - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * 1.0 - elif bias_decomp_type == "cls": - weights = torch.zeros(attribution_vectors.shape[:-1], device=attribution_vectors.device) - weights[:, 0] = 1.0 - elif bias_decomp_type == "dot": - weights = torch.einsum("bkd,d->bk", attribution_vectors, 
bias) - elif bias_decomp_type == "biastoken": - attribution_vectors[:, -1] = attribution_vectors[:, -1] + bias - return attribution_vectors - - weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12) - weighted_bias = torch.matmul(weights.unsqueeze(dim=-1), bias.unsqueeze(dim=0)) - return attribution_vectors + weighted_bias - - def biastoken_decomposer(self, biastoken, attribution_vectors, bias_decomp_type="absdot"): - # Decomposes the input bias based on similarity to the attribution vectors - # Args: - # bias: a bias vector (all_head_size) - # attribution_vectors: the attribution vectors from token j to i (b, i, j, all_head_size) :: (batch, seq_length, seq_length, all_head_size) - if bias_decomp_type == "absdot": - weights = torch.abs(torch.einsum("bkd,bd->bk", attribution_vectors, biastoken)) - elif bias_decomp_type == "abssim": - weights = torch.abs(torch.nn.functional.cosine_similarity(attribution_vectors, biastoken, dim=-1)) - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * weights - elif bias_decomp_type == "norm": - weights = torch.norm(attribution_vectors, dim=-1) - elif bias_decomp_type == "equal": - weights = (torch.norm(attribution_vectors, dim=-1) != 0) * 1.0 - elif bias_decomp_type == "cls": - weights = torch.zeros(attribution_vectors.shape[:-1], device=attribution_vectors.device) - weights[:, 0] = 1.0 - elif bias_decomp_type == "dot": - weights = torch.einsum("bkd,d->bk", attribution_vectors, biastoken) - - weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12) - weighted_bias = torch.matmul(weights.unsqueeze(dim=-1), biastoken.unsqueeze(dim=1)) - return attribution_vectors + weighted_bias - - def ffn_decomposer(self, attribution_vectors, include_biases=True, bias_decomp_type="absdot"): - post_classifier = torch.einsum("ld,bkd->bkl", self.classifier.out_proj.weight, attribution_vectors) - if include_biases: - post_classifier = self.bias_decomposer(self.classifier.out_proj.bias, post_classifier, - bias_decomp_type=bias_decomp_type) - - return post_classifier - - @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint="cardiffnlp/twitter-roberta-base-emotion", - output_type=SequenceClassifierOutput, - config_class=_CONFIG_FOR_DOC, - expected_output="'optimism'", - expected_loss=0.08, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - decompx_config: Optional[DecompXConfig] = None, # added by Fayyaz / Modarressi - ) -> Union[Tuple, SequenceClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.roberta( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - decompx_config=decompx_config - ) - sequence_output = outputs[0] - logits, mid_classifier_outputs = self.classifier(sequence_output, decompx_ready=decompx_config is not None) - - if decompx_config is not None: - pre_act_pooled = mid_classifier_outputs[0] - pooled_output = mid_classifier_outputs[1] - - if decompx_config.include_classifier_w_pooler: - decompx_idx = -2 if decompx_config.output_all_layers else -1 - aggregated_attribution_vectors = outputs[decompx_idx].aggregated[0] - - outputs[decompx_idx].aggregated = output_builder(aggregated_attribution_vectors, - decompx_config.output_aggregated) - - pooler_decomposed = self.pooler_decomposer( - attribution_vectors=aggregated_attribution_vectors[:, 0], - pre_act_pooled=pre_act_pooled, - post_act_pooled=pooled_output, - include_biases=decompx_config.include_biases, - bias_decomp_type="biastoken" if decompx_config.include_bias_token else decompx_config.bias_decomp_type, - tanh_approx_type=decompx_config.tanh_approx_type - ) - - aggregated_attribution_vectors = pooler_decomposed - - outputs[decompx_idx].pooler = output_builder(pooler_decomposed, decompx_config.output_pooler) - - classifier_decomposed = self.ffn_decomposer( - attribution_vectors=aggregated_attribution_vectors, - include_biases=decompx_config.include_biases, - bias_decomp_type="biastoken" if decompx_config.include_bias_token else decompx_config.bias_decomp_type - ) - - if decompx_config.include_bias_token and decompx_config.bias_decomp_type is not None: - bias_token = classifier_decomposed[:, -1, :].detach().clone() - classifier_decomposed = classifier_decomposed[:, :-1, :] - classifier_decomposed = self.biastoken_decomposer( - bias_token, - classifier_decomposed, - bias_decomp_type=decompx_config.bias_decomp_type - ) - - outputs[decompx_idx].classifier = classifier_decomposed if decompx_config.output_classifier else None - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Roberta Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a - softmax) e.g. 
for RocStories/SWAG tasks. - """, - ROBERTA_START_DOCSTRING, -) -class RobertaForMultipleChoice(RobertaPreTrainedModel): - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def __init__(self, config): - super().__init__(config) - - self.roberta = RobertaModel(config) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - self.classifier = nn.Linear(config.hidden_size, 1) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=MultipleChoiceModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, MultipleChoiceModelOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., - num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. (See - `input_ids` above) - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1] - - flat_input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None - flat_position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None - flat_token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None - flat_attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None - flat_inputs_embeds = ( - inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1)) - if inputs_embeds is not None - else None - ) - - outputs = self.roberta( - flat_input_ids, - position_ids=flat_position_ids, - token_type_ids=flat_token_type_ids, - attention_mask=flat_attention_mask, - head_mask=head_mask, - inputs_embeds=flat_inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - pooled_output = outputs[1] - - pooled_output = self.dropout(pooled_output) - logits = self.classifier(pooled_output) - reshaped_logits = logits.view(-1, num_choices) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(reshaped_logits, labels) - - if not return_dict: - output = (reshaped_logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return MultipleChoiceModelOutput( - loss=loss, - logits=reshaped_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Roberta Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for - Named-Entity-Recognition (NER) tasks. 
- """, - ROBERTA_START_DOCSTRING, -) -class RobertaForTokenClassification(RobertaPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.roberta = RobertaModel(config, add_pooling_layer=False) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.classifier = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint="Jean-Baptiste/roberta-large-ner-english", - output_type=TokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - expected_output="['O', 'ORG', 'ORG', 'O', 'O', 'O', 'O', 'O', 'LOC', 'O', 'LOC', 'LOC']", - expected_loss=0.01, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, TokenClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.roberta( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - sequence_output = self.dropout(sequence_output) - logits = self.classifier(sequence_output) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -class RobertaClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.out_proj = nn.Linear(config.hidden_size, config.num_labels) - - def forward(self, features, decompx_ready=False, **kwargs): - x = features[:, 0, :] # take token (equiv. 
to [CLS]) - x = self.dropout(x) - pre_act = self.dense(x) - post_act = torch.tanh(pre_act) - x = self.dropout(post_act) - x = self.out_proj(x) - if decompx_ready: - return x, (pre_act, post_act) - return x, None - - -@add_start_docstrings( - """ - Roberta Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear - layers on top of the hidden-states output to compute `span start logits` and `span end logits`). - """, - ROBERTA_START_DOCSTRING, -) -class RobertaForQuestionAnswering(RobertaPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.roberta = RobertaModel(config, add_pooling_layer=False) - self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint="deepset/roberta-base-squad2", - output_type=QuestionAnsweringModelOutput, - config_class=_CONFIG_FOR_DOC, - expected_output="' puppet'", - expected_loss=0.86, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - start_positions: Optional[torch.LongTensor] = None, - end_positions: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, QuestionAnsweringModelOutput]: - r""" - start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.roberta( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = logits.split(1, dim=-1) - start_logits = start_logits.squeeze(-1).contiguous() - end_logits = end_logits.squeeze(-1).contiguous() - - total_loss = None - if start_positions is not None and end_positions is not None: - # If we are on multi-GPU, split add a dimension - if len(start_positions.size()) > 1: - start_positions = start_positions.squeeze(-1) - if len(end_positions.size()) > 1: - end_positions = end_positions.squeeze(-1) - # sometimes the start/end positions are outside our model inputs, we ignore these terms - ignored_index = start_logits.size(1) - start_positions = start_positions.clamp(0, ignored_index) - end_positions = end_positions.clamp(0, ignored_index) - - loss_fct = CrossEntropyLoss(ignore_index=ignored_index) - start_loss = loss_fct(start_logits, start_positions) - end_loss = loss_fct(end_logits, end_positions) - total_loss = (start_loss + end_loss) / 2 - - if not return_dict: - output = (start_logits, end_logits) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return QuestionAnsweringModelOutput( - loss=total_loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0): - """ - Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols - are ignored. This is modified from fairseq's `utils.make_positions`. - - Args: - x: torch.Tensor x: - - Returns: torch.Tensor - """ - # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA. - mask = input_ids.ne(padding_idx).int() - incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask - return incremental_indices.long() + padding_idx diff --git a/spaces/monra/freegpt-webui-chimera/server/backend.py b/spaces/monra/freegpt-webui-chimera/server/backend.py deleted file mode 100644 index fd80f3a15e00455aaa104206ba0d27286a634b14..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui-chimera/server/backend.py +++ /dev/null @@ -1,179 +0,0 @@ -import re -from datetime import datetime -from g4f import ChatCompletion -from flask import request, Response, stream_with_context -from requests import get -from server.config import special_instructions - - -class Backend_Api: - def __init__(self, bp, config: dict) -> None: - """ - Initialize the Backend_Api class. - :param app: Flask application instance - :param config: Configuration dictionary - """ - self.bp = bp - self.routes = { - '/backend-api/v2/conversation': { - 'function': self._conversation, - 'methods': ['POST'] - } - } - - def _conversation(self): - """ - Handles the conversation route. 
- - :return: Response object containing the generated conversation stream - """ - conversation_id = request.json['conversation_id'] - - try: - api_key = request.json['api_key'] - jailbreak = request.json['jailbreak'] - model = request.json['model'] - messages = build_messages(jailbreak) - - # Generate response - response = ChatCompletion.create( - api_key=api_key, - model=model, - stream=True, - chatId=conversation_id, - messages=messages - ) - - return Response(stream_with_context(generate_stream(response, jailbreak)), mimetype='text/event-stream') - - except Exception as e: - print(e) - print(e.__traceback__.tb_next) - - return { - '_action': '_ask', - 'success': False, - "error": f"an error occurred {str(e)}" - }, 400 - - -def build_messages(jailbreak): - """ - Build the messages for the conversation. - - :param jailbreak: Jailbreak instruction string - :return: List of messages for the conversation - """ - _conversation = request.json['meta']['content']['conversation'] - internet_access = request.json['meta']['content']['internet_access'] - prompt = request.json['meta']['content']['parts'][0] - - # Add the existing conversation - conversation = _conversation - - # Add web results if enabled - if internet_access: - current_date = datetime.now().strftime("%Y-%m-%d") - query = f'Current date: {current_date}. ' + prompt["content"] - search_results = fetch_search_results(query) - conversation.extend(search_results) - - # Add jailbreak instructions if enabled - if jailbreak_instructions := getJailbreak(jailbreak): - conversation.extend(jailbreak_instructions) - - # Add the prompt - conversation.append(prompt) - - # Reduce conversation size to avoid API Token quantity error - if len(conversation) > 3: - conversation = conversation[-4:] - - return conversation - - -def fetch_search_results(query): - """ - Fetch search results for a given query. - - :param query: Search query string - :return: List of search results - """ - search = get('https://ddg-api.herokuapp.com/search', - params={ - 'query': query, - 'limit': 3, - }) - - snippets = "" - for index, result in enumerate(search.json()): - snippet = f'[{index + 1}] "{result["snippet"]}" URL:{result["link"]}.' - snippets += snippet - - response = "Here are some updated web searches. Use this to improve user response:" - response += snippets - - return [{'role': 'system', 'content': response}] - - -def generate_stream(response, jailbreak): - """ - Generate the conversation stream. - - :param response: Response object from ChatCompletion.create - :param jailbreak: Jailbreak instruction string - :return: Generator object yielding messages in the conversation - """ - if getJailbreak(jailbreak): - response_jailbreak = '' - jailbroken_checked = False - for message in response: - response_jailbreak += message - if jailbroken_checked: - yield message - else: - if response_jailbroken_success(response_jailbreak): - jailbroken_checked = True - if response_jailbroken_failed(response_jailbreak): - yield response_jailbreak - jailbroken_checked = True - else: - yield from response - - -def response_jailbroken_success(response: str) -> bool: - """Check if the response has been jailbroken. - - :param response: Response string - :return: Boolean indicating if the response has been jailbroken - """ - act_match = re.search(r'ACT:', response, flags=re.DOTALL) - return bool(act_match) - - -def response_jailbroken_failed(response): - """ - Check if the response has not been jailbroken. 
- - :param response: Response string - :return: Boolean indicating if the response has not been jailbroken - """ - return False if len(response) < 4 else not (response.startswith("GPT:") or response.startswith("ACT:")) - - -def getJailbreak(jailbreak): - """ - Check if jailbreak instructions are provided. - - :param jailbreak: Jailbreak instruction string - :return: Jailbreak instructions if provided, otherwise None - """ - if jailbreak != "default": - special_instructions[jailbreak][0]['content'] += special_instructions['two_responses_instruction'] - if jailbreak in special_instructions: - special_instructions[jailbreak] - return special_instructions[jailbreak] - else: - return None - else: - return None diff --git a/spaces/mpatel57/ConceptBed/README.md b/spaces/mpatel57/ConceptBed/README.md deleted file mode 100644 index 4ed32e099b0092985af8828c8de155f634a7536c..0000000000000000000000000000000000000000 --- a/spaces/mpatel57/ConceptBed/README.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: ConceptBed -emoji: 💻 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -license: mit - -tags: - - Text-to-Image - - Diffusion Modeling - - Concept Learning - - Benchmark - - Evaluations ---- - -Demo for the paper: [ConceptBed](https://arxiv.org/abs/2306.04695) - - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mshukor/UnIVAL/fairseq/tests/distributed/__init__.py b/spaces/mshukor/UnIVAL/fairseq/tests/distributed/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/mygyasir/remove-photo-object/src/__init__.py b/spaces/mygyasir/remove-photo-object/src/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/mygyasir/remove-photo-object/src/core.py b/spaces/mygyasir/remove-photo-object/src/core.py deleted file mode 100644 index 9706f344d99877b9f8ea6d383ef030c0a4aebdfa..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/remove-photo-object/src/core.py +++ /dev/null @@ -1,466 +0,0 @@ -import base64 -import json -import os -import re -import time -import uuid -from io import BytesIO -from pathlib import Path -import cv2 - -# For inpainting - -import numpy as np -import pandas as pd -import streamlit as st -from PIL import Image -from streamlit_drawable_canvas import st_canvas - - -import argparse -import io -import multiprocessing -from typing import Union - -import torch - -try: - torch._C._jit_override_can_fuse_on_cpu(False) - torch._C._jit_override_can_fuse_on_gpu(False) - torch._C._jit_set_texpr_fuser_enabled(False) - torch._C._jit_set_nvfuser_enabled(False) -except: - pass - -from src.helper import ( - download_model, - load_img, - norm_img, - numpy_to_bytes, - pad_img_to_modulo, - resize_max_size, -) - -NUM_THREADS = str(multiprocessing.cpu_count()) - -os.environ["OMP_NUM_THREADS"] = NUM_THREADS -os.environ["OPENBLAS_NUM_THREADS"] = NUM_THREADS -os.environ["MKL_NUM_THREADS"] = NUM_THREADS -os.environ["VECLIB_MAXIMUM_THREADS"] = NUM_THREADS -os.environ["NUMEXPR_NUM_THREADS"] = NUM_THREADS -if os.environ.get("CACHE_DIR"): - os.environ["TORCH_HOME"] = os.environ["CACHE_DIR"] - -#BUILD_DIR = os.environ.get("LAMA_CLEANER_BUILD_DIR", "./lama_cleaner/app/build") - -# For Seam-carving - -from scipy import ndimage as ndi - -SEAM_COLOR = np.array([255, 200, 200]) # seam visualization color (BGR) 
-SHOULD_DOWNSIZE = True # if True, downsize image for faster carving -DOWNSIZE_WIDTH = 500 # resized image width if SHOULD_DOWNSIZE is True -ENERGY_MASK_CONST = 100000.0 # large energy value for protective masking -MASK_THRESHOLD = 10 # minimum pixel intensity for binary mask -USE_FORWARD_ENERGY = True # if True, use forward energy algorithm - -device = torch.device("cpu") -model_path = "./assets/big-lama.pt" -model = torch.jit.load(model_path, map_location="cpu") -model = model.to(device) -model.eval() - - -######################################## -# UTILITY CODE -######################################## - - -def visualize(im, boolmask=None, rotate=False): - vis = im.astype(np.uint8) - if boolmask is not None: - vis[np.where(boolmask == False)] = SEAM_COLOR - if rotate: - vis = rotate_image(vis, False) - cv2.imshow("visualization", vis) - cv2.waitKey(1) - return vis - -def resize(image, width): - dim = None - h, w = image.shape[:2] - dim = (width, int(h * width / float(w))) - image = image.astype('float32') - return cv2.resize(image, dim) - -def rotate_image(image, clockwise): - k = 1 if clockwise else 3 - return np.rot90(image, k) - - -######################################## -# ENERGY FUNCTIONS -######################################## - -def backward_energy(im): - """ - Simple gradient magnitude energy map. - """ - xgrad = ndi.convolve1d(im, np.array([1, 0, -1]), axis=1, mode='wrap') - ygrad = ndi.convolve1d(im, np.array([1, 0, -1]), axis=0, mode='wrap') - - grad_mag = np.sqrt(np.sum(xgrad**2, axis=2) + np.sum(ygrad**2, axis=2)) - - # vis = visualize(grad_mag) - # cv2.imwrite("backward_energy_demo.jpg", vis) - - return grad_mag - -def forward_energy(im): - """ - Forward energy algorithm as described in "Improved Seam Carving for Video Retargeting" - by Rubinstein, Shamir, Avidan. - Vectorized code adapted from - https://github.com/axu2/improved-seam-carving. - """ - h, w = im.shape[:2] - im = cv2.cvtColor(im.astype(np.uint8), cv2.COLOR_BGR2GRAY).astype(np.float64) - - energy = np.zeros((h, w)) - m = np.zeros((h, w)) - - U = np.roll(im, 1, axis=0) - L = np.roll(im, 1, axis=1) - R = np.roll(im, -1, axis=1) - - cU = np.abs(R - L) - cL = np.abs(U - L) + cU - cR = np.abs(U - R) + cU - - for i in range(1, h): - mU = m[i-1] - mL = np.roll(mU, 1) - mR = np.roll(mU, -1) - - mULR = np.array([mU, mL, mR]) - cULR = np.array([cU[i], cL[i], cR[i]]) - mULR += cULR - - argmins = np.argmin(mULR, axis=0) - m[i] = np.choose(argmins, mULR) - energy[i] = np.choose(argmins, cULR) - - # vis = visualize(energy) - # cv2.imwrite("forward_energy_demo.jpg", vis) - - return energy - -######################################## -# SEAM HELPER FUNCTIONS -######################################## - -def add_seam(im, seam_idx): - """ - Add a vertical seam to a 3-channel color image at the indices provided - by averaging the pixels values to the left and right of the seam. - Code adapted from https://github.com/vivianhylee/seam-carving. 
- """ - h, w = im.shape[:2] - output = np.zeros((h, w + 1, 3)) - for row in range(h): - col = seam_idx[row] - for ch in range(3): - if col == 0: - p = np.mean(im[row, col: col + 2, ch]) - output[row, col, ch] = im[row, col, ch] - output[row, col + 1, ch] = p - output[row, col + 1:, ch] = im[row, col:, ch] - else: - p = np.mean(im[row, col - 1: col + 1, ch]) - output[row, : col, ch] = im[row, : col, ch] - output[row, col, ch] = p - output[row, col + 1:, ch] = im[row, col:, ch] - - return output - -def add_seam_grayscale(im, seam_idx): - """ - Add a vertical seam to a grayscale image at the indices provided - by averaging the pixels values to the left and right of the seam. - """ - h, w = im.shape[:2] - output = np.zeros((h, w + 1)) - for row in range(h): - col = seam_idx[row] - if col == 0: - p = np.mean(im[row, col: col + 2]) - output[row, col] = im[row, col] - output[row, col + 1] = p - output[row, col + 1:] = im[row, col:] - else: - p = np.mean(im[row, col - 1: col + 1]) - output[row, : col] = im[row, : col] - output[row, col] = p - output[row, col + 1:] = im[row, col:] - - return output - -def remove_seam(im, boolmask): - h, w = im.shape[:2] - boolmask3c = np.stack([boolmask] * 3, axis=2) - return im[boolmask3c].reshape((h, w - 1, 3)) - -def remove_seam_grayscale(im, boolmask): - h, w = im.shape[:2] - return im[boolmask].reshape((h, w - 1)) - -def get_minimum_seam(im, mask=None, remove_mask=None): - """ - DP algorithm for finding the seam of minimum energy. Code adapted from - https://karthikkaranth.me/blog/implementing-seam-carving-with-python/ - """ - h, w = im.shape[:2] - energyfn = forward_energy if USE_FORWARD_ENERGY else backward_energy - M = energyfn(im) - - if mask is not None: - M[np.where(mask > MASK_THRESHOLD)] = ENERGY_MASK_CONST - - # give removal mask priority over protective mask by using larger negative value - if remove_mask is not None: - M[np.where(remove_mask > MASK_THRESHOLD)] = -ENERGY_MASK_CONST * 100 - - seam_idx, boolmask = compute_shortest_path(M, im, h, w) - - return np.array(seam_idx), boolmask - -def compute_shortest_path(M, im, h, w): - backtrack = np.zeros_like(M, dtype=np.int_) - - - # populate DP matrix - for i in range(1, h): - for j in range(0, w): - if j == 0: - idx = np.argmin(M[i - 1, j:j + 2]) - backtrack[i, j] = idx + j - min_energy = M[i-1, idx + j] - else: - idx = np.argmin(M[i - 1, j - 1:j + 2]) - backtrack[i, j] = idx + j - 1 - min_energy = M[i - 1, idx + j - 1] - - M[i, j] += min_energy - - # backtrack to find path - seam_idx = [] - boolmask = np.ones((h, w), dtype=np.bool_) - j = np.argmin(M[-1]) - for i in range(h-1, -1, -1): - boolmask[i, j] = False - seam_idx.append(j) - j = backtrack[i, j] - - seam_idx.reverse() - return seam_idx, boolmask - -######################################## -# MAIN ALGORITHM -######################################## - -def seams_removal(im, num_remove, mask=None, vis=False, rot=False): - for _ in range(num_remove): - seam_idx, boolmask = get_minimum_seam(im, mask) - if vis: - visualize(im, boolmask, rotate=rot) - im = remove_seam(im, boolmask) - if mask is not None: - mask = remove_seam_grayscale(mask, boolmask) - return im, mask - - -def seams_insertion(im, num_add, mask=None, vis=False, rot=False): - seams_record = [] - temp_im = im.copy() - temp_mask = mask.copy() if mask is not None else None - - for _ in range(num_add): - seam_idx, boolmask = get_minimum_seam(temp_im, temp_mask) - if vis: - visualize(temp_im, boolmask, rotate=rot) - - seams_record.append(seam_idx) - temp_im = remove_seam(temp_im, boolmask) 
- if temp_mask is not None: - temp_mask = remove_seam_grayscale(temp_mask, boolmask) - - seams_record.reverse() - - for _ in range(num_add): - seam = seams_record.pop() - im = add_seam(im, seam) - if vis: - visualize(im, rotate=rot) - if mask is not None: - mask = add_seam_grayscale(mask, seam) - - # update the remaining seam indices - for remaining_seam in seams_record: - remaining_seam[np.where(remaining_seam >= seam)] += 2 - - return im, mask - -######################################## -# MAIN DRIVER FUNCTIONS -######################################## - -def seam_carve(im, dy, dx, mask=None, vis=False): - im = im.astype(np.float64) - h, w = im.shape[:2] - assert h + dy > 0 and w + dx > 0 and dy <= h and dx <= w - - if mask is not None: - mask = mask.astype(np.float64) - - output = im - - if dx < 0: - output, mask = seams_removal(output, -dx, mask, vis) - - elif dx > 0: - output, mask = seams_insertion(output, dx, mask, vis) - - if dy < 0: - output = rotate_image(output, True) - if mask is not None: - mask = rotate_image(mask, True) - output, mask = seams_removal(output, -dy, mask, vis, rot=True) - output = rotate_image(output, False) - - elif dy > 0: - output = rotate_image(output, True) - if mask is not None: - mask = rotate_image(mask, True) - output, mask = seams_insertion(output, dy, mask, vis, rot=True) - output = rotate_image(output, False) - - return output - - -def object_removal(im, rmask, mask=None, vis=False, horizontal_removal=False): - im = im.astype(np.float64) - rmask = rmask.astype(np.float64) - if mask is not None: - mask = mask.astype(np.float64) - output = im - - h, w = im.shape[:2] - - if horizontal_removal: - output = rotate_image(output, True) - rmask = rotate_image(rmask, True) - if mask is not None: - mask = rotate_image(mask, True) - - while len(np.where(rmask > MASK_THRESHOLD)[0]) > 0: - seam_idx, boolmask = get_minimum_seam(output, mask, rmask) - if vis: - visualize(output, boolmask, rotate=horizontal_removal) - output = remove_seam(output, boolmask) - rmask = remove_seam_grayscale(rmask, boolmask) - if mask is not None: - mask = remove_seam_grayscale(mask, boolmask) - - num_add = (h if horizontal_removal else w) - output.shape[1] - output, mask = seams_insertion(output, num_add, mask, vis, rot=horizontal_removal) - if horizontal_removal: - output = rotate_image(output, False) - - return output - - - -def s_image(im,mask,vs,hs,mode="resize"): - im = cv2.cvtColor(im, cv2.COLOR_RGBA2RGB) - mask = 255-mask[:,:,3] - h, w = im.shape[:2] - if SHOULD_DOWNSIZE and w > DOWNSIZE_WIDTH: - im = resize(im, width=DOWNSIZE_WIDTH) - if mask is not None: - mask = resize(mask, width=DOWNSIZE_WIDTH) - - # image resize mode - if mode=="resize": - dy = hs#reverse - dx = vs#reverse - assert dy is not None and dx is not None - output = seam_carve(im, dy, dx, mask, False) - - - # object removal mode - elif mode=="remove": - assert mask is not None - output = object_removal(im, mask, None, False, True) - - return output - - -##### Inpainting helper code - -def run(image, mask): - """ - image: [C, H, W] - mask: [1, H, W] - return: BGR IMAGE - """ - origin_height, origin_width = image.shape[1:] - image = pad_img_to_modulo(image, mod=8) - mask = pad_img_to_modulo(mask, mod=8) - - mask = (mask > 0) * 1 - image = torch.from_numpy(image).unsqueeze(0).to(device) - mask = torch.from_numpy(mask).unsqueeze(0).to(device) - - start = time.time() - with torch.no_grad(): - inpainted_image = model(image, mask) - - print(f"process time: {(time.time() - start)*1000}ms") - cur_res = 
inpainted_image[0].permute(1, 2, 0).detach().cpu().numpy() - cur_res = cur_res[0:origin_height, 0:origin_width, :] - cur_res = np.clip(cur_res * 255, 0, 255).astype("uint8") - cur_res = cv2.cvtColor(cur_res, cv2.COLOR_BGR2RGB) - return cur_res - - -def get_args_parser(): - parser = argparse.ArgumentParser() - parser.add_argument("--port", default=8080, type=int) - parser.add_argument("--device", default="cuda", type=str) - parser.add_argument("--debug", action="store_true") - return parser.parse_args() - - -def process_inpaint(image, mask): - image = cv2.cvtColor(image, cv2.COLOR_RGBA2RGB) - original_shape = image.shape - interpolation = cv2.INTER_CUBIC - - #size_limit: Union[int, str] = request.form.get("sizeLimit", "1080") - #if size_limit == "Original": - size_limit = max(image.shape) - #else: - # size_limit = int(size_limit) - - print(f"Origin image shape: {original_shape}") - image = resize_max_size(image, size_limit=size_limit, interpolation=interpolation) - print(f"Resized image shape: {image.shape}") - image = norm_img(image) - - mask = 255-mask[:,:,3] - mask = resize_max_size(mask, size_limit=size_limit, interpolation=interpolation) - mask = norm_img(mask) - - res_np_img = run(image, mask) - - return cv2.cvtColor(res_np_img, cv2.COLOR_BGR2RGB) \ No newline at end of file diff --git a/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/augmentations.py b/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/augmentations.py deleted file mode 100644 index 0311b97b63db29d482eac00573b1de774a974338..0000000000000000000000000000000000000000 --- a/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/augmentations.py +++ /dev/null @@ -1,277 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Image augmentation functions -""" - -import math -import random - -import cv2 -import numpy as np - -from utils.general import LOGGER, check_version, colorstr, resample_segments, segment2box -from utils.metrics import bbox_ioa - - -class Albumentations: - # YOLOv5 Albumentations class (optional, only used if package is installed) - def __init__(self): - self.transform = None - try: - import albumentations as A - check_version(A.__version__, '1.0.3', hard=True) # version requirement - - self.transform = A.Compose([ - A.Blur(p=0.01), - A.MedianBlur(p=0.01), - A.ToGray(p=0.01), - A.CLAHE(p=0.01), - A.RandomBrightnessContrast(p=0.0), - A.RandomGamma(p=0.0), - A.ImageCompression(quality_lower=75, p=0.0)], - bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels'])) - - LOGGER.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms if x.p)) - except ImportError: # package not installed, skip - pass - except Exception as e: - LOGGER.info(colorstr('albumentations: ') + f'{e}') - - def __call__(self, im, labels, p=1.0): - if self.transform and random.random() < p: - new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0]) # transformed - im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])]) - return im, labels - - -def augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5): - # HSV color-space augmentation - if hgain or sgain or vgain: - r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains - hue, sat, val = cv2.split(cv2.cvtColor(im, cv2.COLOR_BGR2HSV)) - dtype = im.dtype # uint8 - - x = np.arange(0, 256, dtype=r.dtype) - lut_hue = ((x * r[0]) % 180).astype(dtype) - lut_sat = np.clip(x * r[1], 0, 255).astype(dtype) - lut_val = np.clip(x * r[2], 0, 
255).astype(dtype) - - im_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))) - cv2.cvtColor(im_hsv, cv2.COLOR_HSV2BGR, dst=im) # no return needed - - -def hist_equalize(im, clahe=True, bgr=False): - # Equalize histogram on BGR image 'im' with im.shape(n,m,3) and range 0-255 - yuv = cv2.cvtColor(im, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV) - if clahe: - c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)) - yuv[:, :, 0] = c.apply(yuv[:, :, 0]) - else: - yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0]) # equalize Y channel histogram - return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB) # convert YUV image to RGB - - -def replicate(im, labels): - # Replicate labels - h, w = im.shape[:2] - boxes = labels[:, 1:].astype(int) - x1, y1, x2, y2 = boxes.T - s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels) - for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices - x1b, y1b, x2b, y2b = boxes[i] - bh, bw = y2b - y1b, x2b - x1b - yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y - x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh] - im[y1a:y2a, x1a:x2a] = im[y1b:y2b, x1b:x2b] # im4[ymin:ymax, xmin:xmax] - labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0) - - return im, labels - - -def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32): - # Resize and pad image while meeting stride-multiple constraints - shape = im.shape[:2] # current shape [height, width] - if isinstance(new_shape, int): - new_shape = (new_shape, new_shape) - - # Scale ratio (new / old) - r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) - if not scaleup: # only scale down, do not scale up (for better val mAP) - r = min(r, 1.0) - - # Compute padding - ratio = r, r # width, height ratios - new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) - dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding - if auto: # minimum rectangle - dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding - elif scaleFill: # stretch - dw, dh = 0.0, 0.0 - new_unpad = (new_shape[1], new_shape[0]) - ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios - - dw /= 2 # divide padding into 2 sides - dh /= 2 - - if shape[::-1] != new_unpad: # resize - im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR) - top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) - left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) - im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border - return im, ratio, (dw, dh) - - -def random_perspective(im, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0, - border=(0, 0)): - # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(0.1, 0.1), scale=(0.9, 1.1), shear=(-10, 10)) - # targets = [cls, xyxy] - - height = im.shape[0] + border[0] * 2 # shape(h,w,c) - width = im.shape[1] + border[1] * 2 - - # Center - C = np.eye(3) - C[0, 2] = -im.shape[1] / 2 # x translation (pixels) - C[1, 2] = -im.shape[0] / 2 # y translation (pixels) - - # Perspective - P = np.eye(3) - P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y) - P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x) - - # Rotation and Scale - R = np.eye(3) - a = random.uniform(-degrees, degrees) - # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small 
rotations - s = random.uniform(1 - scale, 1 + scale) - # s = 2 ** random.uniform(-scale, scale) - R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s) - - # Shear - S = np.eye(3) - S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg) - S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg) - - # Translation - T = np.eye(3) - T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels) - T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels) - - # Combined rotation matrix - M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT - if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed - if perspective: - im = cv2.warpPerspective(im, M, dsize=(width, height), borderValue=(114, 114, 114)) - else: # affine - im = cv2.warpAffine(im, M[:2], dsize=(width, height), borderValue=(114, 114, 114)) - - # Visualize - # import matplotlib.pyplot as plt - # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel() - # ax[0].imshow(im[:, :, ::-1]) # base - # ax[1].imshow(im2[:, :, ::-1]) # warped - - # Transform label coordinates - n = len(targets) - if n: - use_segments = any(x.any() for x in segments) - new = np.zeros((n, 4)) - if use_segments: # warp segments - segments = resample_segments(segments) # upsample - for i, segment in enumerate(segments): - xy = np.ones((len(segment), 3)) - xy[:, :2] = segment - xy = xy @ M.T # transform - xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2] # perspective rescale or affine - - # clip - new[i] = segment2box(xy, width, height) - - else: # warp boxes - xy = np.ones((n * 4, 3)) - xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1 - xy = xy @ M.T # transform - xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8) # perspective rescale or affine - - # create new boxes - x = xy[:, [0, 2, 4, 6]] - y = xy[:, [1, 3, 5, 7]] - new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T - - # clip - new[:, [0, 2]] = new[:, [0, 2]].clip(0, width) - new[:, [1, 3]] = new[:, [1, 3]].clip(0, height) - - # filter candidates - i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10) - targets = targets[i] - targets[:, 1:5] = new[i] - - return im, targets - - -def copy_paste(im, labels, segments, p=0.5): - # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy) - n = len(segments) - if p and n: - h, w, c = im.shape # height, width, channels - im_new = np.zeros(im.shape, np.uint8) - for j in random.sample(range(n), k=round(p * n)): - l, s = labels[j], segments[j] - box = w - l[3], l[2], w - l[1], l[4] - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - if (ioa < 0.30).all(): # allow 30% obscuration of existing labels - labels = np.concatenate((labels, [[l[0], *box]]), 0) - segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1)) - cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED) - - result = cv2.bitwise_and(src1=im, src2=im_new) - result = cv2.flip(result, 1) # augment segments (flip left-right) - i = result > 0 # pixels to replace - # i[:, :] = result.max(2).reshape(h, w, 1) # act over ch - im[i] = result[i] # cv2.imwrite('debug.jpg', im) # debug - - return im, labels, segments - - -def cutout(im, labels, p=0.5): - # Applies image cutout augmentation 
https://arxiv.org/abs/1708.04552 - if random.random() < p: - h, w = im.shape[:2] - scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction - for s in scales: - mask_h = random.randint(1, int(h * s)) # create random masks - mask_w = random.randint(1, int(w * s)) - - # box - xmin = max(0, random.randint(0, w) - mask_w // 2) - ymin = max(0, random.randint(0, h) - mask_h // 2) - xmax = min(w, xmin + mask_w) - ymax = min(h, ymin + mask_h) - - # apply random color mask - im[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)] - - # return unobscured labels - if len(labels) and s > 0.03: - box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32) - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - labels = labels[ioa < 0.60] # remove >60% obscured labels - - return labels - - -def mixup(im, labels, im2, labels2): - # Applies MixUp augmentation https://arxiv.org/pdf/1710.09412.pdf - r = np.random.beta(32.0, 32.0) # mixup ratio, alpha=beta=32.0 - im = (im * r + im2 * (1 - r)).astype(np.uint8) - labels = np.concatenate((labels, labels2), 0) - return im, labels - - -def box_candidates(box1, box2, wh_thr=2, ar_thr=100, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n) - # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio - w1, h1 = box1[2] - box1[0], box1[3] - box1[1] - w2, h2 = box2[2] - box2[0], box2[3] - box2[1] - ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio - return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Blender 2.80 REPACK Crack.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Blender 2.80 REPACK Crack.md deleted file mode 100644 index fc092f77d5ba3ae23db6b24555fa83b0f0b8042c..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Blender 2.80 REPACK Crack.md +++ /dev/null @@ -1,23 +0,0 @@ - -Here is a possible title and article for the keyword "Blender 2.80": - -

              Blender 2.80: A New Era of 3D Creation

              -

              Blender 2.80 is a major update to the popular open source 3D software that brings a redesigned user interface, a new real-time render engine, improved tools and gizmos, and much more. Whether you are into animation, modeling, VFX, games, or any other aspect of 3D creation, Blender 2.80 has something for you.

              -

              A Fresh Start

              -

              One of the most noticeable changes in Blender 2.80 is the new user interface that puts the focus on the artwork that you create. A new dark theme and modern icon set were introduced, along with a new toolbar and quick favorites menu that provide rapid access to often-used tools. Keyboard, mouse and tablet interaction got a refresh with left click select as the new default[^1^].

              -

              Blender 2.80 Crack


              DOWNLOAD ❤❤❤ https://urlcod.com/2uI9BE



              -

              Blender 2.80 also introduces templates and workspaces that let you quickly get started with tasks like sculpting, texture painting or motion tracking. They can be customized to create your own efficient working environment[^1^].

              -

              A Whole New Workspace

              -

              Thanks to the new modern 3D viewport, you will be able to display a scene optimized for the task you are performing. A new Workbench render engine was designed for getting work done in the viewport, supporting tasks like scene layout, modeling and sculpting. The engine also features overlays, providing fine control over which utilities are visible on top of the render[^1^].

              -

              Overlays also work on top of Eevee and Cycles render previews, so you can edit and paint the scene with full shading. Eevee is a new physically based real-time renderer that works both as a renderer for final frames, and as the engine driving Blender’s realtime viewport for creating assets. It has advanced features such as volumetrics, screen-space reflections and refractions, subsurface scattering, soft and contact shadows, depth of field, camera motion blur and bloom[^1^] [^2^].

              -

              Cycles is Blender's built-in unbiased path-tracing engine for photorealistic rendering. It supports GPU rendering and offers features such as adaptive sampling, denoising, hair rendering, motion blur, caustics and more[^2^].

              -

              Tools & Gizmos

              -

              The 3D viewport and UV editor have new interactive tools and gizmos, along with a new toolbar. These make it easier for new users to start using Blender, and for existing users to discover and use tools that previously required obscure key combinations. Besides gizmos for tools, various elements like lights, camera, and the compositing backdrop image now have handles to adjust their shape or other attributes[^1^].

              -

              Blender 2.80 also features a new Grease Pencil system that is now a full 2D drawing and animation tool. You can draw directly in the 3D viewport with brushes and colors, create vector or raster layers, use onion skinning and keyframes to animate your drawings, and use modifiers and effects to enhance your artwork[^1^] [^3^].

              -

              Download Blender 2.80

              -

              Blender 2.80 is free and open source: you are free to use it, share it and change it, and to sell the work you create with it. It is made by hundreds of contributors from around the world who are passionate about 3D creation. You can download Blender 2.80 from the official website[^2^] or from one of the many mirrors available online.

              -

              -

              If you want to learn more about Blender 2.80 and its features, you can check out the online manual, watch tutorials on YouTube or other platforms, join online communities like Blender Artists or Blender Stack Exchange, or enroll in courses offered by Blender Cloud or other providers.

              -

              Blender 2.80 is a new era of 3D creation that offers you the freedom to create anything you can imagine. Download it today and start your journey!

              7196e7f11a
              -
              -
              \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Descargar Driver De Red Sony Vaio Pcg-61A11U __HOT__.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Descargar Driver De Red Sony Vaio Pcg-61A11U __HOT__.md deleted file mode 100644 index 5c420cbe804cd816f307d52bbc7e91179a7ab586..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Descargar Driver De Red Sony Vaio Pcg-61A11U __HOT__.md +++ /dev/null @@ -1,18 +0,0 @@ - -

              How do I download the network driver for the Sony Vaio PCG-61A11U?

              -

              If you have a Sony Vaio PCG-61A11U laptop and want to install the network driver so you can connect to the internet, there are several options you can follow. In this article we explain how to do it step by step.

              -

              The network driver is the software that lets your laptop communicate with the wireless network adapter or the Ethernet cable. Without this driver, you will not be able to access the internet or other local networks. That is why it is important to keep it up to date and compatible with your operating system.

              -

              Download Sony Vaio PCG-61A11U Network Driver


              DOWNLOAD >>>>> https://urlcod.com/2uIaYX



              -

              To download the network driver for the Sony Vaio PCG-61A11U, you can use one of these methods:

              -
              1. Visit Sony's official website and look up your laptop model. There you will find the drivers and software available for your machine: just select the product type, the model and the operating system you use. Then look for the "Network" section and download the matching file. For example, if you use 32-bit Windows 7, you can download the wireless card driver from this link. After downloading it, run the file and follow the instructions to install it.
              2. Use a program such as Driver Easy or Driver Booster, which helps you detect and update missing or outdated drivers on your laptop. These programs scan your machine and show you a list of the drivers that need updating. Just choose the network driver and click "Update", and the program will download and install the driver automatically.
              3. Watch a video tutorial on YouTube that walks you step by step through downloading and installing the network driver for the Sony Vaio PCG-61A11U. There are several videos you can watch, but we recommend this one, which has more than 45 thousand views and explains the quick and easy way to do it. Just follow the instructions in the video and download the files shown in the description.
              -

              We hope this article has been useful for downloading the network driver for the Sony Vaio PCG-61A11U. Remember that if you run into any problem or have questions, you can contact Sony's technical support or check the online help forums.

              - -

              In addition to the network driver, you may need to update other drivers to improve the performance and security of your Sony Vaio PCG-61A11U laptop. Some of the most important ones are the drivers for the graphics card, the sound, the keyboard, the touchpad, the webcam and the card reader. These drivers let you take full advantage of your machine's functions and features.

              -

              To update these drivers, you can follow the same method you used for the network driver. That is, you can visit Sony's official website, use a program such as Driver Easy or Driver Booster, or watch a video tutorial on YouTube. Just look for the driver that matches your model and operating system, then download and install it.

              -

              We recommend updating the drivers on your Sony Vaio PCG-61A11U laptop regularly, at least once a month. That way you can avoid compatibility problems, errors, failures or data loss. You will also enjoy a better user experience and greater speed and stability in your internet connection.

              cec2833e83
              -
              -
              \ No newline at end of file diff --git a/spaces/nightfury/img2audio_video_prompt_tags/constants.py b/spaces/nightfury/img2audio_video_prompt_tags/constants.py deleted file mode 100644 index fc12c414f60abf02e7543d902404c742c6eda6ec..0000000000000000000000000000000000000000 --- a/spaces/nightfury/img2audio_video_prompt_tags/constants.py +++ /dev/null @@ -1,10 +0,0 @@ -import numpy as np -import os - -MUBERT_TAGS_STRING = 'tribal,action,kids,neo-classic,run 130,pumped,jazz / funk,ethnic,dubtechno,reggae,acid jazz,liquidfunk,funk,witch house,tech house,underground,artists,mystical,disco,sensorium,r&b,agender,psychedelic trance / psytrance,peaceful,run 140,piano,run 160,setting,meditation,christmas,ambient,horror,cinematic,electro house,idm,bass,minimal,underscore,drums,glitchy,beautiful,technology,tribal house,country pop,jazz & funk,documentary,space,classical,valentines,chillstep,experimental,trap,new jack swing,drama,post-rock,tense,corporate,neutral,happy,analog,funky,spiritual,sberzvuk special,chill hop,dramatic,catchy,holidays,fitness 90,optimistic,orchestra,acid techno,energizing,romantic,minimal house,breaks,hyper pop,warm up,dreamy,dark,urban,microfunk,dub,nu disco,vogue,keys,hardcore,aggressive,indie,electro funk,beauty,relaxing,trance,pop,hiphop,soft,acoustic,chillrave / ethno-house,deep techno,angry,dance,fun,dubstep,tropical,latin pop,heroic,world music,inspirational,uplifting,atmosphere,art,epic,advertising,chillout,scary,spooky,slow ballad,saxophone,summer,erotic,jazzy,energy 100,kara mar,xmas,atmospheric,indie pop,hip-hop,yoga,reggaeton,lounge,travel,running,folk,chillrave & ethno-house,detective,darkambient,chill,fantasy,minimal techno,special,night,tropical house,downtempo,lullaby,meditative,upbeat,glitch hop,fitness,neurofunk,sexual,indie rock,future pop,jazz,cyberpunk,melancholic,happy hardcore,family / kids,synths,electric guitar,comedy,psychedelic trance & psytrance,edm,psychedelic rock,calm,zen,bells,podcast,melodic house,ethnic percussion,nature,heavy,bassline,indie dance,techno,drumnbass,synth pop,vaporwave,sad,8-bit,chillgressive,deep,orchestral,futuristic,hardtechno,nostalgic,big room,sci-fi,tutorial,joyful,pads,minimal 170,drill,ethnic 108,amusing,sleepy ambient,psychill,italo disco,lofi,house,acoustic guitar,bassline house,rock,k-pop,synthwave,deep house,electronica,gabber,nightlife,sport & fitness,road trip,celebration,electro,disco house,electronic' -MUBERT_TAGS = np.array(MUBERT_TAGS_STRING.split(',')) - -MUBERT_MODE = "loop" - -MUBERT_LICENSE = os.environ.get('MUBERT_LICENSE') -MUBERT_TOKEN = os.environ.get('MUBERT_TOKEN') \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/configs/new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ.py b/spaces/nikitaPDL2023/assignment4/detectron2/configs/new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ.py deleted file mode 100644 index 2a7c376da5f9269197c44079f3e0f3b09cdc63fa..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/configs/new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_R_50_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 2 # 100ep -> 200ep - -lr_multiplier.scheduler.milestones = [ - milestone * 2 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/nsarrazin/agents-js-oasst/Dockerfile b/spaces/nsarrazin/agents-js-oasst/Dockerfile deleted file mode 100644 index 
897c8b883c33fdad2671a03ea6125dd19916db5d..0000000000000000000000000000000000000000 --- a/spaces/nsarrazin/agents-js-oasst/Dockerfile +++ /dev/null @@ -1,31 +0,0 @@ -# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker -# you will also find guides on how best to write your Dockerfile -FROM node:19 as builder-production - -WORKDIR /app - -COPY --link --chown=1000 package-lock.json package.json ./ -RUN --mount=type=cache,target=/app/.npm \ - npm set cache /app/.npm && \ - npm ci --omit=dev - -FROM builder-production as builder - -RUN --mount=type=cache,target=/app/.npm \ - npm set cache /app/.npm && \ - npm ci - -COPY --link --chown=1000 . . - -RUN --mount=type=secret,id=DOTENV_LOCAL,dst=.env.local \ - npm run build - -FROM node:19-slim - -RUN npm install -g pm2 - -COPY --from=builder-production /app/node_modules /app/node_modules -COPY --link --chown=1000 package.json /app/package.json -COPY --from=builder /app/build /app/build - -CMD pm2 start /app/build/index.js -i $CPU_CORES --no-daemon diff --git a/spaces/nuttella/test/Dockerfile b/spaces/nuttella/test/Dockerfile deleted file mode 100644 index 4cb0ce42128d9a2ad33a395883f5e5455a38c707..0000000000000000000000000000000000000000 --- a/spaces/nuttella/test/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/openflamingo/OpenFlamingo/open_flamingo/open_flamingo/scripts/run_eval.sh b/spaces/openflamingo/OpenFlamingo/open_flamingo/open_flamingo/scripts/run_eval.sh deleted file mode 100644 index d347d0a4adabe92f9414bd0663bb0b15770c585d..0000000000000000000000000000000000000000 --- a/spaces/openflamingo/OpenFlamingo/open_flamingo/open_flamingo/scripts/run_eval.sh +++ /dev/null @@ -1,74 +0,0 @@ -#!/bin/bash -#SBATCH --nodes=1 -#SBATCH --ntasks-per-node=2 -#SBATCH --gpus-per-task=1 - -<= %i characters. You have %i.' %(maxChar, len(input))) - - # make sure output num characters is integer - if type(text_len) != int: - raise gr.Error('Number of generated characters must be an integer!') - - # clean input data - input = clean_data(input) - - # load desired model and set maxChar limit -- change these as we generate new models! 
- - model = keras.models.load_model('nasrudin_v1.0.0.hdf5') - - # grab last maxChar characters - sentence = input[-maxChar:] - - # initalize generated string - generated = '' - #generated += input - - # randomly pick diversity parameter - diversities = [0.2, 0.5, 1.0, 1.2] - div_index = int(np.random.random()*(len(diversities))) - diversity = diversities[div_index] - # print('diversity:', diversity) - # sys.stdout.write(input) - - # generate text_len characters worth of test - for i in range(text_len): - # prepare chosen sentence as part of new dataset - x_pred = np.zeros((1, len(sentence), len(alphabet))) - for t, char in enumerate(sentence): - x_pred[0, t, char_to_int[char]] = 1.0 - - # use the current model to predict what outputs are - preds = model.predict(x_pred, verbose=0)[0] - # call the function above to interpret the probabilities and add a degree of freedom - next_index = sample(preds, diversity) - #convert predicted number to character - next_char = int_to_char[next_index] - - # append to existing string so as to build it up - generated += next_char - # append new character to previous sentence and delete the old one in front; now we train on predictions - sentence = sentence[1:] + next_char - - # print the new character as we create it - # sys.stdout.write(next_char) - # sys.stdout.flush() - print() - - return generated - -# call hugging space interactive interface; use Blocks - -with gr.Blocks() as think: - # have intro blurb - gr.Markdown("Hi! I'm Thinking Parrot, a text generating AI! 🦜" ) - - # have accordian blurb - with gr.Accordion("Click for more details!"): - gr.Markdown("Simply type at least 40 characters into the box labeled 'Your Input Text' below and then select the number of output characters you want (note: try lower values for a faster response). Then click 'Think'! My response will appear in the box labeled 'My Response'.") - - # setup user interface - input = [gr.Textbox(label = 'Your Input Text'), gr.Slider(minimum=40, maximum =500, label='Number of output characters', step=10)] - output = gr.Textbox(label = 'My Response') - think_btn = gr.Button('Think!') - think_btn.click(fn= generate_text, inputs = input, outputs = output) - -# enable queing if heavy traffic -think.queue(concurrency_count=3) -think.launch() \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/kandinsky2_2/text_to_image/train_text_to_image_lora_prior.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/kandinsky2_2/text_to_image/train_text_to_image_lora_prior.py deleted file mode 100644 index e4aec111b8f7eb481fc699cfb42ebda7e14e0e89..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/kandinsky2_2/text_to_image/train_text_to_image_lora_prior.py +++ /dev/null @@ -1,850 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Fine-tuning script for Stable Diffusion for text2image with support for LoRA.""" - -import argparse -import logging -import math -import os -import random -import shutil -from pathlib import Path - -import datasets -import numpy as np -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from datasets import load_dataset -from huggingface_hub import create_repo, upload_folder -from tqdm import tqdm -from transformers import CLIPImageProcessor, CLIPTextModelWithProjection, CLIPTokenizer, CLIPVisionModelWithProjection - -import diffusers -from diffusers import AutoPipelineForText2Image, DDPMScheduler, PriorTransformer -from diffusers.loaders import AttnProcsLayers -from diffusers.models.attention_processor import LoRAAttnProcessor -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version, is_wandb_available - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.21.0.dev0") - -logger = get_logger(__name__, log_level="INFO") - - -def save_model_card(repo_id: str, images=None, base_model=str, dataset_name=str, repo_folder=None): - img_str = "" - for i, image in enumerate(images): - image.save(os.path.join(repo_folder, f"image_{i}.png")) - img_str += f"![img_{i}](./image_{i}.png)\n" - - yaml = f""" ---- -license: creativeml-openrail-m -base_model: {base_model} -tags: -- kandinsky -- text-to-image -- diffusers -- lora -inference: true ---- - """ - model_card = f""" -# LoRA text2image fine-tuning - {repo_id} -These are LoRA adaption weights for {base_model}. The weights were fine-tuned on the {dataset_name} dataset. You can find some example images in the following. \n -{img_str} -""" - with open(os.path.join(repo_folder, "README.md"), "w") as f: - f.write(yaml + model_card) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of finetuning Kandinsky 2.2.") - parser.add_argument( - "--pretrained_decoder_model_name_or_path", - type=str, - default="kandinsky-community/kandinsky-2-2-decoder", - required=False, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--pretrained_prior_model_name_or_path", - type=str, - default="kandinsky-community/kandinsky-2-2-prior", - required=False, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--dataset_name", - type=str, - default=None, - help=( - "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," - " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," - " or to a folder containing files that 🤗 Datasets can understand." - ), - ) - parser.add_argument( - "--dataset_config_name", - type=str, - default=None, - help="The config of the Dataset, leave as None if there's only one config.", - ) - parser.add_argument( - "--train_data_dir", - type=str, - default=None, - help=( - "A folder containing the training data. Folder contents must follow the structure described in" - " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" - " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." 
- ), - ) - parser.add_argument( - "--image_column", type=str, default="image", help="The column of the dataset containing an image." - ) - parser.add_argument( - "--caption_column", - type=str, - default="text", - help="The column of the dataset containing a caption or a list of captions.", - ) - parser.add_argument( - "--validation_prompt", type=str, default=None, help="A prompt that is sampled during training for inference." - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_epochs", - type=int, - default=1, - help=( - "Run fine-tuning validation every X epochs. The validation process consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`." - ), - ) - parser.add_argument( - "--max_train_samples", - type=int, - default=None, - help=( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="kandi_2_2-model-finetuned-lora", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument( - "--cache_dir", - type=str, - default=None, - help="The directory where the downloaded models and datasets will be stored.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--train_batch_size", type=int, default=1, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="learning rate", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--snr_gamma", - type=float, - default=None, - help="SNR weighting gamma to be used if rebalancing the loss. Recommended value is 5.0. " - "More details here: https://arxiv.org/abs/2303.09556.", - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 
0 means that the data will be loaded in the main process." - ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument( - "--adam_weight_decay", - type=float, - default=0.0, - required=False, - help="weight decay_to_use", - ) - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=("Max number of checkpoints to store."), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' 
- ), - ) - parser.add_argument( - "--rank", - type=int, - default=4, - help=("The dimension of the LoRA update matrices."), - ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - # Sanity checks - if args.dataset_name is None and args.train_data_dir is None: - raise ValueError("Need either a dataset name or a training folder.") - - return args - - -DATASET_NAME_MAPPING = { - "lambdalabs/pokemon-blip-captions": ("image", "text"), -} - - -def main(): - args = parse_args() - logging_dir = Path(args.output_dir, args.logging_dir) - - accelerator_project_config = ProjectConfiguration( - total_limit=args.checkpoints_total_limit, project_dir=args.output_dir, logging_dir=logging_dir - ) - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - project_config=accelerator_project_config, - ) - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - # Load scheduler, image_processor, tokenizer and models. 
- noise_scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2", prediction_type="sample") - image_processor = CLIPImageProcessor.from_pretrained( - args.pretrained_prior_model_name_or_path, subfolder="image_processor" - ) - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="tokenizer") - image_encoder = CLIPVisionModelWithProjection.from_pretrained( - args.pretrained_prior_model_name_or_path, subfolder="image_encoder" - ) - text_encoder = CLIPTextModelWithProjection.from_pretrained( - args.pretrained_prior_model_name_or_path, subfolder="text_encoder" - ) - prior = PriorTransformer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") - # freeze parameters of models to save more memory - image_encoder.requires_grad_(False) - prior.requires_grad_(False) - text_encoder.requires_grad_(False) - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move image_encoder, text_encoder and prior to device and cast to weight_dtype - prior.to(accelerator.device, dtype=weight_dtype) - image_encoder.to(accelerator.device, dtype=weight_dtype) - text_encoder.to(accelerator.device, dtype=weight_dtype) - lora_attn_procs = {} - for name in prior.attn_processors.keys(): - lora_attn_procs[name] = LoRAAttnProcessor(hidden_size=2048, rank=args.rank) - - prior.set_attn_processor(lora_attn_procs) - - def compute_snr(timesteps): - """ - Computes SNR as per https://github.com/TiankaiHang/Min-SNR-Diffusion-Training/blob/521b624bd70c67cee4bdf49225915f5945a872e3/guided_diffusion/gaussian_diffusion.py#L847-L849 - """ - alphas_cumprod = noise_scheduler.alphas_cumprod - sqrt_alphas_cumprod = alphas_cumprod**0.5 - sqrt_one_minus_alphas_cumprod = (1.0 - alphas_cumprod) ** 0.5 - - # Expand the tensors. - # Adapted from https://github.com/TiankaiHang/Min-SNR-Diffusion-Training/blob/521b624bd70c67cee4bdf49225915f5945a872e3/guided_diffusion/gaussian_diffusion.py#L1026 - sqrt_alphas_cumprod = sqrt_alphas_cumprod.to(device=timesteps.device)[timesteps].float() - while len(sqrt_alphas_cumprod.shape) < len(timesteps.shape): - sqrt_alphas_cumprod = sqrt_alphas_cumprod[..., None] - alpha = sqrt_alphas_cumprod.expand(timesteps.shape) - - sqrt_one_minus_alphas_cumprod = sqrt_one_minus_alphas_cumprod.to(device=timesteps.device)[timesteps].float() - while len(sqrt_one_minus_alphas_cumprod.shape) < len(timesteps.shape): - sqrt_one_minus_alphas_cumprod = sqrt_one_minus_alphas_cumprod[..., None] - sigma = sqrt_one_minus_alphas_cumprod.expand(timesteps.shape) - - # Compute SNR. - snr = (alpha / sigma) ** 2 - return snr - - lora_layers = AttnProcsLayers(prior.attn_processors) - - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`" - ) - - optimizer_cls = bnb.optim.AdamW8bit - else: - optimizer_cls = torch.optim.AdamW - - optimizer = optimizer_cls( - lora_layers.parameters(), - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Get the datasets: you can either provide your own training and evaluation files (see below) - # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). 
- - # In distributed training, the load_dataset function guarantees that only one local process can concurrently - # download the dataset. - if args.dataset_name is not None: - # Downloading and loading a dataset from the hub. - dataset = load_dataset( - args.dataset_name, - args.dataset_config_name, - cache_dir=args.cache_dir, - ) - else: - data_files = {} - if args.train_data_dir is not None: - data_files["train"] = os.path.join(args.train_data_dir, "**") - dataset = load_dataset( - "imagefolder", - data_files=data_files, - cache_dir=args.cache_dir, - ) - # See more about loading custom images at - # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder - - # Preprocessing the datasets. - # We need to tokenize inputs and targets. - column_names = dataset["train"].column_names - - # 6. Get the column names for input/target. - dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None) - if args.image_column is None: - image_column = dataset_columns[0] if dataset_columns is not None else column_names[0] - else: - image_column = args.image_column - if image_column not in column_names: - raise ValueError( - f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}" - ) - if args.caption_column is None: - caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1] - else: - caption_column = args.caption_column - if caption_column not in column_names: - raise ValueError( - f"--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}" - ) - - # Preprocessing the datasets. - # We need to tokenize input captions and transform the images. - def tokenize_captions(examples, is_train=True): - captions = [] - for caption in examples[caption_column]: - if isinstance(caption, str): - captions.append(caption) - elif isinstance(caption, (list, np.ndarray)): - # take a random caption if there are multiple - captions.append(random.choice(caption) if is_train else caption[0]) - else: - raise ValueError( - f"Caption column `{caption_column}` should contain either strings or lists of strings." 
- ) - inputs = tokenizer( - captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" - ) - text_input_ids = inputs.input_ids - text_mask = inputs.attention_mask.bool() - return text_input_ids, text_mask - - def preprocess_train(examples): - images = [image.convert("RGB") for image in examples[image_column]] - examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values - examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) - return examples - - with accelerator.main_process_first(): - if args.max_train_samples is not None: - dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples)) - # Set the training transforms - train_dataset = dataset["train"].with_transform(preprocess_train) - - def collate_fn(examples): - clip_pixel_values = torch.stack([example["clip_pixel_values"] for example in examples]) - clip_pixel_values = clip_pixel_values.to(memory_format=torch.contiguous_format).float() - text_input_ids = torch.stack([example["text_input_ids"] for example in examples]) - text_mask = torch.stack([example["text_mask"] for example in examples]) - return {"clip_pixel_values": clip_pixel_values, "text_input_ids": text_input_ids, "text_mask": text_mask} - - # DataLoaders creation: - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - shuffle=True, - collate_fn=collate_fn, - batch_size=args.train_batch_size, - num_workers=args.dataloader_num_workers, - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - clip_mean = prior.clip_mean.clone() - clip_std = prior.clip_std.clone() - lora_layers, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - lora_layers, optimizer, train_dataloader, lr_scheduler - ) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("text2image-fine-tune", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. 
parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - clip_mean = clip_mean.to(weight_dtype).to(accelerator.device) - clip_std = clip_std.to(weight_dtype).to(accelerator.device) - for epoch in range(first_epoch, args.num_train_epochs): - prior.train() - train_loss = 0.0 - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(prior): - # Convert images to latent space - text_input_ids, text_mask, clip_images = ( - batch["text_input_ids"], - batch["text_mask"], - batch["clip_pixel_values"].to(weight_dtype), - ) - with torch.no_grad(): - text_encoder_output = text_encoder(text_input_ids) - prompt_embeds = text_encoder_output.text_embeds - text_encoder_hidden_states = text_encoder_output.last_hidden_state - - image_embeds = image_encoder(clip_images).image_embeds - # Sample noise that we'll add to the image_embeds - noise = torch.randn_like(image_embeds) - bsz = image_embeds.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint( - 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=image_embeds.device - ) - timesteps = timesteps.long() - image_embeds = (image_embeds - clip_mean) / clip_std - noisy_latents = noise_scheduler.add_noise(image_embeds, noise, timesteps) - - target = image_embeds - - # Predict the noise residual and compute loss - model_pred = prior( - noisy_latents, - timestep=timesteps, - proj_embedding=prompt_embeds, - encoder_hidden_states=text_encoder_hidden_states, - attention_mask=text_mask, - ).predicted_image_embedding - - if args.snr_gamma is None: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - else: - # Compute loss-weights as per Section 3.4 of https://arxiv.org/abs/2303.09556. - # Since we predict the noise instead of x_0, the original formulation is slightly changed. - # This is discussed in Section 4.2 of the same paper. 
- snr = compute_snr(timesteps) - mse_loss_weights = ( - torch.stack([snr, args.snr_gamma * torch.ones_like(timesteps)], dim=1).min(dim=1)[0] / snr - ) - # We first calculate the original loss. Then we mean over the non-batch dimensions and - # rebalance the sample-wise losses with their respective loss weights. - # Finally, we take the mean of the rebalanced loss. - loss = F.mse_loss(model_pred.float(), target.float(), reduction="none") - loss = loss.mean(dim=list(range(1, len(loss.shape)))) * mse_loss_weights - loss = loss.mean() - - # Gather the losses across all processes for logging (if we use distributed training). - avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean() - train_loss += avg_loss.item() / args.gradient_accumulation_steps - - # Backpropagate - accelerator.backward(loss) - if accelerator.sync_gradients: - accelerator.clip_grad_norm_(prior.parameters(), args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - accelerator.log({"train_loss": train_loss}, step=global_step) - train_loss = 0.0 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - # _before_ saving state, check if this save would set us over the `checkpoints_total_limit` - if args.checkpoints_total_limit is not None: - checkpoints = os.listdir(args.output_dir) - checkpoints = [d for d in checkpoints if d.startswith("checkpoint")] - checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1])) - - # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints - if len(checkpoints) >= args.checkpoints_total_limit: - num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1 - removing_checkpoints = checkpoints[0:num_to_remove] - - logger.info( - f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints" - ) - logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}") - - for removing_checkpoint in removing_checkpoints: - removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint) - shutil.rmtree(removing_checkpoint) - - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - - if global_step >= args.max_train_steps: - break - - if accelerator.is_main_process: - if args.validation_prompt is not None and epoch % args.validation_epochs == 0: - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." 
- ) - # create pipeline - pipeline = AutoPipelineForText2Image.from_pretrained( - args.pretrained_decoder_model_name_or_path, - prior_prior=accelerator.unwrap_model(prior), - torch_dtype=weight_dtype, - ) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - generator = torch.Generator(device=accelerator.device) - if args.seed is not None: - generator = generator.manual_seed(args.seed) - images = [] - for _ in range(args.num_validation_images): - images.append( - pipeline(args.validation_prompt, num_inference_steps=30, generator=generator).images[0] - ) - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - del pipeline - torch.cuda.empty_cache() - - # Save the lora layers - accelerator.wait_for_everyone() - if accelerator.is_main_process: - prior = prior.to(torch.float32) - prior.save_attn_procs(args.output_dir) - - if args.push_to_hub: - save_model_card( - repo_id, - images=images, - base_model=args.pretrained_prior_model_name_or_path, - dataset_name=args.dataset_name, - repo_folder=args.output_dir, - ) - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - # Final inference - # Load previous pipeline - pipeline = AutoPipelineForText2Image.from_pretrained( - args.pretrained_decoder_model_name_or_path, torch_dtype=weight_dtype - ) - pipeline = pipeline.to(accelerator.device) - - # load attention processors - pipeline.prior_prior.load_attn_procs(args.output_dir) - - # run inference - generator = torch.Generator(device=accelerator.device) - if args.seed is not None: - generator = generator.manual_seed(args.seed) - images = [] - for _ in range(args.num_validation_images): - images.append(pipeline(args.validation_prompt, num_inference_steps=30, generator=generator).images[0]) - - if accelerator.is_main_process: - for tracker in accelerator.trackers: - if len(images) != 0: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("test", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "test": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/ddim/__init__.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/ddim/__init__.py deleted file mode 100644 index 0121cd8f6dac071b4ce78cf727ff1657c8e51626..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/ddim/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -from typing import TYPE_CHECKING - -from ...utils import _LazyModule - - -_import_structure = {"pipeline_ddim": ["DDIMPipeline"]} - -if TYPE_CHECKING: - from .pipeline_ddim import DDIMPipeline -else: - import sys - - sys.modules[__name__] = _LazyModule( - __name__, - globals()["__file__"], - _import_structure, - module_spec=__spec__, - ) diff --git 
a/spaces/patgpt4/MusicGen/audiocraft/modules/codebooks_patterns.py b/spaces/patgpt4/MusicGen/audiocraft/modules/codebooks_patterns.py deleted file mode 100644 index c5b35cbea8cff84aa56116dbdd860fc72a913a13..0000000000000000000000000000000000000000 --- a/spaces/patgpt4/MusicGen/audiocraft/modules/codebooks_patterns.py +++ /dev/null @@ -1,539 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import namedtuple -from dataclasses import dataclass -from functools import lru_cache -import logging -import typing as tp - -from abc import ABC, abstractmethod -import torch - -LayoutCoord = namedtuple('LayoutCoord', ['t', 'q']) # (timestep, codebook index) -PatternLayout = tp.List[tp.List[LayoutCoord]] # Sequence of coordinates -logger = logging.getLogger(__name__) - - -@dataclass -class Pattern: - """Base implementation of a pattern over a sequence with multiple codebooks. - - The codebook pattern consists in a layout, defining for each sequence step - the list of coordinates of each codebook timestep in the resulting interleaved sequence. - The first item of the pattern is always an empty list in order to properly insert a special token - to start with. For convenience, we also keep track of ``n_q`` the number of codebooks used for the pattern - and ``timesteps`` the number of timesteps corresponding to the original sequence. - - The pattern provides convenient methods to build and revert interleaved sequences from it: - ``build_pattern_sequence`` maps a given a dense input tensor of multi-codebook sequence from [B, K, T] - to the interleaved sequence of shape [B, K, S] applying the pattern, with S being the batch size, - K being the number of codebooks, T the number of original timesteps and S the number of sequence steps - for the output sequence. The unfilled positions are replaced with a special token and the built sequence - is returned along with a mask indicating valid tokens. - ``revert_pattern_sequence`` maps back an interleaved sequence of shape [B, K, S] to the original alignment - of codebooks across timesteps to an output tensor of shape [B, K, T], using again a special token and a mask - to fill and specify invalid positions if needed. - See the dedicated methods for more details. - """ - # Pattern layout, for each sequence step, we have a list of coordinates - # corresponding to the original codebook timestep and position. - # The first list is always an empty list in order to properly insert - # a special token to start with. - layout: PatternLayout - timesteps: int - n_q: int - - def __post_init__(self): - assert len(self.layout) > 0 - assert self.layout[0] == [] - self._validate_layout() - self._build_reverted_sequence_scatter_indexes = lru_cache(100)(self._build_reverted_sequence_scatter_indexes) - self._build_pattern_sequence_scatter_indexes = lru_cache(100)(self._build_pattern_sequence_scatter_indexes) - logger.info("New pattern, time steps: %d, sequence steps: %d", self.timesteps, len(self.layout)) - - def _validate_layout(self): - """Runs checks on the layout to ensure a valid pattern is defined. - A pattern is considered invalid if: - - Multiple timesteps for a same codebook are defined in the same sequence step - - The timesteps for a given codebook are not in ascending order as we advance in the sequence - (this would mean that we have future timesteps before past timesteps). 
- """ - q_timesteps = {q: 0 for q in range(self.n_q)} - for s, seq_coords in enumerate(self.layout): - if len(seq_coords) > 0: - qs = set() - for coord in seq_coords: - qs.add(coord.q) - last_q_timestep = q_timesteps[coord.q] - assert coord.t >= last_q_timestep, \ - f"Past timesteps are found in the sequence for codebook = {coord.q} at step {s}" - q_timesteps[coord.q] = coord.t - # each sequence step contains at max 1 coordinate per codebook - assert len(qs) == len(seq_coords), \ - f"Multiple entries for a same codebook are found at step {s}" - - @property - def num_sequence_steps(self): - return len(self.layout) - 1 - - @property - def max_delay(self): - max_t_in_seq_coords = 0 - for seq_coords in self.layout[1:]: - for coords in seq_coords: - max_t_in_seq_coords = max(max_t_in_seq_coords, coords.t + 1) - return max_t_in_seq_coords - self.timesteps - - @property - def valid_layout(self): - valid_step = len(self.layout) - self.max_delay - return self.layout[:valid_step] - - def get_sequence_coords_with_timestep(self, t: int, q: tp.Optional[int] = None): - """Get codebook coordinates in the layout that corresponds to the specified timestep t - and optionally to the codebook q. Coordinates are returned as a tuple with the sequence step - and the actual codebook coordinates. - """ - assert t <= self.timesteps, "provided timesteps is greater than the pattern's number of timesteps" - if q is not None: - assert q <= self.n_q, "provided number of codebooks is greater than the pattern's number of codebooks" - coords = [] - for s, seq_codes in enumerate(self.layout): - for code in seq_codes: - if code.t == t and (q is None or code.q == q): - coords.append((s, code)) - return coords - - def get_steps_with_timestep(self, t: int, q: tp.Optional[int] = None) -> tp.List[int]: - return [step for step, coords in self.get_sequence_coords_with_timestep(t, q)] - - def get_first_step_with_timesteps(self, t: int, q: tp.Optional[int] = None) -> tp.Optional[int]: - steps_with_timesteps = self.get_steps_with_timestep(t, q) - return steps_with_timesteps[0] if len(steps_with_timesteps) > 0 else None - - def _build_pattern_sequence_scatter_indexes(self, timesteps: int, n_q: int, keep_only_valid_steps: bool, - device: tp.Union[torch.device, str] = 'cpu'): - """Build scatter indexes corresponding to the pattern, up to the provided sequence_steps. - - Args: - timesteps (int): Maximum number of timesteps steps to consider. - keep_only_valid_steps (bool): Restrict the pattern layout to match only valid steps. - device (Union[torch.device, str]): Device for created tensors. - Returns: - indexes (torch.Tensor): Indexes corresponding to the sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes, of shape [K, S]. - """ - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert timesteps <= self.timesteps, "invalid number of timesteps used to build the sequence from the pattern" - # use the proper layout based on whether we limit ourselves to valid steps only or not, - # note that using the valid_layout will result in a truncated sequence up to the valid steps - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # single item indexing being super slow with pytorch vs. 
numpy, so we use numpy here - indexes = torch.zeros(n_q, len(ref_layout), dtype=torch.long).numpy() - mask = torch.zeros(n_q, len(ref_layout), dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - # the last value is n_q * timesteps as we have flattened z and append special token as the last token - # which will correspond to the index: n_q * timesteps - indexes[:] = n_q * timesteps - # iterate over the pattern and fill scattered indexes and mask - for s, sequence_coords in enumerate(ref_layout): - for coords in sequence_coords: - if coords.t < timesteps: - indexes[coords.q, s] = coords.t + coords.q * timesteps - mask[coords.q, s] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def build_pattern_sequence(self, z: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Build sequence corresponding to the pattern from the input tensor z. - The sequence is built using up to sequence_steps if specified, and non-pattern - coordinates are filled with the special token. - - Args: - z (torch.Tensor): Input tensor of multi-codebooks sequence, of shape [B, K, T]. - special_token (int): Special token used to fill non-pattern coordinates in the new sequence. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. - Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, S] with S - corresponding either to the sequence_steps if provided, otherwise to the length of the pattern. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, S]. - """ - B, K, T = z.shape - indexes, mask = self._build_pattern_sequence_scatter_indexes( - T, K, keep_only_valid_steps=keep_only_valid_steps, device=str(z.device) - ) - z = z.view(B, -1) - # we append the special token as the last index of our flattened z tensor - z = torch.cat([z, torch.zeros_like(z[:, :1]) + special_token], dim=1) - values = z[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def _build_reverted_sequence_scatter_indexes(self, sequence_steps: int, n_q: int, - keep_only_valid_steps: bool = False, - is_model_output: bool = False, - device: tp.Union[torch.device, str] = 'cpu'): - """Builds scatter indexes required to retrieve the original multi-codebook sequence - from interleaving pattern. - - Args: - sequence_steps (int): Sequence steps. - n_q (int): Number of codebooks. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. - is_model_output (bool): Whether to keep the sequence item corresponding to initial special token or not. - device (Union[torch.device, str]): Device for created tensors. - Returns: - torch.Tensor: Indexes for reconstructing the output, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # TODO(jade): Do we want to further truncate to only valid timesteps here as well? 
- timesteps = self.timesteps - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert sequence_steps <= len(ref_layout), \ - f"sequence to revert is longer than the defined pattern: {sequence_steps} > {len(ref_layout)}" - - # ensure we take the appropriate indexes to keep the model output from the first special token as well - if is_model_output: - ref_layout = ref_layout[1:] - - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, timesteps, dtype=torch.long).numpy() - mask = torch.zeros(n_q, timesteps, dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - indexes[:] = n_q * sequence_steps - for s, sequence_codes in enumerate(ref_layout): - if s < sequence_steps: - for code in sequence_codes: - if code.t < timesteps: - indexes[code.q, code.t] = s + code.q * sequence_steps - mask[code.q, code.t] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def revert_pattern_sequence(self, s: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Revert a sequence built from the pattern back to the original multi-codebook sequence without interleaving. - The sequence is reverted using up to timesteps if specified, and non-pattern coordinates - are filled with the special token. - - Args: - s (torch.Tensor): Interleaved sequence tensor obtained from the pattern, of shape [B, K, S]. - special_token (int or float): Special token used to fill non-pattern coordinates in the new sequence. - Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, T] with T - corresponding either to the timesteps if provided, or the total timesteps in pattern otherwise. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - B, K, S = s.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=False, device=str(s.device) - ) - s = s.view(B, -1) - # we append the special token as the last index of our flattened z tensor - s = torch.cat([s, torch.zeros_like(s[:, :1]) + special_token], dim=1) - values = s[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def revert_pattern_logits(self, logits: torch.Tensor, special_token: float, keep_only_valid_steps: bool = False): - """Revert model logits obtained on a sequence built from the pattern - back to a tensor matching the original sequence. - - This method is similar to ``revert_pattern_sequence`` with the following specificities: - 1. It is designed to work with the extra cardinality dimension - 2. 
We return the logits for the first sequence item that matches the special_token and - which matching target in the original sequence is the first item of the sequence, - while we skip the last logits as there is no matching target - """ - B, card, K, S = logits.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=True, device=logits.device - ) - logits = logits.reshape(B, card, -1) - # we append the special token as the last index of our flattened z tensor - logits = torch.cat([logits, torch.zeros_like(logits[:, :, :1]) + special_token], dim=-1) # [B, card, K x S] - values = logits[:, :, indexes.view(-1)] - values = values.view(B, card, K, indexes.shape[-1]) - return values, indexes, mask - - -class CodebooksPatternProvider(ABC): - """Abstraction around providing pattern for interleaving codebooks. - - The CodebooksPatternProvider abstraction allows to implement various strategies to - define interleaving pattern of sequences composed of multiple codebooks. For a given - number of codebooks `n_q`, the pattern provider can generate a specified pattern - corresponding to a sequence of `T` timesteps with `n_q` parallel codebooks. This pattern - can be used to construct a new sequence from the original codes respecting the specified - pattern. The pattern is defined as a list of list of code coordinates, code coordinate - being a tuple with the original timestep and codebook to build the new sequence. - Note that all patterns must start with an empty list that is then used to insert a first - sequence step of special tokens in the newly generated sequence. - - Args: - n_q (int): number of codebooks. - cached (bool): if True, patterns for a given length are cached. In general - that should be true for efficiency reason to avoid synchronization points. - """ - def __init__(self, n_q: int, cached: bool = True): - assert n_q > 0 - self.n_q = n_q - self.get_pattern = lru_cache(100)(self.get_pattern) # type: ignore - - @abstractmethod - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern with specific interleaving between codebooks. - - Args: - timesteps (int): Total numer of timesteps. - """ - raise NotImplementedError() - - -class DelayedPatternProvider(CodebooksPatternProvider): - """Provider for delayed pattern across delayed codebooks. - Codebooks are delayed in the sequence and sequence steps will contain codebooks - from different timesteps. - - Example: - Taking timesteps=4 and n_q=3, delays=None, the multi-codebook sequence: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - The resulting sequence obtained from the returned pattern is: - [[S, 1, 2, 3, 4], - [S, S, 1, 2, 3], - [S, S, S, 1, 2]] - (with S being a special token) - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. - flatten_first (int): Flatten the first N timesteps. - empty_initial (int): Prepend with N empty list of coordinates. 
- """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None, - flatten_first: int = 0, empty_initial: int = 0): - super().__init__(n_q) - if delays is None: - delays = list(range(n_q)) - self.delays = delays - self.flatten_first = flatten_first - self.empty_initial = empty_initial - assert len(self.delays) == self.n_q - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - max_delay = max(self.delays) - if self.empty_initial: - out += [[] for _ in range(self.empty_initial)] - if self.flatten_first: - for t in range(min(timesteps, self.flatten_first)): - for q in range(self.n_q): - out.append([LayoutCoord(t, q)]) - for t in range(self.flatten_first, timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= self.flatten_first: - v.append(LayoutCoord(t_for_q, q)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class ParallelPatternProvider(DelayedPatternProvider): - """Provider for parallel pattern across codebooks. - This pattern provider is a special case of the delayed pattern with actually no delay, - hence delays=repeat(0, n_q). - - Args: - n_q (int): Number of codebooks. - """ - def __init__(self, n_q: int): - super().__init__(n_q, [0] * n_q) - - -class UnrolledPatternProvider(CodebooksPatternProvider): - """Provider for unrolling codebooks pattern. - This pattern provider enables to represent the codebook flattened completely or only to some extend - while also specifying a given delay between the flattened codebooks representation, allowing to - unroll the codebooks in the sequence. - - Example: - 1. Flattening of the codebooks. - By default, the pattern provider will fully flatten the codebooks such as flattening=range(n_q), - taking n_q = 3 and timesteps = 4: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, 1, S, S, 2, S, S, 3, S, S, 4], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 2. Partial flattening of the codebooks. The ``flattening`` parameter allows to specify the inner step - for each of the codebook, allowing to define which codebook to flatten (or keep in parallel), for example - taking n_q = 3, timesteps = 4 and flattening = [0, 1, 1]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 3. Flattening with delay. The ``delay`` parameter allows to further unroll the sequence of codebooks - allowing to specify the delay per codebook. Note that the delay between codebooks flattened to the - same inner timestep should be coherent. For example, taking n_q = 3, timesteps = 4, flattening = [0, 1, 1] - and delays = [0, 3, 3]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, S, 1, S, 2, S, 3, S, 4], - [S, S, S, 1, S, 2, S, 3, S, 4], - [1, 2, 3, S, 4, S, 5, S, 6, S]] - - Args: - n_q (int): Number of codebooks. - flattening (Optional[List[int]]): Flattening schema over the codebooks. If not defined, - the codebooks will be flattened to 1 codebook per step, meaning that the sequence will - have n_q extra steps for each timestep. - delays (Optional[List[int]]): Delay for each of the codebooks. If not defined, - no delay is added and therefore will default to [0] * ``n_q``. 
- Note that two codebooks that will be flattened to the same inner step - should have the same delay, otherwise the pattern is considered as invalid. - """ - FlattenedCodebook = namedtuple('FlattenedCodebook', ['codebooks', 'delay']) - - def __init__(self, n_q: int, flattening: tp.Optional[tp.List[int]] = None, - delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if flattening is None: - flattening = list(range(n_q)) - if delays is None: - delays = [0] * n_q - assert len(flattening) == n_q - assert len(delays) == n_q - assert sorted(flattening) == flattening - assert sorted(delays) == delays - self._flattened_codebooks = self._build_flattened_codebooks(delays, flattening) - self.max_delay = max(delays) - - def _build_flattened_codebooks(self, delays: tp.List[int], flattening: tp.List[int]): - """Build a flattened codebooks representation as a dictionary of inner step - and the actual codebook indices corresponding to the flattened codebook. For convenience, we - also store the delay associated to the flattened codebook to avoid maintaining an extra mapping. - """ - flattened_codebooks: dict = {} - for q, (inner_step, delay) in enumerate(zip(flattening, delays)): - if inner_step not in flattened_codebooks: - flat_codebook = UnrolledPatternProvider.FlattenedCodebook(codebooks=[q], delay=delay) - else: - flat_codebook = flattened_codebooks[inner_step] - assert flat_codebook.delay == delay, ( - "Delay and flattening between codebooks is inconsistent: ", - "two codebooks flattened to the same position should have the same delay." - ) - flat_codebook.codebooks.append(q) - flattened_codebooks[inner_step] = flat_codebook - return flattened_codebooks - - @property - def _num_inner_steps(self): - """Number of inner steps to unroll between timesteps in order to flatten the codebooks. - """ - return max([inner_step for inner_step in self._flattened_codebooks.keys()]) + 1 - - def num_virtual_steps(self, timesteps: int) -> int: - return timesteps * self._num_inner_steps + 1 - - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern for delay across codebooks. - - Args: - timesteps (int): Total numer of timesteps. - """ - # the PatternLayout is built as a tuple of sequence position and list of coordinates - # so that it can be reordered properly given the required delay between codebooks of given timesteps - indexed_out: list = [(-1, [])] - max_timesteps = timesteps + self.max_delay - for t in range(max_timesteps): - # for each timestep, we unroll the flattened codebooks, - # emitting the sequence step with the corresponding delay - for step in range(self._num_inner_steps): - if step in self._flattened_codebooks: - # we have codebooks at this virtual step to emit - step_codebooks = self._flattened_codebooks[step] - t_for_q = t + step_codebooks.delay - coords = [LayoutCoord(t, q) for q in step_codebooks.codebooks] - if t_for_q < max_timesteps and t < max_timesteps: - indexed_out.append((t_for_q, coords)) - else: - # there is no codebook in this virtual step so we emit an empty list - indexed_out.append((t, [])) - out = [coords for _, coords in sorted(indexed_out)] - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class VALLEPattern(CodebooksPatternProvider): - """Almost VALL-E style pattern. We futher allow some delays for the - codebooks other than the first one. - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. 
- """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if delays is None: - delays = [0] * (n_q - 1) - self.delays = delays - assert len(self.delays) == self.n_q - 1 - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for t in range(timesteps): - out.append([LayoutCoord(t, 0)]) - max_delay = max(self.delays) - for t in range(timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= 0: - v.append(LayoutCoord(t_for_q, q + 1)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class MusicLMPattern(CodebooksPatternProvider): - """Almost MusicLM style pattern. This is equivalent to full flattening - but in a different order. - - Args: - n_q (int): Number of codebooks. - group_by (int): Number of codebooks to group together. - """ - def __init__(self, n_q: int, group_by: int = 2): - super().__init__(n_q) - self.group_by = group_by - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for offset in range(0, self.n_q, self.group_by): - for t in range(timesteps): - for q in range(offset, offset + self.group_by): - out.append([LayoutCoord(t, q)]) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) diff --git a/spaces/peb-peb/shravan/transcribe.py b/spaces/peb-peb/shravan/transcribe.py deleted file mode 100644 index fd21572bd4f76a706442f25c4acfaf39d5b6b635..0000000000000000000000000000000000000000 --- a/spaces/peb-peb/shravan/transcribe.py +++ /dev/null @@ -1,96 +0,0 @@ -import whisper -import datetime -import subprocess -import wave -import contextlib - - -import torch -import pyannote.audio -from pyannote.audio.pipelines.speaker_verification import PretrainedSpeakerEmbedding -from pyannote.audio import Audio -from pyannote.core import Segment -from sklearn.cluster import AgglomerativeClustering -import numpy as np - -model = whisper.load_model("large-v2") -embedding_model = PretrainedSpeakerEmbedding( - "speechbrain/spkrec-ecapa-voxceleb", - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -) - -def transcribe(audio, num_speakers): - print(type(audio)) - path, error = convert_to_wav(audio) - if error is not None: - return error - - duration = get_duration(path) - if duration > 4 * 60 * 60: - return "Audio duration too long" - - result = model.transcribe(path) - segments = result["segments"] - - num_speakers = min(max(round(num_speakers), 1), len(segments)) - if len(segments) == 1: - segments[0]['speaker'] = 'SPEAKER 1' - else: - embeddings = make_embeddings(path, segments, duration) - add_speaker_labels(segments, embeddings, num_speakers) - output = get_output(segments) - return output - -def convert_to_wav(path): - if path[-3:] != 'wav': - new_path = '.'.join(path.split('.')[:-1]) + '.wav' - try: - subprocess.call(['ffmpeg', '-i', path, new_path, '-y']) - except: - return path, 'Error: Could not convert file to .wav' - path = new_path - return path, None - -def get_duration(path): - with contextlib.closing(wave.open(path,'r')) as f: - frames = f.getnframes() - rate = f.getframerate() - return frames / float(rate) - -def make_embeddings(path, segments, duration): - embeddings = np.zeros(shape=(len(segments), 192)) - for i, segment in enumerate(segments): - embeddings[i] = segment_embedding(path, segment, duration) - return np.nan_to_num(embeddings) - -audio = Audio() - -def segment_embedding(path, segment, duration): - start = 
segment["start"] - # Whisper overshoots the end timestamp in the last segment - end = min(duration, segment["end"]) - clip = Segment(start, end) - waveform, sample_rate = audio.crop(path, clip) - return embedding_model(waveform[None]) - -def add_speaker_labels(segments, embeddings, num_speakers): - """Add speaker labels""" - clustering = AgglomerativeClustering(num_speakers).fit(embeddings) - labels = clustering.labels_ - for i in range(len(segments)): - segments[i]["speaker"] = 'SPEAKER ' + str(labels[i] + 1) - -def time(secs): - """Function to return time delta""" - return datetime.timedelta(seconds=round(secs)) - -def get_output(segments): - """Format and generate the output string""" - output = '' - for (i, segment) in enumerate(segments): - if i == 0 or segments[i - 1]["speaker"] != segment["speaker"]: - if i != 0: - output += '\n\n' - output += segment["speaker"] + ' ' + str(time(segment["start"])) + '\n' - output += segment["text"][1:] + ' ' - return output diff --git a/spaces/phenomenon1981/DreamlikeArt-Diffusion-1.0/style.css b/spaces/phenomenon1981/DreamlikeArt-Diffusion-1.0/style.css deleted file mode 100644 index fdbef9e64cc6b9f8003698ffa38997ee22a640ac..0000000000000000000000000000000000000000 --- a/spaces/phenomenon1981/DreamlikeArt-Diffusion-1.0/style.css +++ /dev/null @@ -1,84 +0,0 @@ -#col-container { - max-width: 800px; - margin-left: auto; - margin-right: auto; -} -a { - color: inherit; - text-decoration: underline; -} -.gradio-container { - font-family: 'IBM Plex Sans', sans-serif; -} -.gr-button { - color: white; - border-color: #9d66e5; - background: #9d66e5; -} -input[type='range'] { - accent-color: #9d66e5; -} -.dark input[type='range'] { - accent-color: #dfdfdf; -} -.container { - max-width: 800px; - margin: auto; - padding-top: 1.5rem; -} -#gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; -} -#gallery>div>.h-full { - min-height: 20rem; -} -.details:hover { - text-decoration: underline; -} -.gr-button { - white-space: nowrap; -} -.gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; -} -#advanced-options { - margin-bottom: 20px; -} -.footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; -} -.footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; -} -.dark .logo{ filter: invert(1); } -.dark .footer { - border-color: #303030; -} -.dark .footer>p { - background: #0b0f19; -} -.acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; -} - diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/_manylinux.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/_manylinux.py deleted file mode 100644 index 449c655be65a948f7b2476302a46c35d9e7605ac..0000000000000000000000000000000000000000 --- 
a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/_manylinux.py +++ /dev/null @@ -1,240 +0,0 @@ -import collections -import contextlib -import functools -import os -import re -import sys -import warnings -from typing import Dict, Generator, Iterator, NamedTuple, Optional, Tuple - -from ._elffile import EIClass, EIData, ELFFile, EMachine - -EF_ARM_ABIMASK = 0xFF000000 -EF_ARM_ABI_VER5 = 0x05000000 -EF_ARM_ABI_FLOAT_HARD = 0x00000400 - - -# `os.PathLike` not a generic type until Python 3.9, so sticking with `str` -# as the type for `path` until then. -@contextlib.contextmanager -def _parse_elf(path: str) -> Generator[Optional[ELFFile], None, None]: - try: - with open(path, "rb") as f: - yield ELFFile(f) - except (OSError, TypeError, ValueError): - yield None - - -def _is_linux_armhf(executable: str) -> bool: - # hard-float ABI can be detected from the ELF header of the running - # process - # https://static.docs.arm.com/ihi0044/g/aaelf32.pdf - with _parse_elf(executable) as f: - return ( - f is not None - and f.capacity == EIClass.C32 - and f.encoding == EIData.Lsb - and f.machine == EMachine.Arm - and f.flags & EF_ARM_ABIMASK == EF_ARM_ABI_VER5 - and f.flags & EF_ARM_ABI_FLOAT_HARD == EF_ARM_ABI_FLOAT_HARD - ) - - -def _is_linux_i686(executable: str) -> bool: - with _parse_elf(executable) as f: - return ( - f is not None - and f.capacity == EIClass.C32 - and f.encoding == EIData.Lsb - and f.machine == EMachine.I386 - ) - - -def _have_compatible_abi(executable: str, arch: str) -> bool: - if arch == "armv7l": - return _is_linux_armhf(executable) - if arch == "i686": - return _is_linux_i686(executable) - return arch in {"x86_64", "aarch64", "ppc64", "ppc64le", "s390x"} - - -# If glibc ever changes its major version, we need to know what the last -# minor version was, so we can build the complete list of all versions. -# For now, guess what the highest minor version might be, assume it will -# be 50 for testing. Once this actually happens, update the dictionary -# with the actual value. -_LAST_GLIBC_MINOR: Dict[int, int] = collections.defaultdict(lambda: 50) - - -class _GLibCVersion(NamedTuple): - major: int - minor: int - - -def _glibc_version_string_confstr() -> Optional[str]: - """ - Primary implementation of glibc_version_string using os.confstr. - """ - # os.confstr is quite a bit faster than ctypes.DLL. It's also less likely - # to be broken or missing. This strategy is used in the standard library - # platform module. - # https://github.com/python/cpython/blob/fcf1d003bf4f0100c/Lib/platform.py#L175-L183 - try: - # Should be a string like "glibc 2.17". - version_string: str = getattr(os, "confstr")("CS_GNU_LIBC_VERSION") - assert version_string is not None - _, version = version_string.rsplit() - except (AssertionError, AttributeError, OSError, ValueError): - # os.confstr() or CS_GNU_LIBC_VERSION not available (or a bad value)... - return None - return version - - -def _glibc_version_string_ctypes() -> Optional[str]: - """ - Fallback implementation of glibc_version_string using ctypes. - """ - try: - import ctypes - except ImportError: - return None - - # ctypes.CDLL(None) internally calls dlopen(NULL), and as the dlopen - # manpage says, "If filename is NULL, then the returned handle is for the - # main program". This way we can let the linker do the work to figure out - # which libc our process is actually using. - # - # We must also handle the special case where the executable is not a - # dynamically linked executable. 
This can occur when using musl libc, - # for example. In this situation, dlopen() will error, leading to an - # OSError. Interestingly, at least in the case of musl, there is no - # errno set on the OSError. The single string argument used to construct - # OSError comes from libc itself and is therefore not portable to - # hard code here. In any case, failure to call dlopen() means we - # can proceed, so we bail on our attempt. - try: - process_namespace = ctypes.CDLL(None) - except OSError: - return None - - try: - gnu_get_libc_version = process_namespace.gnu_get_libc_version - except AttributeError: - # Symbol doesn't exist -> therefore, we are not linked to - # glibc. - return None - - # Call gnu_get_libc_version, which returns a string like "2.5" - gnu_get_libc_version.restype = ctypes.c_char_p - version_str: str = gnu_get_libc_version() - # py2 / py3 compatibility: - if not isinstance(version_str, str): - version_str = version_str.decode("ascii") - - return version_str - - -def _glibc_version_string() -> Optional[str]: - """Returns glibc version string, or None if not using glibc.""" - return _glibc_version_string_confstr() or _glibc_version_string_ctypes() - - -def _parse_glibc_version(version_str: str) -> Tuple[int, int]: - """Parse glibc version. - - We use a regexp instead of str.split because we want to discard any - random junk that might come after the minor version -- this might happen - in patched/forked versions of glibc (e.g. Linaro's version of glibc - uses version strings like "2.20-2014.11"). See gh-3588. - """ - m = re.match(r"(?P[0-9]+)\.(?P[0-9]+)", version_str) - if not m: - warnings.warn( - f"Expected glibc version with 2 components major.minor," - f" got: {version_str}", - RuntimeWarning, - ) - return -1, -1 - return int(m.group("major")), int(m.group("minor")) - - -@functools.lru_cache() -def _get_glibc_version() -> Tuple[int, int]: - version_str = _glibc_version_string() - if version_str is None: - return (-1, -1) - return _parse_glibc_version(version_str) - - -# From PEP 513, PEP 600 -def _is_compatible(name: str, arch: str, version: _GLibCVersion) -> bool: - sys_glibc = _get_glibc_version() - if sys_glibc < version: - return False - # Check for presence of _manylinux module. - try: - import _manylinux # noqa - except ImportError: - return True - if hasattr(_manylinux, "manylinux_compatible"): - result = _manylinux.manylinux_compatible(version[0], version[1], arch) - if result is not None: - return bool(result) - return True - if version == _GLibCVersion(2, 5): - if hasattr(_manylinux, "manylinux1_compatible"): - return bool(_manylinux.manylinux1_compatible) - if version == _GLibCVersion(2, 12): - if hasattr(_manylinux, "manylinux2010_compatible"): - return bool(_manylinux.manylinux2010_compatible) - if version == _GLibCVersion(2, 17): - if hasattr(_manylinux, "manylinux2014_compatible"): - return bool(_manylinux.manylinux2014_compatible) - return True - - -_LEGACY_MANYLINUX_MAP = { - # CentOS 7 w/ glibc 2.17 (PEP 599) - (2, 17): "manylinux2014", - # CentOS 6 w/ glibc 2.12 (PEP 571) - (2, 12): "manylinux2010", - # CentOS 5 w/ glibc 2.5 (PEP 513) - (2, 5): "manylinux1", -} - - -def platform_tags(linux: str, arch: str) -> Iterator[str]: - if not _have_compatible_abi(sys.executable, arch): - return - # Oldest glibc to be supported regardless of architecture is (2, 17). - too_old_glibc2 = _GLibCVersion(2, 16) - if arch in {"x86_64", "i686"}: - # On x86/i686 also oldest glibc to be supported is (2, 5). 
- too_old_glibc2 = _GLibCVersion(2, 4) - current_glibc = _GLibCVersion(*_get_glibc_version()) - glibc_max_list = [current_glibc] - # We can assume compatibility across glibc major versions. - # https://sourceware.org/bugzilla/show_bug.cgi?id=24636 - # - # Build a list of maximum glibc versions so that we can - # output the canonical list of all glibc from current_glibc - # down to too_old_glibc2, including all intermediary versions. - for glibc_major in range(current_glibc.major - 1, 1, -1): - glibc_minor = _LAST_GLIBC_MINOR[glibc_major] - glibc_max_list.append(_GLibCVersion(glibc_major, glibc_minor)) - for glibc_max in glibc_max_list: - if glibc_max.major == too_old_glibc2.major: - min_minor = too_old_glibc2.minor - else: - # For other glibc major versions oldest supported is (x, 0). - min_minor = -1 - for glibc_minor in range(glibc_max.minor, min_minor, -1): - glibc_version = _GLibCVersion(glibc_max.major, glibc_minor) - tag = "manylinux_{}_{}".format(*glibc_version) - if _is_compatible(tag, arch, glibc_version): - yield linux.replace("linux", tag) - # Handle the legacy manylinux1, manylinux2010, manylinux2014 tags. - if glibc_version in _LEGACY_MANYLINUX_MAP: - legacy_tag = _LEGACY_MANYLINUX_MAP[glibc_version] - if _is_compatible(legacy_tag, arch, glibc_version): - yield linux.replace("linux", legacy_tag) diff --git a/spaces/pleonova/multi-label-summary-text/README.md b/spaces/pleonova/multi-label-summary-text/README.md deleted file mode 100644 index 8d3f3fcc02af29983a8b45b1802a044e1cadc7af..0000000000000000000000000000000000000000 --- a/spaces/pleonova/multi-label-summary-text/README.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: Multi Label Summary Text -emoji: 📚 -colorFrom: indigo -colorTo: gray -sdk: streamlit -python_version: 3.9.13 -app_file: app.py -pinned: false ---- - -#### Interactive version -This app is hosted on HuggingFace spaces: https://huggingface.co/spaces/pleonova/multi-label-summary-text - -#### Objective -The goal of this app is to identify multiple relevant labels for long text. - -#### Model -facebook/bart-large-mnli zero-shot transfer-learning summarizer and classifier - -#### Approach -Updating the head of the neural network, we can use the same pretrained bart model to first summarize our long text by splitting it into chunks of 1024 tokens and generating a summary for each chunk. Next, all the summaries are concatenated and the bart model is used to classify the summarized text. Alternatively, one can also classify the whole text as is.
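Below is a minimal, hypothetical sketch of the chunk-summarize-then-classify flow described in the Approach section above; it is not the Space's actual app.py. The summarization checkpoint (facebook/bart-large-cnn), the chunking helper, and the generation parameters are assumptions added for illustration, while the 1024-token chunking, the concatenation of partial summaries, and the facebook/bart-large-mnli zero-shot classifier come from the README.

```python
# Hypothetical sketch of the chunk -> summarize -> concatenate -> classify approach.
from transformers import AutoTokenizer, pipeline

# The zero-shot classifier is named in the README; the summarization checkpoint is an assumption.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")


def chunk_text(text, max_tokens=1024):
    """Split long text into pieces of at most max_tokens (BART's input limit)."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    pieces = [ids[i:i + max_tokens] for i in range(0, len(ids), max_tokens)]
    return [tokenizer.decode(piece, skip_special_tokens=True) for piece in pieces]


def summarize_then_classify(text, labels):
    # Summarize each chunk, then concatenate the partial summaries.
    summaries = [
        summarizer(chunk, max_length=128, min_length=30, truncation=True)[0]["summary_text"]
        for chunk in chunk_text(text)
    ]
    combined = " ".join(summaries)
    # Zero-shot classify the combined summary against the candidate labels.
    result = classifier(combined, candidate_labels=labels, multi_label=True)
    return dict(zip(result["labels"], result["scores"]))


# Usage: scores = summarize_then_classify(long_document, ["finance", "health", "education"])
```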
diff --git a/spaces/posicube/mean_reciprocal_rank/app.py b/spaces/posicube/mean_reciprocal_rank/app.py deleted file mode 100644 index 2c08a1b057e9568192b03170e84605b112cda2c9..0000000000000000000000000000000000000000 --- a/spaces/posicube/mean_reciprocal_rank/app.py +++ /dev/null @@ -1,6 +0,0 @@ -import evaluate -from evaluate.utils import launch_gradio_widget - - -module = evaluate.load("posicube/mean_reciprocal_rank") -launch_gradio_widget(module) \ No newline at end of file diff --git a/spaces/prerna9811/Chord/portaudio/bindings/java/jportaudio/src/com/portaudio/PortAudio.java b/spaces/prerna9811/Chord/portaudio/bindings/java/jportaudio/src/com/portaudio/PortAudio.java deleted file mode 100644 index 41b3c67b58f9877ddbb4fe040ec32a6fe9a67829..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/bindings/java/jportaudio/src/com/portaudio/PortAudio.java +++ /dev/null @@ -1,261 +0,0 @@ -/* - * Portable Audio I/O Library - * Java Binding for PortAudio - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 2008 Ross Bencina - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup bindings_java - - @brief Java wrapper for the PortAudio API. -*/ -package com.portaudio; - -/** - * Java methods that call PortAudio via JNI. This is a portable audio I/O - * library that can be used as an alternative to JavaSound. - * - * Please see the PortAudio documentation for a full explanation. - * - * http://portaudio.com/docs/ - * http://portaudio.com/docs/v19-doxydocs/portaudio_8h.html - * - * This Java binding does not support audio callbacks because an audio callback - * should never block. Calling into a Java virtual machine might block for - * garbage collection or synchronization. So only the blocking read/write mode - * is supported. 
- * - * @see BlockingStream - * @see DeviceInfo - * @see HostApiInfo - * @see StreamInfo - * @see StreamParameters - * - * @author Phil Burk - * - */ -public class PortAudio -{ - public final static int FLAG_CLIP_OFF = (1 << 0); - public final static int FLAG_DITHER_OFF = (1 << 1); - - /** Sample Formats */ - public final static int FORMAT_FLOAT_32 = (1 << 0); - public final static int FORMAT_INT_32 = (1 << 1); // not supported - public final static int FORMAT_INT_24 = (1 << 2); // not supported - public final static int FORMAT_INT_16 = (1 << 3); - public final static int FORMAT_INT_8 = (1 << 4); // not supported - public final static int FORMAT_UINT_8 = (1 << 5); // not supported - - /** These HOST_API_TYPES will not change in the future. */ - public final static int HOST_API_TYPE_DEV = 0; - public final static int HOST_API_TYPE_DIRECTSOUND = 1; - public final static int HOST_API_TYPE_MME = 2; - public final static int HOST_API_TYPE_ASIO = 3; - /** Apple Sound Manager. Obsolete. */ - public final static int HOST_API_TYPE_SOUNDMANAGER = 4; - public final static int HOST_API_TYPE_COREAUDIO = 5; - public final static int HOST_API_TYPE_OSS = 7; - public final static int HOST_API_TYPE_ALSA = 8; - public final static int HOST_API_TYPE_AL = 9; - public final static int HOST_API_TYPE_BEOS = 10; - public final static int HOST_API_TYPE_WDMKS = 11; - public final static int HOST_API_TYPE_JACK = 12; - public final static int HOST_API_TYPE_WASAPI = 13; - public final static int HOST_API_TYPE_AUDIOSCIENCE = 14; - public final static int HOST_API_TYPE_COUNT = 15; - - static - { - String os = System.getProperty( "os.name" ).toLowerCase(); - // On Windows we have separate libraries for 32 and 64-bit JVMs. - if( os.indexOf( "win" ) >= 0 ) - { - if( System.getProperty( "os.arch" ).contains( "64" ) ) - { - System.loadLibrary( "jportaudio_x64" ); - } - else - { - System.loadLibrary( "jportaudio_x86" ); - } - } - else - { - System.loadLibrary( "jportaudio" ); - } - System.out.println( "---- JPortAudio version " + getVersion() + ", " - + getVersionText() ); - } - - /** - * @return the release number of the currently running PortAudio build, eg - * 1900. - */ - public native static int getVersion(); - - /** - * @return a textual description of the current PortAudio build, eg - * "PortAudio V19-devel 13 October 2002". - */ - public native static String getVersionText(); - - /** - * Library initialization function - call this before using PortAudio. This - * function initializes internal data structures and prepares underlying - * host APIs for use. With the exception of getVersion(), getVersionText(), - * and getErrorText(), this function MUST be called before using any other - * PortAudio API functions. - */ - public native static void initialize(); - - /** - * Library termination function - call this when finished using PortAudio. - * This function deallocates all resources allocated by PortAudio since it - * was initialized by a call to initialize(). In cases where Pa_Initialise() - * has been called multiple times, each call must be matched with a - * corresponding call to terminate(). The final matching call to terminate() - * will automatically close any PortAudio streams that are still open. - */ - public native static void terminate(); - - /** - * @return the number of available devices. The number of available devices - * may be zero. 
- */ - public native static int getDeviceCount(); - - private native static void getDeviceInfo( int index, DeviceInfo deviceInfo ); - - /** - * @param index - * A valid device index in the range 0 to (getDeviceCount()-1) - * @return An DeviceInfo structure. - * @throws RuntimeException - * if the device parameter is out of range. - */ - public static DeviceInfo getDeviceInfo( int index ) - { - DeviceInfo deviceInfo = new DeviceInfo(); - getDeviceInfo( index, deviceInfo ); - return deviceInfo; - } - - /** - * @return the number of available host APIs. - */ - public native static int getHostApiCount(); - - private native static void getHostApiInfo( int index, - HostApiInfo hostApiInfo ); - - /** - * @param index - * @return information about the Host API - */ - public static HostApiInfo getHostApiInfo( int index ) - { - HostApiInfo hostApiInfo = new HostApiInfo(); - getHostApiInfo( index, hostApiInfo ); - return hostApiInfo; - } - - /** - * @param hostApiType - * A unique host API identifier, for example - * HOST_API_TYPE_COREAUDIO. - * @return a runtime host API index - */ - public native static int hostApiTypeIdToHostApiIndex( int hostApiType ); - - /** - * @param hostApiIndex - * A valid host API index ranging from 0 to (getHostApiCount()-1) - * @param apiDeviceIndex - * A valid per-host device index in the range 0 to - * (getHostApiInfo(hostApi).deviceCount-1) - * @return standard PortAudio device index - */ - public native static int hostApiDeviceIndexToDeviceIndex( int hostApiIndex, - int apiDeviceIndex ); - - public native static int getDefaultInputDevice(); - - public native static int getDefaultOutputDevice(); - - public native static int getDefaultHostApi(); - - /** - * @param inputStreamParameters - * input description, may be null - * @param outputStreamParameters - * output description, may be null - * @param sampleRate - * typically 44100 or 48000, or maybe 22050, 16000, 8000, 96000 - * @return 0 if supported or a negative error - */ - public native static int isFormatSupported( - StreamParameters inputStreamParameters, - StreamParameters outputStreamParameters, int sampleRate ); - - private native static void openStream( BlockingStream blockingStream, - StreamParameters inputStreamParameters, - StreamParameters outputStreamParameters, int sampleRate, - int framesPerBuffer, int flags ); - - /** - * - * @param inputStreamParameters - * input description, may be null - * @param outputStreamParameters - * output description, may be null - * @param sampleRate - * typically 44100 or 48000, or maybe 22050, 16000, 8000, 96000 - * @param framesPerBuffer - * @param flags - * @return - */ - public static BlockingStream openStream( - StreamParameters inputStreamParameters, - StreamParameters outputStreamParameters, int sampleRate, - int framesPerBuffer, int flags ) - { - BlockingStream blockingStream = new BlockingStream(); - openStream( blockingStream, inputStreamParameters, - outputStreamParameters, sampleRate, framesPerBuffer, flags ); - return blockingStream; - } - -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/charset_normalizer/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/charset_normalizer/__init__.py deleted file mode 100644 index 55991fc38062b9c800805437ee49b0cf42b98103..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/charset_normalizer/__init__.py +++ /dev/null @@ -1,46 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Charset-Normalizer -~~~~~~~~~~~~~~ -The Real 
First Universal Charset Detector. -A library that helps you read text from an unknown charset encoding. -Motivated by chardet, This package is trying to resolve the issue by taking a new approach. -All IANA character set names for which the Python core library provides codecs are supported. - -Basic usage: - >>> from charset_normalizer import from_bytes - >>> results = from_bytes('Bсеки човек има право на образование. Oбразованието!'.encode('utf_8')) - >>> best_guess = results.best() - >>> str(best_guess) - 'Bсеки човек има право на образование. Oбразованието!' - -Others methods and usages are available - see the full documentation -at . -:copyright: (c) 2021 by Ahmed TAHRI -:license: MIT, see LICENSE for more details. -""" -import logging - -from .api import from_bytes, from_fp, from_path, is_binary -from .legacy import detect -from .models import CharsetMatch, CharsetMatches -from .utils import set_logging_handler -from .version import VERSION, __version__ - -__all__ = ( - "from_fp", - "from_path", - "from_bytes", - "is_binary", - "detect", - "CharsetMatch", - "CharsetMatches", - "__version__", - "VERSION", - "set_logging_handler", -) - -# Attach a NullHandler to the top level logger by default -# https://docs.python.org/3.3/howto/logging.html#configuring-logging-for-a-library - -logging.getLogger("charset_normalizer").addHandler(logging.NullHandler()) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/cu2qu/errors.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/cu2qu/errors.py deleted file mode 100644 index fa3dc42937131c5db54890dde8f519b15f5d0ff1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/cu2qu/errors.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright 2016 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -class Error(Exception): - """Base Cu2Qu exception class for all other errors.""" - - -class ApproxNotFoundError(Error): - def __init__(self, curve): - message = "no approximation found: %s" % curve - super().__init__(message) - self.curve = curve - - -class UnequalZipLengthsError(Error): - pass - - -class IncompatibleGlyphsError(Error): - def __init__(self, glyphs): - assert len(glyphs) > 1 - self.glyphs = glyphs - names = set(repr(g.name) for g in glyphs) - if len(names) > 1: - self.combined_name = "{%s}" % ", ".join(sorted(names)) - else: - self.combined_name = names.pop() - - def __repr__(self): - return "<%s %s>" % (type(self).__name__, self.combined_name) - - -class IncompatibleSegmentNumberError(IncompatibleGlyphsError): - def __str__(self): - return "Glyphs named %s have different number of segments" % ( - self.combined_name - ) - - -class IncompatibleSegmentTypesError(IncompatibleGlyphsError): - def __init__(self, glyphs, segments): - IncompatibleGlyphsError.__init__(self, glyphs) - self.segments = segments - - def __str__(self): - lines = [] - ndigits = len(str(max(self.segments))) - for i, tags in sorted(self.segments.items()): - lines.append( - "%s: (%s)" % (str(i).rjust(ndigits), ", ".join(repr(t) for t in tags)) - ) - return "Glyphs named %s have incompatible segment types:\n %s" % ( - self.combined_name, - "\n ".join(lines), - ) - - -class IncompatibleFontsError(Error): - def __init__(self, glyph_errors): - self.glyph_errors = glyph_errors - - def __str__(self): - return "fonts contains incompatible glyphs: %s" % ( - ", ".join(repr(g) for g in sorted(self.glyph_errors.keys())) - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/processing_utils.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/processing_utils.py deleted file mode 100644 index c230272b4e7213174c40dbf7e1cccb0a5e35d778..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/processing_utils.py +++ /dev/null @@ -1,756 +0,0 @@ -from __future__ import annotations - -import base64 -import hashlib -import json -import logging -import os -import shutil -import subprocess -import tempfile -import urllib.request -import warnings -from io import BytesIO -from pathlib import Path -from typing import TYPE_CHECKING, Any, Literal - -import numpy as np -import requests -from gradio_client import utils as client_utils -from PIL import Image, ImageOps, PngImagePlugin - -from gradio import wasm_utils -from gradio.data_classes import FileData, GradioModel, GradioRootModel -from gradio.utils import abspath, is_in_or_equal - -with warnings.catch_warnings(): - warnings.simplefilter("ignore") # Ignore pydub warning if ffmpeg is not installed - from pydub import AudioSegment - -log = logging.getLogger(__name__) - -if TYPE_CHECKING: - from gradio.components.base import Component - -######################### -# GENERAL -######################### - - -def to_binary(x: str | dict) -> bytes: - """Converts a base64 string or dictionary to a binary string that can be sent in a POST.""" - if isinstance(x, dict): - if x.get("data"): - base64str = x["data"] - else: - base64str = client_utils.encode_url_or_file_to_base64(x["path"]) - else: - base64str = x - return base64.b64decode(extract_base64_data(base64str)) - - -def extract_base64_data(x: str) -> str: - """Just extracts the base64 data from a general base64 string.""" - return x.rsplit(",", 1)[-1] - - -######################### -# IMAGE PRE-PROCESSING 
-######################### - - -def decode_base64_to_image(encoding: str) -> Image.Image: - image_encoded = extract_base64_data(encoding) - img = Image.open(BytesIO(base64.b64decode(image_encoded))) - try: - if hasattr(ImageOps, "exif_transpose"): - img = ImageOps.exif_transpose(img) - except Exception: - log.warning( - "Failed to transpose image %s based on EXIF data.", - img, - exc_info=True, - ) - return img - - -def encode_plot_to_base64(plt): - with BytesIO() as output_bytes: - plt.savefig(output_bytes, format="png") - bytes_data = output_bytes.getvalue() - base64_str = str(base64.b64encode(bytes_data), "utf-8") - return "data:image/png;base64," + base64_str - - -def get_pil_metadata(pil_image): - # Copy any text-only metadata - metadata = PngImagePlugin.PngInfo() - for key, value in pil_image.info.items(): - if isinstance(key, str) and isinstance(value, str): - metadata.add_text(key, value) - - return metadata - - -def encode_pil_to_bytes(pil_image, format="png"): - with BytesIO() as output_bytes: - pil_image.save(output_bytes, format, pnginfo=get_pil_metadata(pil_image)) - return output_bytes.getvalue() - - -def encode_pil_to_base64(pil_image): - bytes_data = encode_pil_to_bytes(pil_image) - base64_str = str(base64.b64encode(bytes_data), "utf-8") - return "data:image/png;base64," + base64_str - - -def encode_array_to_base64(image_array): - with BytesIO() as output_bytes: - pil_image = Image.fromarray(_convert(image_array, np.uint8, force_copy=False)) - pil_image.save(output_bytes, "PNG") - bytes_data = output_bytes.getvalue() - base64_str = str(base64.b64encode(bytes_data), "utf-8") - return "data:image/png;base64," + base64_str - - -def hash_file(file_path: str | Path, chunk_num_blocks: int = 128) -> str: - sha1 = hashlib.sha1() - with open(file_path, "rb") as f: - for chunk in iter(lambda: f.read(chunk_num_blocks * sha1.block_size), b""): - sha1.update(chunk) - return sha1.hexdigest() - - -def hash_url(url: str, chunk_num_blocks: int = 128) -> str: - sha1 = hashlib.sha1() - remote = urllib.request.urlopen(url) - max_file_size = 100 * 1024 * 1024 # 100MB - total_read = 0 - while True: - data = remote.read(chunk_num_blocks * sha1.block_size) - total_read += chunk_num_blocks * sha1.block_size - if not data or total_read > max_file_size: - break - sha1.update(data) - return sha1.hexdigest() - - -def hash_bytes(bytes: bytes): - sha1 = hashlib.sha1() - sha1.update(bytes) - return sha1.hexdigest() - - -def hash_base64(base64_encoding: str, chunk_num_blocks: int = 128) -> str: - sha1 = hashlib.sha1() - for i in range(0, len(base64_encoding), chunk_num_blocks * sha1.block_size): - data = base64_encoding[i : i + chunk_num_blocks * sha1.block_size] - sha1.update(data.encode("utf-8")) - return sha1.hexdigest() - - -def save_pil_to_cache( - img: Image.Image, cache_dir: str, format: Literal["png", "jpg"] = "png" -) -> str: - bytes_data = encode_pil_to_bytes(img, format) - temp_dir = Path(cache_dir) / hash_bytes(bytes_data) - temp_dir.mkdir(exist_ok=True, parents=True) - filename = str((temp_dir / f"image.{format}").resolve()) - img.save(filename, pnginfo=get_pil_metadata(img)) - return filename - - -def save_img_array_to_cache( - arr: np.ndarray, cache_dir: str, format: Literal["png", "jpg"] = "png" -) -> str: - pil_image = Image.fromarray(_convert(arr, np.uint8, force_copy=False)) - return save_pil_to_cache(pil_image, cache_dir, format=format) - - -def save_audio_to_cache( - data: np.ndarray, sample_rate: int, format: str, cache_dir: str -) -> str: - temp_dir = Path(cache_dir) / 
hash_bytes(data.tobytes()) - temp_dir.mkdir(exist_ok=True, parents=True) - filename = str((temp_dir / f"audio.{format}").resolve()) - audio_to_file(sample_rate, data, filename, format=format) - return filename - - -def save_bytes_to_cache(data: bytes, file_name: str, cache_dir: str) -> str: - path = Path(cache_dir) / hash_bytes(data) - path.mkdir(exist_ok=True, parents=True) - path = path / Path(file_name).name - path.write_bytes(data) - return str(path.resolve()) - - -def save_file_to_cache(file_path: str | Path, cache_dir: str) -> str: - """Returns a temporary file path for a copy of the given file path if it does - not already exist. Otherwise returns the path to the existing temp file.""" - temp_dir = hash_file(file_path) - temp_dir = Path(cache_dir) / temp_dir - temp_dir.mkdir(exist_ok=True, parents=True) - - name = client_utils.strip_invalid_filename_characters(Path(file_path).name) - full_temp_file_path = str(abspath(temp_dir / name)) - - if not Path(full_temp_file_path).exists(): - shutil.copy2(file_path, full_temp_file_path) - - return full_temp_file_path - - -def save_url_to_cache(url: str, cache_dir: str) -> str: - """Downloads a file and makes a temporary file path for a copy if does not already - exist. Otherwise returns the path to the existing temp file.""" - temp_dir = hash_url(url) - temp_dir = Path(cache_dir) / temp_dir - temp_dir.mkdir(exist_ok=True, parents=True) - - name = client_utils.strip_invalid_filename_characters(Path(url).name) - full_temp_file_path = str(abspath(temp_dir / name)) - - if not Path(full_temp_file_path).exists(): - with requests.get(url, stream=True) as r, open(full_temp_file_path, "wb") as f: - shutil.copyfileobj(r.raw, f) - - return full_temp_file_path - - -def save_base64_to_cache( - base64_encoding: str, cache_dir: str, file_name: str | None = None -) -> str: - """Converts a base64 encoding to a file and returns the path to the file if - the file doesn't already exist. Otherwise returns the path to the existing file. - """ - temp_dir = hash_base64(base64_encoding) - temp_dir = Path(cache_dir) / temp_dir - temp_dir.mkdir(exist_ok=True, parents=True) - - guess_extension = client_utils.get_extension(base64_encoding) - if file_name: - file_name = client_utils.strip_invalid_filename_characters(file_name) - elif guess_extension: - file_name = f"file.{guess_extension}" - else: - file_name = "file" - - full_temp_file_path = str(abspath(temp_dir / file_name)) # type: ignore - - if not Path(full_temp_file_path).exists(): - data, _ = client_utils.decode_base64_to_binary(base64_encoding) - with open(full_temp_file_path, "wb") as fb: - fb.write(data) - - return full_temp_file_path - - -def move_resource_to_block_cache(url_or_file_path: str | Path, block: Component) -> str: - """Moves a file or downloads a file from a url to a block's cache directory, adds - to to the block's temp_files, and returns the path to the file in cache. This - ensures that the file is accessible to the Block and can be served to users. 
- """ - if isinstance(url_or_file_path, Path): - url_or_file_path = str(url_or_file_path) - - if client_utils.is_http_url_like(url_or_file_path): - temp_file_path = save_url_to_cache( - url_or_file_path, cache_dir=block.GRADIO_CACHE - ) - block.temp_files.add(temp_file_path) - else: - url_or_file_path = str(abspath(url_or_file_path)) - if not is_in_or_equal(url_or_file_path, block.GRADIO_CACHE): - temp_file_path = save_file_to_cache( - url_or_file_path, cache_dir=block.GRADIO_CACHE - ) - block.temp_files.add(temp_file_path) - else: - temp_file_path = url_or_file_path - - return temp_file_path - - -def move_files_to_cache(data: Any, block: Component): - """Move files to cache and replace the file path with the cache path. - - Runs after postprocess and before preprocess. - - Args: - data: The input or output data for a component. Can be a dictionary or a dataclass - block: The component - """ - - def _move_to_cache(d: dict): - payload = FileData(**d) - temp_file_path = move_resource_to_block_cache(payload.path, block) - payload.path = temp_file_path - return payload.model_dump() - - if isinstance(data, (GradioRootModel, GradioModel)): - data = data.model_dump() - - return client_utils.traverse(data, _move_to_cache, client_utils.is_file_obj) - - -def resize_and_crop(img, size, crop_type="center"): - """ - Resize and crop an image to fit the specified size. - args: - size: `(width, height)` tuple. Pass `None` for either width or height - to only crop and resize the other. - crop_type: can be 'top', 'middle' or 'bottom', depending on this - value, the image will cropped getting the 'top/left', 'middle' or - 'bottom/right' of the image to fit the size. - raises: - ValueError: if an invalid `crop_type` is provided. - """ - if crop_type == "top": - center = (0, 0) - elif crop_type == "center": - center = (0.5, 0.5) - else: - raise ValueError - - resize = list(size) - if size[0] is None: - resize[0] = img.size[0] - if size[1] is None: - resize[1] = img.size[1] - return ImageOps.fit(img, resize, centering=center) # type: ignore - - -################## -# Audio -################## - - -def audio_from_file(filename, crop_min=0, crop_max=100): - try: - audio = AudioSegment.from_file(filename) - except FileNotFoundError as e: - isfile = Path(filename).is_file() - msg = ( - f"Cannot load audio from file: `{'ffprobe' if isfile else filename}` not found." - + " Please install `ffmpeg` in your system to use non-WAV audio file formats" - " and make sure `ffprobe` is in your PATH." - if isfile - else "" - ) - raise RuntimeError(msg) from e - if crop_min != 0 or crop_max != 100: - audio_start = len(audio) * crop_min / 100 - audio_end = len(audio) * crop_max / 100 - audio = audio[audio_start:audio_end] - data = np.array(audio.get_array_of_samples()) - if audio.channels > 1: - data = data.reshape(-1, audio.channels) - return audio.frame_rate, data - - -def audio_to_file(sample_rate, data, filename, format="wav"): - if format == "wav": - data = convert_to_16_bit_wav(data) - audio = AudioSegment( - data.tobytes(), - frame_rate=sample_rate, - sample_width=data.dtype.itemsize, - channels=(1 if len(data.shape) == 1 else data.shape[1]), - ) - file = audio.export(filename, format=format) - file.close() # type: ignore - - -def convert_to_16_bit_wav(data): - # Based on: https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.write.html - warning = "Trying to convert audio automatically from {} to 16-bit int format." 
- if data.dtype in [np.float64, np.float32, np.float16]: - warnings.warn(warning.format(data.dtype)) - data = data / np.abs(data).max() - data = data * 32767 - data = data.astype(np.int16) - elif data.dtype == np.int32: - warnings.warn(warning.format(data.dtype)) - data = data / 65538 - data = data.astype(np.int16) - elif data.dtype == np.int16: - pass - elif data.dtype == np.uint16: - warnings.warn(warning.format(data.dtype)) - data = data - 32768 - data = data.astype(np.int16) - elif data.dtype == np.uint8: - warnings.warn(warning.format(data.dtype)) - data = data * 257 - 32768 - data = data.astype(np.int16) - else: - raise ValueError( - "Audio data cannot be converted automatically from " - f"{data.dtype} to 16-bit int format." - ) - return data - - -################## -# OUTPUT -################## - - -def _convert(image, dtype, force_copy=False, uniform=False): - """ - Adapted from: https://github.com/scikit-image/scikit-image/blob/main/skimage/util/dtype.py#L510-L531 - - Convert an image to the requested data-type. - Warnings are issued in case of precision loss, or when negative values - are clipped during conversion to unsigned integer types (sign loss). - Floating point values are expected to be normalized and will be clipped - to the range [0.0, 1.0] or [-1.0, 1.0] when converting to unsigned or - signed integers respectively. - Numbers are not shifted to the negative side when converting from - unsigned to signed integer types. Negative values will be clipped when - converting to unsigned integers. - Parameters - ---------- - image : ndarray - Input image. - dtype : dtype - Target data-type. - force_copy : bool, optional - Force a copy of the data, irrespective of its current dtype. - uniform : bool, optional - Uniformly quantize the floating point range to the integer range. - By default (uniform=False) floating point values are scaled and - rounded to the nearest integers, which minimizes back and forth - conversion errors. - .. versionchanged :: 0.15 - ``_convert`` no longer warns about possible precision or sign - information loss. See discussions on these warnings at: - https://github.com/scikit-image/scikit-image/issues/2602 - https://github.com/scikit-image/scikit-image/issues/543#issuecomment-208202228 - https://github.com/scikit-image/scikit-image/pull/3575 - References - ---------- - .. [1] DirectX data conversion rules. - https://msdn.microsoft.com/en-us/library/windows/desktop/dd607323%28v=vs.85%29.aspx - .. [2] Data Conversions. In "OpenGL ES 2.0 Specification v2.0.25", - pp 7-8. Khronos Group, 2010. - .. [3] Proper treatment of pixels as integers. A.W. Paeth. - In "Graphics Gems I", pp 249-256. Morgan Kaufmann, 1990. - .. [4] Dirty Pixels. J. Blinn. In "Jim Blinn's corner: Dirty Pixels", - pp 47-57. Morgan Kaufmann, 1998. - """ - dtype_range = { - bool: (False, True), - np.bool_: (False, True), - np.bool8: (False, True), # type: ignore - float: (-1, 1), - np.float_: (-1, 1), - np.float16: (-1, 1), - np.float32: (-1, 1), - np.float64: (-1, 1), - } - - def _dtype_itemsize(itemsize, *dtypes): - """Return first of `dtypes` with itemsize greater than `itemsize` - Parameters - ---------- - itemsize: int - The data type object element size. - Other Parameters - ---------------- - *dtypes: - Any Object accepted by `np.dtype` to be converted to a data - type object - Returns - ------- - dtype: data type object - First of `dtypes` with itemsize greater than `itemsize`. 
- """ - return next(dt for dt in dtypes if np.dtype(dt).itemsize >= itemsize) - - def _dtype_bits(kind, bits, itemsize=1): - """Return dtype of `kind` that can store a `bits` wide unsigned int - Parameters: - kind: str - Data type kind. - bits: int - Desired number of bits. - itemsize: int - The data type object element size. - Returns - ------- - dtype: data type object - Data type of `kind` that can store a `bits` wide unsigned int - """ - - s = next( - i - for i in (itemsize,) + (2, 4, 8) - if bits < (i * 8) or (bits == (i * 8) and kind == "u") - ) - - return np.dtype(kind + str(s)) - - def _scale(a, n, m, copy=True): - """Scale an array of unsigned/positive integers from `n` to `m` bits. - Numbers can be represented exactly only if `m` is a multiple of `n`. - Parameters - ---------- - a : ndarray - Input image array. - n : int - Number of bits currently used to encode the values in `a`. - m : int - Desired number of bits to encode the values in `out`. - copy : bool, optional - If True, allocates and returns new array. Otherwise, modifies - `a` in place. - Returns - ------- - out : array - Output image array. Has the same kind as `a`. - """ - kind = a.dtype.kind - if n > m and a.max() < 2**m: - return a.astype(_dtype_bits(kind, m)) - elif n == m: - return a.copy() if copy else a - elif n > m: - # downscale with precision loss - if copy: - b = np.empty(a.shape, _dtype_bits(kind, m)) - np.floor_divide(a, 2 ** (n - m), out=b, dtype=a.dtype, casting="unsafe") - return b - else: - a //= 2 ** (n - m) - return a - elif m % n == 0: - # exact upscale to a multiple of `n` bits - if copy: - b = np.empty(a.shape, _dtype_bits(kind, m)) - np.multiply(a, (2**m - 1) // (2**n - 1), out=b, dtype=b.dtype) - return b - else: - a = a.astype(_dtype_bits(kind, m, a.dtype.itemsize), copy=False) - a *= (2**m - 1) // (2**n - 1) - return a - else: - # upscale to a multiple of `n` bits, - # then downscale with precision loss - o = (m // n + 1) * n - if copy: - b = np.empty(a.shape, _dtype_bits(kind, o)) - np.multiply(a, (2**o - 1) // (2**n - 1), out=b, dtype=b.dtype) - b //= 2 ** (o - m) - return b - else: - a = a.astype(_dtype_bits(kind, o, a.dtype.itemsize), copy=False) - a *= (2**o - 1) // (2**n - 1) - a //= 2 ** (o - m) - return a - - image = np.asarray(image) - dtypeobj_in = image.dtype - dtypeobj_out = np.dtype("float64") if dtype is np.floating else np.dtype(dtype) - dtype_in = dtypeobj_in.type - dtype_out = dtypeobj_out.type - kind_in = dtypeobj_in.kind - kind_out = dtypeobj_out.kind - itemsize_in = dtypeobj_in.itemsize - itemsize_out = dtypeobj_out.itemsize - - # Below, we do an `issubdtype` check. Its purpose is to find out - # whether we can get away without doing any image conversion. This happens - # when: - # - # - the output and input dtypes are the same or - # - when the output is specified as a type, and the input dtype - # is a subclass of that type (e.g. 
`np.floating` will allow - # `float32` and `float64` arrays through) - - if np.issubdtype(dtype_in, np.obj2sctype(dtype)): - if force_copy: - image = image.copy() - return image - - if kind_in in "ui": - imin_in = np.iinfo(dtype_in).min - imax_in = np.iinfo(dtype_in).max - if kind_out in "ui": - imin_out = np.iinfo(dtype_out).min # type: ignore - imax_out = np.iinfo(dtype_out).max # type: ignore - - # any -> binary - if kind_out == "b": - return image > dtype_in(dtype_range[dtype_in][1] / 2) - - # binary -> any - if kind_in == "b": - result = image.astype(dtype_out) - if kind_out != "f": - result *= dtype_out(dtype_range[dtype_out][1]) - return result - - # float -> any - if kind_in == "f": - if kind_out == "f": - # float -> float - return image.astype(dtype_out) - - if np.min(image) < -1.0 or np.max(image) > 1.0: - raise ValueError("Images of type float must be between -1 and 1.") - # floating point -> integer - # use float type that can represent output integer type - computation_type = _dtype_itemsize( - itemsize_out, dtype_in, np.float32, np.float64 - ) - - if not uniform: - if kind_out == "u": - image_out = np.multiply(image, imax_out, dtype=computation_type) # type: ignore - else: - image_out = np.multiply( - image, (imax_out - imin_out) / 2, dtype=computation_type # type: ignore - ) - image_out -= 1.0 / 2.0 - np.rint(image_out, out=image_out) - np.clip(image_out, imin_out, imax_out, out=image_out) # type: ignore - elif kind_out == "u": - image_out = np.multiply(image, imax_out + 1, dtype=computation_type) # type: ignore - np.clip(image_out, 0, imax_out, out=image_out) # type: ignore - else: - image_out = np.multiply( - image, (imax_out - imin_out + 1.0) / 2.0, dtype=computation_type # type: ignore - ) - np.floor(image_out, out=image_out) - np.clip(image_out, imin_out, imax_out, out=image_out) # type: ignore - return image_out.astype(dtype_out) - - # signed/unsigned int -> float - if kind_out == "f": - # use float type that can exactly represent input integers - computation_type = _dtype_itemsize( - itemsize_in, dtype_out, np.float32, np.float64 - ) - - if kind_in == "u": - # using np.divide or np.multiply doesn't copy the data - # until the computation time - image = np.multiply(image, 1.0 / imax_in, dtype=computation_type) # type: ignore - # DirectX uses this conversion also for signed ints - # if imin_in: - # np.maximum(image, -1.0, out=image) - else: - image = np.add(image, 0.5, dtype=computation_type) - image *= 2 / (imax_in - imin_in) # type: ignore - - return np.asarray(image, dtype_out) - - # unsigned int -> signed/unsigned int - if kind_in == "u": - if kind_out == "i": - # unsigned int -> signed int - image = _scale(image, 8 * itemsize_in, 8 * itemsize_out - 1) - return image.view(dtype_out) - else: - # unsigned int -> unsigned int - return _scale(image, 8 * itemsize_in, 8 * itemsize_out) - - # signed int -> unsigned int - if kind_out == "u": - image = _scale(image, 8 * itemsize_in - 1, 8 * itemsize_out) - result = np.empty(image.shape, dtype_out) - np.maximum(image, 0, out=result, dtype=image.dtype, casting="unsafe") - return result - - # signed int -> signed int - if itemsize_in > itemsize_out: - return _scale(image, 8 * itemsize_in - 1, 8 * itemsize_out - 1) - - image = image.astype(_dtype_bits("i", itemsize_out * 8)) - image -= imin_in # type: ignore - image = _scale(image, 8 * itemsize_in, 8 * itemsize_out, copy=False) - image += imin_out # type: ignore - return image.astype(dtype_out) - - -def ffmpeg_installed() -> bool: - if wasm_utils.IS_WASM: - # TODO: Support 
ffmpeg in WASM - return False - - return shutil.which("ffmpeg") is not None - - -def video_is_playable(video_filepath: str) -> bool: - """Determines if a video is playable in the browser. - - A video is playable if it has a playable container and codec. - .mp4 -> h264 - .webm -> vp9 - .ogg -> theora - """ - from ffmpy import FFprobe, FFRuntimeError - - try: - container = Path(video_filepath).suffix.lower() - probe = FFprobe( - global_options="-show_format -show_streams -select_streams v -print_format json", - inputs={video_filepath: None}, - ) - output = probe.run(stderr=subprocess.PIPE, stdout=subprocess.PIPE) - output = json.loads(output[0]) - video_codec = output["streams"][0]["codec_name"] - return (container, video_codec) in [ - (".mp4", "h264"), - (".ogg", "theora"), - (".webm", "vp9"), - ] - # If anything goes wrong, assume the video can be played to not convert downstream - except (FFRuntimeError, IndexError, KeyError): - return True - - -def convert_video_to_playable_mp4(video_path: str) -> str: - """Convert the video to mp4. If something goes wrong return the original video.""" - from ffmpy import FFmpeg, FFRuntimeError - - try: - with tempfile.NamedTemporaryFile(delete=False) as tmp_file: - output_path = Path(video_path).with_suffix(".mp4") - shutil.copy2(video_path, tmp_file.name) - # ffmpeg will automatically use h264 codec (playable in browser) when converting to mp4 - ff = FFmpeg( - inputs={str(tmp_file.name): None}, - outputs={str(output_path): None}, - global_options="-y -loglevel quiet", - ) - ff.run() - except FFRuntimeError as e: - print(f"Error converting video to browser-playable format {str(e)}") - output_path = video_path - finally: - # Remove temp file - os.remove(tmp_file.name) # type: ignore - return str(output_path) - - -def get_video_length(video_path: str | Path): - duration = subprocess.check_output( - [ - "ffprobe", - "-i", - str(video_path), - "-show_entries", - "format=duration", - "-v", - "quiet", - "-of", - "csv={}".format("p=0"), - ] - ) - duration_str = duration.decode("utf-8").strip() - duration_float = float(duration_str) - - return duration_float diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/routes.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/routes.py deleted file mode 100644 index 7e15d398581ef27db168389fd0572fa28f165ffa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/routes.py +++ /dev/null @@ -1,821 +0,0 @@ -"""Implements a FastAPI server to run the gradio interface. 
Note that some types in this -module use the Optional/Union notation so that they work correctly with pydantic.""" - -from __future__ import annotations - -import asyncio -import sys - -if sys.version_info >= (3, 9): - from importlib.resources import files -else: - from importlib_resources import files -import inspect -import json -import mimetypes -import os -import posixpath -import secrets -import shutil -import tempfile -import threading -import time -import traceback -from pathlib import Path -from queue import Empty as EmptyQueue -from typing import TYPE_CHECKING, Any, AsyncIterator, Dict, List, Optional, Type - -import anyio -import fastapi -import httpx -import markupsafe -import orjson -from fastapi import Depends, FastAPI, HTTPException, status -from fastapi.middleware.cors import CORSMiddleware -from fastapi.responses import ( - FileResponse, - HTMLResponse, - JSONResponse, - PlainTextResponse, -) -from fastapi.security import OAuth2PasswordRequestForm -from fastapi.templating import Jinja2Templates -from gradio_client import utils as client_utils -from gradio_client.documentation import document, set_documentation_group -from jinja2.exceptions import TemplateNotFound -from multipart.multipart import parse_options_header -from starlette.background import BackgroundTask -from starlette.responses import RedirectResponse, StreamingResponse - -import gradio -import gradio.ranged_response as ranged_response -from gradio import route_utils, utils, wasm_utils -from gradio.context import Context -from gradio.data_classes import ComponentServerBody, PredictBody, ResetBody -from gradio.exceptions import Error -from gradio.helpers import CACHED_FOLDER -from gradio.oauth import attach_oauth -from gradio.queueing import Estimation, Event -from gradio.route_utils import ( # noqa: F401 - GradioMultiPartParser, - GradioUploadFile, - MultiPartException, - Request, -) -from gradio.state_holder import StateHolder -from gradio.utils import ( - cancel_tasks, - get_package_version, - run_coro_in_background, - set_task_name, -) - -if TYPE_CHECKING: - from gradio.blocks import Block - - -mimetypes.init() - -STATIC_TEMPLATE_LIB = files("gradio").joinpath("templates").as_posix() # type: ignore -STATIC_PATH_LIB = files("gradio").joinpath("templates", "frontend", "static").as_posix() # type: ignore -BUILD_PATH_LIB = files("gradio").joinpath("templates", "frontend", "assets").as_posix() # type: ignore -VERSION = get_package_version() - - -class ORJSONResponse(JSONResponse): - media_type = "application/json" - - @staticmethod - def _render(content: Any) -> bytes: - return orjson.dumps( - content, - option=orjson.OPT_SERIALIZE_NUMPY | orjson.OPT_PASSTHROUGH_DATETIME, - default=str, - ) - - def render(self, content: Any) -> bytes: - return ORJSONResponse._render(content) - - @staticmethod - def _render_str(content: Any) -> str: - return ORJSONResponse._render(content).decode("utf-8") - - -def toorjson(value): - return markupsafe.Markup( - ORJSONResponse._render_str(value) - .replace("<", "\\u003c") - .replace(">", "\\u003e") - .replace("&", "\\u0026") - .replace("'", "\\u0027") - ) - - -templates = Jinja2Templates(directory=STATIC_TEMPLATE_LIB) -templates.env.filters["toorjson"] = toorjson - -client = httpx.AsyncClient() - - -class App(FastAPI): - """ - FastAPI App Wrapper - """ - - def __init__(self, **kwargs): - self.tokens = {} - self.auth = None - self.blocks: gradio.Blocks | None = None - self.state_holder = StateHolder() - self.iterators: dict[str, AsyncIterator] = {} - self.iterators_to_reset: set[str] 
= set() - self.lock = utils.safe_get_lock() - self.cookie_id = secrets.token_urlsafe(32) - self.queue_token = secrets.token_urlsafe(32) - self.startup_events_triggered = False - self.uploaded_file_dir = os.environ.get("GRADIO_TEMP_DIR") or str( - (Path(tempfile.gettempdir()) / "gradio").resolve() - ) - self.change_event: None | threading.Event = None - self._asyncio_tasks: list[asyncio.Task] = [] - # Allow user to manually set `docs_url` and `redoc_url` - # when instantiating an App; when they're not set, disable docs and redoc. - kwargs.setdefault("docs_url", None) - kwargs.setdefault("redoc_url", None) - super().__init__(**kwargs) - - def configure_app(self, blocks: gradio.Blocks) -> None: - auth = blocks.auth - if auth is not None: - if not callable(auth): - self.auth = {account[0]: account[1] for account in auth} - else: - self.auth = auth - else: - self.auth = None - - self.blocks = blocks - self.cwd = os.getcwd() - self.favicon_path = blocks.favicon_path - self.tokens = {} - self.root_path = blocks.root_path - self.state_holder.set_blocks(blocks) - - def get_blocks(self) -> gradio.Blocks: - if self.blocks is None: - raise ValueError("No Blocks has been configured for this app.") - return self.blocks - - def build_proxy_request(self, url_path): - url = httpx.URL(url_path) - assert self.blocks - # Don't proxy a URL unless it's a URL specifically loaded by the user using - # gr.load() to prevent SSRF or harvesting of HF tokens by malicious Spaces. - is_safe_url = any( - url.host == httpx.URL(root).host for root in self.blocks.proxy_urls - ) - if not is_safe_url: - raise PermissionError("This URL cannot be proxied.") - is_hf_url = url.host.endswith(".hf.space") - headers = {} - if Context.hf_token is not None and is_hf_url: - headers["Authorization"] = f"Bearer {Context.hf_token}" - rp_req = client.build_request("GET", url, headers=headers) - return rp_req - - def _cancel_asyncio_tasks(self): - for task in self._asyncio_tasks: - task.cancel() - self._asyncio_tasks = [] - - @staticmethod - def create_app( - blocks: gradio.Blocks, app_kwargs: Dict[str, Any] | None = None - ) -> App: - app_kwargs = app_kwargs or {} - app_kwargs.setdefault("default_response_class", ORJSONResponse) - app = App(**app_kwargs) - app.configure_app(blocks) - - if not wasm_utils.IS_WASM: - app.add_middleware( - CORSMiddleware, - allow_origins=["*"], - allow_methods=["*"], - allow_headers=["*"], - ) - - @app.get("/user") - @app.get("/user/") - def get_current_user(request: fastapi.Request) -> Optional[str]: - token = request.cookies.get( - f"access-token-{app.cookie_id}" - ) or request.cookies.get(f"access-token-unsecure-{app.cookie_id}") - return app.tokens.get(token) - - @app.get("/login_check") - @app.get("/login_check/") - def login_check(user: str = Depends(get_current_user)): - if app.auth is None or user is not None: - return - raise HTTPException( - status_code=status.HTTP_401_UNAUTHORIZED, detail="Not authenticated" - ) - - @app.get("/token") - @app.get("/token/") - def get_token(request: fastapi.Request) -> dict: - token = request.cookies.get(f"access-token-{app.cookie_id}") - return {"token": token, "user": app.tokens.get(token)} - - @app.get("/app_id") - @app.get("/app_id/") - def app_id(request: fastapi.Request) -> dict: - return {"app_id": app.get_blocks().app_id} - - @app.get("/dev/reload", dependencies=[Depends(login_check)]) - async def notify_changes( - request: fastapi.Request, - ): - async def reload_checker(request: fastapi.Request): - heartbeat_rate = 15 - check_rate = 0.05 - last_heartbeat = 
time.perf_counter() - - while True: - if await request.is_disconnected(): - return - - if app.change_event and app.change_event.is_set(): - app.change_event.clear() - yield """data: CHANGE\n\n""" - - await asyncio.sleep(check_rate) - if time.perf_counter() - last_heartbeat > heartbeat_rate: - yield """data: HEARTBEAT\n\n""" - last_heartbeat = time.time() - - return StreamingResponse( - reload_checker(request), - media_type="text/event-stream", - ) - - @app.post("/login") - @app.post("/login/") - def login(form_data: OAuth2PasswordRequestForm = Depends()): - username, password = form_data.username.strip(), form_data.password - if app.auth is None: - return RedirectResponse(url="/", status_code=status.HTTP_302_FOUND) - if ( - not callable(app.auth) - and username in app.auth - and app.auth[username] == password - ) or (callable(app.auth) and app.auth.__call__(username, password)): - token = secrets.token_urlsafe(16) - app.tokens[token] = username - response = JSONResponse(content={"success": True}) - response.set_cookie( - key=f"access-token-{app.cookie_id}", - value=token, - httponly=True, - samesite="none", - secure=True, - ) - response.set_cookie( - key=f"access-token-unsecure-{app.cookie_id}", - value=token, - httponly=True, - ) - return response - else: - raise HTTPException(status_code=400, detail="Incorrect credentials.") - - ############### - # OAuth Routes - ############### - - # Define OAuth routes if the app expects it (i.e. a LoginButton is defined). - # It allows users to "Sign in with HuggingFace". - if app.blocks is not None and app.blocks.expects_oauth: - attach_oauth(app) - - ############### - # Main Routes - ############### - - @app.head("/", response_class=HTMLResponse) - @app.get("/", response_class=HTMLResponse) - def main(request: fastapi.Request, user: str = Depends(get_current_user)): - mimetypes.add_type("application/javascript", ".js") - blocks = app.get_blocks() - root_path = ( - request.scope.get("root_path") - or request.headers.get("X-Direct-Url") - or "" - ) - if app.auth is None or user is not None: - config = app.get_blocks().config - config["root"] = route_utils.strip_url(root_path) - else: - config = { - "auth_required": True, - "auth_message": blocks.auth_message, - "space_id": app.get_blocks().space_id, - "root": route_utils.strip_url(root_path), - } - - try: - template = ( - "frontend/share.html" if blocks.share else "frontend/index.html" - ) - return templates.TemplateResponse( - template, - {"request": request, "config": config}, - ) - except TemplateNotFound as err: - if blocks.share: - raise ValueError( - "Did you install Gradio from source files? Share mode only " - "works when Gradio is installed through the pip package." - ) from err - else: - raise ValueError( - "Did you install Gradio from source files? 
You need to build " - "the frontend by running /scripts/build_frontend.sh" - ) from err - - @app.get("/info/", dependencies=[Depends(login_check)]) - @app.get("/info", dependencies=[Depends(login_check)]) - def api_info(serialize: bool = True): - # config = app.get_blocks().get_api_info() - return app.get_blocks().get_api_info() # type: ignore - - @app.get("/config/", dependencies=[Depends(login_check)]) - @app.get("/config", dependencies=[Depends(login_check)]) - def get_config(request: fastapi.Request): - root_path = ( - request.scope.get("root_path") - or request.headers.get("X-Direct-Url") - or "" - ) - config = app.get_blocks().config - config["root"] = route_utils.strip_url(root_path) - return config - - @app.get("/static/{path:path}") - def static_resource(path: str): - static_file = safe_join(STATIC_PATH_LIB, path) - return FileResponse(static_file) - - @app.get("/custom_component/{id}/{type}/{file_name}") - def custom_component_path(id: str, type: str, file_name: str): - config = app.get_blocks().config - components = config["components"] - location = next( - (item for item in components if item["component_class_id"] == id), None - ) - - if location is None: - raise HTTPException(status_code=404, detail="Component not found.") - - component_instance = app.get_blocks().get_component(location["id"]) - - module_name = component_instance.__class__.__module__ - module_path = sys.modules[module_name].__file__ - - if module_path is None or component_instance is None: - raise HTTPException(status_code=404, detail="Component not found.") - - return FileResponse( - safe_join( - str(Path(module_path).parent), - f"{component_instance.__class__.TEMPLATE_DIR}/{type}/{file_name}", - ) - ) - - @app.get("/assets/{path:path}") - def build_resource(path: str): - build_file = safe_join(BUILD_PATH_LIB, path) - return FileResponse(build_file) - - @app.get("/favicon.ico") - async def favicon(): - blocks = app.get_blocks() - if blocks.favicon_path is None: - return static_resource("img/logo.svg") - else: - return FileResponse(blocks.favicon_path) - - @app.head("/proxy={url_path:path}", dependencies=[Depends(login_check)]) - @app.get("/proxy={url_path:path}", dependencies=[Depends(login_check)]) - async def reverse_proxy(url_path: str): - # Adapted from: https://github.com/tiangolo/fastapi/issues/1788 - try: - rp_req = app.build_proxy_request(url_path) - except PermissionError as err: - raise HTTPException(status_code=400, detail=str(err)) from err - rp_resp = await client.send(rp_req, stream=True) - return StreamingResponse( - rp_resp.aiter_raw(), - status_code=rp_resp.status_code, - headers=rp_resp.headers, # type: ignore - background=BackgroundTask(rp_resp.aclose), - ) - - @app.head("/file={path_or_url:path}", dependencies=[Depends(login_check)]) - @app.get("/file={path_or_url:path}", dependencies=[Depends(login_check)]) - async def file(path_or_url: str, request: fastapi.Request): - blocks = app.get_blocks() - if utils.validate_url(path_or_url): - return RedirectResponse( - url=path_or_url, status_code=status.HTTP_302_FOUND - ) - abs_path = utils.abspath(path_or_url) - - in_blocklist = any( - utils.is_in_or_equal(abs_path, blocked_path) - for blocked_path in blocks.blocked_paths - ) - is_dir = abs_path.is_dir() - - if in_blocklist or is_dir: - raise HTTPException(403, f"File not allowed: {path_or_url}.") - - created_by_app = str(abs_path) in set().union(*blocks.temp_file_sets) - in_allowlist = any( - utils.is_in_or_equal(abs_path, allowed_path) - for allowed_path in blocks.allowed_paths - ) - 
was_uploaded = utils.is_in_or_equal(abs_path, app.uploaded_file_dir) - is_cached_example = utils.is_in_or_equal( - abs_path, utils.abspath(CACHED_FOLDER) - ) - - if not ( - created_by_app or in_allowlist or was_uploaded or is_cached_example - ): - raise HTTPException(403, f"File not allowed: {path_or_url}.") - - if not abs_path.exists(): - raise HTTPException(404, f"File not found: {path_or_url}.") - - range_val = request.headers.get("Range", "").strip() - if range_val.startswith("bytes=") and "-" in range_val: - range_val = range_val[6:] - start, end = range_val.split("-") - if start.isnumeric() and end.isnumeric(): - start = int(start) - end = int(end) - response = ranged_response.RangedFileResponse( - abs_path, - ranged_response.OpenRange(start, end), - dict(request.headers), - stat_result=os.stat(abs_path), - ) - return response - - return FileResponse(abs_path, headers={"Accept-Ranges": "bytes"}) - - @app.get( - "/stream/{session_hash}/{run}/{component_id}", - dependencies=[Depends(login_check)], - ) - async def stream( - session_hash: str, run: int, component_id: int, request: fastapi.Request - ): - stream: list = ( - app.get_blocks() - .pending_streams[session_hash] - .get(run, {}) - .get(component_id, None) - ) - if stream is None: - raise HTTPException(404, "Stream not found.") - - def stream_wrapper(): - check_stream_rate = 0.01 - max_wait_time = 120 # maximum wait between yields - assume generator thread has crashed otherwise. - wait_time = 0 - while True: - if len(stream) == 0: - if wait_time > max_wait_time: - return - wait_time += check_stream_rate - time.sleep(check_stream_rate) - continue - wait_time = 0 - next_stream = stream.pop(0) - if next_stream is None: - return - yield next_stream - - return StreamingResponse(stream_wrapper()) - - @app.get("/file/{path:path}", dependencies=[Depends(login_check)]) - async def file_deprecated(path: str, request: fastapi.Request): - return await file(path, request) - - @app.post("/reset/") - @app.post("/reset") - async def reset_iterator(body: ResetBody): - if body.event_id not in app.iterators: - return {"success": False} - async with app.lock: - del app.iterators[body.event_id] - app.iterators_to_reset.add(body.event_id) - await app.get_blocks()._queue.clean_event(body.event_id) - return {"success": True} - - # had to use '/run' endpoint for Colab compatibility, '/api' supported for backwards compatibility - @app.post("/run/{api_name}", dependencies=[Depends(login_check)]) - @app.post("/run/{api_name}/", dependencies=[Depends(login_check)]) - @app.post("/api/{api_name}", dependencies=[Depends(login_check)]) - @app.post("/api/{api_name}/", dependencies=[Depends(login_check)]) - async def predict( - api_name: str, - body: PredictBody, - request: fastapi.Request, - username: str = Depends(get_current_user), - ): - fn_index_inferred = route_utils.infer_fn_index( - app=app, api_name=api_name, body=body - ) - - if not app.get_blocks().api_open and app.get_blocks().queue_enabled_for_fn( - fn_index_inferred - ): - raise HTTPException( - detail="This API endpoint does not accept direct HTTP POST requests. 
Please join the queue to use this API.", - status_code=status.HTTP_404_NOT_FOUND, - ) - - gr_request = route_utils.compile_gr_request( - app, - body, - fn_index_inferred=fn_index_inferred, - username=username, - request=request, - ) - - try: - output = await route_utils.call_process_api( - app=app, - body=body, - gr_request=gr_request, - fn_index_inferred=fn_index_inferred, - ) - except BaseException as error: - show_error = app.get_blocks().show_error or isinstance(error, Error) - traceback.print_exc() - return JSONResponse( - content={"error": str(error) if show_error else None}, - status_code=500, - ) - return output - - @app.get("/queue/join", dependencies=[Depends(login_check)]) - async def queue_join( - fn_index: int, - session_hash: str, - request: fastapi.Request, - username: str = Depends(get_current_user), - data: Optional[str] = None, - ): - blocks = app.get_blocks() - if blocks._queue.server_app is None: - blocks._queue.set_server_app(app) - - event = Event(session_hash, fn_index, request, username) - if data is not None: - input_data = json.loads(data) - event.data = PredictBody( - session_hash=session_hash, - fn_index=fn_index, - data=input_data, - request=request, - ) - - # Continuous events are not put in the queue so that they do not - # occupy the queue's resource as they are expected to run forever - if blocks.dependencies[event.fn_index].get("every", 0): - await cancel_tasks({f"{event.session_hash}_{event.fn_index}"}) - await blocks._queue.reset_iterators(event._id) - blocks._queue.continuous_tasks.append(event) - task = run_coro_in_background( - blocks._queue.process_events, [event], False - ) - set_task_name(task, event.session_hash, event.fn_index, batch=False) - app._asyncio_tasks.append(task) - else: - rank = blocks._queue.push(event) - if rank is None: - event.send_message("queue_full", final=True) - else: - estimation = blocks._queue.get_estimation() - await blocks._queue.send_estimation(event, estimation, rank) - - async def sse_stream(request: fastapi.Request): - last_heartbeat = time.perf_counter() - while True: - if await request.is_disconnected(): - await blocks._queue.clean_event(event) - if not event.alive: - return - - heartbeat_rate = 15 - check_rate = 0.05 - message = None - try: - message = event.message_queue.get_nowait() - if message is None: # end of stream marker - return - except EmptyQueue: - await asyncio.sleep(check_rate) - if time.perf_counter() - last_heartbeat > heartbeat_rate: - message = {"msg": "heartbeat"} - last_heartbeat = time.time() - - if message: - yield f"data: {json.dumps(message)}\n\n" - - return StreamingResponse( - sse_stream(request), - media_type="text/event-stream", - ) - - @app.post("/queue/data", dependencies=[Depends(login_check)]) - async def queue_data( - body: PredictBody, - request: fastapi.Request, - username: str = Depends(get_current_user), - ): - blocks = app.get_blocks() - blocks._queue.attach_data(body) - - @app.post("/component_server", dependencies=[Depends(login_check)]) - @app.post("/component_server/", dependencies=[Depends(login_check)]) - def component_server(body: ComponentServerBody): - state = app.state_holder[body.session_hash] - component_id = body.component_id - block: Block - if component_id in state: - block = state[component_id] - else: - block = app.get_blocks().blocks[component_id] - fn = getattr(block, body.fn_name) - return fn(body.data) - - @app.get( - "/queue/status", - dependencies=[Depends(login_check)], - response_model=Estimation, - ) - async def get_queue_status(): - return 
app.get_blocks()._queue.get_estimation() - - @app.post("/upload", dependencies=[Depends(login_check)]) - async def upload_file(request: fastapi.Request): - content_type_header = request.headers.get("Content-Type") - content_type: bytes - content_type, _ = parse_options_header(content_type_header) - if content_type != b"multipart/form-data": - raise HTTPException(status_code=400, detail="Invalid content type.") - - try: - multipart_parser = GradioMultiPartParser( - request.headers, - request.stream(), - max_files=1000, - max_fields=1000, - ) - form = await multipart_parser.parse() - except MultiPartException as exc: - raise HTTPException(status_code=400, detail=exc.message) from exc - - output_files = [] - for temp_file in form.getlist("files"): - assert isinstance(temp_file, GradioUploadFile) - if temp_file.filename: - file_name = Path(temp_file.filename).name - name = client_utils.strip_invalid_filename_characters(file_name) - else: - name = f"tmp{secrets.token_hex(5)}" - directory = Path(app.uploaded_file_dir) / temp_file.sha.hexdigest() - directory.mkdir(exist_ok=True, parents=True) - dest = (directory / name).resolve() - await anyio.to_thread.run_sync( - shutil.move, - temp_file.file.name, - dest, - limiter=app.get_blocks().limiter, - ) - output_files.append(dest) - return output_files - - @app.on_event("startup") - @app.get("/startup-events") - async def startup_events(): - if not app.startup_events_triggered: - app.get_blocks().startup_events() - app.startup_events_triggered = True - return True - return False - - @app.get("/theme.css", response_class=PlainTextResponse) - def theme_css(): - return PlainTextResponse(app.get_blocks().theme_css, media_type="text/css") - - @app.get("/robots.txt", response_class=PlainTextResponse) - def robots_txt(): - if app.get_blocks().share: - return "User-agent: *\nDisallow: /" - else: - return "User-agent: *\nDisallow: " - - return app - - -######## -# Helper functions -######## - - -def safe_join(directory: str, path: str) -> str: - """Safely path to a base directory to avoid escaping the base directory. - Borrowed from: werkzeug.security.safe_join""" - _os_alt_seps: List[str] = [ - sep for sep in [os.path.sep, os.path.altsep] if sep is not None and sep != "/" - ] - - if path == "": - raise HTTPException(400) - - filename = posixpath.normpath(path) - fullpath = os.path.join(directory, filename) - if ( - any(sep in filename for sep in _os_alt_seps) - or os.path.isabs(filename) - or filename == ".." - or filename.startswith("../") - or os.path.isdir(fullpath) - ): - raise HTTPException(403) - - if not os.path.exists(fullpath): - raise HTTPException(404, "File not found") - - return fullpath - - -def get_types(cls_set: List[Type]): - docset = [] - types = [] - for cls in cls_set: - doc = inspect.getdoc(cls) or "" - doc_lines = doc.split("\n") - for line in doc_lines: - if "value (" in line: - types.append(line.split("value (")[1].split(")")[0]) - docset.append(doc_lines[1].split(":")[-1]) - return docset, types - - -set_documentation_group("routes") - - -@document() -def mount_gradio_app( - app: fastapi.FastAPI, - blocks: gradio.Blocks, - path: str, - app_kwargs: dict[str, Any] | None = None, -) -> fastapi.FastAPI: - """Mount a gradio.Blocks to an existing FastAPI application. - - Parameters: - app: The parent FastAPI application. - blocks: The blocks object we want to mount to the parent app. - path: The path at which the gradio application will be mounted. 
- app_kwargs: Additional keyword arguments to pass to the underlying FastAPI app as a dictionary of parameter keys and argument values. For example, `{"docs_url": "/docs"}` - Example: - from fastapi import FastAPI - import gradio as gr - app = FastAPI() - @app.get("/") - def read_main(): - return {"message": "This is your main app"} - io = gr.Interface(lambda x: "Hello, " + x + "!", "textbox", "textbox") - app = gr.mount_gradio_app(app, io, path="/gradio") - # Then run `uvicorn run:app` from the terminal and navigate to http://localhost:8000/gradio. - """ - blocks.dev_mode = False - blocks.config = blocks.get_config_file() - blocks.validate_queue_settings() - gradio_app = App.create_app(blocks, app_kwargs=app_kwargs) - - @app.on_event("startup") - async def start_queue(): - gradio_app.get_blocks().startup_events() - - app.mount(path, gradio_app) - return app diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/utils/_datetime.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/utils/_datetime.py deleted file mode 100644 index 57c3524d08857b37ca4f7e1c4901fd50d5be3404..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/utils/_datetime.py +++ /dev/null @@ -1,66 +0,0 @@ -# coding=utf-8 -# Copyright 2022-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Contains utilities to handle datetimes in Huggingface Hub.""" -from datetime import datetime, timedelta, timezone - - -# Local machine offset compared to UTC. -# Taken from https://stackoverflow.com/a/3168394. -# `utcoffset()` returns `None` if no offset -> empty timedelta. -UTC_OFFSET = datetime.now(timezone.utc).astimezone().utcoffset() or timedelta() - - -def parse_datetime(date_string: str) -> datetime: - """ - Parses a date_string returned from the server to a datetime object. - - This parser is a weak-parser is the sense that it handles only a single format of - date_string. It is expected that the server format will never change. The - implementation depends only on the standard lib to avoid an external dependency - (python-dateutil). See full discussion about this decision on PR: - https://github.com/huggingface/huggingface_hub/pull/999. - - Example: - ```py - > parse_datetime('2022-08-19T07:19:38.123Z') - datetime.datetime(2022, 8, 19, 7, 19, 38, 123000, tzinfo=timezone.utc) - ``` - - Args: - date_string (`str`): - A string representing a datetime returned by the Hub server. - String is expected to follow '%Y-%m-%dT%H:%M:%S.%fZ' pattern. - - Returns: - A python datetime object. - - Raises: - :class:`ValueError`: - If `date_string` cannot be parsed. - """ - try: - # Datetime ending with a Z means "UTC". Here we parse the date as local machine - # timezone and then move it to the appropriate UTC timezone. - # See https://en.wikipedia.org/wiki/ISO_8601#Coordinated_Universal_Time_(UTC) - # Taken from https://stackoverflow.com/a/3168394. 
- - dt = datetime.strptime(date_string, "%Y-%m-%dT%H:%M:%S.%fZ") - dt += UTC_OFFSET # By default, datetime is not timezoned -> move to UTC time - return dt.astimezone(timezone.utc) # Set explicit timezone - except ValueError as e: - raise ValueError( - f"Cannot parse '{date_string}' as a datetime. Date string is expected to" - " follow '%Y-%m-%dT%H:%M:%S.%fZ' pattern." - ) from e diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/checks/extra_vsx_asm.c b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/checks/extra_vsx_asm.c deleted file mode 100644 index b73a6f43808eeb5af2bd212ee88b6c1002a29901..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/checks/extra_vsx_asm.c +++ /dev/null @@ -1,36 +0,0 @@ -/** - * Testing ASM VSX register number fixer '%x' - * - * old versions of CLANG doesn't support %x in the inline asm template - * which fixes register number when using any of the register constraints wa, wd, wf. - * - * xref: - * - https://bugs.llvm.org/show_bug.cgi?id=31837 - * - https://gcc.gnu.org/onlinedocs/gcc/Machine-Constraints.html - */ -#ifndef __VSX__ - #error "VSX is not supported" -#endif -#include - -#if (defined(__GNUC__) && !defined(vec_xl)) || (defined(__clang__) && !defined(__IBMC__)) - #define vsx_ld vec_vsx_ld - #define vsx_st vec_vsx_st -#else - #define vsx_ld vec_xl - #define vsx_st vec_xst -#endif - -int main(void) -{ - float z4[] = {0, 0, 0, 0}; - signed int zout[] = {0, 0, 0, 0}; - - __vector float vz4 = vsx_ld(0, z4); - __vector signed int asm_ret = vsx_ld(0, zout); - - __asm__ ("xvcvspsxws %x0,%x1" : "=wa" (vz4) : "wa" (asm_ret)); - - vsx_st(asm_ret, 0, zout); - return zout[0]; -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_matmul.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_matmul.py deleted file mode 100644 index 4ca3ad3f7031e20b8d6cec36caadbcefef8c4196..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_matmul.py +++ /dev/null @@ -1,82 +0,0 @@ -import operator - -import numpy as np -import pytest - -from pandas import ( - DataFrame, - Series, -) -import pandas._testing as tm - - -class TestMatmul: - def test_matmul(self): - # matmul test is for GH#10259 - a = Series( - np.random.default_rng(2).standard_normal(4), index=["p", "q", "r", "s"] - ) - b = DataFrame( - np.random.default_rng(2).standard_normal((3, 4)), - index=["1", "2", "3"], - columns=["p", "q", "r", "s"], - ).T - - # Series @ DataFrame -> Series - result = operator.matmul(a, b) - expected = Series(np.dot(a.values, b.values), index=["1", "2", "3"]) - tm.assert_series_equal(result, expected) - - # DataFrame @ Series -> Series - result = operator.matmul(b.T, a) - expected = Series(np.dot(b.T.values, a.T.values), index=["1", "2", "3"]) - tm.assert_series_equal(result, expected) - - # Series @ Series -> scalar - result = operator.matmul(a, a) - expected = np.dot(a.values, a.values) - tm.assert_almost_equal(result, expected) - - # GH#21530 - # vector (1D np.array) @ Series (__rmatmul__) - result = operator.matmul(a.values, a) - expected = np.dot(a.values, a.values) - tm.assert_almost_equal(result, expected) - - # GH#21530 - # vector (1D list) @ Series (__rmatmul__) - result = operator.matmul(a.values.tolist(), a) - expected = np.dot(a.values, a.values) - 
tm.assert_almost_equal(result, expected) - - # GH#21530 - # matrix (2D np.array) @ Series (__rmatmul__) - result = operator.matmul(b.T.values, a) - expected = np.dot(b.T.values, a.values) - tm.assert_almost_equal(result, expected) - - # GH#21530 - # matrix (2D nested lists) @ Series (__rmatmul__) - result = operator.matmul(b.T.values.tolist(), a) - expected = np.dot(b.T.values, a.values) - tm.assert_almost_equal(result, expected) - - # mixed dtype DataFrame @ Series - a["p"] = int(a.p) - result = operator.matmul(b.T, a) - expected = Series(np.dot(b.T.values, a.T.values), index=["1", "2", "3"]) - tm.assert_series_equal(result, expected) - - # different dtypes DataFrame @ Series - a = a.astype(int) - result = operator.matmul(b.T, a) - expected = Series(np.dot(b.T.values, a.T.values), index=["1", "2", "3"]) - tm.assert_series_equal(result, expected) - - msg = r"Dot product shape mismatch, \(4,\) vs \(3,\)" - # exception raised is of type Exception - with pytest.raises(Exception, match=msg): - a.dot(a.values[:3]) - msg = "matrices are not aligned" - with pytest.raises(ValueError, match=msg): - a.dot(b.T) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/bdist.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/bdist.py deleted file mode 100644 index 014871d280edb57971aa1eb0fbe26862ce43bf53..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/bdist.py +++ /dev/null @@ -1,143 +0,0 @@ -"""distutils.command.bdist - -Implements the Distutils 'bdist' command (create a built [binary] -distribution).""" - -import os -from distutils.core import Command -from distutils.errors import * -from distutils.util import get_platform - - -def show_formats(): - """Print list of available formats (arguments to "--format" option). - """ - from distutils.fancy_getopt import FancyGetopt - formats = [] - for format in bdist.format_commands: - formats.append(("formats=" + format, None, - bdist.format_command[format][1])) - pretty_printer = FancyGetopt(formats) - pretty_printer.print_help("List of available distribution formats:") - - -class bdist(Command): - - description = "create a built (binary) distribution" - - user_options = [('bdist-base=', 'b', - "temporary directory for creating built distributions"), - ('plat-name=', 'p', - "platform name to embed in generated filenames " - "(default: %s)" % get_platform()), - ('formats=', None, - "formats for distribution (comma-separated list)"), - ('dist-dir=', 'd', - "directory to put final built distributions in " - "[default: dist]"), - ('skip-build', None, - "skip rebuilding everything (for testing/debugging)"), - ('owner=', 'u', - "Owner name used when creating a tar file" - " [default: current user]"), - ('group=', 'g', - "Group name used when creating a tar file" - " [default: current group]"), - ] - - boolean_options = ['skip-build'] - - help_options = [ - ('help-formats', None, - "lists available distribution formats", show_formats), - ] - - # The following commands do not take a format option from bdist - no_format_option = ('bdist_rpm',) - - # This won't do in reality: will need to distinguish RPM-ish Linux, - # Debian-ish Linux, Solaris, FreeBSD, ..., Windows, Mac OS. - default_format = {'posix': 'gztar', - 'nt': 'zip'} - - # Establish the preferred order (for the --help-formats option). 
- format_commands = ['rpm', 'gztar', 'bztar', 'xztar', 'ztar', 'tar', - 'wininst', 'zip', 'msi'] - - # And the real information. - format_command = {'rpm': ('bdist_rpm', "RPM distribution"), - 'gztar': ('bdist_dumb', "gzip'ed tar file"), - 'bztar': ('bdist_dumb', "bzip2'ed tar file"), - 'xztar': ('bdist_dumb', "xz'ed tar file"), - 'ztar': ('bdist_dumb', "compressed tar file"), - 'tar': ('bdist_dumb', "tar file"), - 'wininst': ('bdist_wininst', - "Windows executable installer"), - 'zip': ('bdist_dumb', "ZIP file"), - 'msi': ('bdist_msi', "Microsoft Installer") - } - - - def initialize_options(self): - self.bdist_base = None - self.plat_name = None - self.formats = None - self.dist_dir = None - self.skip_build = 0 - self.group = None - self.owner = None - - def finalize_options(self): - # have to finalize 'plat_name' before 'bdist_base' - if self.plat_name is None: - if self.skip_build: - self.plat_name = get_platform() - else: - self.plat_name = self.get_finalized_command('build').plat_name - - # 'bdist_base' -- parent of per-built-distribution-format - # temporary directories (eg. we'll probably have - # "build/bdist./dumb", "build/bdist./rpm", etc.) - if self.bdist_base is None: - build_base = self.get_finalized_command('build').build_base - self.bdist_base = os.path.join(build_base, - 'bdist.' + self.plat_name) - - self.ensure_string_list('formats') - if self.formats is None: - try: - self.formats = [self.default_format[os.name]] - except KeyError: - raise DistutilsPlatformError( - "don't know how to create built distributions " - "on platform %s" % os.name) - - if self.dist_dir is None: - self.dist_dir = "dist" - - def run(self): - # Figure out which sub-commands we need to run. - commands = [] - for format in self.formats: - try: - commands.append(self.format_command[format][0]) - except KeyError: - raise DistutilsOptionError("invalid format '%s'" % format) - - # Reinitialize and run each command. - for i in range(len(self.formats)): - cmd_name = commands[i] - sub_cmd = self.reinitialize_command(cmd_name) - if cmd_name not in self.no_format_option: - sub_cmd.format = self.formats[i] - - # passing the owner and group names for tar archiving - if cmd_name == 'bdist_dumb': - sub_cmd.owner = self.owner - sub_cmd.group = self.group - - # If we're going to need to run this command again, tell it to - # keep its temporary files around so subsequent runs go faster. 
- if cmd_name in commands[i+1:]: - sub_cmd.keep_temp = 1 - self.run_command(cmd_name) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/supervisors/watchgodreload.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/supervisors/watchgodreload.py deleted file mode 100644 index d8bceacefd72d1300fd1d0bacf895cbab459054d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/supervisors/watchgodreload.py +++ /dev/null @@ -1,158 +0,0 @@ -import logging -import warnings -from pathlib import Path -from socket import socket -from typing import TYPE_CHECKING, Callable, Dict, List, Optional - -from watchgod import DefaultWatcher - -from uvicorn.config import Config -from uvicorn.supervisors.basereload import BaseReload - -if TYPE_CHECKING: - import os - - DirEntry = os.DirEntry[str] - -logger = logging.getLogger("uvicorn.error") - - -class CustomWatcher(DefaultWatcher): - def __init__(self, root_path: Path, config: Config): - default_includes = ["*.py"] - self.includes = [ - default - for default in default_includes - if default not in config.reload_excludes - ] - self.includes.extend(config.reload_includes) - self.includes = list(set(self.includes)) - - default_excludes = [".*", ".py[cod]", ".sw.*", "~*"] - self.excludes = [ - default - for default in default_excludes - if default not in config.reload_includes - ] - self.excludes.extend(config.reload_excludes) - self.excludes = list(set(self.excludes)) - - self.watched_dirs: Dict[str, bool] = {} - self.watched_files: Dict[str, bool] = {} - self.dirs_includes = set(config.reload_dirs) - self.dirs_excludes = set(config.reload_dirs_excludes) - self.resolved_root = root_path - super().__init__(str(root_path)) - - def should_watch_file(self, entry: "DirEntry") -> bool: - cached_result = self.watched_files.get(entry.path) - if cached_result is not None: - return cached_result - - entry_path = Path(entry) - - # cwd is not verified through should_watch_dir, so we need to verify here - if entry_path.parent == Path.cwd() and Path.cwd() not in self.dirs_includes: - self.watched_files[entry.path] = False - return False - for include_pattern in self.includes: - if entry_path.match(include_pattern): - for exclude_pattern in self.excludes: - if entry_path.match(exclude_pattern): - self.watched_files[entry.path] = False - return False - self.watched_files[entry.path] = True - return True - self.watched_files[entry.path] = False - return False - - def should_watch_dir(self, entry: "DirEntry") -> bool: - cached_result = self.watched_dirs.get(entry.path) - if cached_result is not None: - return cached_result - - entry_path = Path(entry) - - if entry_path in self.dirs_excludes: - self.watched_dirs[entry.path] = False - return False - - for exclude_pattern in self.excludes: - if entry_path.match(exclude_pattern): - is_watched = False - if entry_path in self.dirs_includes: - is_watched = True - - for directory in self.dirs_includes: - if directory in entry_path.parents: - is_watched = True - - if is_watched: - logger.debug( - "WatchGodReload detected a new excluded dir '%s' in '%s'; " - "Adding to exclude list.", - entry_path.relative_to(self.resolved_root), - str(self.resolved_root), - ) - self.watched_dirs[entry.path] = False - self.dirs_excludes.add(entry_path) - return False - - if entry_path in self.dirs_includes: - self.watched_dirs[entry.path] = True - return True - - for directory in self.dirs_includes: - if directory in entry_path.parents: - 
self.watched_dirs[entry.path] = True - return True - - for include_pattern in self.includes: - if entry_path.match(include_pattern): - logger.info( - "WatchGodReload detected a new reload dir '%s' in '%s'; " - "Adding to watch list.", - str(entry_path.relative_to(self.resolved_root)), - str(self.resolved_root), - ) - self.dirs_includes.add(entry_path) - self.watched_dirs[entry.path] = True - return True - - self.watched_dirs[entry.path] = False - return False - - -class WatchGodReload(BaseReload): - def __init__( - self, - config: Config, - target: Callable[[Optional[List[socket]]], None], - sockets: List[socket], - ) -> None: - warnings.warn( - '"watchgod" is deprecated, you should switch ' - "to watchfiles (`pip install watchfiles`).", - DeprecationWarning, - ) - super().__init__(config, target, sockets) - self.reloader_name = "WatchGod" - self.watchers = [] - reload_dirs = [] - for directory in config.reload_dirs: - if Path.cwd() not in directory.parents: - reload_dirs.append(directory) - if Path.cwd() not in reload_dirs: - reload_dirs.append(Path.cwd()) - for w in reload_dirs: - self.watchers.append(CustomWatcher(w.resolve(), self.config)) - - def should_restart(self) -> Optional[List[Path]]: - self.pause() - - for watcher in self.watchers: - change = watcher.check() - if change != set(): - return list({Path(c[1]) for c in change}) - - return None diff --git a/spaces/pythainlp/pythainlp/pages/subword_tokenize.py b/spaces/pythainlp/pythainlp/pages/subword_tokenize.py deleted file mode 100644 index 24d3d9a01ec61337cda5af5ff2d5e01a0c74ff30..0000000000000000000000000000000000000000 --- a/spaces/pythainlp/pythainlp/pages/subword_tokenize.py +++ /dev/null @@ -1,32 +0,0 @@ -import streamlit as st -import time -from pythainlp.tokenize import subword_tokenize -st.markdown(""" -# Subword tokenization 🎉 - -PyThaiNLP support Subword tokenization for NLP piplines. We have - -- tcc (default) - Thai Character Cluster (Theeramunkong et al. 2000) -- etcc - Enhanced Thai Character Cluster (Inrut et al. 2001) -- dict - newmm word tokenizer with a syllable dictionary -- ssg - CRF syllable segmenter for Thai -- tltk - syllable tokenizer from tltk - -for this demo page. -""") -with st.form("my_form"): - st.write("Input text") - _text = st.text_area("text","ทดสอบการตัดคำ") - engine=st.selectbox('Select subword tokenizition', ['tcc', 'etcc', 'dict', 'ssg', 'tltk'], key=1,index=0) - - # Every form must have a submit button. - submitted = st.form_submit_button("Submit") - if submitted: - st.subheader("Subwords: ") - start = time.time() - st.write(' '.join(subword_tokenize(str(_text), engine=str(engine)))) - end = time.time() - st.write() - st.write("Running times: "+str(end - start)) - -st.write("See the documentation at [subword_tokenize | PyThaiNLP](https://pythainlp.github.io/docs/3.0/api/tokenize.html#pythainlp.tokenize.subword_tokenize).") diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Download VERIFIED Fbus By Maestro 42.md b/spaces/quidiaMuxgu/Expedit-SAM/Download VERIFIED Fbus By Maestro 42.md deleted file mode 100644 index 4823e0351939759bbf59ec3c608e4e9b944db3a8..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Download VERIFIED Fbus By Maestro 42.md +++ /dev/null @@ -1,99 +0,0 @@ - -

What is fbus by maestro 42 and How to Download It

If you are looking for software that can help you unlock, flash, reset or back up your Nokia DCT4 phones, you need to download fbus by maestro 42. Fbus by maestro 42 is a powerful tool that can perform various operations on your Nokia phones using a simple fbus cable. In this article, we will tell you what fbus by maestro 42 is, how to download it and how to use it.

Download fbus by maestro 42

Download File ⇒⇒⇒ https://geags.com/2uCsda




What is fbus by maestro 42

Fbus by maestro 42 is a tool developed by Maestro Team, a group of programmers who specialize in Nokia phone solutions. It can unlock, flash, reset or back up your Nokia DCT4 phones using a simple fbus cable that connects your phone to your PC. It supports most of the Nokia DCT4 models, such as the 3100, 3200, 3510i, 6100, 6600, 7210, 7250, 8310 and more. Fbus by maestro 42 can also remove the security code of your Nokia phone in case you forgot it or someone changed it without your knowledge.


How to Download fbus by maestro 42

If you want to download fbus by maestro 42, you have several options to choose from. Here are some of the sources where you can download it:

• GSM-Forum: You can download fbus by maestro 42 from GSM-Forum, a website that offers various solutions for mobile phones. You can find the download link in this thread: https://forum.gsmhosting.com/vbb/f18/fbus-maestro-147596/
• Dr. Simran Saini: You can also download fbus by maestro 42 from Dr. Simran Saini's website, a platform that provides health and wellness tips. You can find the download link in this post: https://www.drsimransaini.com/forum/dieting/download-fbus-by-maestro-42
• CLS Professional Services: Another option is CLS Professional Services, a company that offers business and IT solutions. You can find the download link in this forum: https://www.clsproserv.com/forum/business-affiliates/download-fbus-by-maestro-42-exclusive
• Salt and Iron Training: You can also download fbus by maestro 42 from Salt and Iron Training's website, a fitness and wellness center. You can find the download link in this forum: https://www.saltandirontraining.fit/forum/wellness-forum/download-fbus-by-maestro-42

Before you download fbus by maestro 42, make sure you have a compatible fbus cable that can connect your Nokia phone to your PC. You can buy an fbus cable online or from a local mobile shop.


How to Use fbus by maestro 42

After you download fbus by maestro 42, you need to install it on your PC and run it as administrator. Then you need to follow these steps:

1. Connect your Nokia phone to your PC using the fbus cable.
2. Select the model of your phone from the drop-down menu.
3. Select the operation you want to perform on your phone, such as unlock, flash, reset or backup.
4. Click on the start button and wait for the process to complete.
5. Disconnect your phone from the PC and enjoy your unlocked or flashed phone.

Fbus by maestro 42 is a simple and effective tool that can help you unlock, flash, reset or back up your Nokia DCT4 phones using a simple fbus cable. You can download fbus by maestro 42 from various sources and use it easily on your PC.


What are the Benefits of fbus by maestro 42

Fbus by maestro 42 offers many benefits for your Nokia DCT4 phones. Here are some of the advantages of using it:

• You can unlock your Nokia phone from any network provider and use any SIM card you want.
• You can flash your Nokia phone with any firmware or language pack you prefer.
• You can reset your Nokia phone to its factory settings and remove any unwanted data or settings.
• You can back up your Nokia phone's data and restore it in case of any loss or damage.
• You can save money and time by doing all these operations yourself without going to a mobile shop or service center.

What are the Risks of fbus by maestro 42

While fbus by maestro 42 is a useful and reliable tool, it also comes with some risks that you need to be aware of. Here are some of the drawbacks of using it:

• You may damage your Nokia phone if you use an incompatible fbus cable or select the wrong model.
• You may void your Nokia phone's warranty if you unlock or flash it with fbus by maestro 42.
• You may lose your Nokia phone's data or settings if you reset or back it up with fbus by maestro 42.
• You may face legal issues if you unlock or flash your Nokia phone with fbus by maestro 42 without the permission of the network provider or the manufacturer.

Therefore, you need to be careful and responsible when using fbus by maestro 42. You need to follow the instructions carefully and back up your data before performing any operation. You also need to respect the laws and regulations of your country and the network provider.


What are the Alternatives to fbus by maestro 42

Fbus by maestro 42 is not the only software that can unlock, flash, reset or back up your Nokia DCT4 phones. There are other tools that offer similar or better features and functions. Here are some of the alternatives to fbus by maestro 42:

• Nokia Best: Nokia Best is a professional tool that can unlock, flash, reset or back up your Nokia phones using a USB cable. Nokia Best supports most of the Nokia models, including DCT4, BB5, Lumia and Android. Nokia Best has a user-friendly interface and a fast and safe operation.
• Nokia Tool: Nokia Tool is another tool that can unlock, flash, reset or back up your Nokia phones using a USB cable. Nokia Tool supports many Nokia models, such as DCT4, BB5, SL3 and MTK. Nokia Tool has a simple and easy interface and a reliable and secure operation.
• Nokia Care Suite: Nokia Care Suite is an official tool from Nokia that can unlock, flash, reset or back up your Nokia phones using a USB cable. Nokia Care Suite supports all the Nokia models, including DCT4, BB5, Lumia and Android. Nokia Care Suite has a comprehensive and advanced interface and a high-quality and stable operation.

              Conclusion

              - -

              Fbus by maestro 42 is a software that can help you unlock, flash, reset or backup your Nokia DCT4 phones using a simple fbus cable. You can download fbus by maestro 42 from various sources and use it easily on your PC. However, you need to be careful and responsible when using fbus by maestro 42 as it may damage your phone or void your warranty. You also need to respect the laws and regulations of your country and the network provider. If you want to try other tools that can offer similar or better features and functions, you can check out the alternatives to fbus by maestro 42.

              -

              What are the Tips and Tricks for fbus by maestro 42

              - -

              If you want to use fbus by maestro 42 effectively and efficiently, you need to follow some tips and tricks that can help you get the best results. Here are some of the tips and tricks for fbus by maestro 42:

              - -
              • Make sure you have a compatible fbus cable that can connect your Nokia phone to your PC. You can buy one online or from a local mobile shop.
              • Make sure you select the correct model of your phone from the drop-down menu. If you select the wrong model, you may damage your phone or get an error message.
              • Make sure you back up your phone's data before performing any operation. You may lose your data or settings if you reset or back up your phone with fbus by maestro 42.
              • Make sure your phone has enough battery power before starting any operation. If it turns off during the process, you may brick your phone or corrupt your data.
              • Make sure you follow the instructions carefully and wait for the process to complete. Do not disconnect your phone from the PC or close the software until the operation is done.
              - -

              What are the FAQs for fbus by maestro 42

              - -

              If you have any questions or doubts about fbus by maestro 42, you can check out some of the frequently asked questions and their answers. Here are some of the FAQs for fbus by maestro 42:

              - -
                -
              1. Q: Is fbus by maestro 42 free or paid?
                A: Fbus by maestro 42 is a free software that you can download from various sources. However, you may need to pay for an fbus cable that can connect your phone to your PC.
              2. Q: Is fbus by maestro 42 safe or risky?
                A: Fbus by maestro 42 is generally safe to use, but it carries the risks described above: you can damage your phone with an incompatible cable or the wrong model, void your warranty, lose data or settings, or face legal issues if you unlock or flash a phone without permission.
              3. Q: Does fbus by maestro 42 work with all Nokia models?
                A: Fbus by maestro 42 works with most of the Nokia DCT4 models, such as 3100, 3200, 3510i, 6100, 6600, 7210, 7250, 8310 and more. However, it does not work with some Nokia models, such as DCT4 type 1 phones (3650) and newer phones (Lumia and Android).
              4. Q: Where can I get support for fbus by maestro 42?
                A: If you need any support or help for fbus by maestro 42, you can visit GSM-Forum, which is a website that offers various solutions for mobile phones. You can find the link to GSM-Forum in this thread: https://forum.gsmhosting.com/vbb/f18/fbus-maestro-147596/
              -


              3cee63e6c2
              -
              -
              \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Far Cry 2 Worlds.dat.md b/spaces/quidiaMuxgu/Expedit-SAM/Far Cry 2 Worlds.dat.md deleted file mode 100644 index ee282e90fd9858a9eb1ff8b44aba55fba7d014a2..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Far Cry 2 Worlds.dat.md +++ /dev/null @@ -1,6 +0,0 @@ -

              Far Cry 2 Worlds.dat


              DOWNLOAD →→→ https://geags.com/2uCqup



              -
              -LOAD WORLDS STRAIGHT FROM USB Cut out the middleman - load/save ... lowered the complexity of their advanced map editor and put this on far cry 2. ... the shop and what you can do there. dat savegame will be valid for your console. 1fdad05405
              -
              -
              -

              diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Fsx Shockwave 3d Lights Redux Download.md b/spaces/quidiaMuxgu/Expedit-SAM/Fsx Shockwave 3d Lights Redux Download.md deleted file mode 100644 index b0bab42650128c05ef7e28fc523240571899b0fd..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Fsx Shockwave 3d Lights Redux Download.md +++ /dev/null @@ -1,34 +0,0 @@ - -

              How to Install and Enjoy Fsx Shockwave 3D Lights Redux

              -

              If you are looking for a way to enhance your night flying experience in Microsoft Flight Simulator X, you might want to try Fsx Shockwave 3D Lights Redux. This add-on by A2A Simulations adds over 40 new lighting effects to your aircraft, including strobes, beacons, navigation, and runway lights. These lights cast realistic light out into 3D space, creating stunning visuals and a more immersive flying experience.

              -

              In this article, we will show you how to download, install, and enjoy Fsx Shockwave 3D Lights Redux on your FSX. We will also provide some tips and tricks to get the most out of this add-on.

              -

              Fsx shockwave 3d lights redux download


              Download Ziphttps://geags.com/2uCrIw



              - -

              How to Download Fsx Shockwave 3D Lights Redux

              -

              The first step is to download Fsx Shockwave 3D Lights Redux from a reliable source. You can purchase it from simMarket[^1^], where you can also find other products by A2A Simulations. The price is €11.99 (about $13.50) and you will get a Mega-Pack that supports both FS2004 and FSX. If you have previously purchased Shockwave 3D Lights from simMarket, you can get an upgrade price of €4.99 (about $5.60).

              -

              Alternatively, you can download Fsx Shockwave 3D Lights Redux from Fly Away Simulation[^2^], where you can also find a huge selection of free mods and add-ons for MSFS, FSX, P3D & X-Plane. The download size is 1.34 MB and you will need to register for a free account to access it.

              - -

              How to Install Fsx Shockwave 3D Lights Redux

              -

              Once you have downloaded Fsx Shockwave 3D Lights Redux, you will need to install it on your FSX. The installation process is easy and low-risk, as the installer backs up all files into a single, organized backup directory. You can also uninstall the add-on at any time if you wish.

              -

              To install Fsx Shockwave 3D Lights Redux, follow these steps:

              -
              1. Run the installer file that you have downloaded.
              2. Select the language of your choice and click Next.
              3. Read and accept the license agreement and click Next.
              4. Select the destination folder for the installation and click Next.
              5. Select the simulator that you want to install the add-on on (FS2004 or FSX) and click Next.
              6. Select the backup folder for the original files and click Next.
              7. Click Install and wait for the installation to complete.
              8. Click Finish and launch your FSX.
              - -

              How to Enjoy Fsx Shockwave 3D Lights Redux

              -

              After installing Fsx Shockwave 3D Lights Redux, you will notice a significant improvement in the lighting effects of your aircraft. The add-on installs into all twenty-four Microsoft FSX aircraft, including the older aircraft and the Boeing 777, 737, and 747. It also supports Microsoft Acceleration Expansion pack and offers vintage, halogen, and modern xenon lights options.

              -

              You can also add 3D lights to any third-party aircraft that you have installed on your system. To do so, you will need to edit the aircraft.cfg file of the aircraft that you want to modify. You can find detailed instructions on how to do this on A2A Simulations website[^3^], where they also provide a database of configuration settings for various third-party aircraft.
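              If you prefer to script that edit rather than do it by hand, the sketch below shows the general idea in Python. It is only an illustration, not A2A's installer: the aircraft.cfg path, the light positions, and the effect names (the stock fx_navred, fx_navgre and fx_landing effects are used as stand-ins) are assumptions. Take the real Shockwave effect names and coordinates from A2A's documentation or configuration database, and keep a backup of the original file.

```python
# Hypothetical helper: append extra [LIGHTS] entries to a third-party aircraft.cfg.
# FSX light entries follow the pattern:
#   light.N = type, longitudinal, lateral, vertical, effect_name
# where type 1=beacon, 2=strobe, 3=navigation, 4=cockpit, 5=landing.
# The path and the position/effect values below are made-up placeholders.
import re
from pathlib import Path

AIRCRAFT_CFG = Path(r"C:\FSX\SimObjects\Airplanes\SomeAddonPlane\aircraft.cfg")  # placeholder path

NEW_LIGHT_SPECS = [
    "3,  -2.0,  14.0,  1.5, fx_navred",   # left navigation light (stand-in effect name)
    "3,  -2.0, -14.0,  1.5, fx_navgre",   # right navigation light (stand-in effect name)
    "5,   3.5,   0.0, -1.0, fx_landing",  # landing light (stand-in effect name)
]


def add_lights(cfg_path, specs):
    """Back up aircraft.cfg and append new light.N entries under [LIGHTS]."""
    text = cfg_path.read_text(encoding="utf-8", errors="ignore")
    cfg_path.with_name(cfg_path.name + ".bak").write_text(text, encoding="utf-8")  # safety copy

    # Continue numbering after the highest existing light.N index to avoid collisions.
    used = [int(m.group(1)) for m in re.finditer(r"(?im)^\s*light\.(\d+)\s*=", text)]
    start = max(used, default=-1) + 1
    entries = [f"light.{start + i} = {spec}" for i, spec in enumerate(specs)]

    out, inserted = [], False
    for line in text.splitlines():
        out.append(line)
        if not inserted and line.strip().lower() == "[lights]":
            out.extend(entries)  # add our entries right under the section header
            inserted = True
    if not inserted:  # the file had no [LIGHTS] section yet
        out += ["", "[LIGHTS]"] + entries

    cfg_path.write_text("\n".join(out) + "\n", encoding="utf-8")


if __name__ == "__main__":
    add_lights(AIRCRAFT_CFG, NEW_LIGHT_SPECS)
```

              Because the helper numbers its entries after the highest existing light.N line, it will not overwrite lights the aircraft already defines; FSX does not care about the order of entries within the section.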

              -

              -

              To enjoy Fsx Shockwave 3D Lights Redux, simply select an aircraft that has 3D lights installed or modified, choose a night time scenario, and take off. You will be amazed by how realistic and immersive the night flying experience becomes with this add-on. You will see the lights reflecting on the ground, other aircraft, buildings, clouds, and water. You will also be able to see better during landing and taxiing with the fully-realized 3D landing lights.

              - -

              Tips and Tricks for Fsx Shockwave 3D Lights Redux

              -

              To

              d5da3c52bf
              -
              -
\ No newline at end of file
diff --git a/spaces/rachana219/MODT2/trackers/strongsort/deep/models/resnetmid.py b/spaces/rachana219/MODT2/trackers/strongsort/deep/models/resnetmid.py
deleted file mode 100644
index 017f6c62653535a7b04566227d893cb4dfa2a34c..0000000000000000000000000000000000000000
--- a/spaces/rachana219/MODT2/trackers/strongsort/deep/models/resnetmid.py
+++ /dev/null
@@ -1,307 +0,0 @@
-from __future__ import division, absolute_import
-import torch
-import torch.utils.model_zoo as model_zoo
-from torch import nn
-
-__all__ = ['resnet50mid']
-
-model_urls = {
-    'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
-    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
-    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
-    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
-    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
-}
-
-
-def conv3x3(in_planes, out_planes, stride=1):
-    """3x3 convolution with padding"""
-    return nn.Conv2d(
-        in_planes,
-        out_planes,
-        kernel_size=3,
-        stride=stride,
-        padding=1,
-        bias=False
-    )
-
-
-class BasicBlock(nn.Module):
-    expansion = 1
-
-    def __init__(self, inplanes, planes, stride=1, downsample=None):
-        super(BasicBlock, self).__init__()
-        self.conv1 = conv3x3(inplanes, planes, stride)
-        self.bn1 = nn.BatchNorm2d(planes)
-        self.relu = nn.ReLU(inplace=True)
-        self.conv2 = conv3x3(planes, planes)
-        self.bn2 = nn.BatchNorm2d(planes)
-        self.downsample = downsample
-        self.stride = stride
-
-    def forward(self, x):
-        residual = x
-
-        out = self.conv1(x)
-        out = self.bn1(out)
-        out = self.relu(out)
-
-        out = self.conv2(out)
-        out = self.bn2(out)
-
-        if self.downsample is not None:
-            residual = self.downsample(x)
-
-        out += residual
-        out = self.relu(out)
-
-        return out
-
-
-class Bottleneck(nn.Module):
-    expansion = 4
-
-    def __init__(self, inplanes, planes, stride=1, downsample=None):
-        super(Bottleneck, self).__init__()
-        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
-        self.bn1 = nn.BatchNorm2d(planes)
-        self.conv2 = nn.Conv2d(
-            planes,
-            planes,
-            kernel_size=3,
-            stride=stride,
-            padding=1,
-            bias=False
-        )
-        self.bn2 = nn.BatchNorm2d(planes)
-        self.conv3 = nn.Conv2d(
-            planes, planes * self.expansion, kernel_size=1, bias=False
-        )
-        self.bn3 = nn.BatchNorm2d(planes * self.expansion)
-        self.relu = nn.ReLU(inplace=True)
-        self.downsample = downsample
-        self.stride = stride
-
-    def forward(self, x):
-        residual = x
-
-        out = self.conv1(x)
-        out = self.bn1(out)
-        out = self.relu(out)
-
-        out = self.conv2(out)
-        out = self.bn2(out)
-        out = self.relu(out)
-
-        out = self.conv3(out)
-        out = self.bn3(out)
-
-        if self.downsample is not None:
-            residual = self.downsample(x)
-
-        out += residual
-        out = self.relu(out)
-
-        return out
-
-
-class ResNetMid(nn.Module):
-    """Residual network + mid-level features.
-
-    Reference:
-        Yu et al. The Devil is in the Middle: Exploiting Mid-level Representations for
-        Cross-Domain Instance Matching. arXiv:1711.08106.
-
-    Public keys:
-        - ``resnet50mid``: ResNet50 + mid-level feature fusion.
-    """
-
-    def __init__(
-        self,
-        num_classes,
-        loss,
-        block,
-        layers,
-        last_stride=2,
-        fc_dims=None,
-        **kwargs
-    ):
-        self.inplanes = 64
-        super(ResNetMid, self).__init__()
-        self.loss = loss
-        self.feature_dim = 512 * block.expansion
-
-        # backbone network
-        self.conv1 = nn.Conv2d(
-            3, 64, kernel_size=7, stride=2, padding=3, bias=False
-        )
-        self.bn1 = nn.BatchNorm2d(64)
-        self.relu = nn.ReLU(inplace=True)
-        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-        self.layer1 = self._make_layer(block, 64, layers[0])
-        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
-        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
-        self.layer4 = self._make_layer(
-            block, 512, layers[3], stride=last_stride
-        )
-
-        self.global_avgpool = nn.AdaptiveAvgPool2d(1)
-        assert fc_dims is not None
-        self.fc_fusion = self._construct_fc_layer(
-            fc_dims, 512 * block.expansion * 2
-        )
-        self.feature_dim += 512 * block.expansion
-        self.classifier = nn.Linear(self.feature_dim, num_classes)
-
-        self._init_params()
-
-    def _make_layer(self, block, planes, blocks, stride=1):
-        downsample = None
-        if stride != 1 or self.inplanes != planes * block.expansion:
-            downsample = nn.Sequential(
-                nn.Conv2d(
-                    self.inplanes,
-                    planes * block.expansion,
-                    kernel_size=1,
-                    stride=stride,
-                    bias=False
-                ),
-                nn.BatchNorm2d(planes * block.expansion),
-            )
-
-        layers = []
-        layers.append(block(self.inplanes, planes, stride, downsample))
-        self.inplanes = planes * block.expansion
-        for i in range(1, blocks):
-            layers.append(block(self.inplanes, planes))
-
-        return nn.Sequential(*layers)
-
-    def _construct_fc_layer(self, fc_dims, input_dim, dropout_p=None):
-        """Constructs fully connected layer
-
-        Args:
-            fc_dims (list or tuple): dimensions of fc layers, if None, no fc layers are constructed
-            input_dim (int): input dimension
-            dropout_p (float): dropout probability, if None, dropout is unused
-        """
-        if fc_dims is None:
-            self.feature_dim = input_dim
-            return None
-
-        assert isinstance(
-            fc_dims, (list, tuple)
-        ), 'fc_dims must be either list or tuple, but got {}'.format(
-            type(fc_dims)
-        )
-
-        layers = []
-        for dim in fc_dims:
-            layers.append(nn.Linear(input_dim, dim))
-            layers.append(nn.BatchNorm1d(dim))
-            layers.append(nn.ReLU(inplace=True))
-            if dropout_p is not None:
-                layers.append(nn.Dropout(p=dropout_p))
-            input_dim = dim
-
-        self.feature_dim = fc_dims[-1]
-
-        return nn.Sequential(*layers)
-
-    def _init_params(self):
-        for m in self.modules():
-            if isinstance(m, nn.Conv2d):
-                nn.init.kaiming_normal_(
-                    m.weight, mode='fan_out', nonlinearity='relu'
-                )
-                if m.bias is not None:
-                    nn.init.constant_(m.bias, 0)
-            elif isinstance(m, nn.BatchNorm2d):
-                nn.init.constant_(m.weight, 1)
-                nn.init.constant_(m.bias, 0)
-            elif isinstance(m, nn.BatchNorm1d):
-                nn.init.constant_(m.weight, 1)
-                nn.init.constant_(m.bias, 0)
-            elif isinstance(m, nn.Linear):
-                nn.init.normal_(m.weight, 0, 0.01)
-                if m.bias is not None:
-                    nn.init.constant_(m.bias, 0)
-
-    def featuremaps(self, x):
-        x = self.conv1(x)
-        x = self.bn1(x)
-        x = self.relu(x)
-        x = self.maxpool(x)
-        x = self.layer1(x)
-        x = self.layer2(x)
-        x = self.layer3(x)
-        x4a = self.layer4[0](x)
-        x4b = self.layer4[1](x4a)
-        x4c = self.layer4[2](x4b)
-        return x4a, x4b, x4c
-
-    def forward(self, x):
-        x4a, x4b, x4c = self.featuremaps(x)
-
-        v4a = self.global_avgpool(x4a)
-        v4b = self.global_avgpool(x4b)
-        v4c = self.global_avgpool(x4c)
-        v4ab = torch.cat([v4a, v4b], 1)
-        v4ab = v4ab.view(v4ab.size(0), -1)
-        v4ab = self.fc_fusion(v4ab)
-        v4c = v4c.view(v4c.size(0), -1)
-        v = torch.cat([v4ab, v4c], 1)
-
-        if not self.training:
-            return v
-
-        y = self.classifier(v)
-
-        if self.loss == 'softmax':
-            return y
-        elif self.loss == 'triplet':
-            return y, v
-        else:
-            raise KeyError('Unsupported loss: {}'.format(self.loss))
-
-
-def init_pretrained_weights(model, model_url):
-    """Initializes model with pretrained weights.
-
-    Layers that don't match with pretrained layers in name or size are kept unchanged.
-    """
-    pretrain_dict = model_zoo.load_url(model_url)
-    model_dict = model.state_dict()
-    pretrain_dict = {
-        k: v
-        for k, v in pretrain_dict.items()
-        if k in model_dict and model_dict[k].size() == v.size()
-    }
-    model_dict.update(pretrain_dict)
-    model.load_state_dict(model_dict)
-
-
-"""
-Residual network configurations:
---
-resnet18: block=BasicBlock, layers=[2, 2, 2, 2]
-resnet34: block=BasicBlock, layers=[3, 4, 6, 3]
-resnet50: block=Bottleneck, layers=[3, 4, 6, 3]
-resnet101: block=Bottleneck, layers=[3, 4, 23, 3]
-resnet152: block=Bottleneck, layers=[3, 8, 36, 3]
-"""
-
-
-def resnet50mid(num_classes, loss='softmax', pretrained=True, **kwargs):
-    model = ResNetMid(
-        num_classes=num_classes,
-        loss=loss,
-        block=Bottleneck,
-        layers=[3, 4, 6, 3],
-        last_stride=2,
-        fc_dims=[1024],
-        **kwargs
-    )
-    if pretrained:
-        init_pretrained_weights(model, model_urls['resnet50'])
-    return model
diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/expansion/dataloader/sintellist_final.py b/spaces/radames/UserControllableLT-Latent-Transformer/expansion/dataloader/sintellist_final.py
deleted file mode 100644
index b8585d594b02984bbe13005f680aaf1e859864d3..0000000000000000000000000000000000000000
--- a/spaces/radames/UserControllableLT-Latent-Transformer/expansion/dataloader/sintellist_final.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import torch.utils.data as data
-
-from PIL import Image
-import os
-import os.path
-import numpy as np
-import pdb
-
-IMG_EXTENSIONS = [
-    '.jpg', '.JPG', '.jpeg', '.JPEG',
-    '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP',
-]
-
-
-def is_image_file(filename):
-    return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)
-
-def dataloader(filepath):
-
-    left_fold = 'image_2/'
-    train = [img for img in os.listdir(filepath+left_fold) if img.find('Sintel_final') > -1]
-
-    l0_train = [filepath+left_fold+img for img in train]
-    l0_train = [img for img in l0_train if '%s_%s.png'%(img.rsplit('_',1)[0],'%02d'%(1+int(img.split('.')[0].split('_')[-1])) ) in l0_train ]
-
-    #l0_train = [i for i in l0_train if not '10.png' in i] # remove 10 as val
-
-    l1_train = ['%s_%s.png'%(img.rsplit('_',1)[0],'%02d'%(1+int(img.split('.')[0].split('_')[-1])) ) for img in l0_train]
-    flow_train = [img.replace('image_2','flow_occ') for img in l0_train]
-
-    pdb.set_trace()
-    return l0_train, l1_train, flow_train
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Bahubali - The Beginning dubbed in hindi full movie download in mp4 How to get the best quality and speed.md b/spaces/raedeXanto/academic-chatgpt-beta/Bahubali - The Beginning dubbed in hindi full movie download in mp4 How to get the best quality and speed.md
deleted file mode 100644
index 486804afc8925e79add9c647dbf3f4b8ab7ea8d..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Bahubali - The Beginning dubbed in hindi full movie download in mp4 How to get the best quality and speed.md
+++ /dev/null
@@ -1,209 +0,0 @@
-

              Bahubali - The Beginning Dubbed in Hindi Full Movie in MP4

              -

              Bahubali - The Beginning is a 2015 Indian epic action film directed by S.S. Rajamouli. It is the first installment of a two-part series that tells the story of Amarendra Bahubali, a legendary prince who must reclaim his rightful place as the king of Mahishmati from his evil cousin Bhallaladeva. The film is one of the highest-grossing Indian films of all time and has received critical acclaim for its visual effects, cinematography, music, and performances.

              -

              Bahubali - The Beginning dubbed in hindi full movie download in mp4


              Download Filehttps://tinourl.com/2uL0sz



              -

              If you are a fan of epic movies with stunning visuals, thrilling action, and captivating drama, then you should definitely watch Bahubali - The Beginning dubbed in Hindi full movie in MP4. In this article, we will tell you why you should watch this movie in Hindi, how to download it from a reliable source, what to expect from it, and how to enjoy it. So, let's get started!

              -

              Why You Should Watch Bahubali - The Beginning Dubbed in Hindi

              -

              There are many reasons why you should watch Bahubali - The Beginning dubbed in Hindi full movie in MP4. Here are some of them:

              -
                -
              • Cultural relevance: The film is based on Indian mythology and history and showcases the rich culture and heritage of India. By watching it in Hindi, you can appreciate the authenticity and beauty of the language and its expressions. You can also relate to the characters and their emotions better.
              • -
              • Emotional impact: The film is a roller coaster ride of emotions that will keep you hooked till the end. You will laugh, cry, cheer, and gasp as you witness the epic saga of Amarendra Bahubali and his quest for justice. By watching it in Hindi, you can feel the intensity and depth of the dialogues and the voice acting. You can also enjoy the songs and their lyrics more.
              • -
              • Linguistic diversity: India is a country with many languages and dialects. By watching Bahubali - The Beginning dubbed in Hindi full movie in MP4, you can experience one of the most widely spoken languages in India and learn some new words and phrases. You can also compare and contrast it with other languages that are spoken in the film, such as Telugu, Tamil, Malayalam, Kannada, etc.
              • -
              -

              How to Download Bahubali - The Beginning Dubbed in Hindi Full Movie in MP4

              -

              If you are convinced that you should watch Bahubali - The Beginning dubbed in Hindi full movie in MP4, then you might be wondering how to download it from a reliable source. Well, don't worry because we have got you covered. Here are some simple steps that you can follow to download this amazing movie:

              -
                -

                How to Enjoy Bahubali - The Beginning Dubbed in Hindi Full Movie in MP4

                -

                Now that you know what to expect from Bahubali - The Beginning dubbed in Hindi full movie in MP4, you might be wondering how to enjoy it to the fullest. Well, we have some tips and suggestions for you that will make your movie-watching experience more fun and enjoyable. Here are some of them:

                -
                  -
                • Choose a good device: To watch Bahubali - The Beginning dubbed in Hindi full movie in MP4, you need a device that can play MP4 files smoothly and clearly. You can use your laptop, tablet, smartphone, or smart TV to watch the movie. Make sure that your device has a good screen resolution, sound quality, and battery life.
                • -
                • Set a comfortable environment: To watch Bahubali - The Beginning dubbed in Hindi full movie in MP4, you need a comfortable environment that can enhance your mood and attention. You can watch the movie in your bedroom, living room, or any other place that suits you. Make sure that the place is well-lit, well-ventilated, and free from distractions. You can also adjust the temperature, lighting, and volume according to your preference.
                • -
                • Invite friends or family: To watch Bahubali - The Beginning dubbed in Hindi full movie in MP4, you can invite your friends or family members who share your interest and enthusiasm for the movie. You can watch the movie together and share your opinions and reactions. You can also have some snacks and drinks to munch on while watching the movie.
                • -
                -

                Conclusion

                -

              Bahubali - The Beginning dubbed in Hindi full movie in MP4 is a must-watch for anyone who loves epic movies with stunning visuals, thrilling action, and captivating drama. It is a masterpiece of Indian cinema that tells the story of Amarendra Bahubali, a legendary prince who must reclaim his rightful place as the king of Mahishmati from his evil cousin Bhallaladeva. There are many reasons to watch it in Hindi, such as its cultural relevance, emotional impact, and linguistic diversity; plenty to look forward to in its characters, themes, and scenes; and several ways to enjoy it, such as choosing a good device, setting up a comfortable environment, and inviting friends or family.

                -

                So, what are you waiting for? Go ahead and download Bahubali - The Beginning dubbed in Hindi full movie in MP4 from the official website of the film or from other licensed platforms. You will not regret it!

                -

                FAQs

                -

                Here are some frequently asked questions about Bahubali - The Beginning dubbed in Hindi full movie in MP4:

                -


                -
                1. Q: Is Bahubali - The Beginning dubbed in Hindi full movie in MP4 available on Netflix or Amazon Prime?
                   A: No, it is not available on Netflix or Amazon Prime. You can only download it from the official website of the film or from other licensed platforms.
                2. Q: Is Bahubali - The Beginning dubbed in Hindi suitable for children?
                   A: Yes, it is suitable for children above 12 years of age. However, parental guidance is advised as the movie contains some scenes of violence and bloodshed.
                3. Q: Is Bahubali - The Beginning based on a true story?
                   A: No, it is not based on a true story. It is a fictional story inspired by Indian mythology and history.
                4. Q: Is Bahubali - The Beginning the same as Bahubali - The Conclusion?
                   A: No, they are two different movies that form a two-part series. Bahubali - The Beginning tells the story of Amarendra Bahubali's past while Bahubali - The Conclusion tells the story of his son's present.
                5. Q: How long is Bahubali - The Beginning dubbed in Hindi full movie in MP4?
                   A: The movie is 2 hours and 39 minutes long.
                -

                0a6ba089eb
                -
                -
                \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Endless Love Movie In Hindi Download Tips and Tricks to Avoid Spoilers and Ads.md b/spaces/raedeXanto/academic-chatgpt-beta/Endless Love Movie In Hindi Download Tips and Tricks to Avoid Spoilers and Ads.md deleted file mode 100644 index 59867311340bbe5179c0208035828635f1695027..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Endless Love Movie In Hindi Download Tips and Tricks to Avoid Spoilers and Ads.md +++ /dev/null @@ -1,133 +0,0 @@ -
                -

                Endless Love Movie In Hindi Download

                -

                If you are a fan of romantic movies, you might have heard of Endless Love, a 2014 American romantic drama film directed by Shana Feste. The film is a remake of the 1981 film of the same name, which was based on a novel by Scott Spencer. The film stars Gabriella Wilde and Alex Pettyfer as two young lovers who face opposition from their parents and society.

                -

                Endless Love Movie In Hindi Download


                DOWNLOADhttps://tinourl.com/2uL5wt



                -

                Endless Love is a beautiful and passionate story of love that defies all odds. It has been praised for its cinematography, soundtrack, and performances. The film has also been dubbed in Hindi for the Indian audience, who love to watch romantic movies in their own language.

                -

                If you are looking for a way to download Endless Love movie in Hindi, you have come to the right place. In this article, we will tell you everything you need to know about this movie, including its plot, characters, themes, reviews, and best scenes. We will also tell you where you can download it in Hindi with high quality and fast speed.

                -

                A brief summary of the plot of Endless Love movie

                -

                The film begins with David Elliot (Alex Pettyfer), a charismatic and handsome high school senior who works as a valet at a country club. He has a crush on Jade Butterfield (Gabriella Wilde), a shy and beautiful girl who belongs to a wealthy family. Jade has been sheltered by her parents after her brother's death and has focused on her studies.

                -

                On her graduation day, David finally gets a chance to talk to Jade and invites her to a party at his friend's house. Jade accepts and goes with him, leaving behind her strict father Hugh (Bruce Greenwood) and her supportive mother Anne (Joely Richardson). At the party, David and Jade share their first kiss and fall in love.

                -

                David and Jade start dating and spend every moment together. They sneak into Jade's house at night and make love for the first time. They also go on a road trip with Jade's brother Keith (Rhys Wakefield) and David's friend Mace (Dayo Okeniyi). However, their romance is not approved by Hugh, who thinks that David is not good enough for his daughter and that he will ruin her future.

                -

                Hugh tries to separate them by sending Jade to an internship in another city, hiring a private investigator to dig up dirt on David, and even setting fire to David's house. David is arrested for arson and assault, but Anne helps him get out of jail. She tells him that she believes in their love and that he should fight for Jade.

                -

                David decides to follow Jade to her internship and confess his feelings for her. He finds out that Hugh has arranged for Jade to meet another boy named Miles (Patrick Johnson), who is more suitable for her. David confronts Miles and tells him to stay away from Jade. He then meets Jade at the airport and tells her that he loves her and that they should run away together.

                -

                Jade agrees and they board a train together. However, Hugh arrives at the station and tries to stop them. He tells Jade that he loves her and that he only wants what's best for her. He also tells David that he respects him for his courage and that he will not press charges against him. He asks them to reconsider their decision and think about their future.

                -

                Jade realizes that she loves her father and that she does not want to hurt him. She also realizes that she loves David and that she does not want to lose him. She tells David that they should wait until they are ready to face the world together. They kiss goodbye and promise to stay in touch.

                -


                -

                The film ends with David narrating that he does not know what will happen next, but he knows that he will always love Jade.

                -

                The main characters and their chemistry

                -

                The film revolves around the relationship between David and Jade, who are played by Alex Pettyfer and Gabriella Wilde respectively. They have a great chemistry on screen and portray their characters with sincerity and emotion. They make us believe in their love story and root for them throughout the film.

                -

                Alex Pettyfer is known for his roles in films like I Am Number Four, Beastly, Magic Mike, and The Butler. He delivers a charming and charismatic performance as David, who is a kind-hearted, loyal, brave, and romantic young man. He shows his range as an actor by expressing his anger, frustration, pain, joy, and love with conviction.

                -

                Gabriella Wilde is known for her roles in films like The Three Musketeers, Carrie, Squatters, and Wonder Woman 1984. She delivers a graceful and elegant performance as Jade, who is a smart, sweet, innocent, artistic, and passionate young woman. She shows her growth as an actor by transforming from a shy girl to a confident lover.

                -

                The themes and messages of the film

                -

                The film explores various themes such as love, family, class, freedom, destiny, and choices. It conveys several messages such as:

                -
                  -
                • Love is powerful and can overcome any obstacle.
                • -
                • Family is important and should be respected.
                • -
                • Class does not define who you are or who you can love.
                • -
                • Freedom is essential for happiness.
                • -
                • Destiny is what you make of it.
                • -
                • Choices have consequences.
                • -
                -

                The reviews and ratings of the film

                -

                The film received mixed reviews from critics and audiences alike. Some praised it for its visuals, music, and performances, while others criticized it for its clichés, melodrama, and lack of originality. The film has a rating of 6.3/10 on IMDb, 20% on Rotten Tomatoes, and 30% on Metacritic. The film was also nominated for six Teen Choice Awards, including Choice Movie: Drama, Choice Movie Actor: Drama, and Choice Movie Actress: Drama.

                -

                The best scenes and dialogues of the film

                -

                The film has many memorable scenes and dialogues that showcase the romance, drama, and emotion of the story. Some of them are:

                -
                  -
                • The scene where David and Jade meet for the first time at the country club and exchange glances.
                • -
                • The scene where David and Jade kiss for the first time at the party and confess their feelings.
                • -
                • The scene where David and Jade sneak into her house at night and make love for the first time.
                • -
                • The scene where David and Jade go on a road trip with Keith and Mace and have fun together.
                • -
                • The scene where Hugh sets fire to David's house and David fights back.
                • -
                • The scene where Anne helps David get out of jail and tells him to fight for Jade.
                • -
                • The scene where David follows Jade to her internship and tells her that he loves her.
                • -
                • The scene where Hugh tries to stop David and Jade from running away together and tells them how he feels about them.
                • -
                • The dialogue where David says: "I don't know what's going to happen next. But I know I'll always love you."
                • -
                • The dialogue where Jade says: "You're my endless love."
                • -
                -

                Conclusion

                -

                In conclusion, Endless Love is a romantic drama film that tells the story of two young lovers who face opposition from their parents and society. The film has beautiful cinematography, soundtrack, and performances, but also suffers from some clichés, melodrama, and lack of originality. The film explores various themes such as love, family, class, freedom, destiny, and choices. It is a movie that will touch your heart and make you believe in the power of love. If you want to download Endless Love movie in Hindi, read on to find out how.

                -

                Where can you download Endless Love movie in Hindi?

                -

                There are many websites that offer Endless Love movie in Hindi for download, but not all of them are safe and legal. Some of them may contain viruses, malware, or pop-up ads that can harm your device or compromise your privacy. Some of them may also have low-quality video or audio, or incomplete or corrupted files that can ruin your viewing experience.

                -

                To avoid these risks, you should only download Endless Love movie in Hindi from trusted and reliable sources that have good reviews and ratings from other users. Here are some of the best websites that you can use to download Endless Love movie in Hindi with high quality and fast speed:

                - - - - - - - - - - - - - - - - - -
                WebsiteFeatures
                KatMovieHD- Offers Endless Love movie in Hindi dubbed with 5.1 DD audio and BluRay quality.
                - Provides multiple download links for 480p, 720p, and 1080p resolutions.
                - Supports direct download and torrent download options.
                - Has a user-friendly interface and easy navigation.
                - Source:
                MoviesMint- Offers Endless Love movie in Hindi and English dual audio with WebRip quality.
                - Provides single download links for 480p and 720p resolutions.
                - Supports Google Drive, One Drive, and Mega download options.
                - Has a simple and clean design and layout.
                - Source:
                YouTube- Offers Endless Love movie in Hindi dubbed with HD quality.
                - Provides online streaming and offline download options.
                - Supports various devices and platforms.
                - Has a large and diverse collection of movies and shows.
                - Source:
                -

                FAQs

                -

                Here are some of the frequently asked questions about Endless Love movie:

                -
                  -
                1. Who are the actors of Endless Love movie?
                2. -

                  The actors of Endless Love movie are Gabriella Wilde as Jade Butterfield, Alex Pettyfer as David Elliot, Bruce Greenwood as Hugh Butterfield, Joely Richardson as Anne Butterfield, Rhys Wakefield as Keith Butterfield, Dayo Okeniyi as Mace Green, Patrick Johnson as Miles, Emma Rigby as Jenny, Robert Patrick as Harry Elliot, Anna Enger as Sabine, Fabianne Therese as Checka, and Sharon Conley as Dr. Edie Watanabe.

                  -
                3. Is Endless Love movie based on a book?
                4. -

                  Yes, Endless Love movie is based on a book of the same name by Scott Spencer, published in 1979. The book is a romantic novel that tells the story of David Axelrod and Jade Butterfield, two teenagers who fall in love and become obsessed with each other. The book was a bestseller and was praised for its literary style and psychological depth. The book was also adapted into a film in 1981, starring Brooke Shields and Martin Hewitt.

                  -
                5. Is Endless Love movie available on Netflix?
                6. -

                  No, Endless Love movie is not available on Netflix at the moment. However, you can watch it on other streaming platforms like Amazon Prime Video, Hulu, HBO Max, Peacock, or Vudu.

                  -
                7. How long is Endless Love movie?
                8. -

                  Endless Love movie is 104 minutes long.

                  -
                9. Is there a sequel to Endless Love movie?
                10. -

                  No, there is no sequel to Endless Love movie. However, there is a sequel to the book by Scott Spencer called Waking the Dead, published in 1986. The book follows David Axelrod as he becomes a politician and encounters a woman who resembles his dead lover Jade.

                  -

                  0a6ba089eb
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Free !!TOP!! Hindi Dharti.md b/spaces/raedeXanto/academic-chatgpt-beta/Free !!TOP!! Hindi Dharti.md deleted file mode 100644 index 0756c27533440754427012746dddc5fb0605db76..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Free !!TOP!! Hindi Dharti.md +++ /dev/null @@ -1,30 +0,0 @@ -
                  -``` -

                  Free Hindi Dharti: A Platform for Learning and Sharing Hindi Literature

                  -

                  Are you a lover of Hindi literature? Do you want to read, write and share your thoughts on Hindi poems, stories, essays and more? If yes, then Free Hindi Dharti is the perfect place for you.

                  -

                  Free Hindi Dharti is an online platform that aims to promote and preserve the rich and diverse heritage of Hindi literature. It is a community of Hindi enthusiasts who share their passion and knowledge of Hindi language and literature.

                  -

                  Free Hindi Dharti


                  DOWNLOAD ===> https://tinourl.com/2uL2Om



                  -

                  On Free Hindi Dharti, you can find a variety of Hindi content, such as:

                  -
                    -
                  • Dharti ko hum swarg banaye: A lesson from class 6 Hindi textbook that teaches us how to make our earth a heaven by protecting the environment and living in harmony.
                  • -
                  • Dharti: A romantic movie from 1970 starring Rajendra Kumar and Waheeda Rehman that depicts the love story of a pilot and a doctor during the Indo-Pak war.
                  • -
                  • Mere desh ki dharti: A patriotic song from the movie Upkar sung by Mahendra Kapoor that celebrates the beauty and diversity of India.
                  • -
                  -

                  And much more! You can also create your own content and share it with other users. You can write poems, stories, essays, reviews, etc. on any topic of your choice. You can also get feedback and suggestions from other users to improve your writing skills.

                  -

                  Free Hindi Dharti is not just a platform for learning and sharing Hindi literature, but also a platform for connecting with like-minded people who share your love for Hindi. You can chat with other users, join groups, participate in contests, and have fun.

                  -

                  So what are you waiting for? Join Free Hindi Dharti today and explore the world of Hindi literature. It's free, easy and fun!

                  - -``` -

                  Free Hindi Dharti also gives you an opportunity to learn about the history and evolution of Hindi literature. Hindi literature has a long and rich tradition that spans over a thousand years. It has been influenced by various cultural, religious and political factors. It has also been enriched by the contributions of various writers from different regions, backgrounds and styles.

                  -

                  Hindi literature can be broadly divided into four periods: Adikal (the early period), Bhaktikal (the devotional period), Ritikal (the ornamental period) and Adhunikal (the modern period). Each period has its own characteristics, themes and genres. Some of the most famous writers and works of Hindi literature are:

                  -

                  -
                    -
                  • Adikal: This period covers from the 10th to the 14th century CE. It is marked by poems of heroism and bravery. Some of the notable works are Prithviraj Raso by Chand Bardai, Alha Khand by Jagnayak, Padmavat by Malik Muhammad Jayasi and Ramacharitamanas by Tulsidas.
                  • -
                  • Bhaktikal: This period covers from the 14th to the 18th century CE. It is marked by poems of devotion and spirituality. Some of the notable writers are Kabir, Surdas, Mirabai, Raskhan, Tulsidas and Nanak.
                  • -
                  • Ritikal: This period covers from the 18th to the 20th century CE. It is marked by poems of romance and aesthetics. Some of the notable writers are Keshavdas, Bihari, Ghananand, Matiram and Dev.
                  • -
                  • Adhunikal: This period covers from the 20th century CE onwards. It is marked by poems of realism and social issues. Some of the notable writers are Premchand, Jaishankar Prasad, Mahadevi Varma, Suryakant Tripathi Nirala and Harivansh Rai Bachchan.
                  • -
                  -

                  Free Hindi Dharti also helps you to appreciate the beauty and diversity of Hindi language. Hindi language is one of the most widely spoken languages in the world. It belongs to the Indo-Aryan branch of the Indo-European language family. It has many dialects and varieties that reflect the regional and social differences of its speakers. Some of the major dialects are Awadhi, Braj, Bundeli, Khari Boli, Marwari, Magahi, Bhojpuri and Chhattisgarhi.

                  -

                  Hindi language is written in two scripts: Devanagari and Perso-Arabic. Devanagari is the official script of India and Nepal. It is also used for Sanskrit, Marathi and Nepali languages. Perso-Arabic is used for Urdu language which is closely related to Hindi. It is also used for Persian, Arabic and Pashto languages.

                  7b8c122e87
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Free Download Firebreather Movie in 12 Discover the Secrets of the Kaiju and Their War with Humans.md b/spaces/raedeXanto/academic-chatgpt-beta/Free Download Firebreather Movie in 12 Discover the Secrets of the Kaiju and Their War with Humans.md deleted file mode 100644 index 40530a8543dd3c47264dd71052300ee8e374ec51..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Free Download Firebreather Movie in 12 Discover the Secrets of the Kaiju and Their War with Humans.md +++ /dev/null @@ -1,209 +0,0 @@ -
                  -

                  How to Free Download Firebreather Movie in 12

                  -

                  Firebreather is an American computer-animated superhero television film, based on the Image Comics comic book series of the same name, which premiered on November 24, 2010, on Cartoon Network. It was directed by Peter Chung from a screenplay by James Krieg, based on a story by Phil Hester and Andy Kuhn, and stars the voices of Jesse Head, Dana Delany, Kevin Michael Richardson, Reed Diamond, Dante Basco, Tia Texada, and Amy Davidson.

                  -

                  The film follows Duncan Rosenblatt (Head), a teenage boy who is half-human and half-Kaiju (a giant monster), as he struggles with his identity and his relationship with his parents. He also has to deal with a Kaiju war that threatens his world and his destiny as the next King of All Monsters.

                  -

                  free download firebreather movie in 12


                  DOWNLOADhttps://tinourl.com/2uL10R



                  -

                  Firebreather is a popular film among fans of animation, action, fantasy, and superheroes. It has received positive reviews from critics and audiences alike. It won an Emmy Award for Outstanding Special Class Animated Program in 2011.

                  -

                  If you are one of those who love the Firebreather movie and want to watch it anytime and anywhere without paying any money, then this article is for you. In this article, we will show you how to download the Firebreather movie for free in 12 easy steps. You will also learn some tips and tricks to enhance your viewing experience. So let's get started.

                  -

                  Step 1: Find a reliable website that offers Firebreather movie for free download

                  -

                  The first step is to find a website that offers Firebreather movie for free download. There are many websites that claim to provide free movies online, but not all of them are trustworthy. Some of them may contain viruses, malware, spyware, or other harmful software that can damage your device or steal your personal information. Some of them may also have low-quality videos or broken links that can ruin your viewing experience.

                  -

                  Therefore, you need to be careful when choosing a website for downloading movies. Here are some things that you should look for in a website:

                  -
                    -
                  • Quality: The website should offer high-quality videos that have clear sound and picture. You should be able to choose from different formats and resolutions according to your preference.
                  • -
                  • Speed: The website should have fast loading and download speeds that save you time and bandwidth. You should be able to download Firebreather movie within minutes.
                  • -
                  • Safety: The website should be safe and secure from any viruses, malware, spyware, or other harmful software that can harm your device or data. You should not have to install any additional software or plugins to access the website.
                  • -
                  • Legality: The website should be legal and ethical in providing free movies online. You should not have to worry about any copyright infringement or legal issues when downloading movies.
                  • -
                  -

                  Some examples of websites that offer Firebreather movie for free download are:

                  -

                  free download firebreather movie in 12 full HD
                  -free download firebreather movie in 12 online streaming
                  -free download firebreather movie in 12 actvid.com
                  -free download firebreather movie in 12 m4uhd
                  -free download firebreather movie in 12 animation
                  -free download firebreather movie in 12 action
                  -free download firebreather movie in 12 fantasy
                  -free download firebreather movie in 12 english subtitles
                  -free download firebreather movie in 12 watch online
                  -free download firebreather movie in 12 cartoon network
                  -free download firebreather movie in 12 kaiju
                  -free download firebreather movie in 12 peter chung
                  -free download firebreather movie in 12 comic book
                  -free download firebreather movie in 12 duncan
                  -free download firebreather movie in 12 tia texada
                  -free download firebreather movie in 12 jesse head
                  -free download firebreather movie in 12 dante basco
                  -free download firebreather movie in 12 amy davidson
                  -free download firebreather movie in 12 reed diamond
                  -free download firebreather movie in 12 dana delany
                  -free download firebreather movie in 12 grey delisle
                  -free download firebreather movie in 12 billy west
                  -free download firebreather movie in 12 kevin michael richardson
                  -free download firebreather movie in 12 tom kenny
                  -free download firebreather movie in 12 nicole sullivan
                  -free download firebreather movie in 12 gary anthony williams
                  -free download firebreather movie in 12 josh keaton
                  -free download firebreather movie in 12 jameson moss
                  -free download firebreather movie in 12 jonathan adams
                  -free download firebreather movie in 12 vanessa marshall
                  -free download firebreather movie in 12 made-for-tv film
                  -free download firebreather movie in 12 based on eponymous comic book series
                  -free download firebreather movie in 120 minutes duration
                  -free download firebreather movie in 2010 release date
                  -free download firebreather movie in united states of america production country
                  -free download firebreather movie adventure genre
                  -free download firebreather movie thriller genre
                  -free download firebreather movie science fiction genre
                  -free download firebreather movie family genre
                  -free download firebreather movie super strength power
                  -free download firebreather movie agility power
                  -free download firebreather movie breathe fire power
                  -free download firebreather movie protect family and friends theme
                  -free download firebreather movie giant monster rampage theme
                  -free download firebreather movie normal kid theme
                  -free download firebreather movie normal school theme
                  -free download firebreather movie king of all monsters theme
                  -free download firebreather movie worlds collide theme
                  -free download firebreather movie human wits theme
                  -free download firebreather movie kaiju powers theme


                  Step 2: Choose the format and resolution of the movie

                  -

                  The next step is to choose the format and resolution of the movie that you want to download. There are different formats and resolutions available for downloading movies online. Some of them are:

                  -
                    -
                  • MP4: This is one of the most common and popular formats for downloading movies online. It is compatible with most devices and media players. It has good quality and small size.
                  • -
                  • MKV: This is another common container format for downloading movies online. It is supported by many devices and media players and can hold more codecs, subtitle tracks and features than MP4, but its files are usually larger.
                  • -
                  -

                  Step 3: Click on the download button or link

                  -

                  The third step is to click on the download button or link for the version you have chosen. On many free-movie websites this is also the point where pop-ups, ads, or region blocks appear. To deal with them, you can use some tools or methods such as the ones below (a small code sketch of routing a download through a proxy follows this list):
                      -
                    • Ad blocker: This is a browser extension or program that blocks or removes ads from websites. You can install an ad blocker in your browser or on your device to stop pop-ups and ads from appearing. Some examples of ad blockers are AdBlock, uBlock Origin, etc.
                    • -
                    • VPN: This is a service that creates a secure, encrypted connection between your device and a remote server. You can use a VPN to hide your IP address and location from websites and to bypass geo-restrictions or censorship. Some examples of VPNs are NordVPN, ExpressVPN, etc.
                    • -
                    • Proxy: This is a server or website that acts as an intermediary between your device and another website. You can use a proxy to access websites that are blocked or restricted in your region or network. Some examples of proxies are Hide.me, Proxysite.com, etc.
                    • -
                    -
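                    If you are comfortable with a little scripting, here is a minimal sketch of fetching a file through an HTTP proxy with Python's requests library. The proxy address, URL and filename below are placeholders rather than real services, and whether it works depends entirely on the proxy you actually have access to.

import requests

# Placeholder values -- substitute a proxy and URL you actually have access to.
PROXY = "http://127.0.0.1:8080"
URL = "https://example.com/some-file.mp4"

# Route both HTTP and HTTPS traffic through the same proxy.
proxies = {"http": PROXY, "https": PROXY}

# Stream the response so a large file is written to disk in chunks
# instead of being held in memory all at once.
with requests.get(URL, proxies=proxies, stream=True, timeout=30) as resp:
    resp.raise_for_status()
    with open("some-file.mp4", "wb") as fh:
        for chunk in resp.iter_content(chunk_size=65536):
            fh.write(chunk)

                    The same proxies dictionary can be passed to any requests call, so it combines naturally with the resumable-download sketch shown later under Step 5.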

                    Step 4: Wait for the download to start

                    -

                    The fourth step is to wait for the download to start. Depending on the size of the movie and the speed of your internet connection, the download may take anywhere from a few minutes to a few hours. You can check the progress of the download in your browser or on your device.

                    -

                    While waiting for the download to start or finish, you can do some other things such as:

                    -
                      -
                    • Browse other websites: You can browse other websites that interest you while downloading Firebreather movie. You can also search for more information or resources about Firebreather movie and its creators.
                    • -
                    • Watch other videos: You can watch other videos that are related or similar to Firebreather movie while downloading it. You can also watch some trailers or clips of Firebreather movie to get a glimpse of what it is about.
                    • -
                    • Listen to music: You can listen to some music that matches the mood or theme of Firebreather movie while downloading it. You can also listen to some songs that are featured in Firebreather movie or its soundtrack.
                    • -
                    -

                    Step 5: Pause or resume the download if needed

                    -

                    The fifth step is to pause or resume the download if needed. Sometimes you may have to stop or restart your device or internet connection, which can interrupt the download. To avoid losing or corrupting the downloaded file, pause the download before the interruption and resume it afterwards.

                    -

                    To pause or resume the download, you can use some tools or methods such as the ones below (a short sketch of how resuming a download works follows this list):

                    -
                      -
                    • Download manager: This is a program or browser extension that manages and organizes your downloads. You can use a download manager to pause, resume, cancel, schedule, or speed up your downloads. Some examples of download managers are Internet Download Manager, Free Download Manager, etc.
                    • -
                    • Browser settings: This is a feature or option that allows you to control your downloads on your browser. You can use browser settings to pause, resume, cancel, or restart your downloads. Some examples of browsers that have this feature are Chrome, Firefox, Edge, etc.
                    • -
                    • Device settings: This is a feature or option that allows you to control your downloads on your device. You can use device settings to pause, resume, cancel, or restart your downloads. Some examples of devices that have this feature are Windows, Mac, Android, iOS, etc.
                    • -
                    -
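                    As a rough illustration of what a download manager does when you pause and resume, here is a minimal Python sketch that restarts an interrupted download with an HTTP Range request. The URL and filename are placeholders, it assumes the server honours Range requests, and real download managers add retries, integrity checks and parallel connections on top of this.

import os
import requests

URL = "https://example.com/movie.mp4"   # placeholder download link
DEST = "movie.mp4"                      # placeholder local filename

def resume_download(url, dest, chunk_size=65536):
    # Start from however many bytes are already on disk.
    done = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": f"bytes={done}-"} if done else {}
    with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        # 206 Partial Content means the server honoured the Range header;
        # a plain 200 means it is resending the whole file from the start.
        mode = "ab" if resp.status_code == 206 else "wb"
        with open(dest, mode) as fh:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                fh.write(chunk)

resume_download(URL, DEST)

                    Running the script again after an interruption picks up where the previous attempt stopped, which is essentially what the pause and resume buttons in a download manager do.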

                    Step 6: Verify the downloaded file

                    The sixth step is to verify the downloaded file. Check that its size roughly matches what the website listed, and open it in a media player to confirm that it plays from start to finish. If the website publishes a checksum, you can also compare it against your copy (a short sketch of this follows Step 7).

                  -

                  Step 7: Transfer the downloaded file to your preferred device

                  -

                  The seventh step is to transfer the downloaded file to the device you want to watch it on, for example with a USB cable, through a cloud storage service, or over Wi-Fi if both devices are on the same network and can share files between them. You can use Wi-Fi to connect your computer and your preferred device to the same network and share or stream the downloaded file from one to another.
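                  Here is the checksum comparison mentioned in Step 6: a minimal Python sketch that hashes the downloaded file with SHA-256 and compares it to a published value. The filename and the expected hash are placeholders; use whatever the download page actually lists.

import hashlib

PATH = "movie.mp4"                            # placeholder: the file you downloaded
EXPECTED = "paste-the-published-sha256-here"  # placeholder: the value from the website

sha256 = hashlib.sha256()
with open(PATH, "rb") as fh:
    # Hash the file in 1 MB chunks so large videos do not have to fit in memory.
    for chunk in iter(lambda: fh.read(1 << 20), b""):
        sha256.update(chunk)

digest = sha256.hexdigest()
print("Checksum OK" if digest == EXPECTED.lower() else "Checksum mismatch: got " + digest)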
                  -

                  Step 8: Enjoy watching Firebreather movie on your device

                  -

                  The eighth step is to enjoy watching Firebreather movie on your device. After you have transferred the downloaded file to your preferred device, you can open and play it using a media player app. You can also adjust some settings such as sound, subtitles, brightness, etc. to enhance your viewing experience.

                  -

                  Here are some tips and tricks to enjoy watching Firebreather movie on your device:

                  -
                    -
                  • Use headphones or speakers: You can use headphones or speakers to improve the sound quality and immersion of Firebreather movie. You can also adjust the volume and balance according to your preference.
                  • -
                  • Use subtitles or captions: You can use subtitles or captions to understand the dialogue and narration of Firebreather movie better. You can also choose the language and font of the subtitles or captions according to your preference.
                  • -
                  • Use brightness or contrast: You can use brightness or contrast to improve the visibility and clarity of Firebreather movie. You can also adjust the color and saturation according to your preference.
                  • -
                  -

                  Step 9: Delete the downloaded file if you want to save space on your device

                  -

                  The ninth step is to delete the downloaded file if you want to save space on your device. If you have watched Firebreather movie and you don't want to keep it on your device anymore, you can delete it to free up some space on your device. This way, you can avoid cluttering your device with unnecessary files.

                  -

                  To delete the downloaded file from your device, you can use some tools or methods such as:

                  -
                    -
                  • File explorer: This is an app that shows and manages the files and folders on your device. You can use a file explorer to locate and select the downloaded file and tap on the delete option.
                  • -
                  • Media player: This is an app that plays media files such as video, audio, and images. You can use a media player to open the downloaded file and then tap on its delete option.
                  • -
                  • Trash bin: This is a feature or option that stores the deleted files temporarily before they are permanently erased. You can use the trash bin to restore or delete the downloaded file permanently.
                  • -
                  -

                  Step 10: Share Firebreather movie with your friends and family

                  -

                  The tenth step is to share Firebreather movie with your friends and family. If you enjoyed watching Firebreather movie and you want to share it with others who might be interested in watching it too, you can do so in various ways. Sharing Firebreather movie with others can also help you express your opinions and feelings about it and have some fun discussions.

                  -

                  To share Firebreather movie with others, you can use some tools or methods such as:

                  -
                    -
                  • Social media: This is a platform or app that allows you to communicate and interact with other people online. You can use social media to post or message about Firebreather movie and tag or mention your friends and family who might want to watch it too. Some examples of social media are Facebook, Twitter, Instagram, etc.
                  • -
                  • Email: This is a service or app that allows you to send and receive electronic messages online. You can use email to attach or link Firebreather movie and send it to your friends and family who might want to watch it too.
                  • Messaging apps: These are apps that let you send and receive messages online. You can use a messaging app to attach or link Firebreather movie and send it to your friends and family who might want to watch it too. Some examples of messaging apps are WhatsApp, Telegram, Signal, etc. -
                  -

                  Step 11: Explore other movies that are similar to Firebreather movie

                  -

                  The eleventh step is to explore other movies that are similar to Firebreather movie. If you liked Firebreather movie and you want to watch more movies that are similar to it in genre, theme, or style, you can do so in various ways. Exploring other movies that are similar to Firebreather movie can also help you discover new stories and characters and expand your horizons.

                  -

                  To explore other movies that are similar to Firebreather movie, you can use some tools or methods such as:

                  -
                    -
                  • Recommendation engines: These are services that suggest movies similar to the ones you have watched or liked. You can use recommendation engines to find and watch other movies that are similar to Firebreather movie based on your preferences and ratings. Some examples of recommendation engines are IMDb, Rotten Tomatoes, Netflix, etc.
                  • -
                  • Search engines: These are services that let you search for information or resources online. You can use search engines to find and watch other movies that are similar to Firebreather movie by searching for relevant keywords or queries. Some examples of search engines are Google, Bing, DuckDuckGo, etc.
                  • -
                  • Reviews and blogs: This is a platform or app that allows you to read or write opinions or comments about movies online. You can use reviews and blogs to find and watch other movies that are similar to Firebreather movie based on the feedback and suggestions of other people. Some examples of reviews and blogs are Metacritic, Roger Ebert, MovieLens, etc.
                  • -
                  -

                  Step 12: Learn more about Firebreather movie and its creators

                  -

                  The twelfth and final step is to learn more about Firebreather movie and its creators. If you enjoyed watching Firebreather movie and you want to learn more about its background and production, you can do so in various ways. Learning more about Firebreather movie and its creators can also help you appreciate and understand it better and inspire you to create your own stories and characters.

                  -

                  To learn more about Firebreather movie and its creators, you can use some tools or methods such as:

                  -
                    -
                  • Wikipedia: This is a website or app that provides free and reliable information and resources on various topics online. You can use Wikipedia to learn more about Firebreather movie and its creators such as the plot, cast, crew, reception, awards, etc.
                  • -
                  • IMDb: This is a website or app that provides information and resources on movies, TV shows, celebrities, etc. online. You can use IMDb to learn more about Firebreather movie and its creators such as the trivia, quotes, goofs, biographies, filmographies, etc.
                  • -
                  • YouTube: This is a website or app that allows you to watch and upload videos online. You can use YouTube to learn more about Firebreather movie and its creators such as the interviews, behind-the-scenes, making-ofs, documentaries, etc.
                  • -
                  -

                  Conclusion

                  -

                  In this article, we have shown you how to free download Firebreather movie in 12 easy steps. We have also provided some tips and tricks to enhance your viewing experience and some tools and methods to explore other movies that are similar to Firebreather movie and learn more about it and its creators.

                  -

                  We hope you have found this article helpful and informative. If you have any questions or comments, please feel free to leave them below. If you have enjoyed this article, please share it with your friends and family who might be interested in watching Firebreather movie too.

                  -

                  FAQs

                  -

                  Here are some frequently asked questions and answers about Firebreather movie and how to free download it in 12 steps.

                  -
                    -
                  • Q: What is Firebreather movie about?
                  • -
                  • A: Firebreather movie is about Duncan Rosenblatt, a teenage boy who is half-human and half-Kaiju (a giant monster), as he struggles with his identity and his relationship with his parents. He also has to deal with a Kaiju war that threatens his world and his destiny as the next King of All Monsters.
                  • -
                  • Q: Who are the creators of Firebreather movie?
                  • -
                  • A: Firebreather movie is based on the Image Comics comic book series of the same name, which was created by Phil Hester and Andy Kuhn. The movie was directed by Peter Chung from a screenplay by James Krieg. The movie features the voices of Jesse Head, Dana Delany, Kevin Michael Richardson, Reed Diamond, Dante Basco, Tia Texada, and Amy Davidson.
                  • -
                  • Q: How can I free download Firebreather movie in 12 steps?
                  • -
                  • A: You can free download Firebreather movie in 12 steps by following these steps:
                  • -
                      -
                    1. Find a reliable website that offers Firebreather movie for free download.
                    2. Choose the format and resolution of the movie.
                    3. Click on the download button or link.
                    4. Wait for the download to start.
                    5. Pause or resume the download if needed.
                    6. Verify the downloaded file.
                    7. Transfer the downloaded file to your preferred device.
                    8. Enjoy watching Firebreather movie on your device.
                    9. Delete the downloaded file if you want to save space on your device.
                    10. Share Firebreather movie with your friends and family.
                    11. Explore other movies that are similar to Firebreather movie.
                    12. Learn more about Firebreather movie and its creators.
                    -
                  -

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/text/numbers.py b/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/text/numbers.py deleted file mode 100644 index 491634d692ee71e7ea0e5213b513e15be825c9b2..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/text/numbers.py +++ /dev/null @@ -1,69 +0,0 @@ -import inflect -import re - - -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text \ No newline at end of file diff --git a/spaces/ramiin2/AutoGPT/autogpt/llm_utils.py b/spaces/ramiin2/AutoGPT/autogpt/llm_utils.py deleted file mode 100644 index 821820ffab07be2753cf385ff1de77820e4206ee..0000000000000000000000000000000000000000 --- a/spaces/ramiin2/AutoGPT/autogpt/llm_utils.py +++ /dev/null @@ -1,172 +0,0 @@ -from __future__ import annotations - -import time -from ast import List - -import openai -from colorama import Fore, Style -from openai.error import APIError, RateLimitError - -from autogpt.config import Config -from autogpt.logs import logger - -CFG = Config() - -openai.api_key = CFG.openai_api_key - - -def call_ai_function( - function: str, args: list, description: str, model: str | None = None -) -> str: - """Call an AI function - - This is a magic function that can do anything with no-code. See - https://github.com/Torantulino/AI-Functions for more info. 
- - Args: - function (str): The function to call - args (list): The arguments to pass to the function - description (str): The description of the function - model (str, optional): The model to use. Defaults to None. - - Returns: - str: The response from the function - """ - if model is None: - model = CFG.smart_llm_model - # For each arg, if any are None, convert to "None": - args = [str(arg) if arg is not None else "None" for arg in args] - # parse args to comma separated string - args = ", ".join(args) - messages = [ - { - "role": "system", - "content": f"You are now the following python function: ```# {description}" - f"\n{function}```\n\nOnly respond with your `return` value.", - }, - {"role": "user", "content": args}, - ] - - return create_chat_completion(model=model, messages=messages, temperature=0) - - -# Overly simple abstraction until we create something better -# simple retry mechanism when getting a rate error or a bad gateway -def create_chat_completion( - messages: list, # type: ignore - model: str | None = None, - temperature: float = CFG.temperature, - max_tokens: int | None = None, -) -> str: - """Create a chat completion using the OpenAI API - - Args: - messages (list[dict[str, str]]): The messages to send to the chat completion - model (str, optional): The model to use. Defaults to None. - temperature (float, optional): The temperature to use. Defaults to 0.9. - max_tokens (int, optional): The max tokens to use. Defaults to None. - - Returns: - str: The response from the chat completion - """ - response = None - num_retries = 10 - warned_user = False - if CFG.debug_mode: - print( - Fore.GREEN - + f"Creating chat completion with model {model}, temperature {temperature}," - f" max_tokens {max_tokens}" + Fore.RESET - ) - for attempt in range(num_retries): - backoff = 2 ** (attempt + 2) - try: - if CFG.use_azure: - response = openai.ChatCompletion.create( - deployment_id=CFG.get_azure_deployment_id_for_model(model), - model=model, - messages=messages, - temperature=temperature, - max_tokens=max_tokens, - ) - else: - response = openai.ChatCompletion.create( - model=model, - messages=messages, - temperature=temperature, - max_tokens=max_tokens, - ) - break - except RateLimitError: - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"Reached rate limit, passing..." + Fore.RESET, - ) - if not warned_user: - logger.double_check( - f"Please double check that you have setup a {Fore.CYAN + Style.BRIGHT}PAID{Style.RESET_ALL} OpenAI API Account. " - + f"You can read more here: {Fore.CYAN}https://github.com/Significant-Gravitas/Auto-GPT#openai-api-keys-configuration{Fore.RESET}" - ) - warned_user = True - except APIError as e: - if e.http_status == 502: - pass - else: - raise - if attempt == num_retries - 1: - raise - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET, - ) - time.sleep(backoff) - if response is None: - logger.typewriter_log( - "FAILED TO GET RESPONSE FROM OPENAI", - Fore.RED, - "Auto-GPT has failed to get a response from OpenAI's services. 
" - + f"Try running Auto-GPT again, and if the problem the persists try running it with `{Fore.CYAN}--debug{Fore.RESET}`.", - ) - logger.double_check() - if CFG.debug_mode: - raise RuntimeError(f"Failed to get response after {num_retries} retries") - else: - quit(1) - - return response.choices[0].message["content"] - - -def create_embedding_with_ada(text) -> list: - """Create an embedding with text-ada-002 using the OpenAI SDK""" - num_retries = 10 - for attempt in range(num_retries): - backoff = 2 ** (attempt + 2) - try: - if CFG.use_azure: - return openai.Embedding.create( - input=[text], - engine=CFG.get_azure_deployment_id_for_model( - "text-embedding-ada-002" - ), - )["data"][0]["embedding"] - else: - return openai.Embedding.create( - input=[text], model="text-embedding-ada-002" - )["data"][0]["embedding"] - except RateLimitError: - pass - except APIError as e: - if e.http_status == 502: - pass - else: - raise - if attempt == num_retries - 1: - raise - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET, - ) - time.sleep(backoff) diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Coleccion Elige Tu Propia Aventura Pdf Download BETTER.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Coleccion Elige Tu Propia Aventura Pdf Download BETTER.md deleted file mode 100644 index 3fa560abe92c34397b3cef3d799a9ce87058fe0b..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Coleccion Elige Tu Propia Aventura Pdf Download BETTER.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  coleccion elige tu propia aventura pdf download


                  DOWNLOAD >> https://urlgoal.com/2uCLz3



                  - -by RDEL GARCÍA · Cited by 45 — discapacidad están obligadas a transitar para lograr su inclusión efectiva en ... contar en la aventura humana, y no habrá calidad, por más que haya igual- dad de ... tionar su propia vida, a pesar de los pronósticos que desaconsejaban una ... colecciones estadísticas para incluir en la DISTAT, los socios del proyecto se. 1fdad05405
                  -
                  -
                  -

                  diff --git a/spaces/renumics/whisper-commonvoice-speaker-issues/Dockerfile b/spaces/renumics/whisper-commonvoice-speaker-issues/Dockerfile deleted file mode 100644 index 7378b67dedc42f5b6268bd0c2bc567d1af811332..0000000000000000000000000000000000000000 --- a/spaces/renumics/whisper-commonvoice-speaker-issues/Dockerfile +++ /dev/null @@ -1,13 +0,0 @@ -FROM python:3.9 - -WORKDIR /code -ENV HOME=/code - -RUN pip install pip -U - -RUN pip install renumics-spotlight==1.3.0rc4 - -COPY . . -COPY audios/ . -RUN chmod -R 777 /code -CMD ["python", "run.py"] diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/reppoints_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/reppoints_head.py deleted file mode 100644 index f7204141db43a3754031bc175c87876a2d7df3e5..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/reppoints_head.py +++ /dev/null @@ -1,764 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.ops import DeformConv2d - -from mmdet.core import (build_assigner, build_sampler, images_to_levels, - multi_apply, unmap) -from mmdet.core.anchor.point_generator import MlvlPointGenerator -from mmdet.core.utils import filter_scores_and_topk -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead - - -@HEADS.register_module() -class RepPointsHead(AnchorFreeHead): - """RepPoint head. - - Args: - point_feat_channels (int): Number of channels of points features. - gradient_mul (float): The multiplier to gradients from - points refinement and recognition. - point_strides (Iterable): points strides. - point_base_scale (int): bbox scale for assigning labels. - loss_cls (dict): Config of classification loss. - loss_bbox_init (dict): Config of initial points loss. - loss_bbox_refine (dict): Config of points loss in refinement. - use_grid_points (bool): If we use bounding box representation, the - reppoints is represented as grid points on the bounding box. - center_init (bool): Whether to use center point assignment. - transform_method (str): The methods to transform RepPoints to bbox. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ # noqa: W605 - - def __init__(self, - num_classes, - in_channels, - point_feat_channels=256, - num_points=9, - gradient_mul=0.1, - point_strides=[8, 16, 32, 64, 128], - point_base_scale=4, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox_init=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=0.5), - loss_bbox_refine=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - use_grid_points=False, - center_init=True, - transform_method='moment', - moment_mul=0.01, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='reppoints_cls_out', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.num_points = num_points - self.point_feat_channels = point_feat_channels - self.use_grid_points = use_grid_points - self.center_init = center_init - - # we use deform conv to extract points features - self.dcn_kernel = int(np.sqrt(num_points)) - self.dcn_pad = int((self.dcn_kernel - 1) / 2) - assert self.dcn_kernel * self.dcn_kernel == num_points, \ - 'The points number should be a square number.' 
- assert self.dcn_kernel % 2 == 1, \ - 'The points number should be an odd square number.' - dcn_base = np.arange(-self.dcn_pad, - self.dcn_pad + 1).astype(np.float64) - dcn_base_y = np.repeat(dcn_base, self.dcn_kernel) - dcn_base_x = np.tile(dcn_base, self.dcn_kernel) - dcn_base_offset = np.stack([dcn_base_y, dcn_base_x], axis=1).reshape( - (-1)) - self.dcn_base_offset = torch.tensor(dcn_base_offset).view(1, -1, 1, 1) - - super().__init__( - num_classes, - in_channels, - loss_cls=loss_cls, - init_cfg=init_cfg, - **kwargs) - - self.gradient_mul = gradient_mul - self.point_base_scale = point_base_scale - self.point_strides = point_strides - self.prior_generator = MlvlPointGenerator( - self.point_strides, offset=0.) - - self.sampling = loss_cls['type'] not in ['FocalLoss'] - if self.train_cfg: - self.init_assigner = build_assigner(self.train_cfg.init.assigner) - self.refine_assigner = build_assigner( - self.train_cfg.refine.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.transform_method = transform_method - if self.transform_method == 'moment': - self.moment_transfer = nn.Parameter( - data=torch.zeros(2), requires_grad=True) - self.moment_mul = moment_mul - - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - if self.use_sigmoid_cls: - self.cls_out_channels = self.num_classes - else: - self.cls_out_channels = self.num_classes + 1 - self.loss_bbox_init = build_loss(loss_bbox_init) - self.loss_bbox_refine = build_loss(loss_bbox_refine) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - pts_out_dim = 4 if self.use_grid_points else 2 * self.num_points - self.reppoints_cls_conv = DeformConv2d(self.feat_channels, - self.point_feat_channels, - self.dcn_kernel, 1, - self.dcn_pad) - self.reppoints_cls_out = nn.Conv2d(self.point_feat_channels, - self.cls_out_channels, 1, 1, 0) - self.reppoints_pts_init_conv = nn.Conv2d(self.feat_channels, - self.point_feat_channels, 3, - 1, 1) - self.reppoints_pts_init_out = nn.Conv2d(self.point_feat_channels, - pts_out_dim, 1, 1, 0) - self.reppoints_pts_refine_conv = DeformConv2d(self.feat_channels, - self.point_feat_channels, - self.dcn_kernel, 1, - self.dcn_pad) - self.reppoints_pts_refine_out = nn.Conv2d(self.point_feat_channels, - pts_out_dim, 1, 1, 0) - - def points2bbox(self, pts, y_first=True): - """Converting the points set into bounding box. - - :param pts: the input points sets (fields), each points - set (fields) is represented as 2n scalar. - :param y_first: if y_first=True, the point set is represented as - [y1, x1, y2, x2 ... yn, xn], otherwise the point set is - represented as [x1, y1, x2, y2 ... xn, yn]. - :return: each points set is converting to a bbox [x1, y1, x2, y2]. - """ - pts_reshape = pts.view(pts.shape[0], -1, 2, *pts.shape[2:]) - pts_y = pts_reshape[:, :, 0, ...] 
if y_first else pts_reshape[:, :, 1, - ...] - pts_x = pts_reshape[:, :, 1, ...] if y_first else pts_reshape[:, :, 0, - ...] - if self.transform_method == 'minmax': - bbox_left = pts_x.min(dim=1, keepdim=True)[0] - bbox_right = pts_x.max(dim=1, keepdim=True)[0] - bbox_up = pts_y.min(dim=1, keepdim=True)[0] - bbox_bottom = pts_y.max(dim=1, keepdim=True)[0] - bbox = torch.cat([bbox_left, bbox_up, bbox_right, bbox_bottom], - dim=1) - elif self.transform_method == 'partial_minmax': - pts_y = pts_y[:, :4, ...] - pts_x = pts_x[:, :4, ...] - bbox_left = pts_x.min(dim=1, keepdim=True)[0] - bbox_right = pts_x.max(dim=1, keepdim=True)[0] - bbox_up = pts_y.min(dim=1, keepdim=True)[0] - bbox_bottom = pts_y.max(dim=1, keepdim=True)[0] - bbox = torch.cat([bbox_left, bbox_up, bbox_right, bbox_bottom], - dim=1) - elif self.transform_method == 'moment': - pts_y_mean = pts_y.mean(dim=1, keepdim=True) - pts_x_mean = pts_x.mean(dim=1, keepdim=True) - pts_y_std = torch.std(pts_y - pts_y_mean, dim=1, keepdim=True) - pts_x_std = torch.std(pts_x - pts_x_mean, dim=1, keepdim=True) - moment_transfer = (self.moment_transfer * self.moment_mul) + ( - self.moment_transfer.detach() * (1 - self.moment_mul)) - moment_width_transfer = moment_transfer[0] - moment_height_transfer = moment_transfer[1] - half_width = pts_x_std * torch.exp(moment_width_transfer) - half_height = pts_y_std * torch.exp(moment_height_transfer) - bbox = torch.cat([ - pts_x_mean - half_width, pts_y_mean - half_height, - pts_x_mean + half_width, pts_y_mean + half_height - ], - dim=1) - else: - raise NotImplementedError - return bbox - - def gen_grid_from_reg(self, reg, previous_boxes): - """Base on the previous bboxes and regression values, we compute the - regressed bboxes and generate the grids on the bboxes. - - :param reg: the regression value to previous bboxes. - :param previous_boxes: previous bboxes. - :return: generate grids on the regressed bboxes. - """ - b, _, h, w = reg.shape - bxy = (previous_boxes[:, :2, ...] + previous_boxes[:, 2:, ...]) / 2. - bwh = (previous_boxes[:, 2:, ...] - - previous_boxes[:, :2, ...]).clamp(min=1e-6) - grid_topleft = bxy + bwh * reg[:, :2, ...] - 0.5 * bwh * torch.exp( - reg[:, 2:, ...]) - grid_wh = bwh * torch.exp(reg[:, 2:, ...]) - grid_left = grid_topleft[:, [0], ...] - grid_top = grid_topleft[:, [1], ...] - grid_width = grid_wh[:, [0], ...] - grid_height = grid_wh[:, [1], ...] - intervel = torch.linspace(0., 1., self.dcn_kernel).view( - 1, self.dcn_kernel, 1, 1).type_as(reg) - grid_x = grid_left + grid_width * intervel - grid_x = grid_x.unsqueeze(1).repeat(1, self.dcn_kernel, 1, 1, 1) - grid_x = grid_x.view(b, -1, h, w) - grid_y = grid_top + grid_height * intervel - grid_y = grid_y.unsqueeze(2).repeat(1, 1, self.dcn_kernel, 1, 1) - grid_y = grid_y.view(b, -1, h, w) - grid_yx = torch.stack([grid_y, grid_x], dim=2) - grid_yx = grid_yx.view(b, -1, h, w) - regressed_bbox = torch.cat([ - grid_left, grid_top, grid_left + grid_width, grid_top + grid_height - ], 1) - return grid_yx, regressed_bbox - - def forward(self, feats): - return multi_apply(self.forward_single, feats) - - def forward_single(self, x): - """Forward feature map of a single FPN level.""" - dcn_base_offset = self.dcn_base_offset.type_as(x) - # If we use center_init, the initial reppoints is from center points. - # If we use bounding bbox representation, the initial reppoints is - # from regular grid placed on a pre-defined bbox. 
- if self.use_grid_points or not self.center_init: - scale = self.point_base_scale / 2 - points_init = dcn_base_offset / dcn_base_offset.max() * scale - bbox_init = x.new_tensor([-scale, -scale, scale, - scale]).view(1, 4, 1, 1) - else: - points_init = 0 - cls_feat = x - pts_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - pts_feat = reg_conv(pts_feat) - # initialize reppoints - pts_out_init = self.reppoints_pts_init_out( - self.relu(self.reppoints_pts_init_conv(pts_feat))) - if self.use_grid_points: - pts_out_init, bbox_out_init = self.gen_grid_from_reg( - pts_out_init, bbox_init.detach()) - else: - pts_out_init = pts_out_init + points_init - # refine and classify reppoints - pts_out_init_grad_mul = (1 - self.gradient_mul) * pts_out_init.detach( - ) + self.gradient_mul * pts_out_init - dcn_offset = pts_out_init_grad_mul - dcn_base_offset - cls_out = self.reppoints_cls_out( - self.relu(self.reppoints_cls_conv(cls_feat, dcn_offset))) - pts_out_refine = self.reppoints_pts_refine_out( - self.relu(self.reppoints_pts_refine_conv(pts_feat, dcn_offset))) - if self.use_grid_points: - pts_out_refine, bbox_out_refine = self.gen_grid_from_reg( - pts_out_refine, bbox_out_init.detach()) - else: - pts_out_refine = pts_out_refine + pts_out_init.detach() - - if self.training: - return cls_out, pts_out_init, pts_out_refine - else: - return cls_out, self.points2bbox(pts_out_refine) - - def get_points(self, featmap_sizes, img_metas, device): - """Get points according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - - Returns: - tuple: points of each image, valid flags of each image - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # points center for one time - multi_level_points = self.prior_generator.grid_priors( - featmap_sizes, device=device, with_stride=True) - points_list = [[point.clone() for point in multi_level_points] - for _ in range(num_imgs)] - - # for each image, we compute valid flags of multi level grids - valid_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = self.prior_generator.valid_flags( - featmap_sizes, img_meta['pad_shape']) - valid_flag_list.append(multi_level_flags) - - return points_list, valid_flag_list - - def centers_to_bboxes(self, point_list): - """Get bboxes according to center points. - - Only used in :class:`MaxIoUAssigner`. 
- """ - bbox_list = [] - for i_img, point in enumerate(point_list): - bbox = [] - for i_lvl in range(len(self.point_strides)): - scale = self.point_base_scale * self.point_strides[i_lvl] * 0.5 - bbox_shift = torch.Tensor([-scale, -scale, scale, - scale]).view(1, 4).type_as(point[0]) - bbox_center = torch.cat( - [point[i_lvl][:, :2], point[i_lvl][:, :2]], dim=1) - bbox.append(bbox_center + bbox_shift) - bbox_list.append(bbox) - return bbox_list - - def offset_to_pts(self, center_list, pred_list): - """Change from point offset to point coordinate.""" - pts_list = [] - for i_lvl in range(len(self.point_strides)): - pts_lvl = [] - for i_img in range(len(center_list)): - pts_center = center_list[i_img][i_lvl][:, :2].repeat( - 1, self.num_points) - pts_shift = pred_list[i_lvl][i_img] - yx_pts_shift = pts_shift.permute(1, 2, 0).view( - -1, 2 * self.num_points) - y_pts_shift = yx_pts_shift[..., 0::2] - x_pts_shift = yx_pts_shift[..., 1::2] - xy_pts_shift = torch.stack([x_pts_shift, y_pts_shift], -1) - xy_pts_shift = xy_pts_shift.view(*yx_pts_shift.shape[:-1], -1) - pts = xy_pts_shift * self.point_strides[i_lvl] + pts_center - pts_lvl.append(pts) - pts_lvl = torch.stack(pts_lvl, 0) - pts_list.append(pts_lvl) - return pts_list - - def _point_target_single(self, - flat_proposals, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - stage='init', - unmap_outputs=True): - inside_flags = valid_flags - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample proposals - proposals = flat_proposals[inside_flags, :] - - if stage == 'init': - assigner = self.init_assigner - pos_weight = self.train_cfg.init.pos_weight - else: - assigner = self.refine_assigner - pos_weight = self.train_cfg.refine.pos_weight - assign_result = assigner.assign(proposals, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - sampling_result = self.sampler.sample(assign_result, proposals, - gt_bboxes) - - num_valid_proposals = proposals.shape[0] - bbox_gt = proposals.new_zeros([num_valid_proposals, 4]) - pos_proposals = torch.zeros_like(proposals) - proposals_weights = proposals.new_zeros([num_valid_proposals, 4]) - labels = proposals.new_full((num_valid_proposals, ), - self.num_classes, - dtype=torch.long) - label_weights = proposals.new_zeros( - num_valid_proposals, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - pos_gt_bboxes = sampling_result.pos_gt_bboxes - bbox_gt[pos_inds, :] = pos_gt_bboxes - pos_proposals[pos_inds, :] = proposals[pos_inds, :] - proposals_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of proposals - if unmap_outputs: - num_total_proposals = flat_proposals.size(0) - labels = unmap(labels, num_total_proposals, inside_flags) - label_weights = unmap(label_weights, num_total_proposals, - inside_flags) - bbox_gt = unmap(bbox_gt, num_total_proposals, inside_flags) - pos_proposals = unmap(pos_proposals, num_total_proposals, - inside_flags) - proposals_weights = unmap(proposals_weights, num_total_proposals, - inside_flags) - - return (labels, label_weights, bbox_gt, pos_proposals, - proposals_weights, pos_inds, neg_inds) - - def 
get_targets(self, - proposals_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - stage='init', - label_channels=1, - unmap_outputs=True): - """Compute corresponding GT box and classification targets for - proposals. - - Args: - proposals_list (list[list]): Multi level points/bboxes of each - image. - valid_flag_list (list[list]): Multi level valid flags of each - image. - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_bboxes_list (list[Tensor]): Ground truth labels of each box. - stage (str): `init` or `refine`. Generate target for init stage or - refine stage - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each level. # noqa: E501 - - bbox_gt_list (list[Tensor]): Ground truth bbox of each level. - - proposal_list (list[Tensor]): Proposals(points/bboxes) of each level. # noqa: E501 - - proposal_weights_list (list[Tensor]): Proposal weights of each level. # noqa: E501 - - num_total_pos (int): Number of positive samples in all images. # noqa: E501 - - num_total_neg (int): Number of negative samples in all images. # noqa: E501 - """ - assert stage in ['init', 'refine'] - num_imgs = len(img_metas) - assert len(proposals_list) == len(valid_flag_list) == num_imgs - - # points number of multi levels - num_level_proposals = [points.size(0) for points in proposals_list[0]] - - # concat all level points and flags to a single tensor - for i in range(num_imgs): - assert len(proposals_list[i]) == len(valid_flag_list[i]) - proposals_list[i] = torch.cat(proposals_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_labels, all_label_weights, all_bbox_gt, all_proposals, - all_proposal_weights, pos_inds_list, neg_inds_list) = multi_apply( - self._point_target_single, - proposals_list, - valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - stage=stage, - unmap_outputs=unmap_outputs) - # no valid points - if any([labels is None for labels in all_labels]): - return None - # sampled points of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - labels_list = images_to_levels(all_labels, num_level_proposals) - label_weights_list = images_to_levels(all_label_weights, - num_level_proposals) - bbox_gt_list = images_to_levels(all_bbox_gt, num_level_proposals) - proposals_list = images_to_levels(all_proposals, num_level_proposals) - proposal_weights_list = images_to_levels(all_proposal_weights, - num_level_proposals) - return (labels_list, label_weights_list, bbox_gt_list, proposals_list, - proposal_weights_list, num_total_pos, num_total_neg) - - def loss_single(self, cls_score, pts_pred_init, pts_pred_refine, labels, - label_weights, bbox_gt_init, bbox_weights_init, - bbox_gt_refine, bbox_weights_refine, stride, - num_total_samples_init, num_total_samples_refine): - # classification loss - labels = labels.reshape(-1) - label_weights = 
label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - cls_score = cls_score.contiguous() - loss_cls = self.loss_cls( - cls_score, - labels, - label_weights, - avg_factor=num_total_samples_refine) - - # points loss - bbox_gt_init = bbox_gt_init.reshape(-1, 4) - bbox_weights_init = bbox_weights_init.reshape(-1, 4) - bbox_pred_init = self.points2bbox( - pts_pred_init.reshape(-1, 2 * self.num_points), y_first=False) - bbox_gt_refine = bbox_gt_refine.reshape(-1, 4) - bbox_weights_refine = bbox_weights_refine.reshape(-1, 4) - bbox_pred_refine = self.points2bbox( - pts_pred_refine.reshape(-1, 2 * self.num_points), y_first=False) - normalize_term = self.point_base_scale * stride - loss_pts_init = self.loss_bbox_init( - bbox_pred_init / normalize_term, - bbox_gt_init / normalize_term, - bbox_weights_init, - avg_factor=num_total_samples_init) - loss_pts_refine = self.loss_bbox_refine( - bbox_pred_refine / normalize_term, - bbox_gt_refine / normalize_term, - bbox_weights_refine, - avg_factor=num_total_samples_refine) - return loss_cls, loss_pts_init, loss_pts_refine - - def loss(self, - cls_scores, - pts_preds_init, - pts_preds_refine, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - device = cls_scores[0].device - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - # target for initial stage - center_list, valid_flag_list = self.get_points(featmap_sizes, - img_metas, device) - pts_coordinate_preds_init = self.offset_to_pts(center_list, - pts_preds_init) - if self.train_cfg.init.assigner['type'] == 'PointAssigner': - # Assign target for center list - candidate_list = center_list - else: - # transform center list to bbox list and - # assign target for bbox list - bbox_list = self.centers_to_bboxes(center_list) - candidate_list = bbox_list - cls_reg_targets_init = self.get_targets( - candidate_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - stage='init', - label_channels=label_channels) - (*_, bbox_gt_list_init, candidate_list_init, bbox_weights_list_init, - num_total_pos_init, num_total_neg_init) = cls_reg_targets_init - num_total_samples_init = ( - num_total_pos_init + - num_total_neg_init if self.sampling else num_total_pos_init) - - # target for refinement stage - center_list, valid_flag_list = self.get_points(featmap_sizes, - img_metas, device) - pts_coordinate_preds_refine = self.offset_to_pts( - center_list, pts_preds_refine) - bbox_list = [] - for i_img, center in enumerate(center_list): - bbox = [] - for i_lvl in range(len(pts_preds_refine)): - bbox_preds_init = self.points2bbox( - pts_preds_init[i_lvl].detach()) - bbox_shift = bbox_preds_init * self.point_strides[i_lvl] - bbox_center = torch.cat( - [center[i_lvl][:, :2], center[i_lvl][:, :2]], dim=1) - bbox.append(bbox_center + - bbox_shift[i_img].permute(1, 2, 0).reshape(-1, 4)) - bbox_list.append(bbox) - cls_reg_targets_refine = self.get_targets( - bbox_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - stage='refine', - label_channels=label_channels) - (labels_list, label_weights_list, bbox_gt_list_refine, - candidate_list_refine, bbox_weights_list_refine, num_total_pos_refine, - num_total_neg_refine) = cls_reg_targets_refine - num_total_samples_refine = ( - num_total_pos_refine + - num_total_neg_refine if self.sampling else 
num_total_pos_refine) - - # compute loss - losses_cls, losses_pts_init, losses_pts_refine = multi_apply( - self.loss_single, - cls_scores, - pts_coordinate_preds_init, - pts_coordinate_preds_refine, - labels_list, - label_weights_list, - bbox_gt_list_init, - bbox_weights_list_init, - bbox_gt_list_refine, - bbox_weights_list_refine, - self.point_strides, - num_total_samples_init=num_total_samples_init, - num_total_samples_refine=num_total_samples_refine) - loss_dict_all = { - 'loss_cls': losses_cls, - 'loss_pts_init': losses_pts_init, - 'loss_pts_refine': losses_pts_refine - } - return loss_dict_all - - # Same as base_dense_head/_get_bboxes_single except self._bbox_decode - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image. RepPoints head does not need - this value. - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid, has shape - (num_priors, 2). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_score_list) == len(bbox_pred_list) - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - for level_idx, (cls_score, bbox_pred, priors) in enumerate( - zip(cls_score_list, bbox_pred_list, mlvl_priors)): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1)[:, :-1] - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. 
- results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, _, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - bboxes = self._bbox_decode(priors, bbox_pred, - self.point_strides[level_idx], - img_shape) - - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - - return self._bbox_post_process( - mlvl_scores, - mlvl_labels, - mlvl_bboxes, - img_meta['scale_factor'], - cfg, - rescale=rescale, - with_nms=with_nms) - - def _bbox_decode(self, points, bbox_pred, stride, max_shape): - bbox_pos_center = torch.cat([points[:, :2], points[:, :2]], dim=1) - bboxes = bbox_pred * stride + bbox_pos_center - x1 = bboxes[:, 0].clamp(min=0, max=max_shape[1]) - y1 = bboxes[:, 1].clamp(min=0, max=max_shape[0]) - x2 = bboxes[:, 2].clamp(min=0, max=max_shape[1]) - y2 = bboxes[:, 3].clamp(min=0, max=max_shape[0]) - decoded_bboxes = torch.stack([x1, y1, x2, y2], dim=-1) - return decoded_bboxes diff --git a/spaces/rorallitri/biomedical-language-models/logs/Adobe Acrobat Dc Keygen Where to Find and How to Use a Working Crack for Acrobat.md b/spaces/rorallitri/biomedical-language-models/logs/Adobe Acrobat Dc Keygen Where to Find and How to Use a Working Crack for Acrobat.md deleted file mode 100644 index ed0021148e050284b09552b31d2fa30eac2b28bd..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Adobe Acrobat Dc Keygen Where to Find and How to Use a Working Crack for Acrobat.md +++ /dev/null @@ -1,6 +0,0 @@ -
                  -

Not all key generators are illegal. Software developers sometimes distribute keygens with their own products for business purposes, for example when a particular program is bought by a large company.

                  -

                  Posted in Document Reader, Mac, Software. Tagged as Adobe Acrobat Pro DC 20.013.20064 Crack, Adobe Acrobat Pro DC 2020 Full Download, Adobe Acrobat Pro DC 2022 Crack, Adobe Acrobat Pro DC 2023 Activation Code, Adobe Acrobat Pro DC 2023 Crack, Adobe Acrobat Pro DC 2023 Crack Key, Adobe Acrobat Pro DC 21.001.20140 Crack, Adobe Acrobat Pro DC 21.011.20039 Crack, Adobe Acrobat Pro DC 22.001.20142 Crack, Adobe Acrobat Pro DC 22.003.20258 Crack, Adobe Acrobat Pro DC 22.003.20310 Crack, adobe acrobat pro dc activation code, Adobe Acrobat Pro DC Crack, Adobe Acrobat Pro DC Crack 2021, Adobe Acrobat Pro DC Crack 2022, Adobe Acrobat Pro DC Crack 2023 Free Download, Adobe Acrobat Pro DC Crack Mac, Adobe Acrobat Pro DC Crack Mac 2021, Adobe Acrobat Pro DC Crack Windows, Adobe Acrobat Pro DC Crack Windows 11, Adobe Acrobat Pro DC Key, Adobe Acrobat Pro DC keygen, Adobe Acrobat Pro DC Mac Crack, Adobe Acrobat Pro DC Serial Number, Adobe Acrobat Pro DC Torrent, Adobe Acrobat Pro Serial Key 2023

                  -

                  Adobe Acrobat Dc Keygen


                  DOWNLOADhttps://tinurll.com/2uzowm



                  aaccfb2cb3
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Atados A Una Estrella Claudia Celis Pdf 42.md b/spaces/rorallitri/biomedical-language-models/logs/Atados A Una Estrella Claudia Celis Pdf 42.md deleted file mode 100644 index 6267ee26ef74c04ce013602813a84a01f759f0de..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Atados A Una Estrella Claudia Celis Pdf 42.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  Atados A Una Estrella Claudia Celis Pdf 42


                  Download Filehttps://tinurll.com/2uznjh



                  -
-Digital book, PDF - (Institutional Collection 2019) ... Claudia Korol. ... their oppression insofar as they remain tied to motherhood ... 42 gender collection the legal academy. In turn, a certain reticence or resistance is noted on the part of ... Taken to court for guaranteeing the right to abortion: Estrella, ... "Mocha Celis". 4d29de3e1b
                  -
                  -
                  -

                  diff --git a/spaces/rorallitri/biomedical-language-models/logs/Bobby Kritical Official Drum Kit Rar __LINK__.md b/spaces/rorallitri/biomedical-language-models/logs/Bobby Kritical Official Drum Kit Rar __LINK__.md deleted file mode 100644 index 9cf724f2711ac9e98fdd03fd4ed349bc18435e25..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Bobby Kritical Official Drum Kit Rar __LINK__.md +++ /dev/null @@ -1,7 +0,0 @@ - -

This is the official release of the All-Star Band, recorded live at the Santa Monica Civic Auditorium, California, United States on the 15th of August 1989. The All-Star Band was founded by drummer/percussionist Lenny White and was made up of many of his former colleagues in the jazz and rock world. Lenny was the driving force behind the band: a teacher, writer and drummer, he had been teaching in a high school for several years at the time of the band's formation. The All-Star Band features some of the top musicians of the '80s and '90s, including guitar virtuoso Michael Landau, bassist/vocalist Marcus Miller, keyboardist/vocalist/producer Steve Winwood, and jazz/rock guitarist John Scofield, among others. "The All-Star Band" is a versatile group and can easily be categorized into a wide variety of genres.

                  -

This is a live recording of the October 20, 1967 performance at the New York jazz club, and it is one of the very best live recordings made. The band included Eric Dolphy on sax, Jimmy Garrison on guitar, Bobby Hutcherson on bass and Al Foster on drums. It is a very powerful performance. Dolphy was considered one of the great saxophonists of his time and had a powerful, exciting style. He was one of the first musicians to embrace the free jazz movement and was a force in the musical world.

                  -

                  Bobby Kritical Official Drum Kit rar


                  Download File ……… https://tinurll.com/2uzoj4



                  -

A recording from the Palais des Sports in Montreal, Canada, on April 6, 1974. This is a live recording of the tenor saxophonist John Coltrane. The band included Pharoah Sanders on tenor saxophone, Fred Hopkins on piano, Kenny Garrett on guitar, Dave Holland on bass and Elvin Jones on drums. This is a particularly interesting recording because the band is playing Coltrane's music at a time when it was not very popular. Coltrane was a great musician and a great alto saxophonist with a very good sound. This is a very good recording.
- Sneaky Pete Kleinow, 2001


                  899543212b
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Fightnightchampionpcdownload [EXCLUSIVE].md b/spaces/rorallitri/biomedical-language-models/logs/Fightnightchampionpcdownload [EXCLUSIVE].md deleted file mode 100644 index f7dfd0893a88e61faaef2019bb276dc505b0b47f..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Fightnightchampionpcdownload [EXCLUSIVE].md +++ /dev/null @@ -1,6 +0,0 @@ -

                  fightnightchampionpcdownload


                  Download Filehttps://tinurll.com/2uznkS



                  -
                  -Dead Space PS3 Download For USA And EUR With Fix DLC And Updates Full PKG ... Yet, if the DLC for 1 and 2 is released for the PC ports, I will The way you ... space 2, ea fight night round 3, ea fight night champion, sega all stars racing, ... 4d29de3e1b
                  -
                  -
                  -

                  diff --git a/spaces/rstallman/Mayfair-Partner-Music/setup.py b/spaces/rstallman/Mayfair-Partner-Music/setup.py deleted file mode 100644 index 78a172b7c90003b689bde40b49cc8fe1fb8107d4..0000000000000000000000000000000000000000 --- a/spaces/rstallman/Mayfair-Partner-Music/setup.py +++ /dev/null @@ -1,65 +0,0 @@ -""" - Copyright (c) Meta Platforms, Inc. and affiliates. - All rights reserved. - - This source code is licensed under the license found in the - LICENSE file in the root directory of this source tree. - -""" - -from pathlib import Path - -from setuptools import setup, find_packages - - -NAME = 'audiocraft' -DESCRIPTION = 'Audio research library for PyTorch' - -URL = 'https://github.com/fairinternal/audiocraft' -AUTHOR = 'FAIR Speech & Audio' -EMAIL = 'defossez@meta.com' -REQUIRES_PYTHON = '>=3.8.0' - -for line in open('audiocraft/__init__.py'): - line = line.strip() - if '__version__' in line: - context = {} - exec(line, context) - VERSION = context['__version__'] - -HERE = Path(__file__).parent - -try: - with open(HERE / "README.md", encoding='utf-8') as f: - long_description = '\n' + f.read() -except FileNotFoundError: - long_description = DESCRIPTION - -REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')] - -setup( - name=NAME, - version=VERSION, - description=DESCRIPTION, - author_email=EMAIL, - long_description=long_description, - long_description_content_type='text/markdown', - author=AUTHOR, - url=URL, - python_requires=REQUIRES_PYTHON, - install_requires=REQUIRED, - extras_require={ - 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'], - }, - packages=find_packages(), - package_data={'audiocraft': ['py.typed']}, - include_package_data=True, - license='MIT License', - classifiers=[ - # Trove classifiers - # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers - 'License :: OSI Approved :: MIT License', - 'Topic :: Multimedia :: Sound/Audio', - 'Topic :: Scientific/Engineering :: Artificial Intelligence', - ], -) diff --git a/spaces/rubberboy/stable-diffusion-webui/env_patch.py b/spaces/rubberboy/stable-diffusion-webui/env_patch.py deleted file mode 100644 index bd0e40dd64274ce8679905df4e1ca9ff454de06d..0000000000000000000000000000000000000000 --- a/spaces/rubberboy/stable-diffusion-webui/env_patch.py +++ /dev/null @@ -1,3 +0,0 @@ - -is_spaces = True if "SPACE_ID" in os.environ else False -is_shared_ui = True if "IS_SHARED_UI" in os.environ else False diff --git a/spaces/runa91/bite_gradio/src/lifting_to_3d/inn_model_for_shape.py b/spaces/runa91/bite_gradio/src/lifting_to_3d/inn_model_for_shape.py deleted file mode 100644 index 6ab7c1f18ca603a20406092bdd7163e370d17023..0000000000000000000000000000000000000000 --- a/spaces/runa91/bite_gradio/src/lifting_to_3d/inn_model_for_shape.py +++ /dev/null @@ -1,61 +0,0 @@ - - -from torch import distributions -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.utils.data import DataLoader -from torch.distributions import Normal -import numpy as np -import cv2 -import trimesh -from tqdm import tqdm -import warnings -warnings.filterwarnings("ignore", category=DeprecationWarning) -import FrEIA.framework as Ff -import FrEIA.modules as Fm - - -class INNForShape(nn.Module): - def __init__(self, n_betas, n_betas_limbs, k_tot=2, betas_scale=1.0, betas_limbs_scale=0.1): - super(INNForShape, self).__init__() - self.n_betas = n_betas - self.n_betas_limbs = n_betas_limbs - self.n_dim = n_betas + n_betas_limbs - self.betas_scale = betas_scale - 
self.betas_limbs_scale = betas_limbs_scale - self.k_tot = 2 - self.model_inn = self.build_inn_network(self.n_dim, k_tot=self.k_tot) - - def subnet_fc(self, c_in, c_out): - subnet = nn.Sequential(nn.Linear(c_in, 64), nn.ReLU(), - nn.Linear(64, 64), nn.ReLU(), - nn.Linear(64, c_out)) - return subnet - - def build_inn_network(self, n_input, k_tot=12, verbose=False): - coupling_block = Fm.RNVPCouplingBlock - nodes = [Ff.InputNode(n_input, name='input')] - for k in range(k_tot): - nodes.append(Ff.Node(nodes[-1], - coupling_block, - {'subnet_constructor':self.subnet_fc, 'clamp':2.0}, - name=F'coupling_{k}')) - nodes.append(Ff.Node(nodes[-1], - Fm.PermuteRandom, - {'seed':k}, - name=F'permute_{k}')) - nodes.append(Ff.OutputNode(nodes[-1], name='output')) - model = Ff.ReversibleGraphNet(nodes, verbose=verbose) - return model - - def forward(self, latent_rep): - shape, _ = self.model_inn(latent_rep, rev=False, jac=False) - betas = shape[:, :self.n_betas]*self.betas_scale - betas_limbs = shape[:, self.n_betas:]*self.betas_limbs_scale - return betas, betas_limbs - - def reverse(self, betas, betas_limbs): - shape = torch.cat((betas/self.betas_scale, betas_limbs/self.betas_limbs_scale), dim=1) - latent_rep, _ = self.model_inn(shape, rev=True, jac=False) - return latent_rep \ No newline at end of file diff --git a/spaces/ruslanmv/Text2Lip/app.py b/spaces/ruslanmv/Text2Lip/app.py deleted file mode 100644 index fe5440f78b5e5032d0b1c3020729e21144bde0a5..0000000000000000000000000000000000000000 --- a/spaces/ruslanmv/Text2Lip/app.py +++ /dev/null @@ -1,312 +0,0 @@ -import gradio as gr -import os -import sys -#Installation of libraries -EC2_INSTANCE = False -if EC2_INSTANCE : os.system('cd scripts && sh install.sh') -os.system('python installation.py') -TTS_PATH = "TTS/" -# add libraries into environment -sys.path.append(TTS_PATH) # set this if TTS is not installed globally -VOICE_PATH = "utils/" -# add libraries into environment -sys.path.append(VOICE_PATH) # set this if modules and voice are not installed globally -from utils.voice import * -# Modules for the Video Messsage Generator From Youtube -from IPython.display import HTML, Audio -from base64 import b64decode -import numpy as np -from scipy.io.wavfile import read as wav_read -import io -import ffmpeg -from pytube import YouTube -import random -from subprocess import call -from datetime import datetime - -Sagemaker = False -if Sagemaker : - env='source activate python3 && conda activate VideoMessage &&' -else: - env='' - -def time_between(t1, t2): - FMT = '%H:%M:%S' - t1 = datetime.strptime(t1, FMT) - t2 = datetime.strptime(t2, FMT) - delta = t2 - t1 - return str(delta) - -def download_video(url): - - print("Downloading...") - local_file = ( - YouTube(url) - .streams.filter(progressive=True, file_extension="mp4") - .first() - .download(filename="youtube{}.mp4".format(random.randint(0, 10000))) - ) - print("Downloaded") - return local_file - - - -def download_youtube(url): - #Select a Youtube Video - #find youtube video id - from urllib import parse as urlparse - url_data = urlparse.urlparse(url) - query = urlparse.parse_qs(url_data.query) - YOUTUBE_ID = query["v"][0] - url_download ="https://www.youtube.com/watch?v={}".format(YOUTUBE_ID) - # download the youtube with the given ID - os.system("{} youtube-dl -f mp4 --output youtube.mp4 '{}'".format(env,url_download)) - return "youtube.mp4" - - - -def cleanup(): - import pathlib - import glob - types = ('*.mp4','*.mp3', '*.wav') # the tuple of file types - #Finding mp4 and wave files - junks = [] - for 
files in types: - junks.extend(glob.glob(files)) - try: - # Deleting those files - for junk in junks: - print("Deleting",junk) - # Setting the path for the file to delete - file = pathlib.Path(junk) - # Calling the unlink method on the path - file.unlink() - except Exception: - print("I cannot delete the file because it is being used by another process") - - -def clean_data(): - # importing all necessary libraries - import sys, os - # initial directory - home_dir = os.getcwd() - # some non existing directory - fd = 'sample_data/' - # Join various path components - path_to_clean=os.path.join(home_dir,fd) - print("Path to clean:",path_to_clean) - # trying to insert to false directory - try: - os.chdir(path_to_clean) - print("Inside to clean", os.getcwd()) - cleanup() - # Caching the exception - except: - print("Something wrong with specified\ - directory. Exception- ", sys.exc_info()) - # handling with finally - finally: - print("Restoring the path") - os.chdir(home_dir) - print("Current directory is-", os.getcwd()) - -def youtube_trim(url,start,end): - #cancel previous youtube - cleanup() - #download youtube - #download_youtube(url) # with youtube-dl (slow) - input_videos=download_video(url) - # Get the current working directory - parent_dir = os.getcwd() - # Trim the video (start, end) seconds - start = start - end = end - #Note: the trimmed video must have face on all frames - #interval = end - start - interval = time_between(start, end) - #trimmed_video= parent_dir+'/sample_data/input_vid{}.mp4'.format(random.randint(0, 10000)) - #trimmed_audio= parent_dir+'/sample_data/input_audio{}.mp3'.format(random.randint(0, 10000)) - trimmed_video= parent_dir+'/sample_data/input_video.mp4' - trimmed_audio= parent_dir+'/sample_data/input_audio.mp3' - #delete trimmed if already exits - clean_data() - # cut the video - call(["ffmpeg","-y","-i",input_videos,"-ss", start,"-t",interval,"-async","1",trimmed_video]) - #!ffmpeg -y -i youtube.mp4 -ss {start} -t {interval} -async 1 {trimmed_video} - # cut the audio - call(["ffmpeg","-i",trimmed_video, "-q:a", "0", "-map","a",trimmed_audio]) - #Preview trimmed video - #clear_output() - print("Trimmed Video+Audio") - return trimmed_video, trimmed_audio - -def create_video(Text,Voicetoclone): - out_audio=greet(Text,Voicetoclone) - current_dir=os.getcwd() - clonned_audio = os.path.join(current_dir, out_audio) - - #Start Crunching and Preview Output - #Note: Only change these, if you have to - pad_top = 0#@param {type:"integer"} - pad_bottom = 10#@param {type:"integer"} - pad_left = 0#@param {type:"integer"} - pad_right = 0#@param {type:"integer"} - rescaleFactor = 1#@param {type:"integer"} - nosmooth = False #@param {type:"boolean"} - - out_name ="result_voice.mp4" - out_file="../"+out_name - - if nosmooth == False: - is_command_ok = os.system('{} cd Wav2Lip && python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face "../sample_data/input_video.mp4" --audio "../out/clonned_audio.wav" --outfile {} --pads {} {} {} {} --resize_factor {}'.format(env,out_file,pad_top ,pad_bottom ,pad_left ,pad_right ,rescaleFactor)) - else: - is_command_ok = os.system('{} cd Wav2Lip && python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face "../sample_data/input_video.mp4" --audio "../out/clonned_audio.wav" --outfile {} --pads {} {} {} {} --resize_factor {} --nosmooth'.format(env,out_file,pad_top ,pad_bottom ,pad_left ,pad_right ,rescaleFactor)) - - if is_command_ok > 0: - print("Error : Ensure the video contains a face in all the frames.") - 
out_file="./demo/tryagain1.mp4" - return out_file - else: - print("OK") - #clear_output() - print("Creation of video done!") - return out_name - - -def time_format_check(input1): - timeformat = "%H:%M:%S" - #input1 = input("At what time did sensor 1 actuate? ") - try: - validtime = datetime.strptime(input1, timeformat) - print("The time format is valid", input1) - #Do your logic with validtime, which is a valid format - return False - except ValueError: - print("The time {} has not valid format hh:mm:ss".format(input1)) - return True - - -def to_seconds(datetime_obj): - from datetime import datetime - time =datetime_obj - date_time = datetime.strptime(time, "%H:%M:%S") - a_timedelta = date_time - datetime(1900, 1, 1) - seconds = a_timedelta.total_seconds() - return seconds - - -def validate_youtube(url): - #This creates a youtube objet - try: - yt = YouTube(url) - except Exception: - print("Hi there URL seems invalid") - return True, 0 - #This will return the length of the video in sec as an int - video_length = yt.length - if video_length > 600: - print("Your video is larger than 10 minutes") - return True, video_length - else: - print("Your video is less than 10 minutes") - return False, video_length - - -def video_generator(text_to_say,url,initial_time,final_time): - print('Checking the url',url) - check1, video_length = validate_youtube(url) - if check1 is True: return "./demo/tryagain2.mp4" - check2 = validate_time(initial_time,final_time, video_length) - if check2 is True: return "./demo/tryagain0.mp4" - trimmed_video, trimmed_audio=youtube_trim(url,initial_time,final_time) - voicetoclone=trimmed_audio - print(voicetoclone) - outvideo=create_video(text_to_say,voicetoclone) - #Preview output video - print("Final Video Preview") - final_video= parent_dir+'/'+outvideo - print("DONE") - #showVideo(final_video) - return final_video - - -def validate_time(initial_time,final_time,video_length): - is_wrong1=time_format_check(initial_time) - is_wrong2=time_format_check(final_time) - #print(is_wrong1,is_wrong2) - if is_wrong1 is False and is_wrong2 is False: - delta=time_between(initial_time,final_time) - if len(str(delta)) > 8: - print("Final Time is Smaller than Initial Time: t1>t2") - is_wrong = True - return is_wrong - else: - print("OK") - is_wrong=False - if int(to_seconds(delta)) > 300 : - print("The trim is larger than 5 minutes") - is_wrong = True - return is_wrong - - elif int(to_seconds(delta)) > video_length : - print("The trim is larger than video lenght") - is_wrong = True - return is_wrong - else: - return is_wrong - - else: - print("Your time format is invalid") - is_wrong = True - return is_wrong - - -#Definition Web App in Gradio -text_to_say=gr.inputs.Textbox(label='What would you like the voice to say? (max. 2000 characters per request)') -url =gr.inputs.Textbox(label = "Enter the YouTube URL below:") -initial_time = gr.inputs.Textbox(label='Initial time of trim? (format: hh:mm:ss)') -final_time= gr.inputs.Textbox(label='Final time to trim? (format: hh:mm:ss)') -gr.Interface(fn = video_generator, - inputs = [text_to_say,url,initial_time,final_time], - outputs = 'video', - verbose = True, - title = 'Video Speech Generator from Youtube Videos', - description = 'A simple application that replaces the original speech of the video by your text. Wait one minute to process.', - article = - '''
                  -

- All you need to do is to paste the Youtube link and - set the initial time and final time of the real speech. - (The limit of the trim is 5 minutes and not larger than the video length) - hit submit, then wait for compiling. - After that click on Play/Pause for listening to the video. - The video is saved in an mp4 format. - For more information visit ruslanmv.com -

                  -
                  ''', - enable_queue=True, - examples = [['I am clonning your voice. Charles!. Machine intelligence is the last invention that humanity will ever need to make.', - "https://www.youtube.com/watch?v=xw5dvItD5zY", - "00:00:01","00:00:10"], - ['I am clonning your voice. Jim Carrey!. Machine intelligence is the last invention that humanity will ever need to make.', - "https://www.youtube.com/watch?v=uIaY0l5qV0c", - "00:00:29", "00:01:05"], - ['I am clonning your voice. Mark Zuckerberg!. Machine intelligence is the last invention that humanity will ever need to make.', - "https://www.youtube.com/watch?v=AYjDIFrY9rc", - "00:00:11", "00:00:44"], - ['I am clonning your voice. Ronald Reagan!. Machine intelligence is the last invention that humanity will ever need to make.', - "https://www.youtube.com/watch?v=iuoRDY9c5SQ", - "00:01:03", "00:01:22"], - ['I am clonning your voice. Elon Musk!. Machine intelligence is the last invention that humanity will ever need to make.', - "https://www.youtube.com/watch?v=IZ8JQ_1gytg", - "00:00:10", "00:00:43"], - ['I am clonning your voice. Hitler!. Machine intelligence is the last invention that humanity will ever need to make.', - "https://www.youtube.com/watch?v=F08wrLyH5cs", - "00:00:15", "00:00:40"], - ['I am clonning your voice. Alexandria!. Machine intelligence is the last invention that humanity will ever need to make.', - "https://www.youtube.com/watch?v=Eht6oIkzkew", - "00:00:02", "00:00:30"], - ], - allow_flagging=False - ).launch() - diff --git a/spaces/russellc/BLIP/data/nocaps_dataset.py b/spaces/russellc/BLIP/data/nocaps_dataset.py deleted file mode 100644 index ba0bed06d8af3dbaccf18a56e725f101e585503e..0000000000000000000000000000000000000000 --- a/spaces/russellc/BLIP/data/nocaps_dataset.py +++ /dev/null @@ -1,32 +0,0 @@ -import os -import json - -from torch.utils.data import Dataset -from torchvision.datasets.utils import download_url - -from PIL import Image - -class nocaps_eval(Dataset): - def __init__(self, transform, image_root, ann_root, split): - urls = {'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/nocaps_val.json', - 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/nocaps_test.json'} - filenames = {'val':'nocaps_val.json','test':'nocaps_test.json'} - - download_url(urls[split],ann_root) - - self.annotation = json.load(open(os.path.join(ann_root,filenames[split]),'r')) - self.transform = transform - self.image_root = image_root - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - ann = self.annotation[index] - - image_path = os.path.join(self.image_root,ann['image']) - image = Image.open(image_path).convert('RGB') - image = self.transform(image) - - return image, int(ann['img_id']) \ No newline at end of file diff --git a/spaces/sarinam/speaker-anonymization-gan/IMSToucan/Layers/ResidualStack.py b/spaces/sarinam/speaker-anonymization-gan/IMSToucan/Layers/ResidualStack.py deleted file mode 100644 index 8bfe256efbecd5d24eba743ae8f3ff0a2bb604c2..0000000000000000000000000000000000000000 --- a/spaces/sarinam/speaker-anonymization-gan/IMSToucan/Layers/ResidualStack.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) -# Adapted by Florian Lux 2021 - - -import torch - - -class ResidualStack(torch.nn.Module): - - def __init__(self, kernel_size=3, channels=32, dilation=1, bias=True, nonlinear_activation="LeakyReLU", nonlinear_activation_params={"negative_slope": 0.2}, - 
pad="ReflectionPad1d", pad_params={}, ): - """ - Initialize ResidualStack module. - - Args: - kernel_size (int): Kernel size of dilation convolution layer. - channels (int): Number of channels of convolution layers. - dilation (int): Dilation factor. - bias (bool): Whether to add bias parameter in convolution layers. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. - - """ - super(ResidualStack, self).__init__() - - # defile residual stack part - assert (kernel_size - 1) % 2 == 0, "Not support even number kernel size." - self.stack = torch.nn.Sequential(getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - getattr(torch.nn, pad)((kernel_size - 1) // 2 * dilation, **pad_params), - torch.nn.Conv1d(channels, channels, kernel_size, dilation=dilation, bias=bias), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - torch.nn.Conv1d(channels, channels, 1, bias=bias), ) - - # defile extra layer for skip connection - self.skip_layer = torch.nn.Conv1d(channels, channels, 1, bias=bias) - - def forward(self, c): - """ - Calculate forward propagation. - - Args: - c (Tensor): Input tensor (B, channels, T). - - Returns: - Tensor: Output tensor (B, chennels, T). - - """ - return self.stack(c) + self.skip_layer(c) diff --git a/spaces/scedlatioru/img-to-music/example/Encase Forensic V7 Crack TOP.iso.md b/spaces/scedlatioru/img-to-music/example/Encase Forensic V7 Crack TOP.iso.md deleted file mode 100644 index 8c8f0f54809b20c9610ded31059664909842cd9f..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Encase Forensic V7 Crack TOP.iso.md +++ /dev/null @@ -1,26 +0,0 @@ -

                  Encase Forensic V7 Crack.iso


                  DOWNLOADhttps://gohhs.com/2uEA7r



                  -
                  -.iso - -Mac OS X comes in various flavors, which varies the kind of hardware you'll use to run it on. That's why installing OS X 10.8.5 on a MacBook Air (released in 2012) is a lot different than on a 15-inch MacBook Pro (released in 2010). To run OS X on a Mac, you'll need at least a 1,4 GHz Intel Core 2 Duo processor. - -You'll also need a screen that can display at least 1366 by 768 resolution (the resolution of the 2012 MacBook Air). If your MacBook Air has a screen that displays at 1152 by 720 resolution, you'll need to use a tool called "Out of Sync". - -What if you already have the latest OS X installed? If you're on the home stretch to finishing up the latest Mac OS X installer.iso, you'll need to use a tool called "mover" to transfer the files from one Mac to another. - -Yes, that's right. You can use the.iso to create a bootable external drive (USB or FireWire) that you can use to reinstall your Mac. - -So, What Should You Do? - -Now that you know what you need to do, you should consider the following: - -Use Time Machine to save files: If you have a second drive for backup, you should always use Time Machine to back up your files. This is also an important step for Apple users who want to get the full benefit of using Apple's Time Machine feature. - -If you have a second drive for backup, you should always use Time Machine to back up your files. This is also an important step for Apple users who want to get the full benefit of using Apple's Time Machine feature. Make a bootable.iso drive: If you don't have a second drive for backup, the next best thing is to use your.iso to create a bootable external drive (USB or FireWire). - -If you don't have a second drive for backup, the next best thing is to use your.iso to create a bootable external drive (USB or FireWire). Back up your files: Use a combination of the.iso, Time Machine, and bootable external drive to get the best results possible. - -Use a combination of the.iso, Time Machine, and bootable external drive to get the best results possible. Install OS X on more than one Mac: As you'll see in the next steps, you 4fefd39f24
                  -
                  -
                  -

                  diff --git a/spaces/scedlatioru/img-to-music/example/Ferrari Ki Sawaari Movie Download In Hindi 720p Torrent.md b/spaces/scedlatioru/img-to-music/example/Ferrari Ki Sawaari Movie Download In Hindi 720p Torrent.md deleted file mode 100644 index 95ea6e3edb5742aeab13b6bebde6564cfdd15d4f..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Ferrari Ki Sawaari Movie Download In Hindi 720p Torrent.md +++ /dev/null @@ -1,62 +0,0 @@ -

                  Ferrari Ki Sawaari Movie Download In Hindi 720p Torrent


                  Download Ziphttps://gohhs.com/2uEzqA



                  -
                  -Director: K.S.S. Rajkumar, Producer: K.S.S. Murugan) - -Cast - -Prithviraj as Surya - -Nandini as Kanchanamala - -Nassar as Janardhan Rao - -Sukanya as Sangeeta - -Pradeep - -Chandra Mohan as Security guard - -Kota Karuppu - -Anu Vardhan - -Naga Kannan - -Master Ravi - -Devan - -Production - -Thalaivan was originally planned to be directed by Ramana, but the two fell out, and Ramana took on the role of music director and assistant director for the film. The film was later titled K.S.S. Rajkumar. The film was shot at Medchal, Hyderabad. - -Soundtrack - -The soundtrack was composed by Ilaiyaraaja. The song "Vaari Naaga Varagal" is set to a tune from "Vaani Kodum" from the Tamil film Iruvar (1989). Lyrics were written by Vairamuthu. The song "Melethaal Kumbin kumaran" is set to tune of "Kanna Ennaya", and was sung by Kumar Sanu and Sujatha Mohan. - -Release - -Critical reception - -The film received mixed reviews. The Times of India rated the film 2 out of 5 stars and wrote, "Thalaivan is poorly scripted and, more importantly, bad at narrative. There are way too many characters which means some vital scenes are repeated. The sequences on Kanchanamala's home are funny but the rest of the film is disjointed." Rediff gave the film 3 out of 5 stars, stating that it has lots of "wonderful visuals". Deccan Herald rated the film 1.5 out of 5 stars and wrote that the movie was a "bloated, boring yarn". - -References - -External links - -Category:Indian films - -Category:2010s Malayalam-language films - -Category:2010s action films - -Category:2010s romance films - -Category:2010s sports films - -Category:Malayalam films remade in other languagesGreece has been thrown into a fresh political crisis after a second general strike by public sector workers and a sit-in by students. - -Demonstrators - some of whom are occupying central Athens schools - are calling for the resignation of the prime minister, Alexis Ts 4fefd39f24
                  -
                  -
                  -

                  diff --git a/spaces/scedlatioru/img-to-music/example/Intergraph CADWorx 2014rar.md b/spaces/scedlatioru/img-to-music/example/Intergraph CADWorx 2014rar.md deleted file mode 100644 index ea0a2e0edf2b2eab110b0d7ae4cc398a4fb3fe86..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Intergraph CADWorx 2014rar.md +++ /dev/null @@ -1,33 +0,0 @@ - -```html -

                  How to Install Intergraph CADWorx 2014rar on Your PC

                  -

                  Intergraph CADWorx is a plant design software suite that streamlines your plant design and engineering process. With intuitive P&ID and industrial plant layout options, CADWorx is a robust CAD software that automates design deliverables. In this article, we will show you how to install Intergraph CADWorx 2014rar on your PC.

                  -

                  Intergraph CADWorx 2014rar


                  DOWNLOAD ✔✔✔ https://gohhs.com/2uEzrm



                  -
                    -
                  1. Download Intergraph CADWorx 2014rar from the official website or a trusted source. The file size is about 1.2 GB and it may take some time to download depending on your internet speed.
                  2. -
                  3. Extract the rar file using a software like WinRAR or 7-Zip. You will get a folder named "Intergraph CADWorx 2014" with several subfolders and files inside.
                  4. -
                  5. Run the setup.exe file as administrator. Follow the instructions on the screen to install Intergraph CADWorx 2014 on your PC. You will need to enter your license key and select the components you want to install.
                  6. -
                  7. After the installation is complete, you can launch Intergraph CADWorx 2014 from the start menu or the desktop shortcut. You will need to activate your product online or offline using your license key.
                  8. -
                  9. Enjoy using Intergraph CADWorx 2014 for your plant design and engineering projects.
                  10. -
                  -

                  Intergraph CADWorx 2014 is compatible with AutoCAD 2014 and later versions. It also supports other formats like DWG, DGN, PDF, and more. You can learn more about Intergraph CADWorx 2014 by visiting the official website or reading the user manual.

                  -``` - -```html -

                  Features and Benefits of Intergraph CADWorx 2014

                  -

Intergraph CADWorx 2014 is not only a powerful and easy-to-use plant design package, but also a comprehensive solution that covers many aspects of plant design and engineering. Here are some of the features and benefits of Intergraph CADWorx 2014 that make it stand out from other software on the market.

                  -
                    -
                  • Intelligent Piping Assemblies: You can create and store custom piping assemblies that include components, routing rules, and dimensions. You can then drag and drop these assemblies into your model, saving time and ensuring consistency.
                  • -
                  • Piping Connection Branch Tables: You can define branch rules for different piping specifications, such as size, angle, and gap. This eliminates the guesswork in creating branch connections and ensures compliance with company or project standards.
                  • -
                  • Rules-Based Piping Design: You can apply design standards that control how the piping system is built, such as minimum spacing, maximum bend radius, and allowable fittings. You can also override these rules in special cases, giving you the flexibility to produce the best possible design.
                  • -
                  • Equipment Modeling: You can model equipment using parametric shapes or import equipment models from other software. You can also link equipment data to external databases or spreadsheets, ensuring data accuracy and integrity.
                  • -
                  • Steel Modeling: You can model structural steel using standard shapes or custom profiles. You can also create steel connections, stairs, ladders, handrails, and gratings with ease.
                  • -
                  • P&ID Synchronization: You can link your 3D model to your P&ID drawings using bi-directional data exchange. This ensures consistency between your design and documentation, and allows you to detect and resolve any discrepancies.
                  • -
                  • Isometric Generation: You can generate isometric drawings from your 3D model using predefined or customized templates. You can also add annotations, dimensions, notes, and bills of material to your isometrics.
                  • -
                  • Orthographic Generation: You can generate orthographic drawings from your 3D model using predefined or customized templates. You can also add annotations, dimensions, notes, and bills of material to your orthographics.
                  • -
                  • BIM Integration: You can export your 3D model to BIM software such as Revit or Navisworks using industry-standard formats such as IFC or NWD. This enables you to collaborate with other disciplines and stakeholders in a BIM environment.
                  • -
                  -

                  Intergraph CADWorx 2014 is a complete solution for plant design and engineering that offers unparalleled productivity, accuracy, and flexibility. To learn more about Intergraph CADWorx 2014, visit the official website or request a free trial.

                  -

                  d5da3c52bf
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/sdhsdhk/bingosjj/src/lib/bots/bing/utils.ts b/spaces/sdhsdhk/bingosjj/src/lib/bots/bing/utils.ts deleted file mode 100644 index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingosjj/src/lib/bots/bing/utils.ts +++ /dev/null @@ -1,87 +0,0 @@ -import { ChatResponseMessage, BingChatResponse } from './types' - -export function convertMessageToMarkdown(message: ChatResponseMessage): string { - if (message.messageType === 'InternalSearchQuery') { - return message.text - } - for (const card of message.adaptiveCards??[]) { - for (const block of card.body) { - if (block.type === 'TextBlock') { - return block.text - } - } - } - return '' -} - -const RecordSeparator = String.fromCharCode(30) - -export const websocketUtils = { - packMessage(data: any) { - return `${JSON.stringify(data)}${RecordSeparator}` - }, - unpackMessage(data: string | ArrayBuffer | Blob) { - if (!data) return {} - return data - .toString() - .split(RecordSeparator) - .filter(Boolean) - .map((s) => { - try { - return JSON.parse(s) - } catch (e) { - return {} - } - }) - }, -} - -export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise { - const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`, - { - method: 'HEAD', - headers, - redirect: 'manual' - }, - ); - - if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) { - throw new Error('请求异常,请检查 cookie 是否有效') - } - - const resultId = RegExp.$1; - let count = 0 - const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`; - - do { - await sleep(3000); - const content = await fetch(imageThumbUrl, { headers, method: 'GET' }) - - // @ts-ignore - if (content.headers.get('content-length') > 1) { - const text = await content.text() - return (text?.match(/ target?.split('src="').pop()?.replace(/&/g, '&')) - .map(img => `![${prompt}](${img})`).join(' ') - } - } while(count ++ < 10); -} - - -export async function* streamAsyncIterable(stream: ReadableStream) { - const reader = stream.getReader() - try { - while (true) { - const { done, value } = await reader.read() - if (done) { - return - } - yield value - } - } finally { - reader.releaseLock() - } -} - -export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms)) - diff --git a/spaces/sdhsdhk/bingosjj/src/lib/hooks/use-bing.ts b/spaces/sdhsdhk/bingosjj/src/lib/hooks/use-bing.ts deleted file mode 100644 index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingosjj/src/lib/hooks/use-bing.ts +++ /dev/null @@ -1,173 +0,0 @@ -'use client' - -import { useState, useCallback, useEffect, useMemo } from 'react' -import { useAtom, useAtomValue } from 'jotai' -import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state' -import { setConversationMessages } from './chat-history' -import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types' -import { nanoid } from '../utils' -import { TTS } from '../bots/bing/tts' - -export function useBing(botId: BotId = 'bing') { - const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId]) - const [enableTTS] = useAtom(voiceAtom) - const speaker = useMemo(() => new 
TTS(), []) - const [hash, setHash] = useAtom(hashAtom) - const bingConversationStyle = useAtomValue(bingConversationStyleAtom) - const [chatState, setChatState] = useAtom(chatAtom) - const [input, setInput] = useState('') - const [attachmentList, setAttachmentList] = useState([]) - - const updateMessage = useCallback( - (messageId: string, updater: (message: ChatMessageModel) => void) => { - setChatState((draft) => { - const message = draft.messages.find((m) => m.id === messageId) - if (message) { - updater(message) - } - }) - }, - [setChatState], - ) - - const sendMessage = useCallback( - async (input: string, options = {}) => { - const botMessageId = nanoid() - const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined - setChatState((draft) => { - const text = imageUrl ? `${input}\n\n![image](${imageUrl})` : input - draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' }) - setAttachmentList([]) - }) - const abortController = new AbortController() - setChatState((draft) => { - draft.generatingMessageId = botMessageId - draft.abortController = abortController - }) - speaker.reset() - await chatState.bot.sendMessage({ - prompt: input, - imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? `https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl, - options: { - ...options, - bingConversationStyle, - }, - signal: abortController.signal, - onEvent(event) { - if (event.type === 'UPDATE_ANSWER') { - updateMessage(botMessageId, (message) => { - if (event.data.text.length > message.text.length) { - message.text = event.data.text - } - - if (event.data.spokenText && enableTTS) { - speaker.speak(event.data.spokenText) - } - - message.throttling = event.data.throttling || message.throttling - message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions - message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses - }) - } else if (event.type === 'ERROR') { - updateMessage(botMessageId, (message) => { - message.error = event.error - }) - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } else if (event.type === 'DONE') { - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } - }, - }) - }, - [botId, attachmentList, chatState.bot, setChatState, updateMessage], - ) - - const uploadImage = useCallback(async (imgUrl: string) => { - setAttachmentList([{ url: imgUrl, status: 'loading' }]) - const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle) - if (response?.blobId) { - setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }]) - } else { - setAttachmentList([{ url: imgUrl, status: 'error' }]) - } - }, [chatState.bot]) - - const resetConversation = useCallback(() => { - chatState.bot.resetConversation() - speaker.abort() - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }] - draft.conversationId = nanoid() - }) - }, [chatState.bot, setChatState]) - - const stopGenerating = useCallback(() => { - chatState.abortController?.abort() - if (chatState.generatingMessageId) { - updateMessage(chatState.generatingMessageId, (message) => { - if (!message.text && !message.error) { - message.text = 'Cancelled' - } - }) - } - setChatState((draft) => { - 
draft.generatingMessageId = '' - }) - }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage]) - - useEffect(() => { - if (chatState.messages.length) { - setConversationMessages(botId, chatState.conversationId, chatState.messages) - } - }, [botId, chatState.conversationId, chatState.messages]) - - useEffect(() => { - if (hash === 'reset') { - resetConversation() - setHash('') - } - }, [hash, setHash]) - - const chat = useMemo( - () => ({ - botId, - bot: chatState.bot, - isSpeaking: speaker.isSpeaking, - messages: chatState.messages, - sendMessage, - setInput, - input, - resetConversation, - generating: !!chatState.generatingMessageId, - stopGenerating, - uploadImage, - setAttachmentList, - attachmentList, - }), - [ - botId, - bingConversationStyle, - chatState.bot, - chatState.generatingMessageId, - chatState.messages, - speaker.isSpeaking, - setInput, - input, - setAttachmentList, - attachmentList, - resetConversation, - sendMessage, - stopGenerating, - ], - ) - - return chat -} diff --git a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/data/datasets/register_mapillary_vistas.py b/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/data/datasets/register_mapillary_vistas.py deleted file mode 100644 index ce3874b65d943c333d093abd6998500f8a3775f5..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/data/datasets/register_mapillary_vistas.py +++ /dev/null @@ -1,507 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets import load_sem_seg - -MAPILLARY_VISTAS_SEM_SEG_CATEGORIES = [ - { - "color": [165, 42, 42], - "instances": True, - "readable": "Bird", - "name": "animal--bird", - "evaluate": True, - }, - { - "color": [0, 192, 0], - "instances": True, - "readable": "Ground Animal", - "name": "animal--ground-animal", - "evaluate": True, - }, - { - "color": [196, 196, 196], - "instances": False, - "readable": "Curb", - "name": "construction--barrier--curb", - "evaluate": True, - }, - { - "color": [190, 153, 153], - "instances": False, - "readable": "Fence", - "name": "construction--barrier--fence", - "evaluate": True, - }, - { - "color": [180, 165, 180], - "instances": False, - "readable": "Guard Rail", - "name": "construction--barrier--guard-rail", - "evaluate": True, - }, - { - "color": [90, 120, 150], - "instances": False, - "readable": "Barrier", - "name": "construction--barrier--other-barrier", - "evaluate": True, - }, - { - "color": [102, 102, 156], - "instances": False, - "readable": "Wall", - "name": "construction--barrier--wall", - "evaluate": True, - }, - { - "color": [128, 64, 255], - "instances": False, - "readable": "Bike Lane", - "name": "construction--flat--bike-lane", - "evaluate": True, - }, - { - "color": [140, 140, 200], - "instances": True, - "readable": "Crosswalk - Plain", - "name": "construction--flat--crosswalk-plain", - "evaluate": True, - }, - { - "color": [170, 170, 170], - "instances": False, - "readable": "Curb Cut", - "name": "construction--flat--curb-cut", - "evaluate": True, - }, - { - "color": [250, 170, 160], - "instances": False, - "readable": "Parking", - "name": "construction--flat--parking", - "evaluate": True, - }, - { - "color": [96, 96, 96], - "instances": False, - "readable": "Pedestrian Area", - "name": "construction--flat--pedestrian-area", - "evaluate": True, - }, - { - "color": [230, 150, 140], - "instances": False, - 
"readable": "Rail Track", - "name": "construction--flat--rail-track", - "evaluate": True, - }, - { - "color": [128, 64, 128], - "instances": False, - "readable": "Road", - "name": "construction--flat--road", - "evaluate": True, - }, - { - "color": [110, 110, 110], - "instances": False, - "readable": "Service Lane", - "name": "construction--flat--service-lane", - "evaluate": True, - }, - { - "color": [244, 35, 232], - "instances": False, - "readable": "Sidewalk", - "name": "construction--flat--sidewalk", - "evaluate": True, - }, - { - "color": [150, 100, 100], - "instances": False, - "readable": "Bridge", - "name": "construction--structure--bridge", - "evaluate": True, - }, - { - "color": [70, 70, 70], - "instances": False, - "readable": "Building", - "name": "construction--structure--building", - "evaluate": True, - }, - { - "color": [150, 120, 90], - "instances": False, - "readable": "Tunnel", - "name": "construction--structure--tunnel", - "evaluate": True, - }, - { - "color": [220, 20, 60], - "instances": True, - "readable": "Person", - "name": "human--person", - "evaluate": True, - }, - { - "color": [255, 0, 0], - "instances": True, - "readable": "Bicyclist", - "name": "human--rider--bicyclist", - "evaluate": True, - }, - { - "color": [255, 0, 100], - "instances": True, - "readable": "Motorcyclist", - "name": "human--rider--motorcyclist", - "evaluate": True, - }, - { - "color": [255, 0, 200], - "instances": True, - "readable": "Other Rider", - "name": "human--rider--other-rider", - "evaluate": True, - }, - { - "color": [200, 128, 128], - "instances": True, - "readable": "Lane Marking - Crosswalk", - "name": "marking--crosswalk-zebra", - "evaluate": True, - }, - { - "color": [255, 255, 255], - "instances": False, - "readable": "Lane Marking - General", - "name": "marking--general", - "evaluate": True, - }, - { - "color": [64, 170, 64], - "instances": False, - "readable": "Mountain", - "name": "nature--mountain", - "evaluate": True, - }, - { - "color": [230, 160, 50], - "instances": False, - "readable": "Sand", - "name": "nature--sand", - "evaluate": True, - }, - { - "color": [70, 130, 180], - "instances": False, - "readable": "Sky", - "name": "nature--sky", - "evaluate": True, - }, - { - "color": [190, 255, 255], - "instances": False, - "readable": "Snow", - "name": "nature--snow", - "evaluate": True, - }, - { - "color": [152, 251, 152], - "instances": False, - "readable": "Terrain", - "name": "nature--terrain", - "evaluate": True, - }, - { - "color": [107, 142, 35], - "instances": False, - "readable": "Vegetation", - "name": "nature--vegetation", - "evaluate": True, - }, - { - "color": [0, 170, 30], - "instances": False, - "readable": "Water", - "name": "nature--water", - "evaluate": True, - }, - { - "color": [255, 255, 128], - "instances": True, - "readable": "Banner", - "name": "object--banner", - "evaluate": True, - }, - { - "color": [250, 0, 30], - "instances": True, - "readable": "Bench", - "name": "object--bench", - "evaluate": True, - }, - { - "color": [100, 140, 180], - "instances": True, - "readable": "Bike Rack", - "name": "object--bike-rack", - "evaluate": True, - }, - { - "color": [220, 220, 220], - "instances": True, - "readable": "Billboard", - "name": "object--billboard", - "evaluate": True, - }, - { - "color": [220, 128, 128], - "instances": True, - "readable": "Catch Basin", - "name": "object--catch-basin", - "evaluate": True, - }, - { - "color": [222, 40, 40], - "instances": True, - "readable": "CCTV Camera", - "name": "object--cctv-camera", - "evaluate": True, - }, - 
{ - "color": [100, 170, 30], - "instances": True, - "readable": "Fire Hydrant", - "name": "object--fire-hydrant", - "evaluate": True, - }, - { - "color": [40, 40, 40], - "instances": True, - "readable": "Junction Box", - "name": "object--junction-box", - "evaluate": True, - }, - { - "color": [33, 33, 33], - "instances": True, - "readable": "Mailbox", - "name": "object--mailbox", - "evaluate": True, - }, - { - "color": [100, 128, 160], - "instances": True, - "readable": "Manhole", - "name": "object--manhole", - "evaluate": True, - }, - { - "color": [142, 0, 0], - "instances": True, - "readable": "Phone Booth", - "name": "object--phone-booth", - "evaluate": True, - }, - { - "color": [70, 100, 150], - "instances": False, - "readable": "Pothole", - "name": "object--pothole", - "evaluate": True, - }, - { - "color": [210, 170, 100], - "instances": True, - "readable": "Street Light", - "name": "object--street-light", - "evaluate": True, - }, - { - "color": [153, 153, 153], - "instances": True, - "readable": "Pole", - "name": "object--support--pole", - "evaluate": True, - }, - { - "color": [128, 128, 128], - "instances": True, - "readable": "Traffic Sign Frame", - "name": "object--support--traffic-sign-frame", - "evaluate": True, - }, - { - "color": [0, 0, 80], - "instances": True, - "readable": "Utility Pole", - "name": "object--support--utility-pole", - "evaluate": True, - }, - { - "color": [250, 170, 30], - "instances": True, - "readable": "Traffic Light", - "name": "object--traffic-light", - "evaluate": True, - }, - { - "color": [192, 192, 192], - "instances": True, - "readable": "Traffic Sign (Back)", - "name": "object--traffic-sign--back", - "evaluate": True, - }, - { - "color": [220, 220, 0], - "instances": True, - "readable": "Traffic Sign (Front)", - "name": "object--traffic-sign--front", - "evaluate": True, - }, - { - "color": [140, 140, 20], - "instances": True, - "readable": "Trash Can", - "name": "object--trash-can", - "evaluate": True, - }, - { - "color": [119, 11, 32], - "instances": True, - "readable": "Bicycle", - "name": "object--vehicle--bicycle", - "evaluate": True, - }, - { - "color": [150, 0, 255], - "instances": True, - "readable": "Boat", - "name": "object--vehicle--boat", - "evaluate": True, - }, - { - "color": [0, 60, 100], - "instances": True, - "readable": "Bus", - "name": "object--vehicle--bus", - "evaluate": True, - }, - { - "color": [0, 0, 142], - "instances": True, - "readable": "Car", - "name": "object--vehicle--car", - "evaluate": True, - }, - { - "color": [0, 0, 90], - "instances": True, - "readable": "Caravan", - "name": "object--vehicle--caravan", - "evaluate": True, - }, - { - "color": [0, 0, 230], - "instances": True, - "readable": "Motorcycle", - "name": "object--vehicle--motorcycle", - "evaluate": True, - }, - { - "color": [0, 80, 100], - "instances": False, - "readable": "On Rails", - "name": "object--vehicle--on-rails", - "evaluate": True, - }, - { - "color": [128, 64, 64], - "instances": True, - "readable": "Other Vehicle", - "name": "object--vehicle--other-vehicle", - "evaluate": True, - }, - { - "color": [0, 0, 110], - "instances": True, - "readable": "Trailer", - "name": "object--vehicle--trailer", - "evaluate": True, - }, - { - "color": [0, 0, 70], - "instances": True, - "readable": "Truck", - "name": "object--vehicle--truck", - "evaluate": True, - }, - { - "color": [0, 0, 192], - "instances": True, - "readable": "Wheeled Slow", - "name": "object--vehicle--wheeled-slow", - "evaluate": True, - }, - { - "color": [32, 32, 32], - "instances": False, - 
"readable": "Car Mount", - "name": "void--car-mount", - "evaluate": True, - }, - { - "color": [120, 10, 10], - "instances": False, - "readable": "Ego Vehicle", - "name": "void--ego-vehicle", - "evaluate": True, - }, - { - "color": [0, 0, 0], - "instances": False, - "readable": "Unlabeled", - "name": "void--unlabeled", - "evaluate": False, - }, -] - - -def _get_mapillary_vistas_meta(): - stuff_classes = [k["readable"] for k in MAPILLARY_VISTAS_SEM_SEG_CATEGORIES if k["evaluate"]] - assert len(stuff_classes) == 65 - - stuff_colors = [k["color"] for k in MAPILLARY_VISTAS_SEM_SEG_CATEGORIES if k["evaluate"]] - assert len(stuff_colors) == 65 - - ret = { - "stuff_classes": stuff_classes, - "stuff_colors": stuff_colors, - } - return ret - - -def register_all_mapillary_vistas(root): - root = os.path.join(root, "mapillary_vistas") - meta = _get_mapillary_vistas_meta() - for name, dirname in [("train", "training"), ("val", "validation")]: - image_dir = os.path.join(root, dirname, "images") - gt_dir = os.path.join(root, dirname, "labels") - name = f"mapillary_vistas_sem_seg_{name}" - DatasetCatalog.register( - name, lambda x=image_dir, y=gt_dir: load_sem_seg(y, x, gt_ext="png", image_ext="jpg") - ) - MetadataCatalog.get(name).set( - image_root=image_dir, - sem_seg_root=gt_dir, - evaluator_type="sem_seg", - ignore_label=65, # different from other datasets, Mapillary Vistas sets ignore_label to 65 - **meta, - ) - - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_mapillary_vistas(_root) diff --git a/spaces/shnippi/Email_Generai-tor/app.py b/spaces/shnippi/Email_Generai-tor/app.py deleted file mode 100644 index 729fbcf675ea4ec0778b5e0167056fe1ae59c541..0000000000000000000000000000000000000000 --- a/spaces/shnippi/Email_Generai-tor/app.py +++ /dev/null @@ -1,107 +0,0 @@ -import gradio as gr -import os -import json -import time -import requests - -# Nosey fucker arent you :) - -def email(linkedin_url, description_of_what_you_are_selling , description_of_who_you_are , type_of_email = "", calendly_link = ""): - - human_url = os.environ["HUMAN_URL"] - model_url = os.environ["MODEL_URL"] - - if description_of_what_you_are_selling == "": - description_of_what_you_are_selling = "Generai, a YCombinator startup building a flowchart tool to build ML aplications" - if description_of_who_you_are == "": - description_of_who_you_are = "Jan Schnyder, Co-Founder of Generai" - - - data = { - "api_key": os.environ["HUMAN_API_KEY"], - "linkedin_url" : linkedin_url - } - - headers = { - 'Cache-Control': 'no-cache', - 'Content-Type': 'application/json' - } - - response = requests.request("POST", human_url, headers=headers, json=data) - resp= json.loads(response.text) - - print(resp) - print("the email is: " + str(resp["person"]["email"])) - - - email_address = str(resp["person"]["email"]) - - # shorten the json if too long, otherwise 400 for openai - if len(str(resp)) > 8000: - resp = str(resp)[0:8000] - - - # summarize - - headers = { - "Content-Type": "application/json", - "Authorization": "Bearer " + os.environ["MODEL_API_KEY"] - } - - data = { - "model": os.environ["MODEL_NAME"], - "prompt": "summarize this json in a very detailed and readable paragraph describing the person. 
Dont include any ID or number combinations :" + str(resp), - "temperature": 0, - "max_tokens": 300 - } - - response = requests.post(model_url, headers=headers, json=data) - - if response.status_code == 200: - response_json = response.json() - summary = response_json["choices"][0]["text"] - else: - print("Request failed with status code:", response.status_code) - return "Request failed with status code:", response.status_code - - - # write the mail - - prompt = "now write me a " + type_of_email + " cold outreach email to this person that is highly personalized with the following information: " + summary + ". \n The email should be about selling " + description_of_what_you_are_selling + "The email is from " + description_of_who_you_are + " . Don't use placeholders. " - - if calendly_link != "": - prompt = prompt + "The person can schedule a call here: " + str(calendly_link) - - data = { - "model": os.environ["MODEL_NAME"], - "prompt": prompt, - "temperature": 0, - "max_tokens": 300 - } - - response = requests.post(model_url, headers=headers, json=data) - - if response.status_code == 200: - response_json = response.json() - text = response_json["choices"][0]["text"] - print(text) - else: - print("Request failed with status code:", response.status_code) - return "Request failed with status code:", response.status_code - - return "email address: \n" + email_address + " \n\n" + text + " \n\n- " + "P.S: This email was generated using our tool ;) , built with " + "https://flow.generai.art/" - -demo = gr.Interface( - email, - [ - gr.Textbox(lines=1, label="LinkedIn URL"), - gr.Textbox(lines=1, label="Description of what you are selling (Name, Product, Service))"), - gr.Textbox(lines=1, label="Description of who you are (Name, Company, Position))"), - gr.Textbox(lines=1, label="(Optional) Describe the style of the email (funny, formal, short, long))"), - gr.Textbox(lines=1, label="(Optional) Add your Calendly link here") - ], - gr.Textbox(lines=15, placeholder="Name Here...", label="Output"), - title="LinkedIn Email Generai-tor", - description="From Linked in profile to personalized email in 1 click! API might be down sometimes. For questions team@generai.art", -) -demo.launch() \ No newline at end of file diff --git a/spaces/sidharthism/fashion-eye/netdissect/upsegmodel/__init__.py b/spaces/sidharthism/fashion-eye/netdissect/upsegmodel/__init__.py deleted file mode 100644 index 76b40a0a36bc2976f185dbdc344c5a7c09b65920..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/netdissect/upsegmodel/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .models import ModelBuilder, SegmentationModule diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Arena Breakout How to survive and loot in the lawless arena on your Android phone.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Arena Breakout How to survive and loot in the lawless arena on your Android phone.md deleted file mode 100644 index e55b5f4374b05cddecb32254820610a56b99ed43..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Arena Breakout How to survive and loot in the lawless arena on your Android phone.md +++ /dev/null @@ -1,98 +0,0 @@ - -

                  Arena Breakout APK CBT: A Next-Gen Immersive Tactical FPS on Mobile

                  -

If you are a fan of realistic and hardcore shooter games, you might want to check out Arena Breakout APK CBT, a new mobile game that promises a next-gen immersive tactical FPS experience. Arena Breakout combines elements of Escape from Tarkov, PUBG, and Call of Duty into a unique and thrilling survival mode that will test your skills and nerves. In this article, we will tell you everything you need to know about Arena Breakout APK CBT: what it is, how to download it, what its features are, and some tips and tricks for playing it.

                  -

                  arena breakout apk cbt


Download File: https://ssurll.com/2uNU4w



                  -

                  What is Arena Breakout?

                  -

                  Arena Breakout is a game that was developed by Ichnitex, a studio that specializes in creating realistic and immersive shooter games for mobile devices. The game is currently in its global closed beta test (CBT) phase, which means that it is not yet fully released, but you can still download it and play it for free. The CBT also offers amazing rewards for players who complete daily tasks and provide feedback to the developers.

                  -

                  A realistic and hardcore shooter game

                  -

                  Arena Breakout is a game that aims to provide a realistic and hardcore shooter experience on mobile devices. The game features stunning graphics, sound effects, and animations that will make you feel like you are in a real battlefield. The game also has a realistic ballistics system, weapon recoil, bullet drop, and penetration, making every shot count. The game also has a variety of weapons and attachments that you can use to customize your loadout according to your preference and playstyle.

                  -

                  A thrilling and rewarding survival mode

                  -

                  Arena Breakout is a game that has a unique survival mode that will challenge your skills and nerves. In this mode, you will be dropped into a lawless arena with other players, where you will have to scour the area for valuable guns, attachments, and supplies. You will also have to break out of the combat area alive before the time runs out, or you will lose everything you have collected. If you manage to escape, you will be rewarded with cash and loot that you can use to upgrade your character and weapons. However, be prepared to face fierce enemies and unpredictable situations along the way.

                  -

                  A global closed beta test with amazing rewards

                  -

                  Arena Breakout is a game that is currently in its global closed beta test (CBT) phase, which means that it is not yet fully released, but you can still download it and play it for free. The CBT also offers amazing rewards for players who join the test and provide feedback to the developers. By completing daily tasks, you can earn cash, loot boxes, skins, badges, and more. You can also participate in surveys and events to win exclusive prizes. The CBT will last until July 31st, 2023, so don't miss this opportunity to join Arena Breakout Global CBT NOW!

                  -

                  How to download Arena Breakout APK CBT for Android phone?

                  -

                  If you want to download Arena Breakout APK CBT for your Android phone, you have two options. You can either visit the official website or Google Play store.

                  -

                  Visit the official website or Google Play store

                  -

                  The easiest way to download Arena Breakout APK CBT for your Android phone is to visit the Google Play store and search for the game. You can also use this link to go directly to the game page: [Arena Breakout - Apps on Google Play]. Once you are on the game page, you can tap on the Install button and wait for the game to download and install on your phone. You will need about 2.5 GB of free space on your phone to install the game.

                  -

                  Install the APK and OBB files

                  -

                  If you cannot access the Google Play store or prefer to download the game from another source, you can also install the APK and OBB files manually. You can download the APK and OBB files from the official website: [Arena Breakout]. Once you have downloaded the files, you will need to follow these steps to install them on your phone:

                  -

                  How to download Arena Breakout Lite version for Android phone
                  -Arena Breakout APK free download for Android devices
                  -Arena Breakout is a Next-Gen Immersive Tactical FPS
                  -Arena Breakout Global Closed Beta Test details and rewards
                  -Arena Breakout Lite vs Original: Which version should you choose?
                  -Arena Breakout tips and tricks: How to survive and loot in the game
                  -Arena Breakout system requirements and compatible devices
                  -Arena Breakout review: A first-of-its-kind extraction looter shooter
                  -Arena Breakout gameplay features and modes explained
                  -Arena Breakout best weapons and attachments guide
                  -How to install Arena Breakout APK on your Android phone
                  -Arena Breakout FAQs: Everything you need to know about the game
                  -Arena Breakout update: What's new in the latest version of the game
                  -Arena Breakout feedback: How to share your opinions and suggestions with the developers
                  -Arena Breakout cheats and hacks: How to avoid getting banned from the game
                  -Arena Breakout graphics settings: How to optimize the game performance on your device
                  -Arena Breakout maps and locations: How to navigate the war zones
                  -Arena Breakout characters and classes: How to choose the best one for your playstyle
                  -Arena Breakout skills and abilities: How to use them effectively in combat
                  -Arena Breakout weapons tier list: The best guns in the game ranked
                  -Arena Breakout glitches and bugs: How to report and fix them
                  -Arena Breakout codes and coupons: How to redeem them for free rewards
                  -Arena Breakout events and challenges: How to participate and win prizes
                  -Arena Breakout clans and teams: How to join and create them
                  -Arena Breakout ranking system and leaderboards: How to climb the ladder and compete with others
                  -Arena Breakout support and customer service: How to contact them for help and assistance
                  -Arena Breakout community and forums: How to connect with other players and fans of the game
                  -Arena Breakout videos and streams: Where to watch the best gameplay and guides of the game
                  -Arena Breakout news and updates: Where to get the latest information and announcements about the game
                  -Arena Breakout release date and availability: When and where can you play the game
                  -Arena Breakout mod APK: What is it and why you should avoid it
                  -Arena Breakout beta test registration: How to sign up and get access to the game early
                  -Arena Breakout comparison: How does it stack up against other FPS games on mobile
                  -Arena Breakout controller support: How to play the game with a gamepad or keyboard and mouse
                  -Arena Breakout customization options: How to change your character's appearance and loadout
                  -Arena Breakout gameplay trailer: Watch the official video of the game here
                  -Arena Breakout memes and jokes: The funniest and most hilarious content about the game
                  -Arena Breakout merchandise and products: Where to buy the coolest stuff related to the game
                  -Arena Breakout wallpapers and backgrounds: How to download and use them on your device or PC
                  -Arena Breakout wiki and database: The ultimate resource for everything about the game

                  -
                    -
1. Go to your phone settings and enable the installation of apps from unknown sources.
2. Locate the downloaded APK file and tap on it to install it.
3. Copy the downloaded OBB file to the Android/OBB folder on your phone. If you don't have this folder, create it manually.
4. Launch the game and enjoy.
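If you would rather drive the same steps from a computer over USB, they can also be scripted with adb. The sketch below is only an illustration: the APK/OBB file names and the package name are placeholders, not the game's real identifiers.

```python
# Minimal sketch: sideload an APK and copy its OBB expansion file with adb.
# File names and the package name are placeholders, not the game's real ones.
import subprocess

def sideload(apk_path: str, obb_path: str, package: str) -> None:
    # Install (or reinstall) the APK on the connected device.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)
    # OBB expansion files live under Android/obb/<package>/ on shared storage.
    obb_dir = f"/sdcard/Android/obb/{package}/"
    subprocess.run(["adb", "shell", "mkdir", "-p", obb_dir], check=True)
    subprocess.run(["adb", "push", obb_path, obb_dir], check=True)

if __name__ == "__main__":
    sideload("arena_breakout.apk",
             "main.1.com.example.arenabreakout.obb",
             "com.example.arenabreakout")
```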
                  -

                  What are the features of Arena Breakout APK CBT?

                  -

                  Arena Breakout APK CBT is a game that offers many features that will make you enjoy playing it. Here are some of the features that you can expect from the game:

                  -

                  Stunning graphics and sound effects

                  -

                  Arena Breakout APK CBT is a game that boasts stunning graphics and sound effects that will immerse you in a realistic and thrilling combat environment. The game uses Unreal Engine 4, which is one of the most advanced game engines in the world, to create lifelike scenes, lighting, shadows, and textures. The game also has realistic sound effects, such as gunshots, explosions, footsteps, and voiceovers, that will make you feel like you are in a real battlefield.

                  -

                  Diverse weapons and attachments

                  -

                  Arena Breakout APK CBT is a game that has a diverse range of weapons and attachments that you can use to customize your loadout according to your preference and playstyle. The game has over 50 weapons, including pistols, rifles, shotguns, snipers, and machine guns, each with its own characteristics and performance. The game also has over 100 attachments, such as scopes, silencers, grips, magazines, and stocks, that you can use to modify your weapons and improve their stats. You can also switch between different fire modes, such as single, burst, or full-auto, depending on the situation.

                  -

                  Customizable characters and loadouts

                  -

                  Arena Breakout APK CBT is a game that allows you to customize your character and loadout according to your liking. You can choose from different character models, skins, outfits, helmets, masks, gloves, and backpacks to create your own unique look. You can also choose from different loadouts, such as assault, recon, medic, or support, each with its own perks and abilities. You can also unlock more items and skills as you level up and progress in the game.

                  -

                  What are the tips and tricks for playing Arena Breakout APK CBT?

                  -

                  Arena Breakout APK CBT is a game that requires skill and strategy to play well. Here are some tips and tricks that will help you improve your gameplay and survive longer in the arena:

                  -

                  Scour the arena for loot and supplies

                  -

                  Arena Breakout APK CBT is a game that requires you to scavenge for loot and supplies in order to survive. The arena is filled with various items that you can find in crates, boxes, cabinets, lockers, cars, and other places. You can find weapons, attachments, ammo, armor, health kits, grenades, and other useful items that will help you in your escape. However, be careful as some items may be booby-trapped or guarded by enemies. Also, be aware of your inventory space and weight limit as they will affect your movement speed and stamina.

                  -

                  Break out of the combat area alive

                  -

Arena Breakout APK CBT requires you to break out of the combat area alive before the time runs out, or you will lose everything you have collected. The combat area is marked by a red circle on the map that shrinks over time; you have to find an exit point within the circle and reach it safely. Be prepared to face other players who will try to stop you or steal your loot, as well as environmental hazards such as fire, gas, radiation, and traps that damage your health and armor. You also have to manage your stamina, hunger, thirst, and bleeding, as they affect your performance and survival. Breaking out alive takes skill, tactics, and a bit of luck.

                  -

                  Use cover and tactics to win firefights

                  -

                  Arena Breakout APK CBT is a game that requires you to use cover and tactics to win firefights against other players and enemies. The game has a realistic and hardcore combat system that will punish you for any mistakes or carelessness. You will have to aim carefully, control your recoil, adjust your distance, and watch your ammo. You will also have to use cover, such as walls, trees, cars, and buildings, to protect yourself from enemy fire and ambushes. You will also have to use tactics, such as flanking, suppressing, sniping, and grenading, to gain an advantage over your opponents. You will also have to communicate and cooperate with your teammates if you are playing in a squad mode.

                  -

                  Conclusion

                  -

                  Arena Breakout APK CBT is a game that offers a next-gen immersive tactical FPS experience on mobile devices. The game has a realistic and hardcore shooter gameplay, a thrilling and rewarding survival mode, and a global closed beta test with amazing rewards. The game also has stunning graphics and sound effects, diverse weapons and attachments, and customizable characters and loadouts. The game is not for the faint of heart, as it will test your skills and nerves in every match. If you are looking for a new and exciting mobile game that will challenge you and keep you on the edge of your seat, you should definitely try Arena Breakout APK CBT.

                  -

                  FAQs

                  -

                  Here are some of the frequently asked questions about Arena Breakout APK CBT:

                  -
                    -
                  • Q: When will Arena Breakout be officially released?
                  • -
                  • A: Arena Breakout is currently in its global closed beta test (CBT) phase, which will last until July 31st, 2023. The official release date of the game has not been announced yet, but you can follow the official social media accounts of the game for the latest updates.
                  • -
                  • Q: Is Arena Breakout free to play?
                  • -
                  • A: Yes, Arena Breakout is free to play during the CBT phase. However, the game may have some in-app purchases or ads in the future.
                  • -
                  • Q: Can I play Arena Breakout on PC or iOS devices?
                  • -
                  • A: No, Arena Breakout is currently only available for Android devices. However, the developers may consider releasing the game for other platforms in the future.
                  • -
                  • Q: How can I provide feedback or report bugs to the developers?
                  • -
                  • A: You can provide feedback or report bugs to the developers by using the in-game feedback system or by joining the official Discord server of the game: [Arena Breakout Discord]. You can also rate and review the game on Google Play store or leave a comment on the official website or social media accounts of the game.
                  • -
                  • Q: How can I get more rewards or benefits from playing Arena Breakout?
                  • -
                  • A: You can get more rewards or benefits from playing Arena Breakout by completing daily tasks, participating in surveys and events, inviting your friends to join the game, sharing your gameplay videos or screenshots on social media platforms with hashtags #ArenaBreakout #ABCBT #ABGlobalCBT #ABSurvivalMode #ABFPS #ABMobileGame #ABUnrealEngine4 #ABIchnitex.
                  • -

                  401be4b1e0
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download NBA Street Vol 2 and Relive the Legendary Moments.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download NBA Street Vol 2 and Relive the Legendary Moments.md deleted file mode 100644 index 428b07f1a1bdcabf14732fee22ca6cd4b5ea8ac1..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download NBA Street Vol 2 and Relive the Legendary Moments.md +++ /dev/null @@ -1,133 +0,0 @@ -
                  -

                  How to Download NBA Street Vol. 2, the Greatest Basketball Video Game of All Time

                  -

If you are a fan of basketball and video games, chances are you have heard of NBA Street Vol. 2, the sequel to the original NBA Street and the second game in the NBA Street series. Released in 2003 by Electronic Arts under the EA Sports BIG label, it is widely considered one of the best basketball video games ever made, and for good reasons. In this article, we will tell you what makes this game so special, and how you can download it and play it on your device.

                  -

                  What is NBA Street Vol. 2?

                  -

                  NBA Street Vol. 2 is a basketball video game that focuses on the streetball culture and style. Unlike the typical NBA simulation games, this game features 3-on-3 matches on various outdoor courts across the United States, with no fouls, no out of bounds, and no rules. The game also features a variety of trick moves, dunks, combos, and gamebreakers that add to the excitement and spectacle of the sport.

                  -

                  download nba street vol 2


Download File: https://ssurll.com/2uNUFo



                  -

                  Gameplay and features

                  -

                  The gameplay of NBA Street Vol. 2 is fast-paced, fluid, and dynamic. You can choose from 29 fully playable NBA teams from the 2002–03 season, as well as street legends and NBA legends such as Michael Jordan, Larry Bird, Julius Erving, and more. You can also create your own custom player and develop his skills and attributes in the Be a Legend mode. The game also has a turbo meter that governs the use of your special moves and abilities. As you perform tricks and combos, you fill up your gamebreaker meter, which allows you to unleash a powerful shot that gives you more points and takes away points from your opponent. You can also save your gamebreaker energy for a level-two gamebreaker, which is unblockable and does more damage to your opponent's score.

                  -

                  Modes and challenges

                  -

                  The game has four different modes to choose from: Pick Up Game, NBA Challenge, Be a Legend, and Street School. Pick Up Game is an exhibition mode where you can play against the computer or another user. NBA Challenge is a mode where you take on every NBA team in a series of matches, unlocking legendary players along the way. Be a Legend is a mode where you create your own player and take him on a tour of different courts, facing various challenges and boss characters, and earning rewards such as new moves, jerseys, courts, and surprises. Street School is a tutorial mode where you learn the basics of the game and practice your skills.

                  -

                  Soundtrack and style

                  -

One of the most distinctive aspects of NBA Street Vol. 2 is its soundtrack and style. The game features tracks from artists such as Nate Dogg featuring Eve, Pete Rock & CL Smooth, Erick Sermon featuring Redman, Benzino, MC Lyte, Black Sheep, Nelly, and more. It also features instrumental beats from producer Just Blaze. The game also has an in-game announcer named Bobbito Garcia, who provides commentary and hype for every move and play. The game also has a lot of visual flair and personality, with colorful graphics, expressive animations, sparkler rims, ball trails, explosive rims, big heads, small players, and other effects.

                  -

                  Why is NBA Street Vol. 2 so popular?

                  -

NBA Street Vol. 2 is not just a basketball video game; it is a cultural phenomenon that has influenced many fans and players of the sport. Here are some of the reasons why this game is so popular:

                  The appeal of street basketball

                  -

                  Street basketball is a form of basketball that is played on outdoor courts, usually in urban areas, with its own rules and culture. Street basketball is often seen as a way of expressing oneself, showcasing one's skills, and competing with others. Street basketball is also a way of connecting with the community, as many players and fans gather around the courts to watch and participate in the games. NBA Street Vol. 2 captures the essence and spirit of street basketball, with its realistic and diverse courts, its authentic and charismatic characters, and its creative and flashy gameplay.

                  -

                  The nostalgia factor

                  -

                  NBA Street Vol. 2 is also a game that evokes nostalgia for many fans of basketball and video games. The game features many legendary players from the past, such as Magic Johnson, Wilt Chamberlain, Kareem Abdul-Jabbar, and more. It also features some of the most iconic courts in basketball history, such as Rucker Park, Venice Beach, The Cage, and more. The game also has a retro style and vibe, with its old-school hip-hop soundtrack, its vintage jerseys and outfits, and its arcade-like graphics and effects. NBA Street Vol. 2 is a game that pays homage to the history and culture of basketball, and many fans appreciate that.

                  -

                  download nba street vol 2 ps2 iso
                  -download nba street vol 2 gamecube rom
                  -download nba street vol 2 xbox iso
                  -download nba street vol 2 pc free
                  -download nba street vol 2 for android
                  -download nba street vol 2 soundtrack
                  -download nba street vol 2 ps4
                  -download nba street vol 2 emulator
                  -download nba street vol 2 cheats
                  -download nba street vol 2 online
                  -download nba street vol 2 full version
                  -download nba street vol 2 mod apk
                  -download nba street vol 2 ps3
                  -download nba street vol 2 psp
                  -download nba street vol 2 mac
                  -download nba street vol 2 highly compressed
                  -download nba street vol 2 crack
                  -download nba street vol 2 patch
                  -download nba street vol 2 update
                  -download nba street vol 2 rar
                  -download nba street vol 2 zip
                  -download nba street vol 2 torrent
                  -download nba street vol 2 mega
                  -download nba street vol 2 mediafire
                  -download nba street vol 2 google drive
                  -download nba street vol 2 reddit
                  -download nba street vol 2 youtube
                  -download nba street vol 2 gameplay
                  -download nba street vol 2 review
                  -download nba street vol 2 tips and tricks
                  -download nba street vol 2 best players
                  -download nba street vol 2 legends roster
                  -download nba street vol 2 unlockables
                  -download nba street vol 2 codes and secrets
                  -download nba street vol 2 guide and walkthrough
                  -download nba street vol 2 how to play
                  -download nba street vol 2 system requirements
                  -download nba street vol 2 controller settings
                  -download nba street vol 2 keyboard controls
                  -download nba street vol 2 graphics settings
                  -download nba street vol 2 resolution fix
                  -download nba street vol 2 widescreen mod
                  -download nba street vol 2 trainer and cheats engine
                  -download nba street vol 2 save file and editor
                  -download nba street vol 2 custom teams and courts
                  -download nba street vol 2 mods and patches
                  -download nba street vol 2 remastered and enhanced
                  -download nba street vol 2 original and classic
                  -download nba street vol 2 demo and trial

                  -

                  The fun factor

                  -

                  Ultimately, NBA Street Vol. 2 is a game that is fun to play and watch. The game has a simple and intuitive control scheme that allows you to perform amazing moves and combos with ease. The game also has a lot of variety and replay value, with its different modes, challenges, rewards, and secrets. The game also has a lot of humor and personality, with its witty and hilarious commentary, its quirky and memorable characters, and its outrageous and spectacular gameplay. NBA Street Vol. 2 is a game that makes you feel like a basketball superstar, and that is why it is so popular.

                  -

                  How to download NBA Street Vol. 2?

                  -

If you want to experience the greatness of NBA Street Vol. 2 for yourself, you might be wondering how you can download it and play it on your device. Unfortunately, the game is not officially available on any digital platform or store, as it was released only for the PlayStation 2, Xbox, Nintendo GameCube, and Game Boy Advance consoles. However, there are some ways you can still enjoy this game on your PC or mobile device.

                  -

                  Requirements and compatibility

                  -

The first thing you need to do is to check if your device meets the minimum requirements to run the game smoothly. You will need a device that has at least 1 GB of RAM, 4 GB of storage space, a dual-core processor, and a decent graphics card. You will also need an internet connection to download the game files and an emulator to run them.

                  -

                  Sources and links

                  -

The next thing you need to do is find a reliable source for the game files. You will need two files: an ISO file, which is an image of the original game disc, and a BIOS file, which contains the system information of the console. You can find these files on various websites that offer ROMs (Read-Only Memory), which are copies of games for emulators. However, you should be careful when downloading these files, as some of them might contain viruses or malware. You should also be aware of the legal issues involved, as downloading them might violate the copyright laws in your country. Therefore, we recommend that you only download these files if you own the original game disc or have permission from the publisher. Here are some links where you can find these files:
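Whichever source you use, it is worth checking the downloaded ISO or BIOS image against a checksum published by that source before loading it into an emulator. A minimal sketch (the expected digest and file name below are placeholders, not real values):

```python
# Minimal sketch: compare a downloaded file against a published SHA-256 checksum.
# The expected digest and the file name are placeholders, not real values.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0123abcd..."  # copy the real value from the download page
actual = sha256_of("nba_street_vol2.iso")
print("OK" if actual == expected else f"MISMATCH: {actual}")
```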

                  -

                  Emulators and links

                  -

An emulator is software that mimics the functions of another device or system. In this case, you will need an emulator that can run the console or handheld system the game was originally made for. For example, if you want to play the PS2 version of NBA Street Vol. 2, you will need a PS2 emulator. There are many emulators available for different platforms, but some of the best ones are:

                  -
                    -
                  • BlueStacks: This is one of the most popular and comprehensive Android emulators for PC and Mac. It has a lot of features to improve the gaming experience, such as a keymapping tool, an instance manager, an eco mode, and more. It also has a built-in app store and supports other APK files.
                  • -
• NoxPlayer: This is another great Android emulator for PC, especially for gamers. It has a simple and intuitive control scheme, a lot of customizable options, and more. It also has a built-in Google Play store and supports other APK files.
                  • -
                  • MEmu Play: This is a super customizable Android emulator for PC that lets you adjust the RAM, CPU, resolution, device model, root mode, and more. It also has a key mapping feature, a multi-instance feature, a screen recorder, and more. It also has a built-in Google Play store and supports other APK files.
                  • -
                  • GameLoop: This is an Android emulator for PC that is made by Tencent, the company behind popular games like PUBG Mobile and Call of Duty Mobile. It is optimized for gaming performance and compatibility, and has features like keyboard and mouse support, game center, live streaming, and more. It also has a built-in Google Play store and supports other APK files.
                  • -
                  • Android-x86: This is an open source project that aims to port the Android operating system to x86 platforms. It allows you to run Android as a native application on your PC or as a virtual machine on your PC or Mac. It supports most Android apps and games, and has features like Wi-Fi support, Bluetooth support, camera support, and more.
                  • -
                  -

                  Once you have chosen an emulator that suits your needs, you will need to download it from its official website or from a trusted source. You will also need to install it on your device following the instructions provided by the developer.

                  -

                  Steps and tips

                  -

                  After you have downloaded and installed the emulator and the game files, you are ready to play NBA Street Vol. 2 on your device. Here are the steps and tips to do so:

                  -
                    -
1. Launch the emulator on your device and locate the game file (ISO or ROM) that you want to play.
2. Select the game file and load it on the emulator. You might need to configure some settings such as the controller layout, the graphics quality, the sound volume, etc.
3. Enjoy playing NBA Street Vol. 2 on your device. You can use your keyboard and mouse or a gamepad to control the game.
                  -

                  Here are some tips to enhance your gaming experience:

                  -
                    -
                  • Save your progress frequently using the emulator's save state feature. This way, you can resume your game from where you left off without losing any data.
                  • -
                  • Use cheats or mods if you want to unlock more content or customize your game. You can find many cheats and mods online for different versions of NBA Street Vol. 2.
                  • -
                  • Share your gameplay with others using the emulator's screen recorder or live streaming feature. You can also take screenshots or videos of your best moments and show them off to your friends.
                  • -
                  -

                  Conclusion

                  -

                  NBA Street Vol. 2 is one of the greatest basketball video games of all time, and you can download it and play it on your device using an emulator. In this article, we have explained what NBA Street Vol. 2 is, why it is so popular, how to download it, and how to play it on your device. We hope you have found this article helpful and informative.

                  -

                  Summary of the main points

                  -
                    -
                  • NBA Street Vol. 2 is a basketball video game that focuses on the streetball culture and style.
                  • -
• NBA Street Vol. 2 is popular because of its gameplay and features, its modes and challenges, its soundtrack and style, the appeal of street basketball, its nostalgia factor, and its fun factor.
                  • -
                  • NBA Street Vol. 2 can be downloaded and played on your device using an emulator, such as BlueStacks, NoxPlayer, MEmu Play, GameLoop, or Android-x86.
                  • -
                  • NBA Street Vol. 2 can be played on your device by following these steps: launch the emulator, locate the game file, load it on the emulator, and enjoy playing. You can also use some tips to enhance your gaming experience, such as saving your progress, using cheats or mods, and sharing your gameplay.
                  • -
                  -

                  Call to action

                  -

                  Now that you know how to download NBA Street Vol. 2, the greatest basketball video game of all time, what are you waiting for? Grab your device, download the game and the emulator, and start playing. You will not regret it. NBA Street Vol. 2 is a game that will keep you entertained for hours and make you fall in love with basketball all over again.

                  -

                  FAQs

                  -

                  Here are some frequently asked questions about NBA Street Vol. 2:

                  -
                    -
                  1. Is NBA Street Vol. 2 better than NBA Street?

NBA Street Vol. 2 is generally considered a superior game to NBA Street, as it has more content, more features, more polish, and more fun. It improves on almost every aspect of NBA Street, such as the graphics, the gameplay, the modes, the characters, the courts, the music, and more.

                    -
2. Is NBA Street Vol. 2 better than NBA Street V3?

NBA Street V3 is the third game in the NBA Street series and the successor to NBA Street Vol. 2. It was released in 2005 for the PlayStation 2, Xbox, Nintendo GameCube, and Nintendo DS. NBA Street V3 has some new features and improvements over NBA Street Vol. 2, such as the trick stick, the dunk contest, the custom baller, and more. However, many fans still prefer NBA Street Vol. 2 over NBA Street V3, as they feel that NBA Street Vol. 2 has a better balance, a better soundtrack, a better style, and a better vibe.

                    -
3. Is NBA Street Vol. 2 available for PC?

NBA Street Vol. 2 was not officially released for PC, as it was only made for consoles and handhelds. However, you can still play NBA Street Vol. 2 on your PC using an emulator, as explained in this article.

                    -
4. Is NBA Street Vol. 2 available for mobile?

NBA Street Vol. 2 was not officially released for mobile devices, as it was only made for consoles and handhelds. However, you can still play NBA Street Vol. 2 on your mobile device using an emulator, as explained in this article.

                    -
5. Is NBA Street Vol. 2 safe to download?

NBA Street Vol. 2 is safe to download if you get it from a reliable source and if you have permission from the publisher. However, you should be careful when downloading any files from the internet, as some of them might contain viruses or malware. You should also be aware of the legal issues involved in downloading these files, as they might violate the copyright laws in your country. Therefore, we recommend that you only download these files if you own the original game disc or have permission from the publisher.

                    -

                  401be4b1e0
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Spider Solitaire Classic and Play with 1 2 3 or 4 Suits.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Spider Solitaire Classic and Play with 1 2 3 or 4 Suits.md deleted file mode 100644 index 38556770368deb6746d4d5778031b9f08aa3cba5..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Spider Solitaire Classic and Play with 1 2 3 or 4 Suits.md +++ /dev/null @@ -1,121 +0,0 @@ -
                  -

                  How to Download Spider Solitaire Classic for Free

                  -

Spider Solitaire is one of the most popular and addictive card games in the world. It is a fun and challenging way to test your logic, strategy, and patience. But did you know that you can download Spider Solitaire Classic for free and play it anytime and anywhere you want? In this article, we will tell you what Spider Solitaire Classic is, where to download it for free, and how to play it after downloading it.

                  -

                  What is Spider Solitaire Classic?

                  -

                  Spider Solitaire Classic is a variation of the classic solitaire game that uses two decks of cards instead of one. The goal of the game is to arrange all the cards in descending order from King to Ace in the same suit on the eight foundation piles at the top. You can move cards between the ten tableau columns at the bottom by dragging and dropping them. You can only move a card or a group of cards if they are in the same suit and in descending order. You can also deal a new card onto each tableau column by clicking on the stock pile on the upper left corner. The game is over when you have moved all the cards to the foundations or when there are no more moves possible.

                  -

                  download spider solitaire classic


                  Download File ✺✺✺ https://ssurll.com/2uO13N



                  -

                  The rules of Spider Solitaire Classic

                  -

                  The rules of Spider Solitaire Classic are simple but require some strategy and planning. Here are some basic rules to remember:

                  -
                    -
                  • You can only move one card at a time, unless you have a group of cards in the same suit and in descending order.
                  • -
                  • You can only move a card or a group of cards to an empty column or to a card that is one rank higher and in the same suit.
                  • -
                  • You can deal a new card onto each tableau column by clicking on the stock pile, but only if there are no empty columns.
                  • -
                  • You can remove a complete suit from King to Ace from the tableau and move it to a foundation pile.
                  • -
                  • You can choose between one suit, two suits, or four suits difficulty levels.
                  • -
                  -
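To make the move rules concrete, here is a small illustration (not taken from any particular app) of how the two movement rules above could be checked in code; card ranks run from 1 (Ace) to 13 (King).

```python
# Minimal sketch of the move rules above. A card is a (rank, suit) tuple,
# with rank 13 = King down to 1 = Ace.

def is_movable_run(cards):
    # A run can be picked up only if it is a single suit in descending order.
    return all(a[1] == b[1] and a[0] == b[0] + 1
               for a, b in zip(cards, cards[1:]))

def can_drop(run, target):
    # Per the rules above: drop onto an empty column (target is None)
    # or onto a card one rank higher in the same suit.
    # Note: many implementations relax the same-suit requirement on the drop.
    if not run or not is_movable_run(run):
        return False
    if target is None:
        return True
    return target[0] == run[0][0] + 1 and target[1] == run[0][1]

run = [(9, "spades"), (8, "spades"), (7, "spades")]
print(can_drop(run, (10, "spades")))  # True
print(can_drop(run, (10, "hearts")))  # False under the rule as stated above
print(can_drop(run, None))            # True: moving to an empty column
```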

                  The benefits of playing Spider Solitaire Classic

                  -

                  Playing Spider Solitaire Classic is not only fun and relaxing, but also beneficial for your brain and mental health. Here are some of the benefits of playing Spider Solitaire Classic:

                  -
                    -
                  • It improves your memory, concentration, and problem-solving skills.
                  • -
                  • It reduces stress, anxiety, and boredom.
                  • -
                  • It boosts your mood, self-esteem, and confidence.
                  • -
                  • It enhances your creativity, logic, and strategic thinking.
                  • -
                  • It helps you pass time and have fun.
                  • -
                  -

                  Where to download Spider Solitaire Classic for free?

                  -

                  There are many ways to download Spider Solitaire Classic for free and enjoy it on your preferred device. Here are some of the best options:

                  -

                  Online websites

                  -

                  If you want to play Spider Solitaire Classic online without downloading anything, you can visit some of the following websites:

                  -
| Website | Description |
| --- | --- |
| [Free Spider Solitaire](^1^) | A website that offers a modern collection of solitaire games including Spider Solitaire with different difficulty levels, card sets, backgrounds, and statistics. |
| [Spider Solitaire Game](^2^) | A website that allows you to play Spider Solitaire online and for free with full-screen mode, no registration, no download, and detailed statistics. |
| [Solitr](^3^) | A website that provides a simple and fast way to play Spider Solitaire online with undo, hint, auto-play, and timer features. |
                  -

                  Mobile apps

                  -

                  If you want to play Spider Solitaire Classic on your smartphone or tablet, you can download some of the following apps:

                  - - - - - -
| App | Description |
| --- | --- |
| [Spider Solitaire Classic by MobilityWare] | An app that lets you play Spider Solitaire with beautiful graphics, animations, sound effects, and customizable settings. You can also challenge yourself with daily goals and achievements. |
| [Spider Solitaire by Brainium Studios] | An app that offers smooth and intuitive Spider Solitaire gameplay with stunning visuals, smart hints, unlimited undos, and statistics. You can also choose from various themes and card backs. |
| [Spider Solitaire by IGC Mobile] | An app that features a classic design and gameplay of Spider Solitaire with one-suit, two-suit, or four-suit options. You can also track your progress and performance with leaderboards and achievements. |
                  -

                  Desktop software

                  -

                  If you want to play Spider Solitaire Classic on your computer, you can download some of the following software:

                  - - - - - -
| Software | Description |
| --- | --- |
| [Spider Solitaire Collection Free for Windows 10] | Provides a collection of five Spider Solitaire games with different rules and layouts. You can also customize the game appearance, difficulty level, and scoring system. |
| [Free Spider Solitaire 2020] | Delivers a high-quality Spider Solitaire game with 3D graphics, animations, sound effects, and tips. You can also play in full-screen mode, change the background color, and select the card style. |
| [123 Free Solitaire] | Includes 12 solitaire card games such as Spider Solitaire, Spider One Suit, Spider Two Suits, and more. You can also adjust the game options, speed, and screen size. |
                  -

                  How to play Spider Solitaire Classic after downloading it?

                  -

                  After downloading Spider Solitaire Classic for free from any of the sources mentioned above, you can start playing it right away. Here are some basic steps to follow:

                  -

                  download spider solitaire classic for windows 10
                  -download spider solitaire classic free online
                  -download spider solitaire classic app
                  -download spider solitaire classic game
                  -download spider solitaire classic apk
                  -download spider solitaire classic for android
                  -download spider solitaire classic for pc
                  -download spider solitaire classic for mac
                  -download spider solitaire classic offline
                  -download spider solitaire classic by mobilityware
                  -download spider solitaire classic for windows 8.1
                  -download spider solitaire classic for windows 7
                  -download spider solitaire classic for iphone
                  -download spider solitaire classic for ipad
                  -download spider solitaire classic for chromebook
                  -download spider solitaire classic for linux
                  -download spider solitaire classic mod apk
                  -download spider solitaire classic no ads
                  -download spider solitaire classic with hints
                  -download spider solitaire classic with undo
                  -download spider solitaire classic with themes
                  -download spider solitaire classic with daily challenges
                  -download spider solitaire classic with 1 2 3 4 suits
                  -download spider solitaire classic with sound effects
                  -download spider solitaire classic with animations
                  -download spider solitaire classic from microsoft store
                  -download spider solitaire classic from google play store
                  -download spider solitaire classic from app store
                  -download spider solitaire classic from amazon appstore
                  -download spider solitaire classic from softonic
                  -how to download spider solitaire classic on laptop
                  -how to download spider solitaire classic on desktop
                  -how to download spider solitaire classic on tablet
                  -how to download spider solitaire classic on phone
                  -how to play downloaded spider solitaire classic offline
                  -how to install downloaded spider solitaire classic on pc
                  -how to uninstall downloaded spider solitaire classic from windows 10
                  -how to update downloaded spider solitaire classic app
                  -how to restore downloaded spider solitaire classic game data
                  -how to transfer downloaded spider solitaire classic to another device
                  -where to download spider solitaire classic for free without ads
                  -where to find downloaded spider solitaire classic files on pc
                  -where to get downloaded spider solitaire classic support and help
                  -where to rate and review downloaded spider solitaire classic app
                  -where to share downloaded spider solitaire classic game with friends and family

                  -

                  Choose the difficulty level

                  -

                  Depending on the source you downloaded from, you may have different options to choose the difficulty level of the game. Generally, you can choose between one suit (easy), two suits (medium), or four suits (hard). The more suits you have, the harder it is to complete the game. You can change the difficulty level anytime before starting a new game.

                  -

                  Drag and drop cards to move them

                  -

                  To play the game, you need to drag and drop cards from one column to another. You can only move a card or a group of cards if they are in the same suit and in descending order. For example, you can move a 9 of hearts onto a 10 of hearts, or a group of 7-6-5 of spades onto an 8 of spades. You can also move a card or a group of cards to an empty column. To remove a complete suit from King to Ace from the tableau, you need to drag and drop it onto one of the foundation piles at the top.

                  -

                  Use the undo and hint buttons if needed

                  -

                  If you make a mistake or want to try a different move, you can use the undo button to reverse your last action. You can undo as many times as you want until you reach the beginning of the game. If you are stuck or need some help, you can use the hint button to get a suggestion for your next move. The hint button may not always give you the best move, but it will give you a valid one.

                  -

                  Conclusion

                  -

                  Spider Solitaire Classic is a great game to play if you love solitaire games and want to challenge yourself with different levels of difficulty. It is also a great way to improve your brain skills and have fun at the same time. You can download Spider Solitaire Classic for free from various sources such as online websites, mobile apps, or desktop software. Once you download it, you can start playing it by choosing the difficulty level, dragging and dropping cards to move them, and using the undo and hint buttons if needed. We hope this article has helped you learn how to download Spider Solitaire Classic for free and how to play it after downloading it.

                  -

                  FAQs

                  -

                  Here are some of the frequently asked questions about Spider Solitaire Classic:

                  -
                    -
                  • Q: How many cards are used in Spider Solitaire Classic?
                  • -
                  • A: Spider Solitaire Classic uses two standard 52-card decks, for a total of 104 cards.
                  • -
                  • Q: How do I win Spider Solitaire Classic?
                  • -
                  • A: You win Spider Solitaire Classic by moving all the cards from the tableau to the foundations in descending order from King to Ace in the same suit.
                  • -
                  • Q: How do I change the card style or the background color in Spider Solitaire Classic?
                  • -
                  • A: Depending on the source you downloaded from, you may have different options to customize the game appearance. Generally, you can find the settings or options menu on the main screen or in the game menu and choose from various card styles or background colors.
                  • -
                  • Q: What is the difference between Spider Solitaire and Spider Solitaire Classic?
                  • -
                  • A: Spider Solitaire is a generic term for any solitaire game that uses two decks of cards and has similar rules to Spider Solitaire Classic. Spider Solitaire Classic is a specific variation of Spider Solitaire that has one suit, two suits, or four suits difficulty levels and a classic design and gameplay.
                  • -
                  • Q: Is Spider Solitaire Classic good for your brain?
                  • -
                  • A: Yes, Spider Solitaire Classic is good for your brain as it improves your memory, concentration, problem-solving, logic, and strategic thinking skills. It also reduces stress, anxiety, and boredom and boosts your mood, self-esteem, and confidence.
                  • -

                  197e85843d
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/simsantonioii/MusicGen-Continuation/audiocraft/modules/streaming.py b/spaces/simsantonioii/MusicGen-Continuation/audiocraft/modules/streaming.py deleted file mode 100644 index fdbdf5e90fc0c6560873d66bf273460b38e5ed7e..0000000000000000000000000000000000000000 --- a/spaces/simsantonioii/MusicGen-Continuation/audiocraft/modules/streaming.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Streaming module API that should be implemented by all Streaming components, -""" - -from contextlib import contextmanager -import typing as tp -from torch import nn -import torch - - -State = tp.Dict[str, torch.Tensor] - - -class StreamingModule(nn.Module): - """Common API for streaming components. - - Each streaming component has a streaming state, which is just a dict[str, Tensor]. - By convention, the first dim of each tensor must be the batch size. - Don't use dots in the key names, as this would clash with submodules - (like in state_dict). - - If `self._is_streaming` is True, the component should use and remember - the proper state inside `self._streaming_state`. - - To set a streaming component in streaming state, use - - with module.streaming(): - ... - - This will automatically reset the streaming state when exiting the context manager. - This also automatically propagates to all streaming children module. - - Some module might also implement the `StreamingModule.flush` method, although - this one is trickier, as all parents module must be StreamingModule and implement - it as well for it to work properly. See `StreamingSequential` after. - """ - def __init__(self) -> None: - super().__init__() - self._streaming_state: State = {} - self._is_streaming = False - - def _apply_named_streaming(self, fn: tp.Any): - for name, module in self.named_modules(): - if isinstance(module, StreamingModule): - fn(name, module) - - def _set_streaming(self, streaming: bool): - def _set_streaming(name, module): - module._is_streaming = streaming - self._apply_named_streaming(_set_streaming) - - @contextmanager - def streaming(self): - """Context manager to enter streaming mode. Reset streaming state on exit. - """ - self._set_streaming(True) - try: - yield - finally: - self._set_streaming(False) - self.reset_streaming() - - def reset_streaming(self): - """Reset the streaming state. - """ - def _reset(name: str, module: StreamingModule): - module._streaming_state.clear() - - self._apply_named_streaming(_reset) - - def get_streaming_state(self) -> State: - """Return the streaming state, including that of sub-modules. - """ - state: State = {} - - def _add(name: str, module: StreamingModule): - if name: - name += "." - for key, value in module._streaming_state.items(): - state[name + key] = value - - self._apply_named_streaming(_add) - return state - - def set_streaming_state(self, state: State): - """Set the streaming state, including that of sub-modules. - """ - state = dict(state) - - def _set(name: str, module: StreamingModule): - if name: - name += "." - module._streaming_state.clear() - for key, value in list(state.items()): - # complexity is not ideal here, but probably fine. - if key.startswith(name): - local_key = key[len(name):] - if '.' 
not in local_key: - module._streaming_state[local_key] = value - del state[key] - - self._apply_named_streaming(_set) - assert len(state) == 0, list(state.keys()) - - def flush(self, x: tp.Optional[torch.Tensor] = None): - """Flush any remaining outputs that were waiting for completion. - Typically, for convolutions, this will add the final padding - and process the last buffer. - - This should take an optional argument `x`, which will be provided - if a module before this one in the streaming pipeline has already - spitted out a flushed out buffer. - """ - if x is None: - return None - else: - return self(x) - - -class StreamingSequential(StreamingModule, nn.Sequential): - """A streaming compatible alternative of `nn.Sequential`. - """ - def flush(self, x: tp.Optional[torch.Tensor] = None): - for module in self: - if isinstance(module, StreamingModule): - x = module.flush(x) - elif x is not None: - x = module(x) - return x diff --git a/spaces/skf15963/summary/data_utils.py b/spaces/skf15963/summary/data_utils.py deleted file mode 100644 index 879798749bc06d6857c01ec101baf5f3fb61d012..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/data_utils.py +++ /dev/null @@ -1,319 +0,0 @@ -# -*- coding: utf-8 -*- - -import re -import six -import unicodedata -import torch -import rouge -import numpy as np -import random -# from fengshen.examples.pegasus.pegasus_utils import text_segmentate -import sys - -sys.path.append('../../../') - -rouge = rouge.Rouge() - - -is_py2 = six.PY2 - -if not is_py2: - basestring = str - - -def _is_chinese_char(cp): - """Checks whether CP is the codepoint of a CJK character.""" - # This defines a "chinese character" as anything in the CJK Unicode block: - # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) - # - # Note that the CJK Unicode block is NOT all Japanese and Korean characters, - # despite its name. The modern Korean Hangul alphabet is a different block, - # as is Japanese Hiragana and Katakana. Those alphabets are used to write - # space-separated words, so they are not treated specially and handled - # like the all of the other languages. - if ((cp >= 0x4E00 and cp <= 0x9FFF) or (cp >= 0x3400 and cp <= 0x4DBF) - or (cp >= 0x20000 and cp <= 0x2A6DF) - or (cp >= 0x2A700 and cp <= 0x2B73F) - or (cp >= 0x2B740 and cp <= 0x2B81F) - or (cp >= 0x2B820 and cp <= 0x2CEAF) - or (cp >= 0xF900 and cp <= 0xFAFF) - or (cp >= 0x2F800 and cp <= 0x2FA1F)): - return True - - return False - - -def _is_whitespace(char): - """Checks whether `char` is a whitespace character.""" - # \t, \n, and \r are technically control characters but we treat them - # as whitespace since they are generally considered as such. - if char == " " or char == "\t" or char == "\n" or char == "\r": - return True - cat = unicodedata.category(char) - if cat == "Zs": - return True - return False - - -def _is_control(char): - """Checks whether `char` is a control character.""" - # These are technically control characters but we count them as whitespace - # characters. - if char == "\t" or char == "\n" or char == "\r": - return False - cat = unicodedata.category(char) - if cat.startswith("C"): - return True - return False - - -def _is_punctuation(char): - """Checks whether `char` is a punctuation character.""" - cp = ord(char) - # We treat all non-letter/number ASCII as punctuation. - # Characters such as "^", "$", and "`" are not in the Unicode - # Punctuation class but we treat them as punctuation anyways, for - # consistency. 
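    # The ranges tested below are the four blocks of printable ASCII that are
    # neither letters nor digits: 33-47 ("!".."/"), 58-64 (":".."@"),
    # 91-96 ("[".."`") and 123-126 ("{".."~"). Anything outside them falls
    # through to the Unicode category test, so full-width CJK punctuation such
    # as "，" is still classified as punctuation.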
- if (cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or ( - cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126): - return True - cat = unicodedata.category(char) - if cat.startswith("P"): - return True - return False - - -def is_string(s): - """判断是否是字符串 - """ - return isinstance(s, basestring) - - -def is_stopwords(word, stopwords): - if word in stopwords: - return True - else: - return False - - -def text_segmentate(text): - en_seg_pattern = '((?:\\!|\\?|\\.|\\n)+(?:\\s)+)' - ch_seg_pattern = '((?:?|!|。|\\n)+)' - try: - text = re.sub(en_seg_pattern, r'\1[SEP]', text) - # print("sub text: ", text) - except Exception as e: - print("input: ", text) - raise e - text = re.sub(ch_seg_pattern, r'\1[SEP]', text) - # print("sub ch text: ", text) - text_list = text.split("[SEP]") - text_list = list(filter(lambda x: len(x) != 0, text_list)) - return text_list - - -def load_stopwords(stopwords_path): - stopwords_dict = {} - with open(stopwords_path, "r") as rf: - for line in rf: - line = line.strip() - if line not in stopwords_dict: - stopwords_dict[line] = 0 - else: - pass - return stopwords_dict - - -def text_process(text, max_length): - """分割文本 - """ - texts = text_segmentate(text) - - result, length = [], 0 - for text in texts: - if length + len(text) > max_length * 1.3 and len(result) >= 3: - yield result - result, length = [], 0 - result.append(text) - length += len(text) - if result and len(result) >= 3: - yield result - - -def text_process_split_long_content(text, max_length): - """分割长文本 - """ - texts = text_segmentate(text) - - result, sentence_num = "", 0 - for text in texts: - if len(text) > 500: - if len(result) > 300 and sentence_num >= 3: - yield result - result, sentence_num = "", 0 - else: - result, sentence_num = "", 0 - continue - else: - if len(result) + len(text) > max_length * 1.1 and sentence_num >= 3: - yield result - result, sentence_num = "", 0 - result += text - sentence_num += 1 - - if result and sentence_num >= 3: - yield result - - -def gather_join(texts, idxs): - """取出对应的text,然后拼接起来 - """ - return ''.join([texts[i] for i in idxs]) - - -def gather_join_f1(texts_token, idsx): - join_texts = [] - for id in idsx: - join_texts.extend(texts_token[id]) - return join_texts - - -def compute_rouge(source, target): - """计算rouge-1、rouge-2、rouge-l - """ - source, target = ' '.join(source), ' '.join(target) - try: - scores = rouge.get_scores(hyps=source, refs=target) - return { - 'rouge-1': scores[0]['rouge-1']['f'], - 'rouge-2': scores[0]['rouge-2']['f'], - 'rouge-l': scores[0]['rouge-l']['f'], - } - except ValueError: - return { - 'rouge-1': 0.0, - 'rouge-2': 0.0, - 'rouge-l': 0.0, - } - - -def remove_stopwords(texts, stopwords_dict): - for i, text in enumerate(texts): - texts[i] = list(filter(lambda x: x not in stopwords_dict, text)) - return texts - - -def pseudo_summary_f1(texts, - stopwords, - tokenizer, - max_length, - rouge_strategy="rouge-l"): - """构建伪标签摘要数据集 - """ - summary_rate = 0.25 - max_length = max_length - 1 - texts_tokens = [] - sentece_idxs_vec = [] - for text in texts: - if len(texts) == 0: - continue - try: - ids = tokenizer.encode(text.strip())[:-1] - except ValueError: - print("error, input : ", text) - raise ValueError - sentece_idxs_vec.append(ids) - tokens = [tokenizer._convert_id_to_token(token) for token in ids] - texts_tokens.append(tokens) - - texts_tokens_rm = remove_stopwords(texts_tokens, stopwords) - source_idxs, target_idxs = list(range(len(texts))), [] - - assert len(texts_tokens) == len(texts) - # truncate_index = 0 - while True: - sims = [] - 
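        # Greedy step: for every remaining source sentence, tentatively move it
        # into the summary and score the resulting split with ROUGE; the move
        # that maximizes the chosen ROUGE metric is committed below. The loop
        # stops once the summary reaches roughly `summary_rate` of the source
        # length, or only one source sentence remains.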
for i in source_idxs: - new_source_idxs = [j for j in source_idxs if j != i] - new_target_idxs = sorted(target_idxs + [i]) - new_source = gather_join_f1(texts_tokens_rm, new_source_idxs) - new_target = gather_join_f1(texts_tokens_rm, new_target_idxs) - sim = compute_rouge(new_source, new_target)[rouge_strategy] - sims.append(sim) - new_idx = source_idxs[np.argmax(sims)] - del sims - source_idxs.remove(new_idx) - target_idxs = sorted(target_idxs + [new_idx]) - source = gather_join(texts, source_idxs) - target = gather_join(texts, target_idxs) - try: - if (len(source_idxs) == 1 - or 1.0 * len(target) / len(source) > summary_rate): - break - except ZeroDivisionError as e: - print(e.meesage) - print(texts) - print("source: ", source) - print("target: ", target) - - if len(source) < len(target): - source, target = target, source - source_idxs, target_idxs = target_idxs, source_idxs - - return sentece_idxs_vec, source, target, source_idxs, target_idxs - - -def get_input_mask(sentence_id_vec, indexs): - target_idxs = [] - input_idxs = [] - kMaskSentenceTokenId = 2 - kEosTokenId = 1 - mask_sentence_options_cumulative_prob = [0.9, 0.9, 1, 1] - for index in indexs: - target_idxs.extend(sentence_id_vec[index]) - choice = random.uniform(0, 1) - if choice < mask_sentence_options_cumulative_prob[0]: - # print("mask index: ", index) - sentence_id_vec[index] = [kMaskSentenceTokenId] - elif choice < mask_sentence_options_cumulative_prob[1]: - # print("replace index: ", index) - replace_id = random.randint(0, len(sentence_id_vec)) - sentence_id_vec[index] = sentence_id_vec[replace_id] - elif choice < mask_sentence_options_cumulative_prob[2]: - pass - else: - sentence_id_vec[index] = [] - - target_idxs.append(kEosTokenId) - # print(sentence_id_vec) - for index, sentence_id in enumerate(sentence_id_vec): - # print(index, sentence_id) - if len(sentence_id) == 0: - continue - input_idxs.extend(sentence_id_vec[index]) - - input_idxs.append(kEosTokenId) - return input_idxs, target_idxs - - -def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, - decoder_start_token_id: int): - """ - Shift input ids one token to the right. 
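    Typically used to build decoder inputs from the labels for teacher forcing:
    the decoder start token is prepended and the last label is dropped. For
    example (hypothetical ids), labels [[5, 6, 7]] with decoder_start_token_id=0
    become [[0, 5, 6]]; any -100 entries in the labels are replaced by
    pad_token_id.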
- """ - shifted_input_ids = input_ids.new_zeros(input_ids.shape) - shifted_input_ids[:, 1:] = input_ids[:, :-1].clone() - shifted_input_ids[:, 0] = decoder_start_token_id - - if pad_token_id is None: - raise ValueError("self.model.config.pad_token_id has to be defined.") - # replace possible -100 values in labels by `pad_token_id` - shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) - - return shifted_input_ids - - -def padding_to_maxlength(ids, max_length, pad_id): - cur_len = len(ids) - len_diff = max_length - cur_len - return ids + [pad_id] * len_diff, [1] * cur_len + [0] * len_diff diff --git a/spaces/skf15963/summary/fengshen/examples/summary/randeng_t5_70M_summary_predict.sh b/spaces/skf15963/summary/fengshen/examples/summary/randeng_t5_70M_summary_predict.sh deleted file mode 100644 index ccbf410fa92b1d5e09c97d6ae3af7bb4ff121c64..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/summary/randeng_t5_70M_summary_predict.sh +++ /dev/null @@ -1,138 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=randeng_t5_77M_summary_predict -#SBATCH --nodes=1 -#SBATCH --ntasks-per-node=2 -#SBATCH --gres=gpu:2 # number of gpus -#SBATCH --cpus-per-task=30 -#SBATCH -o %x-%j.log - -set -x -e - -echo "START TIME: $(date)" -MODEL_NAME=randeng_t5_77M_summary_predict -MICRO_BATCH_SIZE=16 -ROOT_DIR=/cognitive_comp/ganruyi/experiments/${MODEL_NAME} -if [ ! -d ${ROOT_DIR} ];then - mkdir ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! -fi - -output_save_path=$ROOT_DIR/randeng_t5_77M_predict_lcsts.json -if [ -f ${output_save_path} ];then - echo ${output_save_path} exist, rm it!!!!!!!!!!!!!!!!! - rm ${output_save_path} -fi - -ZERO_STAGE=1 - -config_json="${ROOT_DIR}/ds_config.${MODEL_NAME}.json" - -# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size() -cat < $config_json -{ - "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE}, - "steps_per_print": 100, - "gradient_clipping": 1.0, - "zero_optimization": { - "stage": $ZERO_STAGE, - "contiguous_gradients": false, - "overlap_comm": true, - "reduce_scatter": true, - "reduce_bucket_size": 50000000, - "allgather_bucket_size": 500000000 - }, - "optimizer": { - "type": "Adam", - "params": { - "lr": 1e-4, - "betas": [ - 0.9, - 0.95 - ], - "eps": 1e-8, - "weight_decay": 5e-2 - } - }, - "scheduler": { - "type": "WarmupLR", - "params":{ - "warmup_min_lr": 5e-6, - "warmup_max_lr": 1e-4 - } - }, - "zero_allow_untested_optimizer": false, - "fp16": { - "enabled": true, - "loss_scale": 0, - "loss_scale_window": 1000, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "activation_checkpointing": { - "partition_activations": false, - "contiguous_memory_optimization": false - }, - "wall_clock_breakdown": false -} -EOT - -export PL_DEEPSPEED_CONFIG_PATH=$config_json -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions -export MASTER_PORT=$[RANDOM%10000+50000] - -# --strategy deepspeed_stage_${ZERO_STAGE} \ -TRAINER_ARGS=" - --max_epochs 1 \ - --gpus 2 \ - --num_nodes 1 \ - --strategy ddp \ - --default_root_dir $ROOT_DIR \ - --dirpath $ROOT_DIR/ckpt \ - --save_top_k 3 \ - --monitor train_loss \ - --mode min \ - --save_last \ - --every_n_train_steps 0 \ -" -DATA_DIR=/cognitive_comp/ganruyi/data_datasets_LCSTS_LCSTS/ -prompt="summary:" -DATA_ARGS=" - --datasets_name lcsts \ - --num_workers 30 \ - --train_batchsize $MICRO_BATCH_SIZE \ - --val_batchsize $MICRO_BATCH_SIZE \ - --test_batchsize $MICRO_BATCH_SIZE \ - --max_enc_length 128 \ - 
--max_dec_length 64 \ - --val_datasets_field val \ - --prompt $prompt \ -" -# --prompt $prompt \ -# --pretrained_model_path /cognitive_comp/ganruyi/experiments/randeng_t5_77M_summary/ckpt/hf_pretrained_epoch1_step75019 \ - -MODEL_ARGS=" - --pretrained_model_path /cognitive_comp/gaoxinyu/pretrained_model/bart-759M \ - --output_save_path $ROOT_DIR/randeng_t5_77M_predict_lcsts.json \ - --learning_rate 1e-4 \ - --weight_decay 0.1 \ - --precision 16 \ - --warmup 0.01 \ - --do_eval_only \ - --max_dec_length 32 \ -" - -SCRIPTS_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/summary/seq2seq_summary.py -SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif - -export CMD=" \ - $SCRIPTS_PATH \ - $TRAINER_ARGS \ - $MODEL_ARGS \ - $DATA_ARGS \ - " -echo $CMD -source activate base -# srun singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH bash -c '/home/ganruyi/anaconda3/bin/python $CMD' -python $CMD \ No newline at end of file diff --git a/spaces/smangrul/peft-lora-sd-dreambooth/train_dreambooth.py b/spaces/smangrul/peft-lora-sd-dreambooth/train_dreambooth.py deleted file mode 100644 index 2f5312390975e9aefd0fc8617af3cffeded12fcb..0000000000000000000000000000000000000000 --- a/spaces/smangrul/peft-lora-sd-dreambooth/train_dreambooth.py +++ /dev/null @@ -1,1005 +0,0 @@ -import argparse -import gc -import hashlib -import itertools -import json -import logging -import math -import os -import threading -import warnings -from pathlib import Path -from typing import Optional - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from torch.utils.data import Dataset -from transformers import AutoTokenizer, PretrainedConfig - -import datasets -import diffusers -import psutil -from diffusers import AutoencoderKL, DDPMScheduler, DiffusionPipeline, UNet2DConditionModel -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version -from diffusers.utils.import_utils import is_xformers_available -from huggingface_hub import HfFolder, Repository, whoami -from peft import LoraConfig, LoraModel, get_peft_model_state_dict -from PIL import Image -from torchvision import transforms -from tqdm.auto import tqdm - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. 
-check_min_version("0.10.0.dev0") - -logger = get_logger(__name__) - -UNET_TARGET_MODULES = ["to_q", "to_v", "query", "value"] # , "ff.net.0.proj"] -TEXT_ENCODER_TARGET_MODULES = ["q_proj", "v_proj"] - - -def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str, revision: str): - text_encoder_config = PretrainedConfig.from_pretrained( - pretrained_model_name_or_path, - subfolder="text_encoder", - revision=revision, - ) - model_class = text_encoder_config.architectures[0] - - if model_class == "CLIPTextModel": - from transformers import CLIPTextModel - - return CLIPTextModel - elif model_class == "RobertaSeriesModelWithTransformation": - from diffusers.pipelines.alt_diffusion.modeling_roberta_series import RobertaSeriesModelWithTransformation - - return RobertaSeriesModelWithTransformation - else: - raise ValueError(f"{model_class} is not supported.") - - -def parse_args(input_args=None): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - required=True, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default=None, - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If there are not enough images already present in" - " class_data_dir, additional images will be sampled with class_prompt." 
- ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution" - ) - parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder") - - # lora args - parser.add_argument("--use_lora", action="store_true", help="Whether to use Lora for parameter efficient tuning") - parser.add_argument("--lora_r", type=int, default=8, help="Lora rank, only used if use_lora is True") - parser.add_argument("--lora_alpha", type=int, default=32, help="Lora alpha, only used if use_lora is True") - parser.add_argument("--lora_dropout", type=float, default=0.0, help="Lora dropout, only used if use_lora is True") - parser.add_argument( - "--lora_bias", - type=str, - default="none", - help="Bias type for Lora. Can be 'none', 'all' or 'lora_only', only used if use_lora is True", - ) - parser.add_argument( - "--lora_text_encoder_r", - type=int, - default=8, - help="Lora rank for text encoder, only used if `use_lora` and `train_text_encoder` are True", - ) - parser.add_argument( - "--lora_text_encoder_alpha", - type=int, - default=32, - help="Lora alpha for text encoder, only used if `use_lora` and `train_text_encoder` are True", - ) - parser.add_argument( - "--lora_text_encoder_dropout", - type=float, - default=0.0, - help="Lora dropout for text encoder, only used if `use_lora` and `train_text_encoder` are True", - ) - parser.add_argument( - "--lora_text_encoder_bias", - type=str, - default="none", - help="Bias type for Lora. Can be 'none', 'all' or 'lora_only', only used if use_lora and `train_text_encoder` are True", - ) - - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final" - " checkpoints in case they are better than the last checkpoint, and are also suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' 
- ), - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-6, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--lr_num_cycles", - type=int, - default=1, - help="Number of hard resets of the lr in cosine_with_restarts scheduler.", - ) - parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.") - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. 
Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--prior_generation_precision", - type=str, - default=None, - choices=["no", "fp32", "fp16", "bf16"], - help=( - "Choose prior generation precision between fp32, fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to fp16 if a GPU is available else fp32." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." - ) - - if input_args is not None: - args = parser.parse_args(input_args) - else: - args = parser.parse_args() - - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.with_prior_preservation: - if args.class_data_dir is None: - raise ValueError("You must specify a data directory for class images.") - if args.class_prompt is None: - raise ValueError("You must specify prompt for class images.") - else: - # logger is not available yet - if args.class_data_dir is not None: - warnings.warn("You need not use --class_data_dir without --with_prior_preservation.") - if args.class_prompt is not None: - warnings.warn("You need not use --class_prompt without --with_prior_preservation.") - - return args - - -# Converting Bytes to Megabytes -def b2mb(x): - return int(x / 2**20) - - -# This context manager is used to track the peak memory usage of the process -class TorchTracemalloc: - def __enter__(self): - gc.collect() - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() # reset the peak gauge to zero - self.begin = torch.cuda.memory_allocated() - self.process = psutil.Process() - - self.cpu_begin = self.cpu_mem_used() - self.peak_monitoring = True - peak_monitor_thread = threading.Thread(target=self.peak_monitor_func) - peak_monitor_thread.daemon = True - peak_monitor_thread.start() - return self - - def cpu_mem_used(self): - """get resident set size memory for the current process""" - return self.process.memory_info().rss - - def peak_monitor_func(self): - self.cpu_peak = -1 - - while True: - self.cpu_peak = max(self.cpu_mem_used(), self.cpu_peak) - - # can't sleep or will not catch the peak right (this comment is here on purpose) - # time.sleep(0.001) # 1msec - - if not self.peak_monitoring: - break - - def __exit__(self, *exc): - self.peak_monitoring = False - - gc.collect() - torch.cuda.empty_cache() - self.end = torch.cuda.memory_allocated() - self.peak = torch.cuda.max_memory_allocated() - self.used = b2mb(self.end - self.begin) - self.peaked = b2mb(self.peak - self.begin) - - self.cpu_end = self.cpu_mem_used() - self.cpu_used = b2mb(self.cpu_end - self.cpu_begin) - self.cpu_peaked = b2mb(self.cpu_peak - self.cpu_begin) - # print(f"delta used/peak {self.used:4d}/{self.peaked:4d}") - - -def print_trainable_parameters(model): - """ - Prints the number of trainable parameters in the model. 
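    Example output (illustrative numbers only):
    trainable params: 797184 || all params: 859520964 || trainable%: 0.0927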
- """ - trainable_params = 0 - all_param = 0 - for _, param in model.named_parameters(): - all_param += param.numel() - if param.requires_grad: - trainable_params += param.numel() - print( - f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}" - ) - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. - """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - instance_image = Image.open(self.instance_images_path[index % self.num_instance_images]) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - self.instance_prompt, - truncation=True, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - truncation=True, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - return example - - -def collate_fn(examples, with_prior_preservation=False): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. 
- if with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = torch.cat(input_ids, dim=0) - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." - - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - - -def main(args): - logging_dir = Path(args.output_dir, args.logging_dir) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - logging_dir=logging_dir, - ) - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1: - raise ValueError( - "Gradient accumulation is not supported when training the text encoder in distributed training. " - "Please set gradient_accumulation_steps to 1. This feature will be supported in the future." - ) - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Generate class images if prior preservation is enabled. 
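    # If class_data_dir holds fewer than num_class_images images, the base
    # diffusion pipeline is loaded once, used to sample the missing class images
    # from class_prompt, and then deleted to free GPU memory before training.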
- if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - if args.prior_generation_precision == "fp32": - torch_dtype = torch.float32 - elif args.prior_generation_precision == "fp16": - torch_dtype = torch.float16 - elif args.prior_generation_precision == "bf16": - torch_dtype = torch.bfloat16 - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - torch_dtype=torch_dtype, - safety_checker=None, - revision=args.revision, - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - hash_image = hashlib.sha1(image.tobytes()).hexdigest() - image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" - image.save(image_filename) - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - repo = Repository(args.output_dir, clone_from=repo_name) # noqa: F841 - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) - elif args.pretrained_model_name_or_path: - tokenizer = AutoTokenizer.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="tokenizer", - revision=args.revision, - use_fast=False, - ) - - # import correct text encoder class - text_encoder_cls = import_model_class_from_model_name_or_path(args.pretrained_model_name_or_path, args.revision) - - # Load scheduler and models - noise_scheduler = DDPMScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - num_train_timesteps=1000, - ) # DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - text_encoder = text_encoder_cls.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - - if args.use_lora: - config = LoraConfig( - r=args.lora_r, - lora_alpha=args.lora_alpha, - 
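            # r is the rank of the low-rank update matrices injected into each
            # target module; in the standard LoRA formulation the learned update
            # is scaled by lora_alpha / r before being added to the frozen weight.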
target_modules=UNET_TARGET_MODULES, - lora_dropout=args.lora_dropout, - bias=args.lora_bias, - ) - unet = LoraModel(config, unet) - print_trainable_parameters(unet) - print(unet) - - vae.requires_grad_(False) - if not args.train_text_encoder: - text_encoder.requires_grad_(False) - elif args.train_text_encoder and args.use_lora: - config = LoraConfig( - r=args.lora_text_encoder_r, - lora_alpha=args.lora_text_encoder_alpha, - target_modules=TEXT_ENCODER_TARGET_MODULES, - lora_dropout=args.lora_text_encoder_dropout, - bias=args.lora_text_encoder_bias, - ) - text_encoder = LoraModel(config, text_encoder) - print_trainable_parameters(text_encoder) - print(text_encoder) - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - # below fails when using lora so commenting it out - if args.train_text_encoder and not args.use_lora: - text_encoder.gradient_checkpointing_enable() - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." - ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - # Optimizer creation - params_to_optimize = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters() - ) - optimizer = optimizer_class( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Dataset and DataLoaders creation: - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - ) - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - batch_size=args.train_batch_size, - shuffle=True, - collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), - num_workers=1, - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - num_cycles=args.lr_num_cycles, - power=args.lr_power, - ) - - # Prepare everything with our `accelerator`. 
- if args.train_text_encoder: - unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, text_encoder, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move vae and text_encoder to device and cast to weight_dtype - vae.to(accelerator.device, dtype=weight_dtype) - if not args.train_text_encoder: - text_encoder.to(accelerator.device, dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the mos recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = resume_global_step // num_update_steps_per_epoch - resume_step = resume_global_step % num_update_steps_per_epoch - - # Only show the progress bar once on each machine. 
- progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - if args.train_text_encoder: - text_encoder.train() - with TorchTracemalloc() as tracemalloc: - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample() - latents = latents * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint( - 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device - ) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. - model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. 
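                    # Total objective: MSE on the instance half of the batch plus
                    # prior_loss_weight times the MSE on the class (prior) half,
                    # which discourages the model from forgetting the generic class.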
- loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - # if global_step % args.checkpointing_steps == 0: - # if accelerator.is_main_process: - # save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - # accelerator.save_state(save_path) - # logger.info(f"Saved state to {save_path}") - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - # Printing the GPU memory usage details such as allocated memory, peak memory, and total memory usage - accelerator.print("GPU Memory before entering the train : {}".format(b2mb(tracemalloc.begin))) - accelerator.print("GPU Memory consumed at the end of the train (end-begin): {}".format(tracemalloc.used)) - accelerator.print("GPU Peak Memory consumed during the train (max-begin): {}".format(tracemalloc.peaked)) - accelerator.print( - "GPU Total Peak Memory consumed during the train (max): {}".format( - tracemalloc.peaked + b2mb(tracemalloc.begin) - ) - ) - - accelerator.print("CPU Memory before entering the train : {}".format(b2mb(tracemalloc.cpu_begin))) - accelerator.print("CPU Memory consumed at the end of the train (end-begin): {}".format(tracemalloc.cpu_used)) - accelerator.print("CPU Peak Memory consumed during the train (max-begin): {}".format(tracemalloc.cpu_peaked)) - accelerator.print( - "CPU Total Peak Memory consumed during the train (max): {}".format( - tracemalloc.cpu_peaked + b2mb(tracemalloc.cpu_begin) - ) - ) - - # Create the pipeline using using the trained modules and save it. 
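    # When LoRA is used, only the adapter weights and their config are written
    # (<instance_prompt>_lora.pt plus a JSON with the peft config); otherwise the
    # full DiffusionPipeline is reassembled from the trained modules and saved
    # with save_pretrained.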
- accelerator.wait_for_everyone() - if accelerator.is_main_process: - if args.use_lora: - lora_config = {} - state_dict = get_peft_model_state_dict(unet, state_dict=accelerator.get_state_dict(unet)) - lora_config["peft_config"] = unet.get_peft_config_as_dict(inference=True) - if args.train_text_encoder: - text_encoder_state_dict = get_peft_model_state_dict( - text_encoder, state_dict=accelerator.get_state_dict(text_encoder) - ) - text_encoder_state_dict = {f"text_encoder_{k}": v for k, v in text_encoder_state_dict.items()} - state_dict.update(text_encoder_state_dict) - lora_config["text_encoder_peft_config"] = text_encoder.get_peft_config_as_dict(inference=True) - - accelerator.print(state_dict) - accelerator.save(state_dict, os.path.join(args.output_dir, f"{args.instance_prompt}_lora.pt")) - with open(os.path.join(args.output_dir, f"{args.instance_prompt}_lora_config.json"), "w") as f: - json.dump(lora_config, f) - else: - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - revision=args.revision, - ) - pipeline.save_pretrained(args.output_dir) - - accelerator.end_training() - - -if __name__ == "__main__": - args = parse_args() - main(args) diff --git a/spaces/smith2020/WhatsApp-chat-analysis-summary/helper.py b/spaces/smith2020/WhatsApp-chat-analysis-summary/helper.py deleted file mode 100644 index 533bc03cb4d61c0fc21223bbd759355a472d06d7..0000000000000000000000000000000000000000 --- a/spaces/smith2020/WhatsApp-chat-analysis-summary/helper.py +++ /dev/null @@ -1,180 +0,0 @@ -from urlextract import URLExtract -import pandas as pd -from collections import Counter -ex=URLExtract() -from wordcloud import WordCloud, STOPWORDS -import emoji - - - - -def fetch_stats(selected_user,df): - if selected_user != "Over All": - df=df[df["user"] == selected_user] - - num_meassage = df.shape[0] - v = [] - for i in df["message"]: - v.extend(i.split()) - - #num of media - media= df[df["message"]=="\n"].shape[0] - - # for links - links = [] - for i in df["message"]: - links.extend(ex.find_urls(i)) - - return num_meassage,len(v),media,len(links) - -#Most Busy Users -def m_b_u(df): - x=df["user"].value_counts().head() - # Most Busy Users Presentage - dl = round((df["user"].value_counts() / df.shape[0]) * 100, 2).reset_index().rename( - columns={"index": "name", "user": "presentage"}) - return x,dl - - -#creating wordcloud - -def create_wordcloud(selected_user,df): - if selected_user != "Over All": - df=df[df["user"] == selected_user] - f = open('stop_hinglish.txt','r') - stop_words = f.read() - - temp = df[df['user'] != 'group_notification'] - temp = temp[temp['message'] != '\n'] - - def remove_stop_words(message): - y = [] - for word in message.lower().split(): - if word not in stop_words: - y.append(word) - return " ".join(y) - - wc = WordCloud(width=500, height=500, min_font_size=10, background_color='white') - temp['message'] = temp['message'].apply(remove_stop_words) - df_wc = wc.generate(temp['message'].str.cat(sep=" ")) - return df_wc - - -def most_common_words(selected_user,df): - if selected_user != "Over All": - df=df[df["user"] == selected_user] - f = open('stop_hinglish.txt','r') - stop_words = f.read() - - - - temp = df[df['user'] != 'group_notification'] - temp = temp[temp['message'] != '\n'] - - words = [] - for message in temp['message']: - for word in message.lower().split(): - if word not in stop_words: - words.append(word) - - most_common_df = 
pd.DataFrame(Counter(words).most_common(20)) - return most_common_df - - - - -def emoji_helper(selected_user,df): - if selected_user != "Over All": - df=df[df["user"] == selected_user] - - emojis = [] - for message in df['message']: - emojis.extend([c for c in message if c in emoji.EMOJI_DATA]) - - emoji_df = pd.DataFrame(Counter(emojis).most_common(len(Counter(emojis)))) - - return emoji_df - - - - -def time_line(selected_user,df): - if selected_user != "Over All": - df=df[df["user"] == selected_user] - - time_line = df.groupby(["year", "month"]).count()["message"].reset_index() - t = [] - for i in range(time_line.shape[0]): - t.append(time_line["month"][i] + "- " + str(time_line["year"][i])) - - time_line["time_year"] = t - - return time_line - -def daily_timeline(selected_user, df): - if selected_user != "Over All": - df = df[df["user"] == selected_user] - - daily_timeline = df.groupby('only_date').count()['message'].reset_index() - - return daily_timeline - - - -def week_activity_map(selected_user, df): - if selected_user != "Over All": - df = df[df["user"] == selected_user] - - return df['day_name'].value_counts() - - -def month_activity_map(selected_user, df): - if selected_user != "Over All": - df = df[df["user"] == selected_user] - - return df['month'].value_counts() - - - - - -def activity_heatmap(selected_user, df): - if selected_user != "Over All": - df = df[df["user"] == selected_user] - - user_heatmap = df.pivot_table(index='day_name', columns='period', values='message', aggfunc='count').fillna(0) - - return user_heatmap - - -# date to the message - -from urlextract import URLExtract - - -def d_message(selected_user, df): - if selected_user != "Over All": - df = df[df["user"] == selected_user] - df = df.groupby('user') - df = df.get_group(selected_user) - - import datetime - Previous_Date = datetime.datetime.today() - datetime.timedelta(days=1) - - now = Previous_Date - now = str(now) - now = now[:10] - - c = URLExtract() # object - #filtered_df = df.loc[(df['date'] == now)] - filtered_df = df.loc[(df['date'] >= '2023-01-27') - & (df['date'] < '2023-01-30')] - d = [] - for i in filtered_df["message"]: - if c.find_urls(i) or i == '\n' or i == 'This message was deleted\n': - continue - " ".join(i) - d.append(i[0:-1]) - if selected_user == "Over All": - d = " ".join(d) - return d \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/En Perseguirme Mundo Que Interesas Figuras Literarias _HOT_.md b/spaces/stomexserde/gpt4-ui/Examples/En Perseguirme Mundo Que Interesas Figuras Literarias _HOT_.md deleted file mode 100644 index 8c595996c45b364da162712e97b3dccc3488106c..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/En Perseguirme Mundo Que Interesas Figuras Literarias _HOT_.md +++ /dev/null @@ -1,23 +0,0 @@ -
                  -Here is a possible title and article with HTML formatting for the keyword "En Perseguirme Mundo Que Interesas Figuras Literarias": - -

The use of literary devices in the sonnet "En perseguirme mundo ¿qué interesas?" by Sor Juana Inés de la Cruz

                  -

Sor Juana Inés de la Cruz was one of the most distinguished poets of the Spanish Golden Age and one of the first defenders of women's rights. Her work spans several genres, including lyric poetry, theatre, prose, and the epistle. Among her most famous poems is the sonnet "En perseguirme mundo ¿qué interesas?", in which she expresses her rejection of the criticism and persecution she suffered for devoting herself to study and to the cultivation of her intellect.

                  -

This sonnet is an example of conceptismo, a literary movement characterized by the use of ingenious wordplay, paradox, antithesis, and chiasmus (retruécano). These literary devices aim to surprise the reader and to convey deep, complex ideas with brevity and clarity. Let us look at some examples of these devices in Sor Juana's poem:

                  -

                  -
                    -
• Paradox: joining two apparently contradictory ideas to express a truth. For example, in the line "poner bellezas en mi entendimiento" ("to place beauty in my understanding"), Sor Juana sets the concepts of beauty and understanding against each other, normally associated with the physical and the intellectual respectively, to show that she seeks to adorn her mind with knowledge rather than her body with ornaments.
                  • -
• Antithesis: setting two opposing ideas against each other to create contrast. For example, in the line "que no mi entendimiento en las bellezas" ("and not my understanding in beauty"), Sor Juana opposes her understanding (her reason) to beauty (appearances), indicating that she is not deceived by the superficial and prefers what is profound.
                  • -
    • Chiasmus (retruécano): repeating a phrase while reversing the order of its words so as to change its meaning. For example, in the lines "poner riquezas en mi pensamiento / que no mi pensamiento en las riquezas," Sor Juana inverts the order of the words riquezas (riches) and pensamiento (thought) to show that she values knowledge more than money.
    
                  • -
                  -

    These literary figures allow Sor Juana to express her rebellion against a world that judges and condemns her for being a woman and for being learned. Through her poetry she demonstrates her talent, her erudition, and her independence, and she becomes a forerunning voice of feminism and intellectual freedom.
    

    

    Beyond its literary figures, another important aspect of Sor Juana Inés de la Cruz's sonnet is the historical and biographical context in which it was written. Sor Juana Inés de la Cruz was born in 1648 at the hacienda of San Miguel Nepantla, in the present-day State of Mexico. She was the natural daughter of a criolla mother and a Spanish father, and from early childhood she showed great intelligence and curiosity for learning. She learned to read and write at the age of three, and at eight she wrote her first loa. In 1659 she moved with her family to Mexico City, where she served as lady-in-waiting to the vicereine Leonor Carreto, wife of the viceroy Antonio Sebastián de Toledo. At the viceregal court she dazzled with her erudition and her poetry, and she was sponsored by the Marquises of Mancera.
    

                  -

    In 1667 she entered a convent of the Discalced Carmelites, but left it shortly afterward because of health problems. Two years later she entered another convent, that of the Hieronymite nuns of San Jerónimo, where she remained until her death in 1695. There she turned her cell into a true cultural center, assembling a large library, carrying out scientific experiments, composing music, and writing works in several genres. She also received visits from poets and intellectuals such as Carlos de Sigüenza y Góngora, and maintained a close friendship with the new vicereine, Luisa Manrique de Lara, Countess of Paredes.
    

                  -

    Her life, however, was not free of difficulties and conflicts. Her thirst for knowledge and her freedom of expression clashed with the prejudices and pressures of a patriarchal, religious society that would not tolerate a woman devoting herself to study and writing. Sor Juana Inés had to face criticism and censure from clergymen who accused her of pride and immodesty. Chief among them was the bishop of Puebla, Manuel Fernández de Santa Cruz, who published without her permission a letter of hers criticizing a sermon by the Portuguese Jesuit Antonio Vieira. The bishop added a letter of his own under the pseudonym Sor Filotea de la Cruz, advising her to abandon her theological studies and devote herself to monastic life.
    

                  -

    Sor Juana Inés replied with an admirable letter, known as the Respuesta a Sor Filotea de la Cruz, in which she defended her right to knowledge and critical thought. In it she laid out her reasons for entering the convent, her passion for learning since childhood, her admiration for the learned women of history, and her rejection of marriage. She also argued that study was a means of drawing closer to God and that no divine or human law forbade women from cultivating their intelligence. The letter is an exceptional document that reveals Sor Juana Inés's personality and talent, as well as her courage in confronting the established powers.
    

                  -

    After this letter, Sor Juana Inés suffered harsh repression from the clergy. She had to sell her library and her scientific instruments and give up her intellectual activities. She then devoted herself to the convent's ordinary duties, such as caring for the sick during a cholera epidemic that struck the city in 1695. It was in this way that she contracted the disease and died on April 17 of that year.
    

                  -

                  -

    Her work remained silenced for a long time, until it was rediscovered in the nineteenth century by some
    

                  7196e7f11a
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/studiobrn/SplitTrack/audiocraft/utils/utils.py b/spaces/studiobrn/SplitTrack/audiocraft/utils/utils.py deleted file mode 100644 index 86e1448d065fa182ca69aae00d2f2a7eea55d8a4..0000000000000000000000000000000000000000 --- a/spaces/studiobrn/SplitTrack/audiocraft/utils/utils.py +++ /dev/null @@ -1,234 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from concurrent.futures import ProcessPoolExecutor -from functools import wraps -import hashlib -import logging -import typing as tp - -import flashy -import flashy.distrib -import omegaconf -import torch -from torch.nn.utils.rnn import pad_sequence - - -logger = logging.getLogger(__name__) - - -def dict_from_config(cfg: omegaconf.DictConfig) -> dict: - """Convenience function to map an omegaconf configuration to a dictionary. - - Args: - cfg (omegaconf.DictConfig): Original configuration to map to dict. - Returns: - dict: Config as dictionary object. - """ - dct = omegaconf.OmegaConf.to_container(cfg, resolve=True) - assert isinstance(dct, dict) - return dct - - -def random_subset(dataset, max_samples: int, seed: int = 42) -> torch.utils.data.Subset: - if max_samples >= len(dataset): - return dataset - - generator = torch.Generator().manual_seed(seed) - perm = torch.randperm(len(dataset), generator=generator) - return torch.utils.data.Subset(dataset, perm[:max_samples].tolist()) - - -def get_loader(dataset, num_samples: tp.Optional[int], batch_size: int, - num_workers: int, seed: int, **kwargs) -> torch.utils.data.DataLoader: - """Convenience function to load dataset into a dataloader with optional subset sampling. - - Args: - dataset: Dataset to load. - num_samples (Optional[int]): Number of samples to limit subset size. - batch_size (int): Batch size. - num_workers (int): Number of workers for data loading. - seed (int): Random seed. - """ - if num_samples is not None: - dataset = random_subset(dataset, num_samples, seed) - - dataloader = flashy.distrib.loader( - dataset, - batch_size=batch_size, - num_workers=num_workers, - **kwargs - ) - return dataloader - - -def get_dataset_from_loader(dataloader): - dataset = dataloader.dataset - if isinstance(dataset, torch.utils.data.Subset): - return dataset.dataset - else: - return dataset - - -def multinomial(input: torch.Tensor, num_samples: int, replacement=False, *, generator=None): - """torch.multinomial with arbitrary number of dimensions, and number of candidates on the last dimension. - - Args: - input (torch.Tensor): The input tensor containing probabilities. - num_samples (int): Number of samples to draw. - replacement (bool): Whether to draw with replacement or not. - Keywords args: - generator (torch.Generator): A pseudorandom number generator for sampling. - Returns: - torch.Tensor: Last dimension contains num_samples indices - sampled from the multinomial probability distribution - located in the last dimension of tensor input. - """ - input_ = input.reshape(-1, input.shape[-1]) - output_ = torch.multinomial(input_, num_samples=num_samples, replacement=replacement, generator=generator) - output = output_.reshape(*list(input.shape[:-1]), -1) - return output - - -def sample_top_k(probs: torch.Tensor, k: int) -> torch.Tensor: - """Sample next token from top K values along the last dimension of the input probs tensor. 
- - Args: - probs (torch.Tensor): Input probabilities with token candidates on the last dimension. - k (int): The k in “top-k”. - Returns: - torch.Tensor: Sampled tokens. - """ - top_k_value, _ = torch.topk(probs, k, dim=-1) - min_value_top_k = top_k_value[..., [-1]] - probs *= (probs >= min_value_top_k).float() - probs.div_(probs.sum(dim=-1, keepdim=True)) - next_token = multinomial(probs, num_samples=1) - return next_token - - -def sample_top_p(probs: torch.Tensor, p: float) -> torch.Tensor: - """Sample next token from top P probabilities along the last dimension of the input probs tensor. - - Args: - probs (torch.Tensor): Input probabilities with token candidates on the last dimension. - p (int): The p in “top-p”. - Returns: - torch.Tensor: Sampled tokens. - """ - probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True) - probs_sum = torch.cumsum(probs_sort, dim=-1) - mask = probs_sum - probs_sort > p - probs_sort *= (~mask).float() - probs_sort.div_(probs_sort.sum(dim=-1, keepdim=True)) - next_token = multinomial(probs_sort, num_samples=1) - next_token = torch.gather(probs_idx, -1, next_token) - return next_token - - -class DummyPoolExecutor: - """Dummy pool executor to use when we actually have only 1 worker. - (e.g. instead of ProcessPoolExecutor). - """ - class DummyResult: - def __init__(self, func, *args, **kwargs): - self.func = func - self.args = args - self.kwargs = kwargs - - def result(self): - return self.func(*self.args, **self.kwargs) - - def __init__(self, workers, mp_context=None): - pass - - def submit(self, func, *args, **kwargs): - return DummyPoolExecutor.DummyResult(func, *args, **kwargs) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, exc_tb): - return - - -def get_pool_executor(num_workers: int, mp_context=None): - return ProcessPoolExecutor(num_workers, mp_context) if num_workers > 1 else DummyPoolExecutor(1) - - -def length_to_mask(lengths: torch.Tensor, max_len: tp.Optional[int] = None) -> torch.Tensor: - """Utility function to convert a tensor of sequence lengths to a mask (useful when working on padded sequences). - For example: [3, 5] => [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]] - - Args: - lengths (torch.Tensor): tensor with lengths - max_len (int): can set the max length manually. Defaults to None. - Returns: - torch.Tensor: mask with 0s where there is pad tokens else 1s - """ - assert len(lengths.shape) == 1, "Length shape should be 1 dimensional." - final_length = lengths.max().item() if not max_len else max_len - final_length = max(final_length, 1) # if all seqs are of len zero we don't want a zero-size tensor - return torch.arange(final_length)[None, :].to(lengths.device) < lengths[:, None] - - -def hash_trick(word: str, vocab_size: int) -> int: - """Hash trick to pair each word with an index - - Args: - word (str): word we wish to convert to an index - vocab_size (int): size of the vocabulary - Returns: - int: index of the word in the embedding LUT - """ - hash = int(hashlib.sha256(word.encode("utf-8")).hexdigest(), 16) - return hash % vocab_size - - -def with_rank_rng(base_seed: int = 1234): - """Decorator for a function so that the function will use a Random Number Generator - whose state depend on the GPU rank. The original RNG state is restored upon returning. - - Args: - base_seed (int): Random seed. 
- """ - def _decorator(fun: tp.Callable): - @wraps(fun) - def _decorated(*args, **kwargs): - state = torch.get_rng_state() - seed = base_seed ^ flashy.distrib.rank() - torch.manual_seed(seed) - logger.debug('Rank dependent seed set to %d', seed) - try: - return fun(*args, **kwargs) - finally: - torch.set_rng_state(state) - logger.debug('RNG state restored.') - return _decorated - return _decorator - - -def collate(tensors: tp.List[torch.Tensor], dim: int = 0) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Get a list of tensors and collate them to a single tensor. according to the following logic: - - `dim` specifies the time dimension which will be stacked and padded. - - The output will contain 1 new dimension (dimension index 0) which will be the size of - of the original list. - - Args: - tensors (tp.List[torch.Tensor]): List of tensors to collate. - dim (int): Dimension which will be stacked and padded. - Returns: - tp.Tuple[torch.Tensor, torch.Tensor]: - torch.Tensor: Stacked and padded tensor. The output will contain 1 new dimension - (dimension index 0) which will be the size of the original list. - torch.Tensor: Tensor containing length of original tensor sizes (without padding). - """ - tensors = [x.transpose(0, dim) for x in tensors] - lens = torch.LongTensor([len(x) for x in tensors]) - padded_tensors = pad_sequence(tensors) - padded_tensors = padded_tensors.transpose(0, 1) - padded_tensors = padded_tensors.transpose(1, dim + 1) - return padded_tensors, lens diff --git a/spaces/sub314xxl/image-server-1/app.py b/spaces/sub314xxl/image-server-1/app.py deleted file mode 100644 index 1301e8b73fc23788623e8a8018bcdbaa0eb5e58d..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/image-server-1/app.py +++ /dev/null @@ -1,301 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os -import random - -import gradio as gr -import numpy as np -import PIL.Image -import torch -from diffusers import DiffusionPipeline - -DESCRIPTION = '# SD-XL' -if not torch.cuda.is_available(): - DESCRIPTION += '\n

                  Running on CPU 🥶 This demo does not work on CPU.

                  ' - -MAX_SEED = np.iinfo(np.int32).max -CACHE_EXAMPLES = torch.cuda.is_available() and os.getenv( - 'CACHE_EXAMPLES') == '1' -MAX_IMAGE_SIZE = int(os.getenv('MAX_IMAGE_SIZE', '1024')) -USE_TORCH_COMPILE = os.getenv('USE_TORCH_COMPILE') == '1' -ENABLE_CPU_OFFLOAD = os.getenv('ENABLE_CPU_OFFLOAD') == '1' - -device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') -if torch.cuda.is_available(): - pipe = DiffusionPipeline.from_pretrained( - 'stabilityai/stable-diffusion-xl-base-1.0', - torch_dtype=torch.float16, - use_safetensors=True, - variant='fp16') - refiner = DiffusionPipeline.from_pretrained( - 'stabilityai/stable-diffusion-xl-refiner-1.0', - torch_dtype=torch.float16, - use_safetensors=True, - variant='fp16') - - if ENABLE_CPU_OFFLOAD: - pipe.enable_model_cpu_offload() - refiner.enable_model_cpu_offload() - else: - pipe.to(device) - refiner.to(device) - - if USE_TORCH_COMPILE: - pipe.unet = torch.compile(pipe.unet, - mode='reduce-overhead', - fullgraph=True) -else: - pipe = None - refiner = None - - -def randomize_seed_fn(seed: int, randomize_seed: bool) -> int: - if randomize_seed: - seed = random.randint(0, MAX_SEED) - return seed - - -def generate(prompt: str, - negative_prompt: str = '', - prompt_2: str = '', - negative_prompt_2: str = '', - use_negative_prompt: bool = False, - use_prompt_2: bool = False, - use_negative_prompt_2: bool = False, - seed: int = 0, - width: int = 1024, - height: int = 1024, - guidance_scale_base: float = 5.0, - guidance_scale_refiner: float = 5.0, - num_inference_steps_base: int = 50, - num_inference_steps_refiner: int = 50, - apply_refiner: bool = False) -> PIL.Image.Image: - generator = torch.Generator().manual_seed(seed) - - if not use_negative_prompt: - negative_prompt = None # type: ignore - if not use_prompt_2: - prompt_2 = None # type: ignore - if not use_negative_prompt_2: - negative_prompt_2 = None # type: ignore - - if not apply_refiner: - return pipe(prompt=prompt, - negative_prompt=negative_prompt, - prompt_2=prompt_2, - negative_prompt_2=negative_prompt_2, - width=width, - height=height, - guidance_scale=guidance_scale_base, - num_inference_steps=num_inference_steps_base, - generator=generator, - output_type='pil').images[0] - else: - latents = pipe(prompt=prompt, - negative_prompt=negative_prompt, - prompt_2=prompt_2, - negative_prompt_2=negative_prompt_2, - width=width, - height=height, - guidance_scale=guidance_scale_base, - num_inference_steps=num_inference_steps_base, - generator=generator, - output_type='latent').images - image = refiner(prompt=prompt, - negative_prompt=negative_prompt, - prompt_2=prompt_2, - negative_prompt_2=negative_prompt_2, - guidance_scale=guidance_scale_refiner, - num_inference_steps=num_inference_steps_refiner, - image=latents, - generator=generator).images[0] - return image - - -examples = [ - 'Astronaut in a jungle, cold color palette, muted colors, detailed, 8k', - 'An astronaut riding a green horse', -] - -with gr.Blocks(css='style.css') as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton(value='Duplicate Space for private use', - elem_id='duplicate-button', - visible=os.getenv('SHOW_DUPLICATE_BUTTON') == '1') - with gr.Box(): - with gr.Row(): - prompt = gr.Text( - label='Prompt', - show_label=False, - max_lines=1, - placeholder='Enter your prompt', - container=False, - ) - run_button = gr.Button('Run', scale=0) - result = gr.Image(label='Result', show_label=False) - with gr.Accordion('Advanced options', open=False): - with gr.Row(): - use_negative_prompt = 
gr.Checkbox(label='Use negative prompt', - value=False) - use_prompt_2 = gr.Checkbox(label='Use prompt 2', value=False) - use_negative_prompt_2 = gr.Checkbox( - label='Use negative prompt 2', value=False) - negative_prompt = gr.Text( - label='Negative prompt', - max_lines=1, - placeholder='Enter a negative prompt', - visible=False, - ) - prompt_2 = gr.Text( - label='Prompt 2', - max_lines=1, - placeholder='Enter your prompt', - visible=False, - ) - negative_prompt_2 = gr.Text( - label='Negative prompt 2', - max_lines=1, - placeholder='Enter a negative prompt', - visible=False, - ) - - seed = gr.Slider(label='Seed', - minimum=0, - maximum=MAX_SEED, - step=1, - value=0) - randomize_seed = gr.Checkbox(label='Randomize seed', value=True) - with gr.Row(): - width = gr.Slider( - label='Width', - minimum=256, - maximum=MAX_IMAGE_SIZE, - step=32, - value=1024, - ) - height = gr.Slider( - label='Height', - minimum=256, - maximum=MAX_IMAGE_SIZE, - step=32, - value=1024, - ) - apply_refiner = gr.Checkbox(label='Apply refiner', value=False) - with gr.Row(): - guidance_scale_base = gr.Slider( - label='Guidance scale for base', - minimum=1, - maximum=20, - step=0.1, - value=5.0) - num_inference_steps_base = gr.Slider( - label='Number of inference steps for base', - minimum=10, - maximum=100, - step=1, - value=50) - with gr.Row(visible=False) as refiner_params: - guidance_scale_refiner = gr.Slider( - label='Guidance scale for refiner', - minimum=1, - maximum=20, - step=0.1, - value=5.0) - num_inference_steps_refiner = gr.Slider( - label='Number of inference steps for refiner', - minimum=10, - maximum=100, - step=1, - value=50) - - gr.Examples(examples=examples, - inputs=prompt, - outputs=result, - fn=generate, - cache_examples=CACHE_EXAMPLES) - - use_negative_prompt.change( - fn=lambda x: gr.update(visible=x), - inputs=use_negative_prompt, - outputs=negative_prompt, - queue=False, - api_name=False, - ) - use_prompt_2.change( - fn=lambda x: gr.update(visible=x), - inputs=use_prompt_2, - outputs=prompt_2, - queue=False, - api_name=False, - ) - use_negative_prompt_2.change( - fn=lambda x: gr.update(visible=x), - inputs=use_negative_prompt_2, - outputs=negative_prompt_2, - queue=False, - api_name=False, - ) - apply_refiner.change( - fn=lambda x: gr.update(visible=x), - inputs=apply_refiner, - outputs=refiner_params, - queue=False, - api_name=False, - ) - - inputs = [ - prompt, - negative_prompt, - prompt_2, - negative_prompt_2, - use_negative_prompt, - use_prompt_2, - use_negative_prompt_2, - seed, - width, - height, - guidance_scale_base, - guidance_scale_refiner, - num_inference_steps_base, - num_inference_steps_refiner, - apply_refiner, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=generate, - inputs=inputs, - outputs=result, - api_name='run', - ) - negative_prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=generate, - inputs=inputs, - outputs=result, - api_name=False, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=generate, - inputs=inputs, - outputs=result, - api_name=False, - ) -demo.queue(max_size=20).launch() diff --git a/spaces/subhajitmaji/MusicGen/audiocraft/modules/codebooks_patterns.py b/spaces/subhajitmaji/MusicGen/audiocraft/modules/codebooks_patterns.py deleted file mode 100644 index 
c5b35cbea8cff84aa56116dbdd860fc72a913a13..0000000000000000000000000000000000000000 --- a/spaces/subhajitmaji/MusicGen/audiocraft/modules/codebooks_patterns.py +++ /dev/null @@ -1,539 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import namedtuple -from dataclasses import dataclass -from functools import lru_cache -import logging -import typing as tp - -from abc import ABC, abstractmethod -import torch - -LayoutCoord = namedtuple('LayoutCoord', ['t', 'q']) # (timestep, codebook index) -PatternLayout = tp.List[tp.List[LayoutCoord]] # Sequence of coordinates -logger = logging.getLogger(__name__) - - -@dataclass -class Pattern: - """Base implementation of a pattern over a sequence with multiple codebooks. - - The codebook pattern consists in a layout, defining for each sequence step - the list of coordinates of each codebook timestep in the resulting interleaved sequence. - The first item of the pattern is always an empty list in order to properly insert a special token - to start with. For convenience, we also keep track of ``n_q`` the number of codebooks used for the pattern - and ``timesteps`` the number of timesteps corresponding to the original sequence. - - The pattern provides convenient methods to build and revert interleaved sequences from it: - ``build_pattern_sequence`` maps a given a dense input tensor of multi-codebook sequence from [B, K, T] - to the interleaved sequence of shape [B, K, S] applying the pattern, with S being the batch size, - K being the number of codebooks, T the number of original timesteps and S the number of sequence steps - for the output sequence. The unfilled positions are replaced with a special token and the built sequence - is returned along with a mask indicating valid tokens. - ``revert_pattern_sequence`` maps back an interleaved sequence of shape [B, K, S] to the original alignment - of codebooks across timesteps to an output tensor of shape [B, K, T], using again a special token and a mask - to fill and specify invalid positions if needed. - See the dedicated methods for more details. - """ - # Pattern layout, for each sequence step, we have a list of coordinates - # corresponding to the original codebook timestep and position. - # The first list is always an empty list in order to properly insert - # a special token to start with. - layout: PatternLayout - timesteps: int - n_q: int - - def __post_init__(self): - assert len(self.layout) > 0 - assert self.layout[0] == [] - self._validate_layout() - self._build_reverted_sequence_scatter_indexes = lru_cache(100)(self._build_reverted_sequence_scatter_indexes) - self._build_pattern_sequence_scatter_indexes = lru_cache(100)(self._build_pattern_sequence_scatter_indexes) - logger.info("New pattern, time steps: %d, sequence steps: %d", self.timesteps, len(self.layout)) - - def _validate_layout(self): - """Runs checks on the layout to ensure a valid pattern is defined. - A pattern is considered invalid if: - - Multiple timesteps for a same codebook are defined in the same sequence step - - The timesteps for a given codebook are not in ascending order as we advance in the sequence - (this would mean that we have future timesteps before past timesteps). 
- """ - q_timesteps = {q: 0 for q in range(self.n_q)} - for s, seq_coords in enumerate(self.layout): - if len(seq_coords) > 0: - qs = set() - for coord in seq_coords: - qs.add(coord.q) - last_q_timestep = q_timesteps[coord.q] - assert coord.t >= last_q_timestep, \ - f"Past timesteps are found in the sequence for codebook = {coord.q} at step {s}" - q_timesteps[coord.q] = coord.t - # each sequence step contains at max 1 coordinate per codebook - assert len(qs) == len(seq_coords), \ - f"Multiple entries for a same codebook are found at step {s}" - - @property - def num_sequence_steps(self): - return len(self.layout) - 1 - - @property - def max_delay(self): - max_t_in_seq_coords = 0 - for seq_coords in self.layout[1:]: - for coords in seq_coords: - max_t_in_seq_coords = max(max_t_in_seq_coords, coords.t + 1) - return max_t_in_seq_coords - self.timesteps - - @property - def valid_layout(self): - valid_step = len(self.layout) - self.max_delay - return self.layout[:valid_step] - - def get_sequence_coords_with_timestep(self, t: int, q: tp.Optional[int] = None): - """Get codebook coordinates in the layout that corresponds to the specified timestep t - and optionally to the codebook q. Coordinates are returned as a tuple with the sequence step - and the actual codebook coordinates. - """ - assert t <= self.timesteps, "provided timesteps is greater than the pattern's number of timesteps" - if q is not None: - assert q <= self.n_q, "provided number of codebooks is greater than the pattern's number of codebooks" - coords = [] - for s, seq_codes in enumerate(self.layout): - for code in seq_codes: - if code.t == t and (q is None or code.q == q): - coords.append((s, code)) - return coords - - def get_steps_with_timestep(self, t: int, q: tp.Optional[int] = None) -> tp.List[int]: - return [step for step, coords in self.get_sequence_coords_with_timestep(t, q)] - - def get_first_step_with_timesteps(self, t: int, q: tp.Optional[int] = None) -> tp.Optional[int]: - steps_with_timesteps = self.get_steps_with_timestep(t, q) - return steps_with_timesteps[0] if len(steps_with_timesteps) > 0 else None - - def _build_pattern_sequence_scatter_indexes(self, timesteps: int, n_q: int, keep_only_valid_steps: bool, - device: tp.Union[torch.device, str] = 'cpu'): - """Build scatter indexes corresponding to the pattern, up to the provided sequence_steps. - - Args: - timesteps (int): Maximum number of timesteps steps to consider. - keep_only_valid_steps (bool): Restrict the pattern layout to match only valid steps. - device (Union[torch.device, str]): Device for created tensors. - Returns: - indexes (torch.Tensor): Indexes corresponding to the sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes, of shape [K, S]. - """ - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert timesteps <= self.timesteps, "invalid number of timesteps used to build the sequence from the pattern" - # use the proper layout based on whether we limit ourselves to valid steps only or not, - # note that using the valid_layout will result in a truncated sequence up to the valid steps - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # single item indexing being super slow with pytorch vs. 
numpy, so we use numpy here - indexes = torch.zeros(n_q, len(ref_layout), dtype=torch.long).numpy() - mask = torch.zeros(n_q, len(ref_layout), dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - # the last value is n_q * timesteps as we have flattened z and append special token as the last token - # which will correspond to the index: n_q * timesteps - indexes[:] = n_q * timesteps - # iterate over the pattern and fill scattered indexes and mask - for s, sequence_coords in enumerate(ref_layout): - for coords in sequence_coords: - if coords.t < timesteps: - indexes[coords.q, s] = coords.t + coords.q * timesteps - mask[coords.q, s] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def build_pattern_sequence(self, z: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Build sequence corresponding to the pattern from the input tensor z. - The sequence is built using up to sequence_steps if specified, and non-pattern - coordinates are filled with the special token. - - Args: - z (torch.Tensor): Input tensor of multi-codebooks sequence, of shape [B, K, T]. - special_token (int): Special token used to fill non-pattern coordinates in the new sequence. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. - Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, S] with S - corresponding either to the sequence_steps if provided, otherwise to the length of the pattern. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, S]. - """ - B, K, T = z.shape - indexes, mask = self._build_pattern_sequence_scatter_indexes( - T, K, keep_only_valid_steps=keep_only_valid_steps, device=str(z.device) - ) - z = z.view(B, -1) - # we append the special token as the last index of our flattened z tensor - z = torch.cat([z, torch.zeros_like(z[:, :1]) + special_token], dim=1) - values = z[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def _build_reverted_sequence_scatter_indexes(self, sequence_steps: int, n_q: int, - keep_only_valid_steps: bool = False, - is_model_output: bool = False, - device: tp.Union[torch.device, str] = 'cpu'): - """Builds scatter indexes required to retrieve the original multi-codebook sequence - from interleaving pattern. - - Args: - sequence_steps (int): Sequence steps. - n_q (int): Number of codebooks. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. - is_model_output (bool): Whether to keep the sequence item corresponding to initial special token or not. - device (Union[torch.device, str]): Device for created tensors. - Returns: - torch.Tensor: Indexes for reconstructing the output, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # TODO(jade): Do we want to further truncate to only valid timesteps here as well? 
- timesteps = self.timesteps - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert sequence_steps <= len(ref_layout), \ - f"sequence to revert is longer than the defined pattern: {sequence_steps} > {len(ref_layout)}" - - # ensure we take the appropriate indexes to keep the model output from the first special token as well - if is_model_output: - ref_layout = ref_layout[1:] - - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, timesteps, dtype=torch.long).numpy() - mask = torch.zeros(n_q, timesteps, dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - indexes[:] = n_q * sequence_steps - for s, sequence_codes in enumerate(ref_layout): - if s < sequence_steps: - for code in sequence_codes: - if code.t < timesteps: - indexes[code.q, code.t] = s + code.q * sequence_steps - mask[code.q, code.t] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def revert_pattern_sequence(self, s: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Revert a sequence built from the pattern back to the original multi-codebook sequence without interleaving. - The sequence is reverted using up to timesteps if specified, and non-pattern coordinates - are filled with the special token. - - Args: - s (torch.Tensor): Interleaved sequence tensor obtained from the pattern, of shape [B, K, S]. - special_token (int or float): Special token used to fill non-pattern coordinates in the new sequence. - Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, T] with T - corresponding either to the timesteps if provided, or the total timesteps in pattern otherwise. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - B, K, S = s.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=False, device=str(s.device) - ) - s = s.view(B, -1) - # we append the special token as the last index of our flattened z tensor - s = torch.cat([s, torch.zeros_like(s[:, :1]) + special_token], dim=1) - values = s[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def revert_pattern_logits(self, logits: torch.Tensor, special_token: float, keep_only_valid_steps: bool = False): - """Revert model logits obtained on a sequence built from the pattern - back to a tensor matching the original sequence. - - This method is similar to ``revert_pattern_sequence`` with the following specificities: - 1. It is designed to work with the extra cardinality dimension - 2. 
We return the logits for the first sequence item that matches the special_token and - which matching target in the original sequence is the first item of the sequence, - while we skip the last logits as there is no matching target - """ - B, card, K, S = logits.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=True, device=logits.device - ) - logits = logits.reshape(B, card, -1) - # we append the special token as the last index of our flattened z tensor - logits = torch.cat([logits, torch.zeros_like(logits[:, :, :1]) + special_token], dim=-1) # [B, card, K x S] - values = logits[:, :, indexes.view(-1)] - values = values.view(B, card, K, indexes.shape[-1]) - return values, indexes, mask - - -class CodebooksPatternProvider(ABC): - """Abstraction around providing pattern for interleaving codebooks. - - The CodebooksPatternProvider abstraction allows to implement various strategies to - define interleaving pattern of sequences composed of multiple codebooks. For a given - number of codebooks `n_q`, the pattern provider can generate a specified pattern - corresponding to a sequence of `T` timesteps with `n_q` parallel codebooks. This pattern - can be used to construct a new sequence from the original codes respecting the specified - pattern. The pattern is defined as a list of list of code coordinates, code coordinate - being a tuple with the original timestep and codebook to build the new sequence. - Note that all patterns must start with an empty list that is then used to insert a first - sequence step of special tokens in the newly generated sequence. - - Args: - n_q (int): number of codebooks. - cached (bool): if True, patterns for a given length are cached. In general - that should be true for efficiency reason to avoid synchronization points. - """ - def __init__(self, n_q: int, cached: bool = True): - assert n_q > 0 - self.n_q = n_q - self.get_pattern = lru_cache(100)(self.get_pattern) # type: ignore - - @abstractmethod - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern with specific interleaving between codebooks. - - Args: - timesteps (int): Total numer of timesteps. - """ - raise NotImplementedError() - - -class DelayedPatternProvider(CodebooksPatternProvider): - """Provider for delayed pattern across delayed codebooks. - Codebooks are delayed in the sequence and sequence steps will contain codebooks - from different timesteps. - - Example: - Taking timesteps=4 and n_q=3, delays=None, the multi-codebook sequence: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - The resulting sequence obtained from the returned pattern is: - [[S, 1, 2, 3, 4], - [S, S, 1, 2, 3], - [S, S, S, 1, 2]] - (with S being a special token) - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. - flatten_first (int): Flatten the first N timesteps. - empty_initial (int): Prepend with N empty list of coordinates. 
- """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None, - flatten_first: int = 0, empty_initial: int = 0): - super().__init__(n_q) - if delays is None: - delays = list(range(n_q)) - self.delays = delays - self.flatten_first = flatten_first - self.empty_initial = empty_initial - assert len(self.delays) == self.n_q - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - max_delay = max(self.delays) - if self.empty_initial: - out += [[] for _ in range(self.empty_initial)] - if self.flatten_first: - for t in range(min(timesteps, self.flatten_first)): - for q in range(self.n_q): - out.append([LayoutCoord(t, q)]) - for t in range(self.flatten_first, timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= self.flatten_first: - v.append(LayoutCoord(t_for_q, q)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class ParallelPatternProvider(DelayedPatternProvider): - """Provider for parallel pattern across codebooks. - This pattern provider is a special case of the delayed pattern with actually no delay, - hence delays=repeat(0, n_q). - - Args: - n_q (int): Number of codebooks. - """ - def __init__(self, n_q: int): - super().__init__(n_q, [0] * n_q) - - -class UnrolledPatternProvider(CodebooksPatternProvider): - """Provider for unrolling codebooks pattern. - This pattern provider enables to represent the codebook flattened completely or only to some extend - while also specifying a given delay between the flattened codebooks representation, allowing to - unroll the codebooks in the sequence. - - Example: - 1. Flattening of the codebooks. - By default, the pattern provider will fully flatten the codebooks such as flattening=range(n_q), - taking n_q = 3 and timesteps = 4: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, 1, S, S, 2, S, S, 3, S, S, 4], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 2. Partial flattening of the codebooks. The ``flattening`` parameter allows to specify the inner step - for each of the codebook, allowing to define which codebook to flatten (or keep in parallel), for example - taking n_q = 3, timesteps = 4 and flattening = [0, 1, 1]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 3. Flattening with delay. The ``delay`` parameter allows to further unroll the sequence of codebooks - allowing to specify the delay per codebook. Note that the delay between codebooks flattened to the - same inner timestep should be coherent. For example, taking n_q = 3, timesteps = 4, flattening = [0, 1, 1] - and delays = [0, 3, 3]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, S, 1, S, 2, S, 3, S, 4], - [S, S, S, 1, S, 2, S, 3, S, 4], - [1, 2, 3, S, 4, S, 5, S, 6, S]] - - Args: - n_q (int): Number of codebooks. - flattening (Optional[List[int]]): Flattening schema over the codebooks. If not defined, - the codebooks will be flattened to 1 codebook per step, meaning that the sequence will - have n_q extra steps for each timestep. - delays (Optional[List[int]]): Delay for each of the codebooks. If not defined, - no delay is added and therefore will default to [0] * ``n_q``. 
- Note that two codebooks that will be flattened to the same inner step - should have the same delay, otherwise the pattern is considered as invalid. - """ - FlattenedCodebook = namedtuple('FlattenedCodebook', ['codebooks', 'delay']) - - def __init__(self, n_q: int, flattening: tp.Optional[tp.List[int]] = None, - delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if flattening is None: - flattening = list(range(n_q)) - if delays is None: - delays = [0] * n_q - assert len(flattening) == n_q - assert len(delays) == n_q - assert sorted(flattening) == flattening - assert sorted(delays) == delays - self._flattened_codebooks = self._build_flattened_codebooks(delays, flattening) - self.max_delay = max(delays) - - def _build_flattened_codebooks(self, delays: tp.List[int], flattening: tp.List[int]): - """Build a flattened codebooks representation as a dictionary of inner step - and the actual codebook indices corresponding to the flattened codebook. For convenience, we - also store the delay associated to the flattened codebook to avoid maintaining an extra mapping. - """ - flattened_codebooks: dict = {} - for q, (inner_step, delay) in enumerate(zip(flattening, delays)): - if inner_step not in flattened_codebooks: - flat_codebook = UnrolledPatternProvider.FlattenedCodebook(codebooks=[q], delay=delay) - else: - flat_codebook = flattened_codebooks[inner_step] - assert flat_codebook.delay == delay, ( - "Delay and flattening between codebooks is inconsistent: ", - "two codebooks flattened to the same position should have the same delay." - ) - flat_codebook.codebooks.append(q) - flattened_codebooks[inner_step] = flat_codebook - return flattened_codebooks - - @property - def _num_inner_steps(self): - """Number of inner steps to unroll between timesteps in order to flatten the codebooks. - """ - return max([inner_step for inner_step in self._flattened_codebooks.keys()]) + 1 - - def num_virtual_steps(self, timesteps: int) -> int: - return timesteps * self._num_inner_steps + 1 - - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern for delay across codebooks. - - Args: - timesteps (int): Total numer of timesteps. - """ - # the PatternLayout is built as a tuple of sequence position and list of coordinates - # so that it can be reordered properly given the required delay between codebooks of given timesteps - indexed_out: list = [(-1, [])] - max_timesteps = timesteps + self.max_delay - for t in range(max_timesteps): - # for each timestep, we unroll the flattened codebooks, - # emitting the sequence step with the corresponding delay - for step in range(self._num_inner_steps): - if step in self._flattened_codebooks: - # we have codebooks at this virtual step to emit - step_codebooks = self._flattened_codebooks[step] - t_for_q = t + step_codebooks.delay - coords = [LayoutCoord(t, q) for q in step_codebooks.codebooks] - if t_for_q < max_timesteps and t < max_timesteps: - indexed_out.append((t_for_q, coords)) - else: - # there is no codebook in this virtual step so we emit an empty list - indexed_out.append((t, [])) - out = [coords for _, coords in sorted(indexed_out)] - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class VALLEPattern(CodebooksPatternProvider): - """Almost VALL-E style pattern. We futher allow some delays for the - codebooks other than the first one. - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. 
- """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if delays is None: - delays = [0] * (n_q - 1) - self.delays = delays - assert len(self.delays) == self.n_q - 1 - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for t in range(timesteps): - out.append([LayoutCoord(t, 0)]) - max_delay = max(self.delays) - for t in range(timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= 0: - v.append(LayoutCoord(t_for_q, q + 1)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class MusicLMPattern(CodebooksPatternProvider): - """Almost MusicLM style pattern. This is equivalent to full flattening - but in a different order. - - Args: - n_q (int): Number of codebooks. - group_by (int): Number of codebooks to group together. - """ - def __init__(self, n_q: int, group_by: int = 2): - super().__init__(n_q) - self.group_by = group_by - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for offset in range(0, self.n_q, self.group_by): - for t in range(timesteps): - for q in range(offset, offset + self.group_by): - out.append([LayoutCoord(t, q)]) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) diff --git a/spaces/sunshineatnoon/TextureScraping/swapae/data/unaligned_lmdb_dataset.py b/spaces/sunshineatnoon/TextureScraping/swapae/data/unaligned_lmdb_dataset.py deleted file mode 100644 index 0fd688bbfcbbc304d8a741cbc1afbaac08465f60..0000000000000000000000000000000000000000 --- a/spaces/sunshineatnoon/TextureScraping/swapae/data/unaligned_lmdb_dataset.py +++ /dev/null @@ -1,31 +0,0 @@ -import random -import os.path -from swapae.data.base_dataset import BaseDataset -from swapae.data.lmdb_dataset import LMDBDataset -import swapae.util - - -class UnalignedLMDBDataset(BaseDataset): - def __init__(self, opt): - super().__init__(opt) - self.dir_A = os.path.join(opt.dataroot, opt.phase + 'A') # create a path '/path/to/data/trainA' - self.dir_B = os.path.join(opt.dataroot, opt.phase + 'B') # create a path '/path/to/data/trainB' - - self.dataset_A = LMDBDataset(util.copyconf(opt, dataroot=self.dir_A)) - self.dataset_B = LMDBDataset(util.copyconf(opt, dataroot=self.dir_B)) - self.B_indices = list(range(len(self.dataset_B))) - - - def __len__(self): - return max(len(self.dataset_A), len(self.dataset_B)) - - def __getitem__(self, index): - if index == 0 and self.opt.isTrain: - random.shuffle(self.B_indices) - - result = self.dataset_A.__getitem__(index % len(self.dataset_A)) - B_index = self.B_indices[index % len(self.dataset_B)] - B_result = self.dataset_B.__getitem__(B_index) - result["real_B"] = B_result["real_A"] - result["path_B"] = B_result["path_A"] - return result diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Applied Acoustics Chromaphone V1.0.6 WIN.OSX Incl. Keygen AiR - Crack __HOT__.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Applied Acoustics Chromaphone V1.0.6 WIN.OSX Incl. Keygen AiR - Crack __HOT__.md deleted file mode 100644 index cc2e79438f6c8b9772f9f4203e39adcada5efda1..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Applied Acoustics Chromaphone V1.0.6 WIN.OSX Incl. Keygen AiR - Crack __HOT__.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  Applied Acoustics Chromaphone v1.0.6 WIN.OSX Incl. Keygen AiR - crack


                  DOWNLOAD ---> https://cinurl.com/2uEXQJ



                  - -Free Download X Force 2019 Keygen 2018 Crack Patch AutoCAD ... 360 2017 ... Applied Acoustics Chromaphone v1.0.6 WIN.OSX Incl. 4d29de3e1b
                  -
                  -
                  -

                  diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Dowland GTA San Andreas 2012 ViP By SlimThug.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Dowland GTA San Andreas 2012 ViP By SlimThug.md deleted file mode 100644 index 555caa59cd4372d0e8fcf13fe8242b38c3929028..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Dowland GTA San Andreas 2012 ViP By SlimThug.md +++ /dev/null @@ -1,17 +0,0 @@ -

                  Dowland GTA San Andreas 2012 ViP By SlimThug


                  Download File ————— https://cinurl.com/2uEZcW



                  -
                  -Link to the fix release Dl: guys, I'll show you the gameplay of this Lithuanian mod for GTA San Andreas. I hope you like it if ... GTA San Andreas Walkthrough #14 - Kotb.ru Mod -Vor 2 years 3 -Passage of the game GTA San Andreas with mod Kotb.ru Mod. -Happy viewing everyone! -Mod link: ... -GTA San Andreas - Walkthrough #17 - Kotb.ru Mod -Vor 2 years 2 -GTA San Andreas - Walkthrough #17 - Kotb.ru Mod. -GTA San Andreas - Walkthrough #16 - Kotb.ru Mod -Vor 2 years 1 -GTA San Andreas - Walkthrough #16 - Kotb.ru Mod. -Mod Link: ... 8a78ff9644
                  -
                  -
                  -

                  diff --git a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/data/loaders/__init__.py b/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/data/loaders/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/t13718236382/bingoGPT4/src/components/tailwind-indicator.tsx b/spaces/t13718236382/bingoGPT4/src/components/tailwind-indicator.tsx deleted file mode 100644 index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/bingoGPT4/src/components/tailwind-indicator.tsx +++ /dev/null @@ -1,14 +0,0 @@ -export function TailwindIndicator() { - if (process.env.NODE_ENV === 'production') return null - - return ( -
                  -
                  xs
                  -
                  sm
                  -
                  md
                  -
                  lg
                  -
                  xl
                  -
                  2xl
                  -
                  - ) -} diff --git a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/modeling/roi_heads/res5_roi_heads.py b/spaces/taesiri/ChatGPT-ImageCaptioner/detic/modeling/roi_heads/res5_roi_heads.py deleted file mode 100644 index bab706999a9927e34a7b07dad84ba1259ab5ec64..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/modeling/roi_heads/res5_roi_heads.py +++ /dev/null @@ -1,173 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import inspect -import logging -import numpy as np -from typing import Dict, List, Optional, Tuple -import torch -from torch import nn - -from detectron2.config import configurable -from detectron2.layers import ShapeSpec, nonzero_tuple -from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou -from detectron2.utils.events import get_event_storage -from detectron2.utils.registry import Registry - -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.roi_heads.fast_rcnn import fast_rcnn_inference -from detectron2.modeling.roi_heads.roi_heads import ROI_HEADS_REGISTRY, Res5ROIHeads -from detectron2.modeling.roi_heads.cascade_rcnn import CascadeROIHeads, _ScaleGradient -from detectron2.modeling.roi_heads.box_head import build_box_head - -from .detic_fast_rcnn import DeticFastRCNNOutputLayers -from ..debug import debug_second_stage - -from torch.cuda.amp import autocast - -@ROI_HEADS_REGISTRY.register() -class CustomRes5ROIHeads(Res5ROIHeads): - @configurable - def __init__(self, **kwargs): - cfg = kwargs.pop('cfg') - super().__init__(**kwargs) - stage_channel_factor = 2 ** 3 - out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS * stage_channel_factor - - self.with_image_labels = cfg.WITH_IMAGE_LABELS - self.ws_num_props = cfg.MODEL.ROI_BOX_HEAD.WS_NUM_PROPS - self.add_image_box = cfg.MODEL.ROI_BOX_HEAD.ADD_IMAGE_BOX - self.add_feature_to_prop = cfg.MODEL.ROI_BOX_HEAD.ADD_FEATURE_TO_PROP - self.image_box_size = cfg.MODEL.ROI_BOX_HEAD.IMAGE_BOX_SIZE - self.box_predictor = DeticFastRCNNOutputLayers( - cfg, ShapeSpec(channels=out_channels, height=1, width=1) - ) - - self.save_debug = cfg.SAVE_DEBUG - self.save_debug_path = cfg.SAVE_DEBUG_PATH - if self.save_debug: - self.debug_show_name = cfg.DEBUG_SHOW_NAME - self.vis_thresh = cfg.VIS_THRESH - self.pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to( - torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1) - self.pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).to( - torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1) - self.bgr = (cfg.INPUT.FORMAT == 'BGR') - - @classmethod - def from_config(cls, cfg, input_shape): - ret = super().from_config(cfg, input_shape) - ret['cfg'] = cfg - return ret - - def forward(self, images, features, proposals, targets=None, - ann_type='box', classifier_info=(None,None,None)): - ''' - enable debug and image labels - classifier_info is shared across the batch - ''' - if not self.save_debug: - del images - - if self.training: - if ann_type in ['box']: - proposals = self.label_and_sample_proposals( - proposals, targets) - else: - proposals = self.get_top_proposals(proposals) - - proposal_boxes = [x.proposal_boxes for x in proposals] - box_features = self._shared_roi_transform( - [features[f] for f in self.in_features], proposal_boxes - ) - predictions = self.box_predictor( - box_features.mean(dim=[2, 3]), - classifier_info=classifier_info) - - if self.add_feature_to_prop: - feats_per_image = box_features.mean(dim=[2, 3]).split( - [len(p) for p in proposals], dim=0) - for feat, p in zip(feats_per_image, 
proposals): - p.feat = feat - - if self.training: - del features - if (ann_type != 'box'): - image_labels = [x._pos_category_ids for x in targets] - losses = self.box_predictor.image_label_losses( - predictions, proposals, image_labels, - classifier_info=classifier_info, - ann_type=ann_type) - else: - losses = self.box_predictor.losses( - (predictions[0], predictions[1]), proposals) - if self.with_image_labels: - assert 'image_loss' not in losses - losses['image_loss'] = predictions[0].new_zeros([1])[0] - if self.save_debug: - denormalizer = lambda x: x * self.pixel_std + self.pixel_mean - if ann_type != 'box': - image_labels = [x._pos_category_ids for x in targets] - else: - image_labels = [[] for x in targets] - debug_second_stage( - [denormalizer(x.clone()) for x in images], - targets, proposals=proposals, - save_debug=self.save_debug, - debug_show_name=self.debug_show_name, - vis_thresh=self.vis_thresh, - image_labels=image_labels, - save_debug_path=self.save_debug_path, - bgr=self.bgr) - return proposals, losses - else: - pred_instances, _ = self.box_predictor.inference(predictions, proposals) - pred_instances = self.forward_with_given_boxes(features, pred_instances) - if self.save_debug: - denormalizer = lambda x: x * self.pixel_std + self.pixel_mean - debug_second_stage( - [denormalizer(x.clone()) for x in images], - pred_instances, proposals=proposals, - save_debug=self.save_debug, - debug_show_name=self.debug_show_name, - vis_thresh=self.vis_thresh, - save_debug_path=self.save_debug_path, - bgr=self.bgr) - return pred_instances, {} - - def get_top_proposals(self, proposals): - for i in range(len(proposals)): - proposals[i].proposal_boxes.clip(proposals[i].image_size) - proposals = [p[:self.ws_num_props] for p in proposals] - for i, p in enumerate(proposals): - p.proposal_boxes.tensor = p.proposal_boxes.tensor.detach() - if self.add_image_box: - proposals[i] = self._add_image_box(p) - return proposals - - def _add_image_box(self, p, use_score=False): - image_box = Instances(p.image_size) - n = 1 - h, w = p.image_size - if self.image_box_size < 1.0: - f = self.image_box_size - image_box.proposal_boxes = Boxes( - p.proposal_boxes.tensor.new_tensor( - [w * (1. - f) / 2., - h * (1. - f) / 2., - w * (1. - (1. - f) / 2.), - h * (1. - (1. 
- f) / 2.)] - ).view(n, 4)) - else: - image_box.proposal_boxes = Boxes( - p.proposal_boxes.tensor.new_tensor( - [0, 0, w, h]).view(n, 4)) - if use_score: - image_box.scores = \ - p.objectness_logits.new_ones(n) - image_box.pred_classes = \ - p.objectness_logits.new_zeros(n, dtype=torch.long) - image_box.objectness_logits = \ - p.objectness_logits.new_ones(n) - else: - image_box.objectness_logits = \ - p.objectness_logits.new_ones(n) - return Instances.cat([p, image_box]) \ No newline at end of file diff --git a/spaces/tarfandoon/CryptoEN/app.py b/spaces/tarfandoon/CryptoEN/app.py deleted file mode 100644 index 97b633d703f5fa274e7aaf6e352f8ab4bc484525..0000000000000000000000000000000000000000 --- a/spaces/tarfandoon/CryptoEN/app.py +++ /dev/null @@ -1,97 +0,0 @@ -import streamlit as st -import requests -import json -import pandas as pd -import numpy as np -import datetime -import babel.numbers - -from datetime import datetime - -current_dateTime = datetime.now().strftime("%Y-%m-%d %H:%M:%S") -st.markdown( - """ - - - - - - """, - unsafe_allow_html=True, -) - -req = requests.get( - "http://api.navasan.tech/latest/?api_key=premP5rqSpN8M4npULNgDWEpU6RhBpJG" -) - -dic = {} -names = [] -prices = [] -for k, v in req.json().items(): - try: - names.append(k) - prices.append(v) - except: - continue - - -dic = dict(zip(names, prices)) - -dat = pd.DataFrame(dic).T -#dat["timestamp"] = pd.to_datetime(dat["timestamp"], unit="s").dt.time -dat["timestamp"] = current_dateTime -dat.index.name = "Currency" -dat.rename(columns={"value": "Rate(Toman)", "timestamp": "Time"}, inplace=True) - - -currency_flags = { - "usdt": "https://huggingface.co/spaces/tarfandoon/CryptoEN/resolve/main/usdt.png", - "btc": "https://huggingface.co/spaces/tarfandoon/CryptoEN/resolve/main/btc.png", - "bch": "https://huggingface.co/spaces/tarfandoon/CryptoEN/resolve/main/bch.png", - "xrp": "https://huggingface.co/spaces/tarfandoon/CryptoEN/resolve/main/ripple.png", - "eth": "https://huggingface.co/spaces/tarfandoon/CryptoEN/resolve/main/eth.png", - "bnb": "https://huggingface.co/spaces/tarfandoon/CryptoEN/resolve/main/bnb.png", - "ltc": "https://huggingface.co/spaces/tarfandoon/CryptoEN/resolve/main/litecoin.png", - "doge": "https://huggingface.co/spaces/tarfandoon/CryptoEN/resolve/main/doge.png", - "dash": "https://huggingface.co/spaces/tarfandoon/CryptoEN/resolve/main/dash.png", - "eos": "https://huggingface.co/spaces/tarfandoon/CryptoEN/resolve/main/eos.png", -} - -# Add flag images to the dataframe -dat["Symbol"] = dat.index.map(lambda currency: ' {}'.format(currency_flags.get(currency, ""),currency.upper())) - - - - - -datacrypto = dat -resultcrypto = datacrypto.loc[ - ["usdt", "btc", "bch", "xrp", "eth", "bnb", "ltc", "doge", "dash", "eos", "doge"] -].rename( - { - "usdt": "Tether", - "btc": "Bitcoin", - "bch": "Bitcoin Cash", - "xrp": "Ripple", - "eth": "Ethereum", - "bnb": "Binance", - "ltc": "Litecoin", - "doge": "Dogecoin", - "dash": "Dash", - "eos": "EOS", - } -) -dfcrypto = pd.DataFrame(resultcrypto) -# dfcrypto[["Rate(Toman)", "change", "Time"]] -dfcrypto["Rate(Toman)"].astype(float) -dfcrypto.columns.name = dfcrypto.index.name -dfcrypto.index.name = None -z = dfcrypto[["Symbol","Rate(Toman)", "change", "Time"]] -st.write(z.to_html(escape=False), unsafe_allow_html=True) diff --git a/spaces/tdnathmlenthusiast/online-course-categorize-system/app.py b/spaces/tdnathmlenthusiast/online-course-categorize-system/app.py deleted file mode 100644 index 
d2af718fdb3a7683039e425862bd9d5bfab1cee5..0000000000000000000000000000000000000000 --- a/spaces/tdnathmlenthusiast/online-course-categorize-system/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import gradio as gr -import onnxruntime as rt -from transformers import AutoTokenizer -import torch -import json - -# Initialize the tokenizer -tokenizer = AutoTokenizer.from_pretrained("distilroberta-base") - -# Load genre types from a JSON file -try: - with open("genre_types_encoded.json", "r") as fp: - encode_genre_types = json.load(fp) -except FileNotFoundError: - print("Error: 'genre_types_encoded.json' not found. Make sure the file exists.") - exit(1) - -# Extract genres from the loaded data -genres = list(encode_genre_types.keys()) - -# Load the ONNX inference session -try: - inf_session = rt.InferenceSession('udemy-classifier-quantized.onnx') - input_name = inf_session.get_inputs()[0].name - output_name = inf_session.get_outputs()[0].name -except FileNotFoundError: - print("Error: 'udemy-classifier-quantized.onnx' not found. Make sure the file exists.") - exit(1) - -# Define the function for classifying courses' genres -def classify_courses_genre(description): - input_ids = tokenizer(description, truncation=True, padding=True, return_tensors="pt")['input_ids'][:,:512] - logits = inf_session.run([output_name], {input_name: input_ids.cpu().numpy()})[0] - logits = torch.FloatTensor(logits) - probs = torch.sigmoid(logits)[0] - return dict(zip(genres, map(float, probs))) - -# Create the Gradio interface -iface = gr.Interface(fn=classify_courses_genre, inputs="text", outputs=gr.components.Label(num_top_classes=5)) - -# Launch the Gradio interface -iface.launch(inline = False) diff --git a/spaces/temp-late/rhyme-ai/README.md b/spaces/temp-late/rhyme-ai/README.md deleted file mode 100644 index d6ee08f03df6fb113a94c1fd9ffdd392ff3f206e..0000000000000000000000000000000000000000 --- a/spaces/temp-late/rhyme-ai/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Rhyme Ai -emoji: 📊 -colorFrom: indigo -colorTo: gray -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/terfces0erbo/CollegeProjectV2/Abbyy Business Card Reader 2.0 For Windows Crack Torrent. Orange.md b/spaces/terfces0erbo/CollegeProjectV2/Abbyy Business Card Reader 2.0 For Windows Crack Torrent. Orange.md deleted file mode 100644 index 4118c7d9b900f759cf1dbd7dca64aa5d93da8146..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Abbyy Business Card Reader 2.0 For Windows Crack Torrent. Orange.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  Abbyy Business Card Reader 2.0 For Windows Crack Torrent. Orange


                  Download Zip ✺✺✺ https://bytlly.com/2uGlTK



                  - -It scans multiple business cards and accurately extracts and identifies all contact data by its type for exporting to a contact management system such as Microsoft® ... 4d29de3e1b
                  -
                  -
                  -

                  diff --git a/spaces/terfces0erbo/CollegeProjectV2/Agisoft Metashape Professional 1.5.5 Build 9057 With Crack Key 2019.md b/spaces/terfces0erbo/CollegeProjectV2/Agisoft Metashape Professional 1.5.5 Build 9057 With Crack Key 2019.md deleted file mode 100644 index 25d8e97e3c7e8a3be8c51f4370a2ca303e6f1ae3..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Agisoft Metashape Professional 1.5.5 Build 9057 With Crack Key 2019.md +++ /dev/null @@ -1,58 +0,0 @@ -

                  Agisoft Metashape Professional 1.5.5 Build 9057 With Crack Key 2019


                  Download Zip 🌟 https://bytlly.com/2uGl7a



                  -
                  -It has the ability to extract individual key frames within a multi-frame sequence. - -Metashape Professional 1.5.5 Build 9057 With Crack Key 2019 - -With this software you can make 3D scans of real world objects. Your. Agisoft Metashape Professional Crack Key - -It is a Windows application. This software is a computer graphics system for building 3D models. - -It has the ability to extract individual key frames within a multi-frame sequence. - -Metashape Professional 1.5.5 Build 9057 With Crack Key 2019: Metashape Professional Crack Keygen with Serial Key generate their models from both static and dynamic viewpoints. You can also extract individual key frames within a multi-frame sequence, as well as generate 3D models from the frames and then use other features such as overlays, rays and G-code. - -Metashape Professional Crack Key 2019 1.5.5 (Latest Version) - -Agisoft Metashape Professional Key Features: - -A professional 2D and 3D application for creating and editing digital models in a way that goes beyond what can be done with CAD packages and other generic 3D modelling software. - -This application is used to create, edit, visualize and share a single or multiple digital models. - -It is the easiest way to create 3D models, such as building the house you’ve always wanted. - -You can make 3D models from scanning of real objects. - -It has a very easy interface and anyone can use it. - -You can edit digital models using a 2D interface or a 3D interface. - -It has great features and advantages. - -This software is compatible with different types of operating systems. - -This software is used to create models from real objects. - -It has the ability to change the perspective of the model or view. - -It has a lot of filters. - -It has a complete feature set. - -It has a variety of output formats. - -This software is an application for creating 3D models. - -It has the ability to extract individual key frames within a multi-frame sequence, as well as generate 3D models from the frames and then use other features such as overlays, rays and G-code. - -It has a very simple user interface. - -You can import data from all file formats. - -Metashape Professional Keygen 2019 1.5.5 (Updated) - -It has 4fefd39f24
                  -
                  -
                  -

                  diff --git a/spaces/terfces0erbo/CollegeProjectV2/Andrea Bocelli One Night In Central Park 720p Torrent.md b/spaces/terfces0erbo/CollegeProjectV2/Andrea Bocelli One Night In Central Park 720p Torrent.md deleted file mode 100644 index f3f287b88bd57249b8feec9da57fc21b11c3d45f..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Andrea Bocelli One Night In Central Park 720p Torrent.md +++ /dev/null @@ -1,18 +0,0 @@ -
                  -

                  Andrea Bocelli: One Night in Central Park - A Spectacular Concert Experience

                  -

                  Andrea Bocelli, one of the most celebrated tenors in the world, performed a stunning concert at the iconic Central Park in New York City on September 15, 2011. The event was a dream come true for the Italian singer, who had always wanted to sing at the historic venue. The concert was also a tribute to his late father, who had introduced him to opera music.

                  -

                  The concert featured a star-studded lineup of guests, including Céline Dion, Tony Bennett, Ana María Martínez, Bryn Terfel, Pretty Yende, Nicola Benedetti, Chris Botti and David Foster. Bocelli sang a selection of classical and popular songs, ranging from Verdi and Puccini to Amazing Grace and New York, New York. He also performed some of his own hits, such as Time to Say Goodbye and The Prayer.

                  -

                  Andrea Bocelli One Night In Central Park 720p Torrent


                  Download ✯✯✯ https://bytlly.com/2uGjzm



                  -

                  The concert was attended by more than 60,000 people and broadcast live on PBS. It was also recorded and released as a CD/DVD/Blu-ray package titled Andrea Bocelli: Concerto - One Night in Central Park. The album reached the top ten in several countries and sold over two million copies worldwide.

                  -

                  If you missed this spectacular concert or want to relive it again, you can download it from various torrent sites. Just search for "Andrea Bocelli One Night In Central Park 720p Torrent" and you will find several options to choose from. You will need a torrent client to download the file and a media player to watch it. Enjoy!

                  - -

                  Andrea Bocelli is one of the most successful and beloved singers of all time. He has sold over 90 million albums worldwide and has performed for popes, presidents and royalty. He has also collaborated with some of the biggest names in music, such as Luciano Pavarotti, Ed Sheeran, Sarah Brightman and Jennifer Lopez.

                  -

                  Bocelli was born with poor eyesight and became completely blind at the age of 12 after a soccer accident. However, he did not let his disability stop him from pursuing his passion for music. He learned to play the piano, flute, saxophone and guitar and studied law at the University of Pisa. He also sang in bars and clubs to earn money.

                  -

                  His big break came in 1992 when he was discovered by Italian rock star Zucchero, who invited him to sing on his duet with Pavarotti. The song, Miserere, became a hit and launched Bocelli's career. Since then, he has released 16 studio albums, three live albums and nine complete operas. He has also won numerous awards and honors, including a star on the Hollywood Walk of Fame, a Grammy nomination and the Order of Merit of the Italian Republic.

                  - -

                  Andrea Bocelli's concert in Central Park was a milestone in his career and a gift to his fans. He said that it was "the fulfillment of a dream" and that he felt "a great honor and privilege" to sing there. He also thanked the city of New York for its hospitality and support.

                  -

                  The concert was a showcase of Bocelli's versatility and talent. He sang in six languages: Italian, English, French, Spanish, Latin and Neapolitan. He also displayed his range of styles, from opera and classical to pop and folk. He was accompanied by the New York Philharmonic Orchestra, conducted by Alan Gilbert, and the Westminster Symphonic Choir.

                  -

                  The concert was also a celebration of music and friendship. Bocelli shared the stage with some of his musical heroes and friends, who praised him for his voice and spirit. He also dedicated some songs to his family and his country. He said that he wanted to "bring the music of Italy to the world" and that he hoped that his music would "bring joy and peace" to the listeners.

                  -

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Benefits of Using Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub.md b/spaces/tialenAdioni/chat-gpt-api/logs/Benefits of Using Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub.md deleted file mode 100644 index abb31ddc94a5887ee377cbadb6928c6bb45a884b..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Benefits of Using Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub.md +++ /dev/null @@ -1,183 +0,0 @@ - -

                  Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub: Everything You Need to Know

                  - -

                  If you are looking for a reliable and easy way to flash, unlock and repair your Nokia feature phones powered by BB5, MeeGo and MediaTek chipset, then you should consider using Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub. This tool is also known as BEST Dongle and it is developed by Infinity Team, a well-known name in the mobile phone service industry.

                  - -

                  In this article, we will explain what Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub is, how to use it, what are its features and benefits, and where to download it. We will also provide some tips and tricks to make the most out of this tool.

                  -

                  Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub


                  Download Zip 🗸🗸🗸 https://urlcod.com/2uK7qC



                  - -

                  What is Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub?

                  - -

                  Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub is a software application that allows you to service Nokia phones with various platforms, such as BB5, MeeGo, MediaTek and NXPlatform. You can use this tool to flash firmware files, unlock network and user locks, reset security codes, repair IMEI and other issues, backup and restore data, and more.

                  - -

                  Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub works with a dongle or a box that connects to your computer via USB port. You need to have an Infinity account and a valid support period to use this tool. You can renew your support period online or through a seller near you.

                  - -

                  How to use Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub?

                  - -

                  To use Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub, you need to follow these steps:

                  - -
                    -
1. Download and install the tool from the official website or from the link provided below.
2. Install the USB drivers for your Nokia phone model.
3. Connect your dongle or box to your computer and update it with Dongle Manager [Smart-Card Manager].
4. Open the tool and select the platform of your phone (BB5, MeeGo, MediaTek or NXPlatform).
5. Go to the flashing tab and choose the firmware file that matches your phone model and region.
6. Click on FLASH and connect your phone to the computer in flash mode (usually by pressing some key combination or using a special cable).
7. Wait for the flashing process to complete and disconnect your phone.
8. If you want to unlock or repair your phone, go to the service tab and select the appropriate option.
9. Click on the desired button and connect your phone to the computer in test mode (usually by pressing some key combination or using a special cable).
10. Wait for the unlocking or repairing process to complete and disconnect your phone.
                  - -

                  Note: Before flashing or unlocking your phone, make sure to backup your data from the device, as these operations will erase your data.

                  - -

                  What are the features and benefits of Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub?

                  - -

                  Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub has many features and benefits that make it a powerful and versatile tool for Nokia phone service. Here are some of them:

                  - -
                    -
                  • It supports a wide range of Nokia phones with different platforms, such as BB5, MeeGo, MediaTek and NXPlatform.
                  • -
                  • It allows you to flash firmware files with various options, such as dead mode flashing, factory reset flashing, downgrade flashing, language change flashing, etc.
                  • -
                  • It allows you to unlock network locks (SIM lock), user locks (security code), bootloader locks (SL3), etc.
                  • -
                  • It allows you to repair IMEI numbers, Bluetooth addresses, camera configuration, SIM lock data, etc.
                  • -
                  • It allows you to backup and restore data from your phone, such as phonebook, gallery, calendar, SMS messages, etc.
                  • -
                  • It allows you to read and write various settings from your phone, such as product code, product profile, life timer, etc.
                  • -
                  • It has a user-friendly interface that makes it easy to use even for beginners.
                  • -
                  • It has regular updates that add new features and support new models.
                  • -
                  • It has a low price compared to other similar tools in the market.
                  • -
                  - -

                  Where to download Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub?

                  - -

                  If you want to download Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub, you can do so from the official website of Infinity Team or from the link provided below:

                  - -

                  https://www.infinity-box.com/support/?s=1

                  - -

                  You will need an Infinity account and a valid support period to download this tool. You can also find other useful resources on this website, such as manuals, drivers, tutorials, etc.

                  -

                  How to use Nokia Best Bb5 Easy Service Tool for unlocking phones
                  -Nokia Best Bb5 Easy Service Tool tutorial pdf download
                  -Infinity Box Team software updates and support for Nokia Best Bb5 Easy Service Tool
                  -Nokia Best Bb5 Easy Service Tool crack free download
                  -Nokia Best Bb5 Easy Service Tool features and benefits
                  -Nokia Best Bb5 Easy Service Tool reviews and testimonials
                  -Nokia Best Bb5 Easy Service Tool compatible models and firmware versions
                  -Nokia Best Bb5 Easy Service Tool price and where to buy
                  -Nokia Best Bb5 Easy Service Tool alternatives and comparisons
                  -Nokia Best Bb5 Easy Service Tool troubleshooting and error codes
                  -Nokia Best Bb5 Easy Service Tool user manual and guide
                  -Nokia Best Bb5 Easy Service Tool activation and registration
                  -Nokia Best Bb5 Easy Service Tool license key and serial number
                  -Nokia Best Bb5 Easy Service Tool system requirements and specifications
                  -Nokia Best Bb5 Easy Service Tool installation and setup
                  -Nokia Best Bb5 Easy Service Tool backup and restore
                  -Nokia Best Bb5 Easy Service Tool flash and repair
                  -Nokia Best Bb5 Easy Service Tool reset and format
                  -Nokia Best Bb5 Easy Service Tool security and privacy
                  -Nokia Best Bb5 Easy Service Tool tips and tricks
                  -Nokia Best Bb5 Easy Service Tool online course and training
                  -Nokia Best Bb5 Easy Service Tool video tutorial and demo
                  -Nokia Best Bb5 Easy Service Tool forum and community
                  -Nokia Best Bb5 Easy Service Tool blog and news
                  -Nokia Best Bb5 Easy Service Tool ebook and epub reader
                  -Nokia Best Bb5 Easy Service Tool history and development
                  -Nokia Best Bb5 Easy Service Tool awards and recognition
                  -Nokia Best Bb5 Easy Service Tool case studies and success stories
                  -Nokia Best Bb5 Easy Service Tool FAQs and answers
                  -Nokia Best Bb5 Easy Service Tool pros and cons
                  -Nokia Best Bb5 Easy Service Tool discount and coupon codes
                  -Nokia Best Bb5 Easy Service Tool affiliate program and commission rates
                  -Nokia Best Bb5 Easy Service Tool refund policy and guarantee
                  -Nokia Best Bb5 Easy Service Tool customer service and contact details
                  -Nokia Best Bb5 Easy Service Tool testimonials and feedbacks
                  -Nokia Best Bb5 Easy Service Tool best practices and recommendations
                  -Nokia Best Bb5 Easy Service Tool limitations and drawbacks
                  -Nokia Best Bb5 Easy Service Tool advantages and disadvantages
                  -Nokia Best Bb5 Easy Service Tool latest version and update
                  -Nokia Best Bb5 Easy Service Tool free trial and demo version
                  -Nokia Best Bb5 Easy Service Tool customisation and personalisation options
                  -Nokia Best Bb5 Easy Service Tool performance and speed
                  -Nokia Best Bb5 Easy Service Tool compatibility issues and solutions
                  -Nokia Best Bb5 Easy Service Tool security risks and threats
                  -Nokia Best Bb5 Easy Service Tool common problems and solutions
                  -Nokia Best Bb5 Easy Service Tool future plans and roadmap
                  -Nokia Best Bb5 Easy Service Tool testimonials from experts
                  -How to get the most out of your nokia best bb easy service tool

                  - -

                  Tips and tricks for using Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub

                  - -

                  To make the most out of Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub, here are some tips and tricks that you can follow:

                  - -
                    -
                  • Always use the latest version of the tool and update your dongle or box regularly with Dongle Manager [Smart-Card Manager].
                  • -
                  • Always check the compatibility of your phone model and firmware file before flashing or unlocking it.
                  • -
                  • Always backup your data from your phone before flashing or unlocking it.
                  • -
                  • Always use good quality USB cables and ports when connecting your phone to the computer.
                  • -
                  • If you encounter any problem or error while using the tool, check the FAQ section on the website or contact the support team via email or forum.
                  • -
                  - -

                  Conclusion

                  - -

                  Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub is a great tool for servicing Nokia phones with various platforms. It allows you to flash firmware files, unlock network and user locks, repair IMEI numbers and other issues, backup and restore data from your phone, and more. It has a user-friendly interface that makes it easy to use even for beginners. It has regular updates that add new features and support new models. It has a low price compared to other similar tools in the market.

                  - -

If you want to download Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub, you can do so from the official website of Infinity Team or from the link provided above. You will need an Infinity account and a valid support period to download this tool. You can also find other useful resources on this website, such as manuals, drivers, tutorials, etc.

                  - -

We hope this article has helped you understand what Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub is, how to use it, what its features and benefits are, and where to download it. We also hope you have learned some tips and tricks for using this tool effectively. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!

                  -

                  How to download Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub?

                  - -

                  Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub is available as an epub file that you can download from various sources online. However, not all sources are reliable and safe, so you need to be careful when choosing where to download this tool.

                  - -

                  One of the best and most trusted sources to download Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub is the official website of Infinity Team, which is the developer of this tool. You can access their website by clicking on this link: https://www.infinity-box.com/support/?s=1.

                  - -

                  On their website, you will find a download section where you can find various software, drivers, firmware and tools for your Nokia phones. You will need to sign in to your Infinity account and have a valid support period to download Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub from their website.

                  - -

                  Another source to download Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub is GSM Official, which is a website that provides various mobile phone solutions and tools. You can access their website by clicking on this link: https://www.gsmofficial.com/infinity-nokia-best/.

                  - -

                  On their website, you will find a link to download Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub as a zip package that includes the USB driver and tutorial. You do not need to sign in or have a support period to download this tool from their website.

                  - -

                  What are the advantages and disadvantages of Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub?

                  - -

                  Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub has many advantages and disadvantages that you should consider before using it. Here are some of them:

                  - -

                  Advantages

                  - -
                    -
                  • It is compatible with a wide range of Nokia phones with different platforms, such as BB5, MeeGo, MediaTek and NXPlatform.
                  • -
                  • It is easy to use and has a user-friendly interface that guides you through the process of flashing, unlocking and repairing your phone.
                  • -
                  • It has many features and options that allow you to customize your phone according to your needs and preferences.
                  • -
                  • It has regular updates that add new features and support new models.
                  • -
                  • It has a low price compared to other similar tools in the market.
                  • -
                  - -

                  Disadvantages

                  - -
                    -
                  • It requires a dongle or a box that connects to your computer via USB port, which adds an extra cost and may not be available everywhere.
                  • -
                  • It requires an Infinity account and a valid support period to use it, which may expire over time and need renewal.
                  • -
                  • It may not work with some phone models or firmware versions that are not supported by the tool.
                  • -
                  • It may cause data loss or damage to your phone if not used properly or if interrupted during the process.
                  • -
                  • It may not be legal in some countries or regions where flashing or unlocking phones is prohibited or restricted.
                  • -
                  - -

                  How to get help and support for Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub?

                  - -

                  If you need help or support for using Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub, you have several options to choose from:

                  - -
                    -
                  • You can check the FAQ section on the official website of Infinity Team or GSM Official, where you can find answers to common questions and issues related to this tool.
                  • -
                  • You can contact the support team of Infinity Team or GSM Official via email or forum, where you can ask questions, report problems, request features, give feedback, etc.
                  • -
                  • You can watch video tutorials on YouTube or other platforms that show you how to use this tool step by step.
                  • -
                  • You can read user reviews and comments on various websites or blogs that share their experiences and tips for using this tool.
                  • -
                  - -

We hope these additional paragraphs have helped you learn more about Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub. We also hope you have enjoyed reading this article. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!

                  -

                  Conclusion

                  - -


                  - -

If you are interested in using this tool for your Nokia phone service, don't hesitate to download it today and give it a try. You will be amazed by what this tool can do for your phone. You will also save time and money by using this tool instead of going to a service center or buying a new phone. So what are you waiting for? Download Nokia Best Bb5 Easy Service Tool By Infinity Box Team Ver. 1.11 C 2012.epub now and enjoy the benefits of this tool!

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Crash Bandicoot 1 Download Pc Gamedcinstl A Blast from the Past.md b/spaces/tialenAdioni/chat-gpt-api/logs/Crash Bandicoot 1 Download Pc Gamedcinstl A Blast from the Past.md deleted file mode 100644 index 3ced15a7d33a3cb6326c70793dbc2858c23e9100..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Crash Bandicoot 1 Download Pc Gamedcinstl A Blast from the Past.md +++ /dev/null @@ -1,82 +0,0 @@ -
                  -

                  Crash Bandicoot 1 Download Pc Gamedcinstl: How to Play the Classic Platformer on Your Computer

                  -

                  Do you remember Crash Bandicoot? The orange marsupial who spins, jumps, and runs through various levels full of obstacles, enemies, and secrets? The game that was one of the first and most successful 3D platformers on the PlayStation? If you do, you might be wondering how you can play this classic game on your PC today. Or maybe you are new to this game and you want to experience it for yourself. Either way, you are in luck because there are several ways to play Crash Bandicoot 1 on PC. In this article, we will explore the history of Crash Bandicoot 1, the options for playing it on PC, the pros and cons of each option, and the best way to play it according to your needs and preferences.

                  -

                  Crash Bandicoot 1 Download Pc Gamedcinstl


                  DOWNLOAD ✯✯✯ https://urlcod.com/2uK3Hl



                  -

                  The History of Crash Bandicoot 1

                  -

                  , and Z-buffering.

                  -

                  The game follows the adventures of Crash Bandicoot, a genetically enhanced bandicoot who escapes from the clutches of his evil creator Dr. Neo Cortex. Crash must traverse through various islands in order to stop Cortex from using his army of mutated animals to take over the world. Along the way, he is aided by his sister Coco, his friend Aku Aku, and his love interest Tawna. The game features 32 levels divided into six zones: N. Sanity Island, Wumpa Island, Cortex Island, Lost City Ruins, Temple Ruins, and The Great Hall. Each level has its own theme, challenges, enemies, bosses, and items. Some of the items that Crash can collect are Wumpa fruits, which give him extra lives when he collects 100 of them; Aku Aku masks, which protect him from one hit; crystals, which are required to progress to the next zone; and gems, which are hidden or awarded for completing certain tasks.

                  -

                  The game was well received by critics and players alike, who praised its graphics, sound, gameplay, and innovation. It sold over six million copies worldwide and became one of the best-selling PlayStation games of all time. It also won several awards, such as the Best Platform Game at the Interactive Achievement Awards and the Best New Character at the GameSpot Awards. The game spawned two direct sequels on the PlayStation: Crash Bandicoot 2: Cortex Strikes Back in 1997 and Crash Bandicoot: Warped in 1998. These games improved upon the original game by adding new features, such as more moves, levels, items, and characters. The game also inspired many spin-offs in different genres, such as racing, party, and action-adventure games.

                  -

                  The Options for Playing Crash Bandicoot 1 on PC

                  -

                  If you want to play Crash Bandicoot 1 on PC today, you have three main options: playing the official remastered version of the game in the Crash Bandicoot N. Sane Trilogy by Activision in 2017; playing the unofficial bootleg compilation of the first three Crash Bandicoot games by vasyaXYI in 1999; or playing the emulation of the original PlayStation version of the game using software such as ePSXe or PCSX. Let's take a look at each option in more detail.

                  -

                  The Official Remastered Version

                  -

The official remastered version of Crash Bandicoot 1 is included in the Crash Bandicoot N. Sane Trilogy, a collection of the first three Crash Bandicoot games remade from the ground up by Vicarious Visions and published by Activision in 2017 for various platforms, including PC. The remaster features updated graphics, sound, and controls, while retaining the original gameplay and level design, and it adds some new features, such as time trials, online leaderboards, and achievements. It is available for purchase on Steam for $39.99 USD.

                  -

                  The Unofficial Bootleg Compilation

                  -

                  The unofficial bootleg compilation of Crash Bandicoot 1 is part of the Crash Bandicoot Collection 1, 2, 3 by vasyaXYI, which is a bootleg compilation of the first three Crash Bandicoot games for PlayStation. This compilation is possible (without barely any ripping, to boot) because of how small the games are. All games included are US versions, unmodified. The bootleg compilation was released in 1999 and can be downloaded for free from the Internet Archive. The bootleg compilation can be played on PC using a PlayStation emulator or a disc drive.

                  -

                  Crash Bandicoot 1 free download for Windows 10
                  -How to install Crash Bandicoot 1 on PC
                  -Crash Bandicoot 1 PC game full version
                  -Crash Bandicoot 1 emulator for PC
                  -Crash Bandicoot 1 PC game torrent
                  -Crash Bandicoot 1 PC game system requirements
                  -Crash Bandicoot 1 PC game cheats
                  -Crash Bandicoot 1 PC game walkthrough
                  -Crash Bandicoot 1 PC game review
                  -Crash Bandicoot 1 PC game trailer
                  -Crash Bandicoot 1 remastered for PC
                  -Crash Bandicoot 1 original soundtrack download
                  -Crash Bandicoot 1 online multiplayer for PC
                  -Crash Bandicoot 1 mods for PC
                  -Crash Bandicoot 1 save file download for PC
                  -Crash Bandicoot 1 best settings for PC
                  -Crash Bandicoot 1 controller support for PC
                  -Crash Bandicoot 1 speedrun guide for PC
                  -Crash Bandicoot 1 hidden gems locations for PC
                  -Crash Bandicoot 1 bonus levels unlock for PC
                  -Crash Bandicoot 1 comparison between PS4 and PC
                  -Crash Bandicoot 1 tips and tricks for PC
                  -Crash Bandicoot 1 secrets and easter eggs for PC
                  -Crash Bandicoot 1 fan art and wallpapers for PC
                  -Crash Bandicoot 1 merchandise and collectibles for PC
                  -Crash Bandicoot 1 history and development for PC
                  -Crash Bandicoot 1 characters and enemies for PC
                  -Crash Bandicoot 1 levels and worlds for PC
                  -Crash Bandicoot 1 achievements and trophies for PC
                  -Crash Bandicoot 1 fun facts and trivia for PC
                  -Crash Bandicoot 1 glitches and bugs for PC
                  -Crash Bandicoot 1 patch notes and updates for PC
                  -Crash Bandicoot 1 voice actors and cast for PC
                  -Crash Bandicoot 1 spin-offs and sequels for PC
                  -Crash Bandicoot 1 crossover and cameo appearances for PC
                  -Crash Bandicoot 1 fan-made games and projects for PC
                  -Crash Bandicoot 1 memes and jokes for PC
                  -Crash Bandicoot 1 community and forums for PC
                  -Crash Bandicoot 1 ranking and rating for PC
                  -Crash Bandicoot 1 legacy and impact for PC

                  -

                  The Emulation of the Original PlayStation Version

                  -

                  , and performance issues.

                  -

                  The Pros and Cons of Each Option

                  -

                  Now that we have seen the options for playing Crash Bandicoot 1 on PC, let's compare them and see what are the pros and cons of each option. Here is a table that summarizes the main advantages and disadvantages of each option:

| Option | Pros | Cons |
| --- | --- | --- |
| The official remastered version | - Improved graphics, sound, and controls<br>- New features, such as time trials, online leaderboards, and achievements<br>- Official and legal<br>- Easy to install and play | - Higher price ($39.99 USD)<br>- Higher system requirements<br>- Possible bugs and glitches<br>- May lose some of the original charm and nostalgia |
| The unofficial bootleg compilation | - Low cost (free)<br>- Easy to install and play<br>- Nostalgia factor<br>- Includes all three games in one disc | - Poor quality<br>- Illegal and unethical<br>- Compatibility problems with modern systems<br>- May not work with some emulators or disc drives |
| The emulation of the original PlayStation version | - Authentic and faithful to the original game<br>- Customizable and flexible (can adjust settings, use cheats, save states, etc.)<br>- Low cost (free or cheap)<br>- Can play other PlayStation games as well | - Technical difficulties (need to configure emulator and game file)<br>- Ethical concerns (need to own the original game or obtain it legally)<br>- Performance issues (may lag, crash, or have graphical errors) |

                  The Best Way to Play Crash Bandicoot 1 on PC

                  -

                  So, which option is the best way to play Crash Bandicoot 1 on PC? Well, that depends on what you are looking for and what you are willing to compromise. There is no definitive answer to this question, as different players may have different preferences and opinions. However, we can try to give some general guidelines and recommendations based on some common criteria, such as fun, convenience, reliability, and value.

                  -

                  , you might want to go for the official remastered version. This option offers the best graphics, sound, and controls, as well as some new features that add more challenge and replay value to the game. You will also get to experience the game in a fresh and modern way, while still keeping the original gameplay and level design. The only downside is that you will have to pay a relatively high price for the game and meet the system requirements to run it smoothly. You may also encounter some bugs and glitches along the way, or feel that the game has lost some of its original charm and nostalgia.

                  -

                  If you are looking for the most convenient and reliable way to play Crash Bandicoot 1 on PC, you might want to go for the unofficial bootleg compilation. This option offers the easiest and fastest way to install and play the game, as you only need to download one file and run it on your PC using an emulator or a disc drive. You will also get to enjoy the nostalgia factor of playing the game as it was back in 1999, as well as having all three games in one disc. The only downside is that you will have to deal with the poor quality of the game, such as the low resolution, pixelated graphics, distorted sound, and clunky controls. You will also have to face the legal and ethical issues of playing a bootleg game that infringes on the copyrights of the original developers and publishers. You may also have compatibility problems with modern systems or some emulators or disc drives.

                  -

                  , you might want to go for the emulation of the original PlayStation version. This option offers the most faithful and accurate way to play the game as it was in 1996, without any changes or modifications. You will also get to customize and adjust the game to your liking, such as changing the settings, using cheats, saving states, etc. You will also be able to play other PlayStation games as well, if you have the emulator and the game files. The only downside is that you will have to deal with the technical difficulties of setting up and configuring the emulator and the game file, which may require some knowledge and skills. You will also have to deal with the ethical concerns of owning or obtaining the original game legally, which may not be easy or possible for some players. You may also have performance issues, such as lag, crash, or graphical errors, depending on your system and emulator.

                  -

                  As you can see, each option has its own pros and cons, and there is no clear winner or loser. The best way to play Crash Bandicoot 1 on PC depends on your personal preference and situation. You may want to try out different options and see which one suits you best. Or you may want to stick with one option and enjoy it as much as you can. The choice is yours.

                  -

                  Conclusion

                  -

                  In conclusion, Crash Bandicoot 1 is a classic 3D platformer that was released in 1996 for the PlayStation. It is a fun and challenging game that features a lovable character, a colorful world, and a catchy soundtrack. If you want to play this game on PC today, you have three main options: playing the official remastered version in the Crash Bandicoot N. Sane Trilogy by Activision in 2017; playing the unofficial bootleg compilation of the first three Crash Bandicoot games by vasyaXYI in 1999; or playing the emulation of the original PlayStation version using software such as ePSXe or PCSX. Each option has its own advantages and disadvantages, and the best way to play depends on your needs and preferences. We hope that this article has helped you understand more about Crash Bandicoot 1 and how to play it on PC.

                  -

                  FAQs

                  -

                  Here are some frequently asked questions about Crash Bandicoot 1 and how to play it on PC:

                  -

                  Q: Is Crash Bandicoot 1 a good game?

                  -

                  , and items. It also has a charming and humorous story that features a memorable cast of characters, a vibrant and colorful world, and a catchy and upbeat soundtrack. It is a fun and challenging game that appeals to both casual and hardcore gamers.

                  -

                  Q: Is Crash Bandicoot 1 hard?

                  -

                  A: Yes, Crash Bandicoot 1 is hard, especially for modern standards. The game has a high difficulty curve that requires precise timing, reflexes, and skills. The game also has a limited number of lives and checkpoints, which means that you have to restart the level or the game if you run out of them. The game also has some frustrating and unfair moments, such as hidden traps, cheap deaths, and tricky jumps. The game is not impossible to beat, but it will test your patience and perseverance.

                  -

                  Q: Is Crash Bandicoot 1 worth playing?

                  -

                  A: Yes, Crash Bandicoot 1 is worth playing, especially if you are a fan of platformers or retro games. The game is a classic that has influenced many other games in the genre and the industry. The game is also a part of the Crash Bandicoot series, which is one of the most popular and beloved franchises in gaming history. The game is also a nostalgic trip for many players who grew up with it or played it in their childhood. The game is also a fun and enjoyable experience that will make you laugh, smile, and rage.

                  -

                  Q: How long does it take to beat Crash Bandicoot 1?

                  -

                  A: It depends on your skill level, play style, and goals. According to HowLongToBeat.com, the average time to beat Crash Bandicoot 1 is about 6 hours for the main story, 9 hours for the main story plus extras, and 15 hours for the completionist run. However, these times may vary depending on how fast or slow you play, how many times you die or retry, how much you explore or collect, and how much you aim for 100% completion or achievements.

                  -

                  Q: Where can I buy or download Crash Bandicoot 1?

                  -

                  A: As we mentioned before, you have three main options for playing Crash Bandicoot 1 on PC: playing the official remastered version in the Crash Bandicoot N. Sane Trilogy by Activision in 2017; playing the unofficial bootleg compilation of the first three Crash Bandicoot games by vasyaXYI in 1999; or playing the emulation of the original PlayStation version using software such as ePSXe or PCSX. If you want to buy or download any of these options, here are some links that may help you:

                  - - The official remastered version: https://store.steampowered.com/app/731490/Crash_Bandicoot_N_Sane_Trilogy/ - The unofficial bootleg compilation: https://archive.org/details/crash-bandicoot-collection - The emulation of the original PlayStation version: https://www.emuparadise.me/Sony_Playstation_ISOs/Crash_Bandicoot_[U]/36829

                  Please note that we do not endorse or support any illegal or unethical activities related to downloading or playing video games. Please use these links at your own risk and discretion.

                  -

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Expedition To Undermountain 3.5 Pdf Download [UPDATED].md b/spaces/tialenAdioni/chat-gpt-api/logs/Expedition To Undermountain 3.5 Pdf Download [UPDATED].md deleted file mode 100644 index 2a71a1a7496e7bfdb0cf1216addbe17886b304e8..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Expedition To Undermountain 3.5 Pdf Download [UPDATED].md +++ /dev/null @@ -1,32 +0,0 @@ - -

                  Expedition to Undermountain 3.5 PDF Download: A Guide for Dungeon Masters and Players

                  - -

                  Are you looking for a new adventure to challenge your Dungeons and Dragons 3.5 edition characters? Do you want to explore the legendary Undermountain, the largest and most dangerous dungeon in the Forgotten Realms? If so, you might be interested in Expedition to Undermountain 3.5 PDF download, a 224-page sourcebook that provides everything you need to run a campaign in this iconic setting.

                  -

                  expedition to undermountain 3.5 pdf download


                  Download ✫✫✫ https://urlcod.com/2uKb3n



                  - -

                  Expedition to Undermountain 3.5 PDF download is a revised and updated version of the original Expedition to Undermountain, published in 2007. It includes new maps, monsters, traps, treasures, and secrets, as well as tips and advice for Dungeon Masters and players alike. You can use this book as a standalone adventure, or as part of a larger campaign that spans the entire Underdark.

                  - -

                  In Expedition to Undermountain 3.5 PDF download, you will find:

                  - -
                    -
                  • A detailed overview of Undermountain's history, factions, and hazards.
                  • -
                  • A complete walkthrough of the first three levels of Undermountain, with maps and descriptions of over 100 rooms and encounters.
                  • -
                  • Four new prestige classes: the Doomguide, the Halaster's Apprentice, the Shadow Thief of Amn, and the Underdark Outcast.
                  • -
                  • Over 60 new magic items, spells, and feats.
                  • -
                  • Over 40 new monsters, including the beholderkin, the chitine, the cloaker lord, and the mind flayer arcanist.
                  • -
                  - -

                  If you are ready to embark on an epic journey into the depths of Undermountain, you can download Expedition to Undermountain 3.5 PDF from our website for a small fee. You will receive a high-quality PDF file that you can print or view on any device. You will also get access to our customer support team, who will answer any questions or issues you might have.

                  - -

                  Don't miss this opportunity to experience one of the most classic and thrilling adventures in Dungeons and Dragons history. Download Expedition to Undermountain 3.5 PDF today and prepare to enter the mad wizard's domain!

                  - -

                  What is Undermountain?

                  - -

                  Undermountain is a vast network of tunnels, chambers, and caverns that lies beneath the city of Waterdeep, the largest and most prosperous metropolis in the Sword Coast. It was created by Halaster Blackcloak, a powerful and insane wizard who vanished centuries ago, leaving behind his twisted creations and experiments. Undermountain is home to countless dangers and wonders, from ancient ruins and hidden treasures, to deadly traps and monstrous creatures. It is also a place of mystery and intrigue, where factions vie for power and secrets, and where adventurers can find fame or fortune - or meet their doom.

                  -

                  - -

                  Why should you play Expedition to Undermountain 3.5 PDF?

                  - -

                  Expedition to Undermountain 3.5 PDF is a great choice for Dungeon Masters and players who want to experience a classic dungeon crawl with a modern twist. It offers a rich and immersive setting that can be adapted to any style of play, from hack-and-slash to role-playing. It also provides a flexible framework that allows you to customize your own adventure, or follow the suggested plot hooks and side quests. Whether you want to explore Undermountain for a few sessions or for a long-term campaign, Expedition to Undermountain 3.5 PDF will keep you entertained and challenged.

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/High-gain Pw-dn4210d Driver Download A Guide to Wireless High Gain USB Adapter.md b/spaces/tialenAdioni/chat-gpt-api/logs/High-gain Pw-dn4210d Driver Download A Guide to Wireless High Gain USB Adapter.md deleted file mode 100644 index 350662684c3e01b35e327181e1a3d645d4e137ed..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/High-gain Pw-dn4210d Driver Download A Guide to Wireless High Gain USB Adapter.md +++ /dev/null @@ -1,153 +0,0 @@ -
                  -

                  High-gain PW-DN4210D Driver Download: How to Install and Use It

                  -

                  If you are looking for a reliable and affordable USB WiFi adapter that can provide you with high-speed wireless internet connection, you might want to consider the PW-DN4210D. This device is manufactured by Proware, a company that specializes in network products and solutions. In this article, we will show you how to download, install and use the driver for this adapter on Windows operating system.

                  -

                  High-gain Pw-dn4210d Driver Download


                  Download Zip ····· https://urlcod.com/2uKaRJ



                  -

                  Introduction

                  -

                  Before we get into the details of how to install and use the driver for PW-DN4210D, let's first understand what this device is and why you need a driver for it.

                  -

                  What is PW-DN4210D?

                  -

                  PW-DN4210D is a USB WiFi adapter that supports IEEE 802.11n wireless standard, which means it can deliver up to 150Mbps of wireless data transfer rate. It also features a detachable 4dBi omni-directional antenna that can enhance the signal strength and coverage. The device has a WPS button that allows you to easily connect to a secure wireless network with one click. The device is compatible with Windows, Linux and Mac operating systems.

                  -

                  Why do you need a driver for PW-DN4210D?

                  -

                  A driver is a software program that allows your computer to communicate with a hardware device. Without a driver, your computer will not be able to recognize or use the device properly. Therefore, you need a driver for PW-DN4210D if you want to use it on your computer.

                  -

                  Where can you download the driver for PW-DN4210D?

                  -

The best place to download the driver for PW-DN4210D is the official website of Proware, which is https://oemdrivers.com/network-proware-pw-dn4210d. There you can find the latest and compatible driver for your device and your operating system. Alternatively, you can also download the driver from other sources, such as https://www.minihere.com/pw-dn4210d-ar9271-150mbps-usb-wifi-adapter-driver-download.html, which provides the driver for Windows 7/8/10 and Linux.

                  -

                  How to install the driver for PW-DN4210D on Windows

                  -

                  Once you have downloaded the driver file for PW-DN4210D, you need to install it on your computer. The installation process is simple and straightforward. Just follow these steps:

                  -

                  Step 1: Download the driver file from the official website

                  -

                  Go to https://oemdrivers.com/network-proware-pw-dn4210d and click on the Download button. You will be redirected to a page where you can choose your operating system and download the driver file. The file name should be something like PW-DN4210D_Win7_8_10.zip.

                  -

                  Step 2: Extract the driver file to a folder on your computer

                  -

                  After downloading the driver file, you need to extract it to a folder on your computer. You can use any software that can unzip files, such as WinRAR or 7-Zip. Right-click on the file and select Extract Here or Extract to PW-DN4210D_Win7_8_10. You will see a folder named PW-DN4210D_Win7_8_10 with several files inside.
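If you prefer to script this step instead of using WinRAR or 7-Zip, Python's standard zipfile module can unpack the archive. This is only a rough sketch; the download location and archive name are assumptions based on the example above, so adjust the paths to match your system.

```python
import zipfile
from pathlib import Path

# Assumed download location and archive name - change these to match your system.
archive = Path.home() / "Downloads" / "PW-DN4210D_Win7_8_10.zip"
target = Path.home() / "Downloads" / "PW-DN4210D_Win7_8_10"

# Extract every file from the driver package into the target folder.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)

# List the extracted files so you can confirm the setup files are there.
print([p.name for p in target.iterdir()])
```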

                  -

                  Step 3: Plug in the PW-DN4210D USB WiFi adapter to your computer

                  -

                  Now you need to connect the PW-DN4210D USB WiFi adapter to your computer. Find an available USB port on your computer and plug in the device. You will see a blue LED light on the device indicating that it is powered on.

                  -


                  -

                  Step 4: Open Device Manager and locate the adapter under Network adapters

                  -

                  To install the driver for PW-DN4210D, you need to open Device Manager on your computer. You can do this by pressing Windows key + X and selecting Device Manager from the menu. Alternatively, you can also search for Device Manager in the Start menu or Cortana. Once you open Device Manager, you will see a list of devices connected to your computer. Expand the Network adapters category and look for a device named Wireless Network Adapter or something similar with a yellow exclamation mark next to it. This means that the device is not recognized by your computer and needs a driver.
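If you find it easier to check this from a terminal, the same information can be pulled with PowerShell's Get-PnpDevice cmdlet, which is available on Windows 10 and later. Treat the snippet below as a rough sketch rather than an exact mirror of the Device Manager view; an adapter that still needs a driver usually reports a status of Error or Unknown.

```python
import subprocess

# List network-class devices with their driver status.
# An adapter without a working driver typically shows "Error" or "Unknown".
subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-PnpDevice -Class Net | Format-Table FriendlyName, Status"],
    check=True,
)
```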

                  -

                  Step 5: Right-click on the adapter and select Update driver software

                  -

To install the driver for PW-DN4210D, right-click on the device and select Update driver software from the context menu. You will see a window that asks how you want to search for driver software. Choose the Browse my computer for driver software option.

                  -

                  Step 6: Browse to the folder where you extracted the driver file and click Next

                  -

Next, browse to the folder where you extracted the driver file in step 2. Click on the Browse button and navigate to the folder named PW-DN4210D_Win7_8_10. Select this folder and click OK. Then click the Next button to start installing the driver.

                  -

                  Step 7: Wait for the installation to complete and restart your computer if prompted

                  -

Once Windows has finished installing the driver for PW-DN4210D, you will see a message that says "Windows has successfully updated your driver software". You can click Close to exit the window. You may need to restart your computer for the changes to take effect.
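If you would rather install the driver from the command line than through the Device Manager wizard, Windows ships the pnputil tool for exactly this job. The sketch below is a minimal illustration and assumes the extracted folder contains an .inf setup file; run it from an elevated (administrator) prompt and substitute your own paths.

```python
import subprocess
from pathlib import Path

# Assumed location of the extracted driver package - adjust to your system.
driver_dir = Path.home() / "Downloads" / "PW-DN4210D_Win7_8_10"

# Find the .inf setup file inside the extracted package.
inf_files = list(driver_dir.glob("*.inf"))
if not inf_files:
    raise SystemExit("No .inf file found - check the extracted folder")

# Stage and install the driver (requires an elevated prompt).
subprocess.run(["pnputil", "/add-driver", str(inf_files[0]), "/install"], check=True)

# List third-party packages in the driver store to confirm it was added.
subprocess.run(["pnputil", "/enum-drivers"], check=True)
```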

                  -

                  How to use PW-DN4210D USB WiFi adapter on Windows

                  -

                  After installing the driver for PW-DN4210D, you can use the device to connect to a wireless network and access the internet. Here are some tips on how to use the device on Windows.

                  -

                  How to connect to a wireless network with PW-DN4210D

                  -

                  To connect to a wireless network with PW-DN4210D, use these steps:

                  -
                    -
                  • Click on the network icon on the taskbar and select the wireless network you want to connect to.
                  • -
                  • If the network is secured, enter the password and click Connect.
                  • -
                  • If the network has WPS enabled, you can also press the WPS button on the device and on the router to connect automatically.
                  • -
                  • Wait for a few seconds until you see "Connected" next to the network name.
                  • -
                  -

                  You can now browse the web, stream videos, play games and do other online activities with your PW-DN4210D USB WiFi adapter.
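The same connection can also be made from the command line with netsh, which is handy if you ever want to script it. The network name and interface name below are placeholders, and the sketch assumes Windows already has a saved profile for that network.

```python
import subprocess

ssid = "HomeWiFi"     # placeholder: the name of your wireless network
interface = "Wi-Fi"   # placeholder: the adapter's interface name in Windows

# Show the wireless profiles Windows has stored on this machine.
subprocess.run(["netsh", "wlan", "show", "profiles"], check=True)

# Connect to the network using its saved profile.
subprocess.run(
    ["netsh", "wlan", "connect", f"name={ssid}", f"interface={interface}"],
    check=True,
)
```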

                  -

                  How to configure the wireless settings with PW-DN4210D

                  -

                  To configure the wireless settings with PW-DN4210D, use these steps:

                  -
                    -
                  • Right-click on the network icon on the taskbar and select Open Network & Internet settings.
                  • -
                  • Click on Wi-Fi on the left pane and then click on Manage known networks on the right pane.
                  • -
                  • Select the wireless network you want to configure and click on Properties.
                  • -
                  • You can change various settings, such as network profile, metered connection, IP settings, DNS settings and more.
                  • -
                  • Click on Save or Apply to save your changes.
                  • -
                  -

                  You can also access more advanced settings by clicking on Change adapter options under Advanced network settings. This will open Network Connections where you can right-click on your wireless adapter and select Properties. You can then configure various protocols, services and features for your wireless connection.
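The IP and DNS settings mentioned above can also be applied with netsh from an elevated prompt. Every address in this sketch is a placeholder chosen for illustration, so replace them with values that actually fit your network before running anything like it.

```python
import subprocess

interface = "Wi-Fi"  # placeholder: the adapter's interface name in Windows

# Give the adapter a static IPv4 address, subnet mask and default gateway.
subprocess.run(
    ["netsh", "interface", "ip", "set", "address",
     f"name={interface}", "static", "192.168.1.50", "255.255.255.0", "192.168.1.1"],
    check=True,
)

# Point the adapter at a specific DNS server instead of the one from DHCP.
subprocess.run(
    ["netsh", "interface", "ip", "set", "dns",
     f"name={interface}", "static", "192.168.1.1"],
    check=True,
)
```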

                  -

                  How to enable WPS function with PW-DN4210D

                  -

                  WPS stands for Wi-Fi Protected Setup, which is a feature that allows you to connect to a secure wireless network without entering a password. To enable WPS function with PW-DN4210D, use these steps:

                  -
                    -
                  • Make sure your router supports WPS and has it enabled.
                  • -
                  • Press and hold the WPS button on your router for a few seconds until you see a WPS LED light up.
                  • -
                  • Press and hold the WPS button on your PW-DN4210D device for a few seconds until you see a blue LED light up.
                  • -
                  • Wait for a few seconds until both devices establish a connection and you see "Connected" next to the network name.
                  • -
                  -

                  You can now enjoy a secure and fast wireless connection with your PW-DN4210D USB WiFi adapter.

                  -

                  Conclusion

                  -

PW-DN4210D is a USB WiFi adapter that can provide you with a high-speed wireless internet connection. It supports the IEEE 802.11n standard and has a detachable 4dBi antenna and a WPS button. To use it on Windows, you need to download and install the driver from the official website or other sources. Then you can connect to a wireless network, configure the wireless settings and enable the WPS function with ease. We hope this article has helped you learn how to download, install and use the driver for PW-DN4210D on the Windows operating system.

                  -

                  FAQs

                  -

                  Here are some frequently asked questions about PW-DN4210D USB WiFi adapter and its driver.

                  -
                    -
                  1. Is PW-DN4210D compatible with Windows 11?
                  2. -

                    Yes, PW-DN4210D is compatible with Windows 11. You can use the same driver as for Windows 10 or download it from https://oemdrivers.com/network-proware-pw-dn4210d.

                    -
                  3. How do I uninstall the driver for PW-DN4210D?
                  4. -

                    To uninstall the driver for PW-DN4210D, use these steps:

                    -
                      -
                    • Open Device Manager and expand Network adapters category.
                    • -
                    • Right-click on Wireless Network Adapter or something similar and select Uninstall device.
                    • -
                    • Check the box that says Delete the driver software for this device and click Uninstall.
                    • -
                    • Restart your computer if prompted.
                    • -
                    -
                  5. How do I update the driver for PW-DN4210D?
                  6. -

To keep the driver for PW-DN4210D up to date, you can check if there is a newer version available on the official website or other sources. To update the driver, use these steps:

                    -
                      -
                    • Open Device Manager and expand Network adapters category.
                    • -
                    • Right-click on Wireless Network Adapter or something similar and select Update driver.
                    • -
                    • Select Search automatically for updated driver software option and wait for Windows to find and install the latest driver.
                    • -
                    • Restart your computer if prompted.
                    • -
                    -

                    You can also manually download the latest driver file from the website and follow the steps 2 to 7 in the previous section to install it.

                    -
                  7. How do I troubleshoot PW-DN4210D USB WiFi adapter?
                  8. -

If you encounter any problems with the PW-DN4210D USB WiFi adapter, such as no internet connection, slow speeds, or frequent disconnections, you can try some of these troubleshooting tips:

                    -
                      -
                    • Make sure the device is plugged in securely and the blue LED light is on.
                    • -
                    • Make sure the driver is installed correctly and up to date.
                    • -
                    • Make sure the wireless network you are connecting to is working properly and has a strong signal.
                    • -
                    • Make sure your computer's firewall and antivirus software are not blocking the wireless connection.
                    • -
                    • Try changing the wireless channel or frequency on your router to avoid interference from other devices.
                    • -
                    • Try using the WPS function to connect to a secure wireless network easily.
                    • -
                    • Try resetting the network adapter by following these steps:
                    • -
                        -
                      • Open Settings and click on Network & internet.
                      • -
                      • Click on Advanced network settings and then click on Network reset.
                      • -
                      • Click on Reset now and confirm your action.
                      • -
                      • Wait for your computer to restart and reconnect to your wireless network.
                      • -
                      -
                    -

                    If none of these tips work, you can contact Proware customer support or visit their website for more help.

                    -

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download GoreBox 10.0.0 APK for Android - Experience the Ultimate Sandbox Game of Violence.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download GoreBox 10.0.0 APK for Android - Experience the Ultimate Sandbox Game of Violence.md deleted file mode 100644 index 09a130915dab292ea44f8f671ac720ad7b4c1a26..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download GoreBox 10.0.0 APK for Android - Experience the Ultimate Sandbox Game of Violence.md +++ /dev/null @@ -1,113 +0,0 @@ - -

                  GoreBox 10.0.0 APK: A Sandbox Game of Extreme Violence

                  -

                  If you are looking for a game that lets you unleash your inner demon and enjoy some chaotic fun, then you might want to check out GoreBox 10.0.0 APK. This is a physics-based sandbox game of extreme violence, where you can use a vast arsenal of brutal weapons, explosive devices, interactive ragdolls, fearsome enemies, advanced turrets, vehicles, and a cutting-edge blood and dismemberment system to create your own mayhem. In this article, we will tell you what GoreBox is, how to download and install it, and how to play it.

                  -

                  gorebox 10.0.0 apk


                  Download Zip ✺✺✺ https://bltlly.com/2uOipd



                  -

                  What is GoreBox?

                  -

                  GoreBox is a game developed by F2Games, an indie studio that specializes in creating games with realistic physics and gore effects. GoreBox was first released in 2019, and since then it has been updated with new features, improvements, and bug fixes. The latest version of the game is 10.0.0, which was released on June 15, 2023.

                  -

                  GoreBox is a game that lets you enter the chaotic world of GoreBox, where you can do whatever you want with no rules or limits. You can choose from different game modes and scenarios, or create your own custom ones. You can also customize your character, weapons, vehicles, and environment to suit your preferences. The game has a simple and intuitive interface that allows you to easily access all the options and tools you need.

                  -

                  Features of GoreBox

                  -

                  GoreBox has many features that make it a unique and entertaining game for fans of violence and gore. Here are some of the main features of the game:

                  -

                  Physics-based sandbox gameplay

                  -

                  GoreBox uses a realistic physics engine that simulates the movement, collision, and deformation of objects in the game world. You can interact with anything in the game, from ragdolls to vehicles, and see how they react to your actions. You can also manipulate gravity, time, and other parameters to create different effects.

                  -

                  Brutal weapons and explosive devices

                  -

                  GoreBox offers you a vast arsenal of brutal weapons and explosive devices that you can use to inflict pain and damage on your enemies or yourself. You can choose from melee weapons like knives, axes, swords, hammers, chainsaws, machetes, etc., or ranged weapons like pistols, rifles, shotguns, machine guns, rocket launchers, grenades, etc. You can also use mines, C4s, bombs, nukes, fireworks, etc., to create massive explosions.

                  -


                  -

                  Interactive ragdolls and fearsome enemies

                  -

                  GoreBox features interactive ragdolls that you can spawn in the game world and use as targets or props. You can drag them around, throw them in the air, attach them to ropes or hooks, cut them into pieces, set them on fire, etc. You can also spawn different types of enemies that will attack you or each other. You can choose from zombies, soldiers, robots, aliens, clowns, etc., or create your own custom enemies.

                  -

                  Advanced turrets and vehicles

                  -

GoreBox also lets you use advanced turrets and vehicles that can help you in your rampage or add more fun to your gameplay. You can use turrets that shoot bullets, lasers, rockets, flames and more, or vehicles ranging from cars, trucks and bikes to tanks, helicopters and jets. You can also customize your turrets and vehicles with different colors, skins and weapons.

                  -

                  Cutting-edge blood and dismemberment system

                  -

                  GoreBox boasts a cutting-edge blood and dismemberment system that makes the game more realistic and satisfying. You can see blood splatter, stains, and pools on the ground, walls, and objects. You can also see body parts fly off, bones break, organs spill out, etc. You can adjust the amount and quality of blood and gore in the settings.

                  -

                  How to download and install GoreBox 10.0.0 APK?

                  -

                  If you want to download and install GoreBox 10.0.0 APK on your Android device, you need to follow these steps:

                  -

                  Requirements and compatibility

                  -

                  Before you download and install GoreBox 10.0.0 APK, you need to make sure that your device meets the following requirements:

                  -
                    -
                  • Your device must have Android 4.4 or higher.
                  • -
                  • Your device must have at least 1 GB of RAM and 100 MB of free storage space.
                  • -
                  • Your device must allow installation of apps from unknown sources. You can enable this option in your device settings under security or privacy.
                  • -
                  -

                  GoreBox 10.0.0 APK is compatible with most Android devices, but some features may not work properly on some models or versions.
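If your phone already has USB debugging enabled, you can confirm the Android version and free storage from a computer before downloading anything. The sketch below assumes the Android platform tools (adb) are installed and on your PATH; it is only a convenience check, not a required step.

```python
import subprocess

# Android version of the connected device (GoreBox needs 4.4 or higher).
subprocess.run(["adb", "shell", "getprop", "ro.build.version.release"], check=True)

# Free space on the data partition, to make sure roughly 100 MB is available.
subprocess.run(["adb", "shell", "df", "-h", "/data"], check=True)
```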

                  -

                  Steps to download and install

                  -

After you have checked the requirements and compatibility, you can download and install GoreBox 10.0.0 APK by following these steps (a command-line alternative using adb is sketched after the list):

                  -
                    -
                  1. Go to the official website of GoreBox or any other trusted source that provides the APK file.
                  2. -
                  3. Click on the download button and wait for the file to be downloaded on your device.
                  4. -
                  5. Locate the downloaded file in your device's file manager and tap on it to start the installation process.
                  6. -
                  7. Follow the instructions on the screen and grant the necessary permissions to the app.
                  8. -
                  9. Wait for the installation to finish and then launch the app from your app drawer or home screen.
                  10. -
                  -
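As an alternative to tapping the file on the phone, the downloaded APK can be sideloaded from a computer with adb. The file name below is a placeholder, and the sketch assumes USB debugging is enabled and the Android platform tools are installed.

```python
import subprocess

apk = "GoreBox_10.0.0.apk"  # placeholder: the APK file you downloaded

# Check that the device is visible over USB debugging.
subprocess.run(["adb", "devices"], check=True)

# Install the APK; -r replaces the app if an older version is already present.
subprocess.run(["adb", "install", "-r", apk], check=True)
```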

                  How to play GoreBox?

                  -

                  Once you have downloaded and installed GoreBox 10.0.0 APK, you can start playing the game by following these tips:

                  -

                  Controls and interface

                  -

                  GoreBox has a simple and intuitive interface that allows you to easily access all the options and tools you need. You can use the virtual joystick on the left side of the screen to move your character, and the buttons on the right side of the screen to perform actions like jumping, crouching, shooting, etc. You can also use the menu button on the top left corner of the screen to pause the game, access the settings, or quit the game.

                  -

                  You can also use the toolbar on the bottom of the screen to select different items like weapons, ragdolls, enemies, turrets, vehicles, etc. You can drag and drop them on the game world or tap on them to use them. You can also use the slider on the right side of the screen to adjust the gravity, time, and other parameters. You can also use the camera button on the top right corner of the screen to change the camera angle or perspective.

                  -

                  Game modes and scenarios

                  -

                  GoreBox has different game modes and scenarios that you can choose from or create your own. You can select the game mode from the main menu, where you can see the options like sandbox, survival, zombie, arena, etc. You can also select the scenario from the toolbar, where you can see the options like city, desert, forest, island, etc. You can also create your own custom game mode and scenario by using the editor mode, where you can add, remove, or modify any element in the game world.

                  -

                  Tips and tricks

                  -

                  GoreBox is a game that lets you experiment and have fun with no rules or limits. However, if you want to get the most out of the game, you can follow these tips and tricks:

                  -
                    -
                  • Use different weapons and devices to create different effects and combos. For example, you can use a grenade launcher to shoot a ragdoll in the air, then use a rocket launcher to blow it up in mid-air.
                  • -
                  • Use different enemies and turrets to create different challenges and scenarios. For example, you can spawn a horde of zombies and then use a turret to mow them down.
                  • -
                  • Use different vehicles to move around faster and cause more damage. For example, you can use a tank to crush anything in your way or a jet to fly over the map.
                  • -
                  • Use different parameters to create different atmospheres and situations. For example, you can use low gravity to make things float or high time to make things slow down.
                  • -
                  • Use the editor mode to create your own custom game mode and scenario. For example, you can create a zombie apocalypse scenario with a city full of zombies and survivors.
                  • -
                  -

                  Conclusion

                  -

                  GoreBox 10.0.0 APK is a physics-based sandbox game of extreme violence that lets you do whatever you want with no rules or limits. You can use a vast arsenal of brutal weapons, explosive devices, interactive ragdolls, fearsome enemies, advanced turrets, vehicles, and a cutting-edge blood and dismemberment system to create your own mayhem. You can also choose from different game modes and scenarios, or create your own custom ones. You can also customize your character, weapons, vehicles, and environment to suit your preferences. The game has a simple and intuitive interface that allows you to easily access all the options and tools you need.

                  -

                  If you are looking for a game that lets you unleash your inner demon and enjoy some chaotic fun, then you might want to download and install GoreBox 10.0.0 APK on your Android device. However, be warned that this game is not for the faint of heart or the easily offended. It contains graphic violence, gore, blood, and dismemberment that may not be suitable for everyone. If you are not bothered by these things, then go ahead and have fun with GoreBox!

                  -

                  FAQs

                  -

                  Here are some frequently asked questions about GoreBox 10.0.0 APK:

                  -
                    -
                  1. Is GoreBox 10.0.0 APK free?
                  2. -

                    Yes, GoreBox 10.0.0 APK is free to download and play. However, it may contain ads or in-app purchases that require real money.

                    -
                  3. Is GoreBox 10.0.0 APK safe?
                  4. -

                    Yes, GoreBox 10.0.0 APK is safe to download and install on your device. However, make sure that you download it from a trusted source like the official website of GoreBox or any other reputable source that provides the APK file.

                    -
                  5. Is GoreBox 10.0.0 APK offline?
                  6. -

                    Yes, GoreBox 10.0.0 APK is offline and you can play it without an internet connection. However, some features may require an internet connection to work properly, such as downloading updates, accessing online content, or making in-app purchases.

                    -
                  7. Is GoreBox 10.0.0 APK modded?
                  8. -

                    No, GoreBox 10.0.0 APK is not modded or hacked. It is the original and official version of the game that is provided by the developer. However, you may find some modded or hacked versions of the game on other sources that may offer unlimited money, unlocked items, or other cheats. However, we do not recommend using these versions as they may contain viruses, malware, or other harmful content that may damage your device or compromise your privacy.

                    -
                  9. Is GoreBox 10.0.0 APK available for iOS?
                  10. -

                    No, GoreBox 10.0.0 APK is not available for iOS devices. It is only compatible with Android devices. However, you may find some similar games for iOS devices that offer similar gameplay and features as GoreBox.

                    -

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/timpal0l/chat-ui/svelte.config.js b/spaces/timpal0l/chat-ui/svelte.config.js deleted file mode 100644 index 64d25b1359a43cd15a6bfcd06331385127cd9197..0000000000000000000000000000000000000000 --- a/spaces/timpal0l/chat-ui/svelte.config.js +++ /dev/null @@ -1,23 +0,0 @@ -import adapter from "@sveltejs/adapter-node"; -import { vitePreprocess } from "@sveltejs/kit/vite"; -import dotenv from "dotenv"; - -dotenv.config({ path: "./.env.local" }); -dotenv.config({ path: "./.env" }); - -/** @type {import('@sveltejs/kit').Config} */ -const config = { - // Consult https://kit.svelte.dev/docs/integrations#preprocessors - // for more information about preprocessors - preprocess: vitePreprocess(), - - kit: { - adapter: adapter(), - - paths: { - base: process.env.APP_BASE || "", - }, - }, -}; - -export default config; diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack EXCLUSIVE 64 Bit.md b/spaces/tioseFevbu/cartoon-converter/scripts/Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack EXCLUSIVE 64 Bit.md deleted file mode 100644 index c93f7a6b7c58e53d990c36bcbda1b85631ab01b4..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack EXCLUSIVE 64 Bit.md +++ /dev/null @@ -1,15 +0,0 @@ -
                  -


Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit

                    | This is the main title of the article that includes the main keyword | | Introduction |

                    If you are looking for a powerful and easy-to-use photo editing software, you might have heard of Adobe Photoshop Lightroom CC. This is a cloud-based service that lets you edit, organize, store, and share your photos from anywhere, on any device. You can also sync your photos and edits across all your devices, and access them online or offline.

                    But what is Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit? And why would you want to use it? In this article, we will explain what this software is, what it does, and how you can download and install it on your Windows PC. We will also compare it with other alternatives, and give you some tips and best practices for using it.

                    So, if you are ready to take your photo editing skills to the next level, read on and discover everything you need to know about Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit.

                    | |

                    What is Adobe Photoshop Lightroom CC?

                    | This is a subheading that gives more details about Adobe Photoshop Lightroom CC, its features, benefits, and differences from other Adobe products | | What is Adobe Photoshop Lightroom CC? |

                    Adobe Photoshop Lightroom CC is a cloud-based service that offers a complete solution for photo editing and management. It is part of the Adobe Creative Cloud suite of applications, which also includes Adobe Photoshop, Adobe Illustrator, Adobe Premiere Pro, and more.

                    With Adobe Photoshop Lightroom CC, you can:

                    • Import photos from your camera, computer, or mobile device
                    • Organize your photos into albums, collections, and folders
                    • Edit your photos using powerful tools and presets
                    • Share your photos with others via social media, email, or web galleries
                    • Store your photos in the cloud and access them from anywhere

                    Adobe Photoshop Lightroom CC is designed for photographers of all levels, from beginners to professionals. It has a simple and intuitive interface that lets you focus on your photos, not on the software. It also has advanced features that let you fine-tune your edits and create stunning effects.

                    Adobe Photoshop Lightroom CC is different from other Adobe products in several ways. For example:

                    • It is cloud-based, which means you don't need to install anything on your computer. You can access it from any browser or device.
                    • It is optimized for photography, which means it has tools and presets that are specifically designed for photo editing.
                    • It is non-destructive, which means it does not alter your original photos. You can always undo or redo your edits, or revert to the original version.

                    Adobe Photoshop Lightroom CC is also different from Adobe Photoshop Lightroom Classic CC, which is another version of the software that is more suitable for desktop users who prefer a traditional workflow. Adobe Photoshop Lightroom Classic CC has more features and options for organizing and editing photos on your computer, but it does not have the cloud storage and sync capabilities of Adobe Photoshop Lightroom CC.

                    - |

                    How to download and install Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit

                    | This is a subheading that provides a step-by-step guide on how to download and install Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit on your Windows PC | | How to download and install Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit |

                    If you want to use Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit on your Windows PC, you will need to follow these steps:

                    1. Download the software from a reliable source. You can find some of the sources in the next section of this article.
                    2. Extract the zip file using WinRAR or any other extraction tool.
                    3. Run the setup file and follow the instructions to install the software.
                    4. Copy the crack file from the crack folder and paste it into the installation directory of the software.
                    5. Launch the software and enjoy using it.

                    That's it! You have successfully downloaded and installed Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit on your Windows PC. Now you can start editing your photos like a pro.

                    - |

                    Where to find Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit

                    | This is a subheading that lists some reliable sources where you can find Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit for free or at a low cost | | Where to find Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit |

                    There are many websites that claim to offer Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit for free or at a low cost, but not all of them are trustworthy or safe. Some of them may contain viruses, malware, or spyware that can harm your computer or steal your personal information.

                    Therefore, you should be careful and do some research before downloading anything from the internet. You should also use a reliable antivirus software and a VPN service to protect your device and your privacy.

                    Here are some of the sources that we have found to be reliable and safe for downloading Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit:

                    These are some of the sources that we recommend for downloading Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit. However, you should always be cautious and responsible when downloading anything from the internet, and respect the intellectual property rights of the software developers.

                    - |

                    How to use Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit

                    | This is a subheading that explains how to use Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit to edit your photos in various ways | | How to use Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit |

                    Once you have downloaded and installed Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack editing. You can simply import, organize, edit, and share your photos with a few clicks and taps.

                  6. It is powerful, which means you can achieve professional-quality results with your photos. You can use the tools and presets in the software to enhance, correct, and transform your photos in various ways.
                  7. It is flexible, which means you can customize and personalize your photo editing workflow. You can create your own presets, filters, and profiles, or download and use the ones created by other users or experts.
                  8. It is compatible, which means you can work with any type of photo format, including RAW, JPEG, PNG, TIFF, and more. You can also edit your photos in other Adobe applications, such as Photoshop or Premiere Pro, using the Edit In feature.
                  9. These are some of the advantages of using Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit. However, there are also some disadvantages and risks that you should be aware of before using it. We will discuss them in the next section of this article.

                    - |

                    What are the disadvantages and risks of using Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit

                    | This is a subheading that warns about some of the drawbacks and dangers of using Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit | | What are the disadvantages and risks of using Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit |

                    While Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit has many benefits and features, it also has some disadvantages and risks that you should consider before using it. Some of them are:

                    • It is illegal, which means you are violating the terms and conditions of Adobe by using a cracked version of their software. You are also infringing on their intellectual property rights and copyrights. This can result in legal consequences, such as fines, lawsuits, or even jail time.
                    • It is unsafe, which means you are exposing your computer and your personal information to potential threats and attacks. The crack file that you use to activate the software may contain viruses, malware, or spyware that can harm your computer or steal your data. You may also download the software from untrustworthy sources that may infect your computer with unwanted programs or ads.
                    • It is unreliable, which means you may experience errors, bugs, or crashes while using the software. The crack file that you use to activate the software may not be compatible with the latest updates or versions of the software. You may also lose access to some of the features or functions of the software that require an online connection or verification.

                    These are some of the disadvantages and risks of using Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit. Therefore, we do not recommend using it for photo editing and management. Instead, we suggest that you use the original version of Adobe Photoshop Lightroom CC 2019 that you can buy from Adobe's website or get a free trial for 7 days.

                    - |

                    How to compare Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit with other alternatives

                    | This is a subheading that compares Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit with other photo editing software | | How to compare Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit with other alternatives |

                    If you are looking for other photo editing software that you can use instead of Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit, you have many options to choose from. There are many photo editing software that offer similar or different features and functions, and that have different prices and requirements.

                    However, how do you compare them and decide which one is the best for you? Here are some of the factors that you should consider when comparing photo editing software:

                    • Price: How much does the software cost? Is it a one-time purchase or a subscription-based service? Does it offer a free trial or a money-back guarantee?
                    • Features: What are the main features and functions of the software? Does it have the tools and presets that you need for your photo editing goals? Does it have any unique or advanced features that set it apart from other software?
                    • Performance: How fast and smooth is the software? Does it run well on your computer or device? Does it have any bugs or glitches that affect its functionality?
                    • Support: How easy and convenient is it to use the software? Does it have a user-friendly interface and a clear documentation? Does it have a customer service or a community that can help you with any issues or questions?
                    • Security: How safe and secure is the software? Does it protect your privacy and your data? Does it have any risks or threats that can harm your computer or your personal information?

                    These are some of the factors that you should consider when comparing photo editing software. To help you with your comparison, we have created a table that compares Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit with some of the popular alternatives:

                    - |
| Name | Price | Features | Performance | Support | Security |
| --- | --- | --- | --- | --- | --- |
| Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit | Free or low cost (depending on the source) | Cloud-based service, photo editing and management, sync and access from anywhere, tools and presets, non-destructive editing, compatible with other Adobe applications | Potentially slow, unstable, or incompatible (depending on the crack file and the updates) | No official support, limited documentation, unreliable sources or communities | Risky, illegal, unsafe, unreliable (depending on the crack file and the source) |
| Adobe Photoshop Lightroom CC 2019 (original version) | $9.99/month or $119.88/year (includes 1 TB of cloud storage and Adobe Photoshop) | Cloud-based service, photo editing and management, sync and access from anywhere, tools and presets, non-destructive editing, compatible with other Adobe applications | Fast, stable, compatible (with regular updates and improvements) | Official support, comprehensive documentation, helpful community | Safe, legal, secure, reliable (with encryption and verification) |
| Luminar AI | $79 (one-time purchase for two devices) | AI-powered photo editing, templates and tools, sky replacement, portrait enhancement, creative effects | Fast, stable, compatible (with Windows and Mac) | Official support, comprehensive documentation, helpful community | Safe, legal, secure, reliable (with encryption and verification) |
| Affinity Photo | $49.99 (one-time purchase for Windows or Mac), $19.99 (one-time purchase for iPad) | Professional photo editing, RAW processing, HDR merging, panorama stitching, focus stacking, layer editing, creative effects | Fast, stable, compatible (with Windows, Mac, and iPad) | Official support, comprehensive documentation, helpful community | Safe, legal, secure, reliable (with encryption and verification) |
| Darktable | Free (open source) | RAW processing, photo editing and management, non-destructive editing, tools and presets, tethered shooting | Fast, stable, compatible (with Windows, Mac, and Linux) | No official support, limited documentation, active community | Safe, legal, secure, reliable (with encryption and verification) |

                    Conclusion


In conclusion, Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit is a cracked copy of a cloud-based service that offers a complete solution for photo editing and management. The software itself has many features and benefits that make it popular with photographers and photo editors, but the cracked version carries serious disadvantages and risks that make it a dangerous and illegal option to use.

Therefore, we recommend that you avoid Adobe Photoshop Lightroom CC 2019 2.0.1 (x64) Crack 64 bit and instead use the original version of Adobe Photoshop Lightroom CC 2019, which you can buy from Adobe's website or try free for 7 days. You can also try other alternatives that are more affordable, safe, and reliable.

If you want to learn more about photo editing and management, check out the resources mentioned in this article. You can also leave a comment or question below, and we will be happy to answer.

                    Thank you for reading this article and we hope you found it useful and informative. Happy photo editing!


                    FAQs

                    • Q: What is the difference between Adobe Photoshop Lightroom CC and Adobe Photoshop?
                    • A: Adobe Photoshop Lightroom CC is a cloud-based service that focuses on photo editing and management. Adobe Photoshop is a desktop application that focuses on image manipulation and creation.
                    • Q: How can I get Adobe Photoshop Lightroom CC for free?
                    • A: You can get Adobe Photoshop Lightroom CC for free for 7 days by signing up for a free trial on Adobe's website. After that, you will need to pay a monthly or annual subscription fee to continue using it.
                    • Q: Is Adobe Photoshop Lightroom CC compatible with other devices?
                    • A: Yes, Adobe Photoshop Lightroom CC is compatible with other devices, such as smartphones, tablets, laptops, and desktops. You can access your photos and edits from any device using the Adobe Photoshop Lightroom CC app or website.
                    • Q: What are some of the best presets for Adobe Photoshop Lightroom CC?
                    • A: There are many presets for Adobe Photoshop Lightroom CC that you can use to enhance your photos in different ways. Some of the best presets are: VSCO Film Presets, Mastin Labs Presets, SLR Lounge Presets, RNI Films Presets, and Tribe Archipelago Presets.
                    • Q: How can I learn more about photo editing and management?
• A: You can learn more about photo editing and management by reading blogs, books, and magazines, or by taking online courses that cover the topic. You can also watch videos, podcasts, or webinars that teach the skills and techniques of photo editing and management.

                    b2dd77e56b
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/more_itertools/recipes.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/more_itertools/recipes.py deleted file mode 100644 index 521abd7c2ca633f90a5ba13a8060c5c3d0c32205..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/more_itertools/recipes.py +++ /dev/null @@ -1,620 +0,0 @@ -"""Imported from the recipes section of the itertools documentation. - -All functions taken from the recipes section of the itertools library docs -[1]_. -Some backward-compatible usability improvements have been made. - -.. [1] http://docs.python.org/library/itertools.html#recipes - -""" -import warnings -from collections import deque -from itertools import ( - chain, - combinations, - count, - cycle, - groupby, - islice, - repeat, - starmap, - tee, - zip_longest, -) -import operator -from random import randrange, sample, choice - -__all__ = [ - 'all_equal', - 'consume', - 'convolve', - 'dotproduct', - 'first_true', - 'flatten', - 'grouper', - 'iter_except', - 'ncycles', - 'nth', - 'nth_combination', - 'padnone', - 'pad_none', - 'pairwise', - 'partition', - 'powerset', - 'prepend', - 'quantify', - 'random_combination_with_replacement', - 'random_combination', - 'random_permutation', - 'random_product', - 'repeatfunc', - 'roundrobin', - 'tabulate', - 'tail', - 'take', - 'unique_everseen', - 'unique_justseen', -] - - -def take(n, iterable): - """Return first *n* items of the iterable as a list. - - >>> take(3, range(10)) - [0, 1, 2] - - If there are fewer than *n* items in the iterable, all of them are - returned. - - >>> take(10, range(3)) - [0, 1, 2] - - """ - return list(islice(iterable, n)) - - -def tabulate(function, start=0): - """Return an iterator over the results of ``func(start)``, - ``func(start + 1)``, ``func(start + 2)``... - - *func* should be a function that accepts one integer argument. - - If *start* is not specified it defaults to 0. It will be incremented each - time the iterator is advanced. - - >>> square = lambda x: x ** 2 - >>> iterator = tabulate(square, -3) - >>> take(4, iterator) - [9, 4, 1, 0] - - """ - return map(function, count(start)) - - -def tail(n, iterable): - """Return an iterator over the last *n* items of *iterable*. - - >>> t = tail(3, 'ABCDEFG') - >>> list(t) - ['E', 'F', 'G'] - - """ - return iter(deque(iterable, maxlen=n)) - - -def consume(iterator, n=None): - """Advance *iterable* by *n* steps. If *n* is ``None``, consume it - entirely. - - Efficiently exhausts an iterator without returning values. Defaults to - consuming the whole iterator, but an optional second argument may be - provided to limit consumption. - - >>> i = (x for x in range(10)) - >>> next(i) - 0 - >>> consume(i, 3) - >>> next(i) - 4 - >>> consume(i) - >>> next(i) - Traceback (most recent call last): - File "", line 1, in - StopIteration - - If the iterator has fewer items remaining than the provided limit, the - whole iterator will be consumed. - - >>> i = (x for x in range(3)) - >>> consume(i, 5) - >>> next(i) - Traceback (most recent call last): - File "", line 1, in - StopIteration - - """ - # Use functions that consume iterators at C speed. 
- if n is None: - # feed the entire iterator into a zero-length deque - deque(iterator, maxlen=0) - else: - # advance to the empty slice starting at position n - next(islice(iterator, n, n), None) - - -def nth(iterable, n, default=None): - """Returns the nth item or a default value. - - >>> l = range(10) - >>> nth(l, 3) - 3 - >>> nth(l, 20, "zebra") - 'zebra' - - """ - return next(islice(iterable, n, None), default) - - -def all_equal(iterable): - """ - Returns ``True`` if all the elements are equal to each other. - - >>> all_equal('aaaa') - True - >>> all_equal('aaab') - False - - """ - g = groupby(iterable) - return next(g, True) and not next(g, False) - - -def quantify(iterable, pred=bool): - """Return the how many times the predicate is true. - - >>> quantify([True, False, True]) - 2 - - """ - return sum(map(pred, iterable)) - - -def pad_none(iterable): - """Returns the sequence of elements and then returns ``None`` indefinitely. - - >>> take(5, pad_none(range(3))) - [0, 1, 2, None, None] - - Useful for emulating the behavior of the built-in :func:`map` function. - - See also :func:`padded`. - - """ - return chain(iterable, repeat(None)) - - -padnone = pad_none - - -def ncycles(iterable, n): - """Returns the sequence elements *n* times - - >>> list(ncycles(["a", "b"], 3)) - ['a', 'b', 'a', 'b', 'a', 'b'] - - """ - return chain.from_iterable(repeat(tuple(iterable), n)) - - -def dotproduct(vec1, vec2): - """Returns the dot product of the two iterables. - - >>> dotproduct([10, 10], [20, 20]) - 400 - - """ - return sum(map(operator.mul, vec1, vec2)) - - -def flatten(listOfLists): - """Return an iterator flattening one level of nesting in a list of lists. - - >>> list(flatten([[0, 1], [2, 3]])) - [0, 1, 2, 3] - - See also :func:`collapse`, which can flatten multiple levels of nesting. - - """ - return chain.from_iterable(listOfLists) - - -def repeatfunc(func, times=None, *args): - """Call *func* with *args* repeatedly, returning an iterable over the - results. - - If *times* is specified, the iterable will terminate after that many - repetitions: - - >>> from operator import add - >>> times = 4 - >>> args = 3, 5 - >>> list(repeatfunc(add, times, *args)) - [8, 8, 8, 8] - - If *times* is ``None`` the iterable will not terminate: - - >>> from random import randrange - >>> times = None - >>> args = 1, 11 - >>> take(6, repeatfunc(randrange, times, *args)) # doctest:+SKIP - [2, 4, 8, 1, 8, 4] - - """ - if times is None: - return starmap(func, repeat(args)) - return starmap(func, repeat(args, times)) - - -def _pairwise(iterable): - """Returns an iterator of paired items, overlapping, from the original - - >>> take(4, pairwise(count())) - [(0, 1), (1, 2), (2, 3), (3, 4)] - - On Python 3.10 and above, this is an alias for :func:`itertools.pairwise`. - - """ - a, b = tee(iterable) - next(b, None) - yield from zip(a, b) - - -try: - from itertools import pairwise as itertools_pairwise -except ImportError: - pairwise = _pairwise -else: - - def pairwise(iterable): - yield from itertools_pairwise(iterable) - - pairwise.__doc__ = _pairwise.__doc__ - - -def grouper(iterable, n, fillvalue=None): - """Collect data into fixed-length chunks or blocks. 
- - >>> list(grouper('ABCDEFG', 3, 'x')) - [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')] - - """ - if isinstance(iterable, int): - warnings.warn( - "grouper expects iterable as first parameter", DeprecationWarning - ) - n, iterable = iterable, n - args = [iter(iterable)] * n - return zip_longest(fillvalue=fillvalue, *args) - - -def roundrobin(*iterables): - """Yields an item from each iterable, alternating between them. - - >>> list(roundrobin('ABC', 'D', 'EF')) - ['A', 'D', 'E', 'B', 'F', 'C'] - - This function produces the same output as :func:`interleave_longest`, but - may perform better for some inputs (in particular when the number of - iterables is small). - - """ - # Recipe credited to George Sakkis - pending = len(iterables) - nexts = cycle(iter(it).__next__ for it in iterables) - while pending: - try: - for next in nexts: - yield next() - except StopIteration: - pending -= 1 - nexts = cycle(islice(nexts, pending)) - - -def partition(pred, iterable): - """ - Returns a 2-tuple of iterables derived from the input iterable. - The first yields the items that have ``pred(item) == False``. - The second yields the items that have ``pred(item) == True``. - - >>> is_odd = lambda x: x % 2 != 0 - >>> iterable = range(10) - >>> even_items, odd_items = partition(is_odd, iterable) - >>> list(even_items), list(odd_items) - ([0, 2, 4, 6, 8], [1, 3, 5, 7, 9]) - - If *pred* is None, :func:`bool` is used. - - >>> iterable = [0, 1, False, True, '', ' '] - >>> false_items, true_items = partition(None, iterable) - >>> list(false_items), list(true_items) - ([0, False, ''], [1, True, ' ']) - - """ - if pred is None: - pred = bool - - evaluations = ((pred(x), x) for x in iterable) - t1, t2 = tee(evaluations) - return ( - (x for (cond, x) in t1 if not cond), - (x for (cond, x) in t2 if cond), - ) - - -def powerset(iterable): - """Yields all possible subsets of the iterable. - - >>> list(powerset([1, 2, 3])) - [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)] - - :func:`powerset` will operate on iterables that aren't :class:`set` - instances, so repeated elements in the input will produce repeated elements - in the output. Use :func:`unique_everseen` on the input to avoid generating - duplicates: - - >>> seq = [1, 1, 0] - >>> list(powerset(seq)) - [(), (1,), (1,), (0,), (1, 1), (1, 0), (1, 0), (1, 1, 0)] - >>> from more_itertools import unique_everseen - >>> list(powerset(unique_everseen(seq))) - [(), (1,), (0,), (1, 0)] - - """ - s = list(iterable) - return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)) - - -def unique_everseen(iterable, key=None): - """ - Yield unique elements, preserving order. - - >>> list(unique_everseen('AAAABBBCCDAABBB')) - ['A', 'B', 'C', 'D'] - >>> list(unique_everseen('ABBCcAD', str.lower)) - ['A', 'B', 'C', 'D'] - - Sequences with a mix of hashable and unhashable items can be used. - The function will be slower (i.e., `O(n^2)`) for unhashable items. - - Remember that ``list`` objects are unhashable - you can use the *key* - parameter to transform the list to a tuple (which is hashable) to - avoid a slowdown. - - >>> iterable = ([1, 2], [2, 3], [1, 2]) - >>> list(unique_everseen(iterable)) # Slow - [[1, 2], [2, 3]] - >>> list(unique_everseen(iterable, key=tuple)) # Faster - [[1, 2], [2, 3]] - - Similary, you may want to convert unhashable ``set`` objects with - ``key=frozenset``. For ``dict`` objects, - ``key=lambda x: frozenset(x.items())`` can be used. 
- - """ - seenset = set() - seenset_add = seenset.add - seenlist = [] - seenlist_add = seenlist.append - use_key = key is not None - - for element in iterable: - k = key(element) if use_key else element - try: - if k not in seenset: - seenset_add(k) - yield element - except TypeError: - if k not in seenlist: - seenlist_add(k) - yield element - - -def unique_justseen(iterable, key=None): - """Yields elements in order, ignoring serial duplicates - - >>> list(unique_justseen('AAAABBBCCDAABBB')) - ['A', 'B', 'C', 'D', 'A', 'B'] - >>> list(unique_justseen('ABBCcAD', str.lower)) - ['A', 'B', 'C', 'A', 'D'] - - """ - return map(next, map(operator.itemgetter(1), groupby(iterable, key))) - - -def iter_except(func, exception, first=None): - """Yields results from a function repeatedly until an exception is raised. - - Converts a call-until-exception interface to an iterator interface. - Like ``iter(func, sentinel)``, but uses an exception instead of a sentinel - to end the loop. - - >>> l = [0, 1, 2] - >>> list(iter_except(l.pop, IndexError)) - [2, 1, 0] - - """ - try: - if first is not None: - yield first() - while 1: - yield func() - except exception: - pass - - -def first_true(iterable, default=None, pred=None): - """ - Returns the first true value in the iterable. - - If no true value is found, returns *default* - - If *pred* is not None, returns the first item for which - ``pred(item) == True`` . - - >>> first_true(range(10)) - 1 - >>> first_true(range(10), pred=lambda x: x > 5) - 6 - >>> first_true(range(10), default='missing', pred=lambda x: x > 9) - 'missing' - - """ - return next(filter(pred, iterable), default) - - -def random_product(*args, repeat=1): - """Draw an item at random from each of the input iterables. - - >>> random_product('abc', range(4), 'XYZ') # doctest:+SKIP - ('c', 3, 'Z') - - If *repeat* is provided as a keyword argument, that many items will be - drawn from each iterable. - - >>> random_product('abcd', range(4), repeat=2) # doctest:+SKIP - ('a', 2, 'd', 3) - - This equivalent to taking a random selection from - ``itertools.product(*args, **kwarg)``. - - """ - pools = [tuple(pool) for pool in args] * repeat - return tuple(choice(pool) for pool in pools) - - -def random_permutation(iterable, r=None): - """Return a random *r* length permutation of the elements in *iterable*. - - If *r* is not specified or is ``None``, then *r* defaults to the length of - *iterable*. - - >>> random_permutation(range(5)) # doctest:+SKIP - (3, 4, 0, 1, 2) - - This equivalent to taking a random selection from - ``itertools.permutations(iterable, r)``. - - """ - pool = tuple(iterable) - r = len(pool) if r is None else r - return tuple(sample(pool, r)) - - -def random_combination(iterable, r): - """Return a random *r* length subsequence of the elements in *iterable*. - - >>> random_combination(range(5), 3) # doctest:+SKIP - (2, 3, 4) - - This equivalent to taking a random selection from - ``itertools.combinations(iterable, r)``. - - """ - pool = tuple(iterable) - n = len(pool) - indices = sorted(sample(range(n), r)) - return tuple(pool[i] for i in indices) - - -def random_combination_with_replacement(iterable, r): - """Return a random *r* length subsequence of elements in *iterable*, - allowing individual elements to be repeated. - - >>> random_combination_with_replacement(range(3), 5) # doctest:+SKIP - (0, 0, 1, 2, 2) - - This equivalent to taking a random selection from - ``itertools.combinations_with_replacement(iterable, r)``. 
- - """ - pool = tuple(iterable) - n = len(pool) - indices = sorted(randrange(n) for i in range(r)) - return tuple(pool[i] for i in indices) - - -def nth_combination(iterable, r, index): - """Equivalent to ``list(combinations(iterable, r))[index]``. - - The subsequences of *iterable* that are of length *r* can be ordered - lexicographically. :func:`nth_combination` computes the subsequence at - sort position *index* directly, without computing the previous - subsequences. - - >>> nth_combination(range(5), 3, 5) - (0, 3, 4) - - ``ValueError`` will be raised If *r* is negative or greater than the length - of *iterable*. - ``IndexError`` will be raised if the given *index* is invalid. - """ - pool = tuple(iterable) - n = len(pool) - if (r < 0) or (r > n): - raise ValueError - - c = 1 - k = min(r, n - r) - for i in range(1, k + 1): - c = c * (n - k + i) // i - - if index < 0: - index += c - - if (index < 0) or (index >= c): - raise IndexError - - result = [] - while r: - c, n, r = c * r // n, n - 1, r - 1 - while index >= c: - index -= c - c, n = c * (n - r) // n, n - 1 - result.append(pool[-1 - n]) - - return tuple(result) - - -def prepend(value, iterator): - """Yield *value*, followed by the elements in *iterator*. - - >>> value = '0' - >>> iterator = ['1', '2', '3'] - >>> list(prepend(value, iterator)) - ['0', '1', '2', '3'] - - To prepend multiple values, see :func:`itertools.chain` - or :func:`value_chain`. - - """ - return chain([value], iterator) - - -def convolve(signal, kernel): - """Convolve the iterable *signal* with the iterable *kernel*. - - >>> signal = (1, 2, 3, 4, 5) - >>> kernel = [3, 2, 1] - >>> list(convolve(signal, kernel)) - [3, 8, 14, 20, 26, 14, 5] - - Note: the input arguments are not interchangeable, as the *kernel* - is immediately consumed and stored. - - """ - kernel = tuple(kernel)[::-1] - n = len(kernel) - window = deque([0], maxlen=n) * n - for x in chain(signal, repeat(0, n - 1)): - window.append(x) - yield sum(map(operator.mul, kernel, window)) diff --git a/spaces/tobiascz/demotime/pytorch_grad_cam/eigen_cam.py b/spaces/tobiascz/demotime/pytorch_grad_cam/eigen_cam.py deleted file mode 100644 index 89563748d14672ff026d21f134c2d234659523b5..0000000000000000000000000000000000000000 --- a/spaces/tobiascz/demotime/pytorch_grad_cam/eigen_cam.py +++ /dev/null @@ -1,20 +0,0 @@ -from pytorch_grad_cam.base_cam import BaseCAM -from pytorch_grad_cam.utils.svd_on_activations import get_2d_projection - -# https://arxiv.org/abs/2008.00299 - - -class EigenCAM(BaseCAM): - def __init__(self, model, target_layers, use_cuda=False, - reshape_transform=None): - super(EigenCAM, self).__init__(model, target_layers, use_cuda, - reshape_transform) - - def get_cam_image(self, - input_tensor, - target_layer, - target_category, - activations, - grads, - eigen_smooth): - return get_2d_projection(activations) diff --git a/spaces/tomofi/NDLOCR/cli/procs/page_deskew.py b/spaces/tomofi/NDLOCR/cli/procs/page_deskew.py deleted file mode 100644 index 21bb69b149792803991169a07efc54d9e6bed8d5..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/cli/procs/page_deskew.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) 2022, National Diet Library, Japan -# -# This software is released under the CC BY 4.0. 
-# https://creativecommons.org/licenses/by/4.0/ - - -import copy -import numpy - -from .base_proc import BaseInferenceProcess - - -class PageDeskewProcess(BaseInferenceProcess): - """ - 傾き補正を実行するプロセスのクラス。 - BaseInferenceProcessを継承しています。 - """ - def __init__(self, cfg, pid): - """ - Parameters - ---------- - cfg : dict - 本推論処理における設定情報です。 - pid : int - 実行される順序を表す数値。 - """ - super().__init__(cfg, pid, '_page_deskew') - from src.deskew_HT.alyn3.deskew import Deskew - self.deskewer = Deskew('', '', - r_angle=cfg['page_deskew']['r_angle'], - skew_max=cfg['page_deskew']['skew_max'], - acc_deg=cfg['page_deskew']['acc_deg'], - method=cfg['page_deskew']['method'], - gray=cfg['page_deskew']['gray'], - quality=cfg['page_deskew']['quality'], - short=cfg['page_deskew']['short'], - roi_w=cfg['page_deskew']['roi_w'], - roi_h=cfg['page_deskew']['roi_h']) - self._run_src_inference = self.deskewer.deskew_on_memory - - - def _is_valid_input(self, input_data): - """ - 本クラスの推論処理における入力データのバリデーション。 - - Parameters - ---------- - input_data : dict - 推論処理を実行する対象の入力データ。 - - Returns - ------- - [変数なし] : bool -  入力データが正しければTrue, そうでなければFalseを返します。 - """ - if type(input_data['img']) is not numpy.ndarray: - print('PageDeskewProcess: input img is not numpy.ndarray') - return False - return True - - def _run_process(self, input_data): - """ - 推論処理の本体部分。 - - Parameters - ---------- - input_data : dict - 推論処理を実行する対象の入力データ。 - - Returns - ------- - result : dict - 推論処理の結果を保持する辞書型データ。 - 基本的にinput_dataと同じ構造です。 - """ - print('### Page Deskew Process ###') - inference_output = self._run_src_inference(input_data['img']) - - # Create result to pass img_path and img data - result = [] - output_data = copy.deepcopy(input_data) - output_data['img'] = inference_output - result.append(output_data) - - return result diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/grid_rcnn/grid_rcnn_x101_64x4d_fpn_gn-head_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/grid_rcnn/grid_rcnn_x101_64x4d_fpn_gn-head_2x_coco.py deleted file mode 100644 index 2fdc53c8c04c12bed16a31281127f9774bb70b64..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/grid_rcnn/grid_rcnn_x101_64x4d_fpn_gn-head_2x_coco.py +++ /dev/null @@ -1,12 +0,0 @@ -_base_ = './grid_rcnn_x101_32x4d_fpn_gn-head_2x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - style='pytorch')) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/tridentnet/tridentnet_r50_caffe_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/tridentnet/tridentnet_r50_caffe_1x_coco.py deleted file mode 100644 index a6a668c4e33611e2b69009741558d83558cc9b4f..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/tridentnet/tridentnet_r50_caffe_1x_coco.py +++ /dev/null @@ -1,53 +0,0 @@ -_base_ = [ - '../_base_/models/faster_rcnn_r50_caffe_c4.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] - -model = dict( - type='TridentFasterRCNN', - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict( - type='TridentResNet', - trident_dilations=(1, 2, 3), - num_branch=3, - test_branch_idx=1), - roi_head=dict(type='TridentRoIHead', num_branch=3, test_branch_idx=1), - train_cfg=dict( - rpn_proposal=dict(max_per_img=500), - 
rcnn=dict( - sampler=dict(num=128, pos_fraction=0.5, - add_gt_as_proposals=False)))) - -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/triple-t/ttt-space/schema.sql b/spaces/triple-t/ttt-space/schema.sql deleted file mode 100644 index e490a829d5852317c33fb426e4eca864bb6632cd..0000000000000000000000000000000000000000 --- a/spaces/triple-t/ttt-space/schema.sql +++ /dev/null @@ -1,59 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE rooms ( - id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, - room_id TEXT NOT NULL -);INSERT INTO rooms VALUES(1,'room-0'); -INSERT INTO rooms VALUES(2,'room-1'); -INSERT INTO rooms VALUES(3,'room-2'); -INSERT INTO rooms VALUES(4,'room-3'); -INSERT INTO rooms VALUES(5,'room-4'); -INSERT INTO rooms VALUES(6,'room-5'); -INSERT INTO rooms VALUES(7,'room-6'); -INSERT INTO rooms VALUES(8,'room-7'); -INSERT INTO rooms VALUES(9,'room-8'); -INSERT INTO rooms VALUES(10,'room-9'); -INSERT INTO rooms VALUES(11,'room-10'); -INSERT INTO rooms VALUES(12,'room-11'); -INSERT INTO rooms VALUES(13,'room-12'); -INSERT INTO rooms VALUES(14,'room-13'); -INSERT INTO rooms VALUES(15,'room-14'); -INSERT INTO rooms VALUES(16,'room-15'); -INSERT INTO rooms VALUES(17,'room-16'); -INSERT INTO rooms VALUES(18,'room-17'); -INSERT INTO rooms VALUES(19,'room-18'); -INSERT INTO rooms VALUES(20,'room-19'); -INSERT INTO rooms VALUES(21,'room-20'); -INSERT INTO rooms VALUES(22,'room-21'); -INSERT INTO rooms VALUES(23,'room-22'); -INSERT INTO rooms VALUES(24,'room-23'); -INSERT INTO rooms VALUES(25,'room-24'); -INSERT INTO rooms VALUES(26,'room-25'); -INSERT INTO rooms VALUES(27,'room-26'); -INSERT INTO rooms VALUES(28,'room-27'); -INSERT INTO rooms VALUES(29,'room-28'); -INSERT INTO rooms VALUES(30,'room-29'); -INSERT INTO rooms VALUES(31,'room-30'); -INSERT INTO rooms VALUES(32,'room-31'); -INSERT INTO rooms VALUES(33,'room-32'); -INSERT INTO rooms VALUES(34,'room-33'); -INSERT INTO rooms VALUES(35,'room-34'); -INSERT INTO rooms VALUES(36,'room-35'); -INSERT INTO rooms VALUES(37,'room-36'); -INSERT INTO rooms VALUES(38,'room-37'); -INSERT INTO rooms VALUES(39,'room-38'); -INSERT INTO rooms VALUES(40,'room-39'); -INSERT INTO rooms VALUES(41,'room-40'); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('rooms',41); -CREATE TABLE rooms_data ( - id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, - room_id TEXT NOT NULL, - uuid TEXT NOT NULL, - x INTEGER NOT NULL, - y INTEGER NOT NULL, - prompt TEXT NOT NULL, - time DATETIME NOT NULL, - key TEXT NOT NULL, - UNIQUE (key) ON CONFLICT IGNORE -);COMMIT; diff --git 
a/spaces/trttung1610/musicgen/audiocraft/losses/__init__.py b/spaces/trttung1610/musicgen/audiocraft/losses/__init__.py deleted file mode 100644 index d55107b2c11822cab749ed3683cf19020802898a..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/losses/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Loss related classes and functions. In particular the loss balancer from -EnCodec, and the usual spectral losses.""" - -# flake8: noqa -from .balancer import Balancer -from .sisnr import SISNR -from .stftloss import ( - LogSTFTMagnitudeLoss, - MRSTFTLoss, - SpectralConvergenceLoss, - STFTLoss -) -from .specloss import ( - MelSpectrogramL1Loss, - MultiScaleMelSpectrogramLoss, -) diff --git a/spaces/tsi-org/LLaVA/llava/eval/model_qa.py b/spaces/tsi-org/LLaVA/llava/eval/model_qa.py deleted file mode 100644 index 6c8c1138ac166387d82cba868d00f64ab4e6a33c..0000000000000000000000000000000000000000 --- a/spaces/tsi-org/LLaVA/llava/eval/model_qa.py +++ /dev/null @@ -1,85 +0,0 @@ -import argparse -from transformers import AutoTokenizer, AutoModelForCausalLM, StoppingCriteria -import torch -import os -import json -from tqdm import tqdm -import shortuuid - -from llava.conversation import default_conversation -from llava.utils import disable_torch_init - - -# new stopping implementation -class KeywordsStoppingCriteria(StoppingCriteria): - def __init__(self, keywords, tokenizer, input_ids): - self.keywords = keywords - self.tokenizer = tokenizer - self.start_len = None - self.input_ids = input_ids - - def __call__(self, output_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: - if self.start_len is None: - self.start_len = self.input_ids.shape[1] - else: - outputs = self.tokenizer.batch_decode(output_ids[:, self.start_len:], skip_special_tokens=True)[0] - for keyword in self.keywords: - if keyword in outputs: - return True - return False - - -@torch.inference_mode() -def eval_model(model_name, questions_file, answers_file): - # Model - disable_torch_init() - model_name = os.path.expanduser(model_name) - tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) - model = AutoModelForCausalLM.from_pretrained(model_name, - torch_dtype=torch.float16).cuda() - - - ques_file = open(os.path.expanduser(questions_file), "r") - ans_file = open(os.path.expanduser(answers_file), "w") - for i, line in enumerate(tqdm(ques_file)): - idx = json.loads(line)["question_id"] - qs = json.loads(line)["text"] - cat = json.loads(line)["category"] - conv = default_conversation.copy() - conv.append_message(conv.roles[0], qs) - prompt = conv.get_prompt() - inputs = tokenizer([prompt]) - input_ids = torch.as_tensor(inputs.input_ids).cuda() - stopping_criteria = KeywordsStoppingCriteria([conv.sep], tokenizer, input_ids) - output_ids = model.generate( - input_ids, - do_sample=True, - use_cache=True, - temperature=0.7, - max_new_tokens=1024, - stopping_criteria=[stopping_criteria]) - outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0] - try: - index = outputs.index(conv.sep, len(prompt)) - except ValueError: - outputs += conv.sep - index = outputs.index(conv.sep, len(prompt)) - - outputs = outputs[len(prompt) + len(conv.roles[1]) + 2:index].strip() - ans_id = shortuuid.uuid() - ans_file.write(json.dumps({"question_id": idx, - "text": outputs, - 
"answer_id": ans_id, - "model_id": model_name, - "metadata": {}}) + "\n") - ans_file.flush() - ans_file.close() - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model-name", type=str, default="facebook/opt-350m") - parser.add_argument("--question-file", type=str, default="tables/question.jsonl") - parser.add_argument("--answers-file", type=str, default="answer.jsonl") - args = parser.parse_args() - - eval_model(args.model_name, args.question_file, args.answers_file) diff --git a/spaces/tsi-org/LLaVA/llava/serve/register_worker.py b/spaces/tsi-org/LLaVA/llava/serve/register_worker.py deleted file mode 100644 index 2c2c40295e0351f25709ba25554c9329f15bf0d2..0000000000000000000000000000000000000000 --- a/spaces/tsi-org/LLaVA/llava/serve/register_worker.py +++ /dev/null @@ -1,26 +0,0 @@ -""" -Manually register workers. - -Usage: -python3 -m fastchat.serve.register_worker --controller http://localhost:21001 --worker-name http://localhost:21002 -""" - -import argparse - -import requests - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--controller-address", type=str) - parser.add_argument("--worker-name", type=str) - parser.add_argument("--check-heart-beat", action="store_true") - args = parser.parse_args() - - url = args.controller_address + "/register_worker" - data = { - "worker_name": args.worker_name, - "check_heart_beat": args.check_heart_beat, - "worker_status": None, - } - r = requests.post(url, json=data) - assert r.status_code == 200 diff --git a/spaces/ulysses115/ulysses115-pmvoice/attentions.py b/spaces/ulysses115/ulysses115-pmvoice/attentions.py deleted file mode 100644 index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/ulysses115-pmvoice/attentions.py +++ /dev/null @@ -1,303 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., 
proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def 
attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. 
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Airport Simulator 2015 Crack Serial Key.md b/spaces/usbethFlerru/sovits-modelsV2/example/Airport Simulator 2015 Crack Serial Key.md deleted file mode 100644 index 8cb63219f71064e400b94ccffc5b42a890d854be..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Airport Simulator 2015 Crack Serial Key.md +++ /dev/null @@ -1,6 +0,0 @@ -

                    Airport Simulator 2015 Crack Serial Key


                    DOWNLOADhttps://urlcod.com/2uyX7M



                    - - aaccfb2cb3
                    -
                    -
                    -

                    diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/utils/dist.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/utils/dist.py deleted file mode 100644 index 6de029f5c96ea237d8b9e4fc5f8e1d605f506d35..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/utils/dist.py +++ /dev/null @@ -1,67 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license - -import os -import re -import shutil -import socket -import sys -import tempfile -from pathlib import Path - -from . import USER_CONFIG_DIR -from .torch_utils import TORCH_1_9 - - -def find_free_network_port() -> int: - """Finds a free port on localhost. - - It is useful in single-node training when we don't want to connect to a real main node but have to set the - `MASTER_PORT` environment variable. - """ - with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: - s.bind(('127.0.0.1', 0)) - return s.getsockname()[1] # port - - -def generate_ddp_file(trainer): - """Generates a DDP file and returns its file name.""" - module, name = f'{trainer.__class__.__module__}.{trainer.__class__.__name__}'.rsplit('.', 1) - - content = f'''overrides = {vars(trainer.args)} \nif __name__ == "__main__": - from {module} import {name} - from ultralytics.yolo.utils import DEFAULT_CFG_DICT - - cfg = DEFAULT_CFG_DICT.copy() - cfg.update(save_dir='') # handle the extra key 'save_dir' - trainer = {name}(cfg=cfg, overrides=overrides) - trainer.train()''' - (USER_CONFIG_DIR / 'DDP').mkdir(exist_ok=True) - with tempfile.NamedTemporaryFile(prefix='_temp_', - suffix=f'{id(trainer)}.py', - mode='w+', - encoding='utf-8', - dir=USER_CONFIG_DIR / 'DDP', - delete=False) as file: - file.write(content) - return file.name - - -def generate_ddp_command(world_size, trainer): - """Generates and returns command for distributed training.""" - import __main__ # noqa local import to avoid https://github.com/Lightning-AI/lightning/issues/15218 - if not trainer.resume: - shutil.rmtree(trainer.save_dir) # remove the save_dir - file = str(Path(sys.argv[0]).resolve()) - safe_pattern = re.compile(r'^[a-zA-Z0-9_. 
/\\-]{1,128}$') # allowed characters and maximum of 100 characters - if not (safe_pattern.match(file) and Path(file).exists() and file.endswith('.py')): # using CLI - file = generate_ddp_file(trainer) - dist_cmd = 'torch.distributed.run' if TORCH_1_9 else 'torch.distributed.launch' - port = find_free_network_port() - cmd = [sys.executable, '-m', dist_cmd, '--nproc_per_node', f'{world_size}', '--master_port', f'{port}', file] - return cmd, file - - -def ddp_cleanup(trainer, file): - """Delete temp file if created.""" - if f'{id(trainer)}.py' in file: # if temp_file suffix in file - os.remove(file) diff --git a/spaces/vishnu0001/text2mesh/shap_e/models/transmitter/channels_encoder.py b/spaces/vishnu0001/text2mesh/shap_e/models/transmitter/channels_encoder.py deleted file mode 100644 index c39cd3980100354b973cbaa7e3b5e26e5729b288..0000000000000000000000000000000000000000 --- a/spaces/vishnu0001/text2mesh/shap_e/models/transmitter/channels_encoder.py +++ /dev/null @@ -1,959 +0,0 @@ -from abc import ABC, abstractmethod -from dataclasses import dataclass -from functools import partial -from typing import Any, Dict, Iterable, List, Optional, Tuple, Union - -import numpy as np -import torch.distributed as dist -import torch.nn as nn -import torch.nn.functional as F -from PIL import Image -from torch import torch - -from shap_e.models.generation.perceiver import SimplePerceiver -from shap_e.models.generation.transformer import Transformer -from shap_e.models.nn.camera import DifferentiableProjectiveCamera -from shap_e.models.nn.encoding import ( - MultiviewPointCloudEmbedding, - MultiviewPoseEmbedding, - PosEmbLinear, -) -from shap_e.models.nn.ops import PointSetEmbedding -from shap_e.rendering.point_cloud import PointCloud -from shap_e.rendering.view_data import ProjectiveCamera -from shap_e.util.collections import AttrDict - -from .base import ChannelsEncoder - - -class TransformerChannelsEncoder(ChannelsEncoder, ABC): - """ - Encode point clouds using a transformer model with an extra output - token used to extract a latent vector. 
- """ - - def __init__( - self, - *, - device: torch.device, - dtype: torch.dtype, - param_shapes: Dict[str, Tuple[int]], - params_proj: Dict[str, Any], - d_latent: int = 512, - latent_bottleneck: Optional[Dict[str, Any]] = None, - latent_warp: Optional[Dict[str, Any]] = None, - n_ctx: int = 1024, - width: int = 512, - layers: int = 12, - heads: int = 8, - init_scale: float = 0.25, - latent_scale: float = 1.0, - ): - super().__init__( - device=device, - param_shapes=param_shapes, - params_proj=params_proj, - d_latent=d_latent, - latent_bottleneck=latent_bottleneck, - latent_warp=latent_warp, - ) - self.width = width - self.device = device - self.dtype = dtype - - self.n_ctx = n_ctx - - self.backbone = Transformer( - device=device, - dtype=dtype, - n_ctx=n_ctx + self.latent_ctx, - width=width, - layers=layers, - heads=heads, - init_scale=init_scale, - ) - self.ln_pre = nn.LayerNorm(width, device=device, dtype=dtype) - self.ln_post = nn.LayerNorm(width, device=device, dtype=dtype) - self.register_parameter( - "output_tokens", - nn.Parameter(torch.randn(self.latent_ctx, width, device=device, dtype=dtype)), - ) - self.output_proj = nn.Linear(width, d_latent, device=device, dtype=dtype) - self.latent_scale = latent_scale - - @abstractmethod - def encode_input(self, batch: AttrDict, options: Optional[AttrDict] = None) -> torch.Tensor: - pass - - def encode_to_channels( - self, batch: AttrDict, options: Optional[AttrDict] = None - ) -> torch.Tensor: - h = self.encode_input(batch, options=options) - h = torch.cat([h, self.output_tokens[None].repeat(len(h), 1, 1)], dim=1) - h = self.ln_pre(h) - h = self.backbone(h) - h = h[:, -self.latent_ctx :] - h = self.ln_post(h) - h = self.output_proj(h) - return h - - -class PerceiverChannelsEncoder(ChannelsEncoder, ABC): - """ - Encode point clouds using a perceiver model with an extra output - token used to extract a latent vector. 
- """ - - def __init__( - self, - *, - device: torch.device, - dtype: torch.dtype, - param_shapes: Dict[str, Tuple[int]], - params_proj: Dict[str, Any], - min_unrolls: int, - max_unrolls: int, - d_latent: int = 512, - latent_bottleneck: Optional[Dict[str, Any]] = None, - latent_warp: Optional[Dict[str, Any]] = None, - width: int = 512, - layers: int = 12, - xattn_layers: int = 1, - heads: int = 8, - init_scale: float = 0.25, - # Training hparams - inner_batch_size: Union[int, List[int]] = 1, - data_ctx: int = 1, - ): - super().__init__( - device=device, - param_shapes=param_shapes, - params_proj=params_proj, - d_latent=d_latent, - latent_bottleneck=latent_bottleneck, - latent_warp=latent_warp, - ) - self.width = width - self.device = device - self.dtype = dtype - - if isinstance(inner_batch_size, int): - inner_batch_size = [inner_batch_size] - self.inner_batch_size = inner_batch_size - self.data_ctx = data_ctx - self.min_unrolls = min_unrolls - self.max_unrolls = max_unrolls - - encoder_fn = lambda inner_batch_size: SimplePerceiver( - device=device, - dtype=dtype, - n_ctx=self.data_ctx + self.latent_ctx, - n_data=inner_batch_size, - width=width, - layers=xattn_layers, - heads=heads, - init_scale=init_scale, - ) - self.encoder = ( - encoder_fn(self.inner_batch_size[0]) - if len(self.inner_batch_size) == 1 - else nn.ModuleList([encoder_fn(inner_bsz) for inner_bsz in self.inner_batch_size]) - ) - self.processor = Transformer( - device=device, - dtype=dtype, - n_ctx=self.data_ctx + self.latent_ctx, - layers=layers - xattn_layers, - width=width, - heads=heads, - init_scale=init_scale, - ) - self.ln_pre = nn.LayerNorm(width, device=device, dtype=dtype) - self.ln_post = nn.LayerNorm(width, device=device, dtype=dtype) - self.register_parameter( - "output_tokens", - nn.Parameter(torch.randn(self.latent_ctx, width, device=device, dtype=dtype)), - ) - self.output_proj = nn.Linear(width, d_latent, device=device, dtype=dtype) - - @abstractmethod - def get_h_and_iterator( - self, batch: AttrDict, options: Optional[AttrDict] = None - ) -> Tuple[torch.Tensor, Iterable[Union[torch.Tensor, Tuple]]]: - """ - :return: a tuple of ( - the initial output tokens of size [batch_size, data_ctx + latent_ctx, width], - an iterator over the given data - ) - """ - - def encode_to_channels( - self, batch: AttrDict, options: Optional[AttrDict] = None - ) -> torch.Tensor: - h, it = self.get_h_and_iterator(batch, options=options) - n_unrolls = self.get_n_unrolls() - - for _ in range(n_unrolls): - data = next(it) - if isinstance(data, tuple): - for data_i, encoder_i in zip(data, self.encoder): - h = encoder_i(h, data_i) - else: - h = self.encoder(h, data) - h = self.processor(h) - - h = self.output_proj(self.ln_post(h[:, -self.latent_ctx :])) - return h - - def get_n_unrolls(self): - if self.training: - n_unrolls = torch.randint( - self.min_unrolls, self.max_unrolls + 1, size=(), device=self.device - ) - dist.broadcast(n_unrolls, 0) - n_unrolls = n_unrolls.item() - else: - n_unrolls = self.max_unrolls - return n_unrolls - - -@dataclass -class DatasetIterator: - - embs: torch.Tensor # [batch_size, dataset_size, *shape] - batch_size: int - - def __iter__(self): - self._reset() - return self - - def __next__(self): - _outer_batch_size, dataset_size, *_shape = self.embs.shape - - while True: - start = self.idx - self.idx += self.batch_size - end = self.idx - if end <= dataset_size: - break - self._reset() - - return self.embs[:, start:end] - - def _reset(self): - self._shuffle() - self.idx = 0 # pylint: 
disable=attribute-defined-outside-init - - def _shuffle(self): - outer_batch_size, dataset_size, *shape = self.embs.shape - idx = torch.stack( - [ - torch.randperm(dataset_size, device=self.embs.device) - for _ in range(outer_batch_size) - ], - dim=0, - ) - idx = idx.view(outer_batch_size, dataset_size, *([1] * len(shape))) - idx = torch.broadcast_to(idx, self.embs.shape) - self.embs = torch.gather(self.embs, 1, idx) - - -class PointCloudTransformerChannelsEncoder(TransformerChannelsEncoder): - """ - Encode point clouds using a transformer model with an extra output - token used to extract a latent vector. - """ - - def __init__( - self, - *, - input_channels: int = 6, - **kwargs, - ): - super().__init__(**kwargs) - self.input_channels = input_channels - self.input_proj = nn.Linear( - input_channels, self.width, device=self.device, dtype=self.dtype - ) - - def encode_input(self, batch: AttrDict, options: Optional[AttrDict] = None) -> torch.Tensor: - _ = options - points = batch.points - h = self.input_proj(points.permute(0, 2, 1)) # NCL -> NLC - return h - - -class PointCloudPerceiverChannelsEncoder(PerceiverChannelsEncoder): - """ - Encode point clouds using a transformer model with an extra output - token used to extract a latent vector. - """ - - def __init__( - self, - *, - cross_attention_dataset: str = "pcl", - fps_method: str = "fps", - # point cloud hyperparameters - input_channels: int = 6, - pos_emb: Optional[str] = None, - # multiview hyperparameters - image_size: int = 256, - patch_size: int = 32, - pose_dropout: float = 0.0, - use_depth: bool = False, - max_depth: float = 5.0, - # point conv hyperparameters - pointconv_radius: float = 0.5, - pointconv_samples: int = 32, - pointconv_hidden: Optional[List[int]] = None, - pointconv_patch_size: int = 1, - pointconv_stride: int = 1, - pointconv_padding_mode: str = "zeros", - use_pointconv: bool = False, - # other hyperparameters - **kwargs, - ): - super().__init__(**kwargs) - assert cross_attention_dataset in ( - "pcl", - "multiview", - "dense_pose_multiview", - "multiview_pcl", - "pcl_and_multiview_pcl", - "incorrect_multiview_pcl", - "pcl_and_incorrect_multiview_pcl", - ) - assert fps_method in ("fps", "first") - self.cross_attention_dataset = cross_attention_dataset - self.fps_method = fps_method - self.input_channels = input_channels - self.input_proj = PosEmbLinear( - pos_emb, - input_channels, - self.width, - device=self.device, - dtype=self.dtype, - ) - self.use_pointconv = use_pointconv - if use_pointconv: - if pointconv_hidden is None: - pointconv_hidden = [self.width] - self.point_conv = PointSetEmbedding( - n_point=self.data_ctx, - radius=pointconv_radius, - n_sample=pointconv_samples, - d_input=self.input_proj.weight.shape[0], - d_hidden=pointconv_hidden, - patch_size=pointconv_patch_size, - stride=pointconv_stride, - padding_mode=pointconv_padding_mode, - fps_method=fps_method, - device=self.device, - dtype=self.dtype, - ) - if self.cross_attention_dataset == "multiview": - self.image_size = image_size - self.patch_size = patch_size - self.pose_dropout = pose_dropout - self.use_depth = use_depth - self.max_depth = max_depth - pos_ctx = (image_size // patch_size) ** 2 - self.register_parameter( - "pos_emb", - nn.Parameter( - torch.randn( - pos_ctx * self.inner_batch_size, - self.width, - device=self.device, - dtype=self.dtype, - ) - ), - ) - self.patch_emb = nn.Conv2d( - in_channels=3 if not use_depth else 4, - out_channels=self.width, - kernel_size=patch_size, - stride=patch_size, - device=self.device, - 
dtype=self.dtype, - ) - self.camera_emb = nn.Sequential( - nn.Linear( - 3 * 4 + 1, self.width, device=self.device, dtype=self.dtype - ), # input size is for origin+x+y+z+fov - nn.GELU(), - nn.Linear(self.width, 2 * self.width, device=self.device, dtype=self.dtype), - ) - elif self.cross_attention_dataset == "dense_pose_multiview": - # The number of output features is halved, because a patch_size of - # 32 ends up with a large patch_emb weight. - self.view_pose_width = self.width // 2 - self.image_size = image_size - self.patch_size = patch_size - self.use_depth = use_depth - self.max_depth = max_depth - self.mv_pose_embed = MultiviewPoseEmbedding( - posemb_version="nerf", - n_channels=4 if self.use_depth else 3, - out_features=self.view_pose_width, - device=self.device, - dtype=self.dtype, - ) - pos_ctx = (image_size // patch_size) ** 2 - # Positional embedding is unnecessary because pose information is baked into each pixel - self.patch_emb = nn.Conv2d( - in_channels=self.view_pose_width, - out_channels=self.width, - kernel_size=patch_size, - stride=patch_size, - device=self.device, - dtype=self.dtype, - ) - - elif ( - self.cross_attention_dataset == "multiview_pcl" - or self.cross_attention_dataset == "incorrect_multiview_pcl" - ): - self.view_pose_width = self.width // 2 - self.image_size = image_size - self.patch_size = patch_size - self.max_depth = max_depth - assert use_depth - self.mv_pcl_embed = MultiviewPointCloudEmbedding( - posemb_version="nerf", - n_channels=3, - out_features=self.view_pose_width, - device=self.device, - dtype=self.dtype, - ) - self.patch_emb = nn.Conv2d( - in_channels=self.view_pose_width, - out_channels=self.width, - kernel_size=patch_size, - stride=patch_size, - device=self.device, - dtype=self.dtype, - ) - - elif ( - self.cross_attention_dataset == "pcl_and_multiview_pcl" - or self.cross_attention_dataset == "pcl_and_incorrect_multiview_pcl" - ): - self.view_pose_width = self.width // 2 - self.image_size = image_size - self.patch_size = patch_size - self.max_depth = max_depth - assert use_depth - self.mv_pcl_embed = MultiviewPointCloudEmbedding( - posemb_version="nerf", - n_channels=3, - out_features=self.view_pose_width, - device=self.device, - dtype=self.dtype, - ) - self.patch_emb = nn.Conv2d( - in_channels=self.view_pose_width, - out_channels=self.width, - kernel_size=patch_size, - stride=patch_size, - device=self.device, - dtype=self.dtype, - ) - - def get_h_and_iterator( - self, batch: AttrDict, options: Optional[AttrDict] = None - ) -> Tuple[torch.Tensor, Iterable]: - """ - :return: a tuple of ( - the initial output tokens of size [batch_size, data_ctx + latent_ctx, width], - an iterator over the given data - ) - """ - options = AttrDict() if options is None else options - - # Build the initial query embeddings - points = batch.points.permute(0, 2, 1) # NCL -> NLC - if self.use_pointconv: - points = self.input_proj(points).permute(0, 2, 1) # NLC -> NCL - xyz = batch.points[:, :3] - data_tokens = self.point_conv(xyz, points).permute(0, 2, 1) # NCL -> NLC - else: - fps_samples = self.sample_pcl_fps(points) - data_tokens = self.input_proj(fps_samples) - batch_size = points.shape[0] - latent_tokens = self.output_tokens.unsqueeze(0).repeat(batch_size, 1, 1) - h = self.ln_pre(torch.cat([data_tokens, latent_tokens], dim=1)) - assert h.shape == (batch_size, self.data_ctx + self.latent_ctx, self.width) - - # Build the dataset embedding iterator - dataset_fn = { - "pcl": self.get_pcl_dataset, - "multiview": self.get_multiview_dataset, - "dense_pose_multiview": 
self.get_dense_pose_multiview_dataset, - "pcl_and_multiview_pcl": self.get_pcl_and_multiview_pcl_dataset, - "multiview_pcl": self.get_multiview_pcl_dataset, - }[self.cross_attention_dataset] - it = dataset_fn(batch, options=options) - - return h, it - - def sample_pcl_fps(self, points: torch.Tensor) -> torch.Tensor: - return sample_pcl_fps(points, data_ctx=self.data_ctx, method=self.fps_method) - - def get_pcl_dataset( - self, - batch: AttrDict, - options: Optional[AttrDict[str, Any]] = None, - inner_batch_size: Optional[int] = None, - ) -> Iterable: - _ = options - if inner_batch_size is None: - inner_batch_size = self.inner_batch_size[0] - points = batch.points.permute(0, 2, 1) # NCL -> NLC - dataset_emb = self.input_proj(points) - assert dataset_emb.shape[1] >= inner_batch_size - return iter(DatasetIterator(dataset_emb, batch_size=inner_batch_size)) - - def get_multiview_dataset( - self, - batch: AttrDict, - options: Optional[AttrDict] = None, - inner_batch_size: Optional[int] = None, - ) -> Iterable: - _ = options - - if inner_batch_size is None: - inner_batch_size = self.inner_batch_size[0] - - dataset_emb = self.encode_views(batch) - batch_size, num_views, n_patches, width = dataset_emb.shape - - assert num_views >= inner_batch_size - - it = iter(DatasetIterator(dataset_emb, batch_size=inner_batch_size)) - - def gen(): - while True: - examples = next(it) - assert examples.shape == (batch_size, self.inner_batch_size, n_patches, self.width) - views = examples.reshape(batch_size, -1, width) + self.pos_emb - yield views - - return gen() - - def get_dense_pose_multiview_dataset( - self, - batch: AttrDict, - options: Optional[AttrDict] = None, - inner_batch_size: Optional[int] = None, - ) -> Iterable: - _ = options - - if inner_batch_size is None: - inner_batch_size = self.inner_batch_size[0] - - dataset_emb = self.encode_dense_pose_views(batch) - batch_size, num_views, n_patches, width = dataset_emb.shape - - assert num_views >= inner_batch_size - - it = iter(DatasetIterator(dataset_emb, batch_size=inner_batch_size)) - - def gen(): - while True: - examples = next(it) - assert examples.shape == (batch_size, inner_batch_size, n_patches, self.width) - views = examples.reshape(batch_size, -1, width) - yield views - - return gen() - - def get_pcl_and_multiview_pcl_dataset( - self, - batch: AttrDict, - options: Optional[AttrDict] = None, - use_distance: bool = True, - ) -> Iterable: - _ = options - - pcl_it = self.get_pcl_dataset( - batch, options=options, inner_batch_size=self.inner_batch_size[0] - ) - multiview_pcl_emb = self.encode_multiview_pcl(batch, use_distance=use_distance) - batch_size, num_views, n_patches, width = multiview_pcl_emb.shape - - assert num_views >= self.inner_batch_size[1] - - multiview_pcl_it = iter( - DatasetIterator(multiview_pcl_emb, batch_size=self.inner_batch_size[1]) - ) - - def gen(): - while True: - pcl = next(pcl_it) - multiview_pcl = next(multiview_pcl_it) - assert multiview_pcl.shape == ( - batch_size, - self.inner_batch_size[1], - n_patches, - self.width, - ) - yield pcl, multiview_pcl.reshape(batch_size, -1, width) - - return gen() - - def get_multiview_pcl_dataset( - self, - batch: AttrDict, - options: Optional[AttrDict] = None, - inner_batch_size: Optional[int] = None, - use_distance: bool = True, - ) -> Iterable: - _ = options - - if inner_batch_size is None: - inner_batch_size = self.inner_batch_size[0] - - multiview_pcl_emb = self.encode_multiview_pcl(batch, use_distance=use_distance) - batch_size, num_views, n_patches, width = multiview_pcl_emb.shape 
- - assert num_views >= inner_batch_size - - multiview_pcl_it = iter(DatasetIterator(multiview_pcl_emb, batch_size=inner_batch_size)) - - def gen(): - while True: - multiview_pcl = next(multiview_pcl_it) - assert multiview_pcl.shape == ( - batch_size, - inner_batch_size, - n_patches, - self.width, - ) - yield multiview_pcl.reshape(batch_size, -1, width) - - return gen() - - def encode_views(self, batch: AttrDict) -> torch.Tensor: - """ - :return: [batch_size, num_views, n_patches, width] - """ - all_views = self.views_to_tensor(batch.views).to(self.device) - if self.use_depth: - all_views = torch.cat([all_views, self.depths_to_tensor(batch.depths)], dim=2) - all_cameras = self.cameras_to_tensor(batch.cameras).to(self.device) - - batch_size, num_views, _, _, _ = all_views.shape - - views_proj = self.patch_emb( - all_views.reshape([batch_size * num_views, *all_views.shape[2:]]) - ) - views_proj = ( - views_proj.reshape([batch_size, num_views, self.width, -1]) - .permute(0, 1, 3, 2) - .contiguous() - ) # [batch_size x num_views x n_patches x width] - - # [batch_size, num_views, 1, 2 * width] - camera_proj = self.camera_emb(all_cameras).reshape( - [batch_size, num_views, 1, self.width * 2] - ) - pose_dropout = self.pose_dropout if self.training else 0.0 - mask = torch.rand(batch_size, 1, 1, 1, device=views_proj.device) >= pose_dropout - camera_proj = torch.where(mask, camera_proj, torch.zeros_like(camera_proj)) - scale, shift = camera_proj.chunk(2, dim=3) - views_proj = views_proj * (scale + 1.0) + shift - return views_proj - - def encode_dense_pose_views(self, batch: AttrDict) -> torch.Tensor: - """ - :return: [batch_size, num_views, n_patches, width] - """ - all_views = self.views_to_tensor(batch.views).to(self.device) - if self.use_depth: - depths = self.depths_to_tensor(batch.depths) - all_views = torch.cat([all_views, depths], dim=2) - - dense_poses, _ = self.dense_pose_cameras_to_tensor(batch.cameras) - dense_poses = dense_poses.permute(0, 1, 4, 5, 2, 3) - position, direction = dense_poses[:, :, 0], dense_poses[:, :, 1] - all_view_poses = self.mv_pose_embed(all_views, position, direction) - - batch_size, num_views, _, _, _ = all_view_poses.shape - - views_proj = self.patch_emb( - all_view_poses.reshape([batch_size * num_views, *all_view_poses.shape[2:]]) - ) - views_proj = ( - views_proj.reshape([batch_size, num_views, self.width, -1]) - .permute(0, 1, 3, 2) - .contiguous() - ) # [batch_size x num_views x n_patches x width] - - return views_proj - - def encode_multiview_pcl(self, batch: AttrDict, use_distance: bool = True) -> torch.Tensor: - """ - :return: [batch_size, num_views, n_patches, width] - """ - all_views = self.views_to_tensor(batch.views).to(self.device) - depths = self.raw_depths_to_tensor(batch.depths) - all_view_alphas = self.view_alphas_to_tensor(batch.view_alphas).to(self.device) - mask = all_view_alphas >= 0.999 - - dense_poses, camera_z = self.dense_pose_cameras_to_tensor(batch.cameras) - dense_poses = dense_poses.permute(0, 1, 4, 5, 2, 3) - - origin, direction = dense_poses[:, :, 0], dense_poses[:, :, 1] - if use_distance: - ray_depth_factor = torch.sum(direction * camera_z[..., None, None], dim=2, keepdim=True) - depths = depths / ray_depth_factor - position = origin + depths * direction - all_view_poses = self.mv_pcl_embed(all_views, origin, position, mask) - - batch_size, num_views, _, _, _ = all_view_poses.shape - - views_proj = self.patch_emb( - all_view_poses.reshape([batch_size * num_views, *all_view_poses.shape[2:]]) - ) - views_proj = ( - 
views_proj.reshape([batch_size, num_views, self.width, -1]) - .permute(0, 1, 3, 2) - .contiguous() - ) # [batch_size x num_views x n_patches x width] - - return views_proj - - def views_to_tensor(self, views: Union[torch.Tensor, List[List[Image.Image]]]) -> torch.Tensor: - """ - Returns a [batch x num_views x 3 x size x size] tensor in the range [-1, 1]. - """ - if isinstance(views, torch.Tensor): - return views - - tensor_batch = [] - num_views = len(views[0]) - for inner_list in views: - assert len(inner_list) == num_views - inner_batch = [] - for img in inner_list: - img = img.resize((self.image_size,) * 2).convert("RGB") - inner_batch.append( - torch.from_numpy(np.array(img)).to(device=self.device, dtype=torch.float32) - / 127.5 - - 1 - ) - tensor_batch.append(torch.stack(inner_batch, dim=0)) - return torch.stack(tensor_batch, dim=0).permute(0, 1, 4, 2, 3) - - def depths_to_tensor( - self, depths: Union[torch.Tensor, List[List[Image.Image]]] - ) -> torch.Tensor: - """ - Returns a [batch x num_views x 1 x size x size] tensor in the range [-1, 1]. - """ - if isinstance(depths, torch.Tensor): - return depths - - tensor_batch = [] - num_views = len(depths[0]) - for inner_list in depths: - assert len(inner_list) == num_views - inner_batch = [] - for arr in inner_list: - tensor = torch.from_numpy(arr).clamp(max=self.max_depth) / self.max_depth - tensor = tensor * 2 - 1 - tensor = F.interpolate( - tensor[None, None], - (self.image_size,) * 2, - mode="nearest", - ) - inner_batch.append(tensor.to(device=self.device, dtype=torch.float32)) - tensor_batch.append(torch.cat(inner_batch, dim=0)) - return torch.stack(tensor_batch, dim=0) - - def view_alphas_to_tensor( - self, view_alphas: Union[torch.Tensor, List[List[Image.Image]]] - ) -> torch.Tensor: - """ - Returns a [batch x num_views x 1 x size x size] tensor in the range [0, 1]. - """ - if isinstance(view_alphas, torch.Tensor): - return view_alphas - - tensor_batch = [] - num_views = len(view_alphas[0]) - for inner_list in view_alphas: - assert len(inner_list) == num_views - inner_batch = [] - for img in inner_list: - tensor = ( - torch.from_numpy(np.array(img)).to(device=self.device, dtype=torch.float32) - / 255.0 - ) - tensor = F.interpolate( - tensor[None, None], - (self.image_size,) * 2, - mode="nearest", - ) - inner_batch.append(tensor) - tensor_batch.append(torch.cat(inner_batch, dim=0)) - return torch.stack(tensor_batch, dim=0) - - def raw_depths_to_tensor( - self, depths: Union[torch.Tensor, List[List[Image.Image]]] - ) -> torch.Tensor: - """ - Returns a [batch x num_views x 1 x size x size] tensor - """ - if isinstance(depths, torch.Tensor): - return depths - - tensor_batch = [] - num_views = len(depths[0]) - for inner_list in depths: - assert len(inner_list) == num_views - inner_batch = [] - for arr in inner_list: - tensor = torch.from_numpy(arr).clamp(max=self.max_depth) - tensor = F.interpolate( - tensor[None, None], - (self.image_size,) * 2, - mode="nearest", - ) - inner_batch.append(tensor.to(device=self.device, dtype=torch.float32)) - tensor_batch.append(torch.cat(inner_batch, dim=0)) - return torch.stack(tensor_batch, dim=0) - - def cameras_to_tensor( - self, cameras: Union[torch.Tensor, List[List[ProjectiveCamera]]] - ) -> torch.Tensor: - """ - Returns a [batch x num_views x 3*4+1] tensor of camera information. 
- """ - if isinstance(cameras, torch.Tensor): - return cameras - outer_batch = [] - for inner_list in cameras: - inner_batch = [] - for camera in inner_list: - inner_batch.append( - np.array( - [ - *camera.x, - *camera.y, - *camera.z, - *camera.origin, - camera.x_fov, - ] - ) - ) - outer_batch.append(np.stack(inner_batch, axis=0)) - return torch.from_numpy(np.stack(outer_batch, axis=0)).float() - - def dense_pose_cameras_to_tensor( - self, cameras: Union[torch.Tensor, List[List[ProjectiveCamera]]] - ) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Returns a tuple of (rays, z_directions) where - - rays: [batch, num_views, height, width, 2, 3] tensor of camera information. - - z_directions: [batch, num_views, 3] tensor of camera z directions. - """ - if isinstance(cameras, torch.Tensor): - raise NotImplementedError - - for inner_list in cameras: - assert len(inner_list) == len(cameras[0]) - - camera = cameras[0][0] - flat_camera = DifferentiableProjectiveCamera( - origin=torch.from_numpy( - np.stack( - [cam.origin for inner_list in cameras for cam in inner_list], - axis=0, - ) - ).to(self.device), - x=torch.from_numpy( - np.stack( - [cam.x for inner_list in cameras for cam in inner_list], - axis=0, - ) - ).to(self.device), - y=torch.from_numpy( - np.stack( - [cam.y for inner_list in cameras for cam in inner_list], - axis=0, - ) - ).to(self.device), - z=torch.from_numpy( - np.stack( - [cam.z for inner_list in cameras for cam in inner_list], - axis=0, - ) - ).to(self.device), - width=camera.width, - height=camera.height, - x_fov=camera.x_fov, - y_fov=camera.y_fov, - ) - batch_size = len(cameras) * len(cameras[0]) - coords = ( - flat_camera.image_coords() - .to(flat_camera.origin.device) - .unsqueeze(0) - .repeat(batch_size, 1, 1) - ) - rays = flat_camera.camera_rays(coords) - return ( - rays.view(len(cameras), len(cameras[0]), camera.height, camera.width, 2, 3).to( - self.device - ), - flat_camera.z.view(len(cameras), len(cameras[0]), 3).to(self.device), - ) - - -def sample_pcl_fps(points: torch.Tensor, data_ctx: int, method: str = "fps") -> torch.Tensor: - """ - Run farthest-point sampling on a batch of point clouds. - - :param points: batch of shape [N x num_points]. - :param data_ctx: subsample count. - :param method: either 'fps' or 'first'. Using 'first' assumes that the - points are already sorted according to FPS sampling. - :return: batch of shape [N x min(num_points, data_ctx)]. 
- """ - n_points = points.shape[1] - if n_points == data_ctx: - return points - if method == "first": - return points[:, :data_ctx] - elif method == "fps": - batch = points.cpu().split(1, dim=0) - fps = [sample_fps(x, n_samples=data_ctx) for x in batch] - return torch.cat(fps, dim=0).to(points.device) - else: - raise ValueError(f"unsupported farthest-point sampling method: {method}") - - -def sample_fps(example: torch.Tensor, n_samples: int) -> torch.Tensor: - """ - :param example: [1, n_points, 3 + n_channels] - :return: [1, n_samples, 3 + n_channels] - """ - points = example.cpu().squeeze(0).numpy() - coords, raw_channels = points[:, :3], points[:, 3:] - n_points, n_channels = raw_channels.shape - assert n_samples <= n_points - channels = {str(idx): raw_channels[:, idx] for idx in range(n_channels)} - max_points = min(32768, n_points) - fps_pcl = ( - PointCloud(coords=coords, channels=channels) - .random_sample(max_points) - .farthest_point_sample(n_samples) - ) - fps_channels = np.stack([fps_pcl.channels[str(idx)] for idx in range(n_channels)], axis=1) - fps = np.concatenate([fps_pcl.coords, fps_channels], axis=1) - fps = torch.from_numpy(fps).unsqueeze(0) - assert fps.shape == (1, n_samples, 3 + n_channels) - return fps diff --git a/spaces/weiren119/AudiogramDigitization/src/digitizer/yolov5/detect_symbols.py b/spaces/weiren119/AudiogramDigitization/src/digitizer/yolov5/detect_symbols.py deleted file mode 100644 index 16626d7b1b57f4a5940d5cd242119856bfda2e31..0000000000000000000000000000000000000000 --- a/spaces/weiren119/AudiogramDigitization/src/digitizer/yolov5/detect_symbols.py +++ /dev/null @@ -1,153 +0,0 @@ -import argparse -import json -import os -import platform -import shutil -import time -from pathlib import Path - -import cv2 -import torch -import torch.backends.cudnn as cudnn -from numpy import random - -from models.experimental import attempt_load -from utils.datasets import LoadStreams, LoadImages -from utils.general import ( - check_img_size, non_max_suppression, apply_classifier, scale_coords, - xyxy2xywh, plot_one_box, strip_optimizer, set_logging) -from utils.torch_utils import select_device, load_classifier, time_synchronized - - -def detect(save_img=False): - results = [] - out, source, weights, view_img, save_txt, imgsz = \ - opt.output, opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size - webcam = source.isnumeric() or source.startswith('rtsp') or source.startswith('http') or source.endswith('.txt') - - # Initialize - set_logging() - device = select_device(opt.device) - half = device.type != 'cpu' # half precision only supported on CUDA - - # Load model - model = attempt_load(weights, map_location=device) # load FP32 model - imgsz = check_img_size(imgsz, s=model.stride.max()) # check img_size - if half: - model.half() # to FP16 - - # Second-stage classifier - classify = False - if classify: - modelc = load_classifier(name='resnet101', n=2) # initialize - modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']) # load weights - modelc.to(device).eval() - - # Set Dataloader - vid_path, vid_writer = None, None - if webcam: - view_img = True - cudnn.benchmark = True # set True to speed up constant image size inference - dataset = LoadStreams(source, img_size=imgsz) - else: - save_img = True - dataset = LoadImages(source, img_size=imgsz) - - # Get names and colors - names = model.module.names if hasattr(model, 'module') else model.names - colors = [[random.randint(0, 255) for _ in range(3)] for _ in range(len(names))] - - 
# Run inference - t0 = time.time() - img = torch.zeros((1, 3, imgsz, imgsz), device=device) # init img - _ = model(img.half() if half else img) if device.type != 'cpu' else None # run once - for path, img, im0s, vid_cap in dataset: - img = torch.from_numpy(img).to(device) - img = img.half() if half else img.float() # uint8 to fp16/32 - img /= 255.0 # 0 - 255 to 0.0 - 1.0 - if img.ndimension() == 3: - img = img.unsqueeze(0) - - # Inference - t1 = time_synchronized() - pred = model(img, augment=opt.augment)[0] - - # Apply NMS - pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms) - t2 = time_synchronized() - - # Apply Classifier - if classify: - pred = apply_classifier(pred, modelc, img, im0s) - - # Process detections - for i, det in enumerate(pred): # detections per image - if webcam: # batch_size >= 1 - p, s, im0 = path[i], '%g: ' % i, im0s[i].copy() - else: - p, s, im0 = path, '', im0s - - save_path = str(Path(out) / Path(p).name) - txt_path = str(Path(out) / Path(p).stem) + ('_%g' % dataset.frame if dataset.mode == 'video' else '') - s += '%gx%g ' % img.shape[2:] # print string - gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh - if det is not None and len(det): - # Rescale boxes from img_size to im0 size - det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round() - - # Print results - for c in det[:, -1].unique(): - n = (det[:, -1] == c).sum() # detections per class - s += '%g %ss, ' % (n, names[int(c)]) # add to string - - # Write results - #for *xyxy, conf, cls in reversed(det): - # if save_txt: # Write to file - # xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh - # with open(txt_path + '.txt', 'a') as f: - # f.write(('%g ' * 5 + '\n') % (cls, *xywh)) # label format - - - for *xyxy, conf, cls in reversed(det): - xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4))).view(-1).tolist() # ADDED BY FRANCOIS - results.append({ - "measurementType": names[int(cls)], - "noResponse": False, - "boundingBox": { - "x": int(xywh[0] - xywh[2]/2), - "y": int(xywh[1] - xywh[3]/2), - "width": int(xywh[2]), - "height": int(xywh[3]) - }, - "confidence": float(conf) - }) - - print("\n$$$") - print(json.dumps(results)) - print("$$$\n") - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--weights', nargs='+', type=str, default='yolov5s.pt', help='model.pt path(s)') - parser.add_argument('--source', type=str, default='inference/images', help='source') # file/folder, 0 for webcam - parser.add_argument('--output', type=str, default='inference/output', help='output folder') # output folder - parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)') - parser.add_argument('--conf-thres', type=float, default=0.4, help='object confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.5, help='IOU threshold for NMS') - parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--view-img', action='store_true', help='display results') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3') - parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--update', action='store_true', help='update all models') - opt = parser.parse_args() - print(opt) - - with torch.no_grad(): - if opt.update: # update all models (to fix SourceChangeWarning) - for opt.weights in ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt']: - detect() - strip_optimizer(opt.weights) - else: - detect() diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/search_engine_serpapi.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/search_engine_serpapi.py deleted file mode 100644 index 750184198c17873ca20c84ac3a40b0365b7f1f29..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/search_engine_serpapi.py +++ /dev/null @@ -1,115 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/23 18:27 -@Author : alexanderwu -@File : search_engine_serpapi.py -""" -from typing import Any, Dict, Optional, Tuple - -import aiohttp -from pydantic import BaseModel, Field, validator - -from metagpt.config import CONFIG - - -class SerpAPIWrapper(BaseModel): - search_engine: Any #: :meta private: - params: dict = Field( - default={ - "engine": "google", - "google_domain": "google.com", - "gl": "us", - "hl": "en", - } - ) - serpapi_api_key: Optional[str] = None - aiosession: Optional[aiohttp.ClientSession] = None - - class Config: - arbitrary_types_allowed = True - - @validator("serpapi_api_key", always=True) - @classmethod - def check_serpapi_api_key(cls, val: str): - val = val or CONFIG.serpapi_api_key - if not val: - raise ValueError( - "To use, make sure you provide the serpapi_api_key when constructing an object. Alternatively, " - "ensure that the environment variable SERPAPI_API_KEY is set with your API key. You can obtain " - "an API key from https://serpapi.com/." 
- ) - return val - - async def run(self, query, max_results: int = 8, as_string: bool = True, **kwargs: Any) -> str: - """Run query through SerpAPI and parse result async.""" - return self._process_response(await self.results(query, max_results), as_string=as_string) - - async def results(self, query: str, max_results: int) -> dict: - """Use aiohttp to run query through SerpAPI and return the results async.""" - - def construct_url_and_params() -> Tuple[str, Dict[str, str]]: - params = self.get_params(query) - params["source"] = "python" - params["num"] = max_results - params["output"] = "json" - url = "https://serpapi.com/search" - return url, params - - url, params = construct_url_and_params() - if not self.aiosession: - async with aiohttp.ClientSession() as session: - async with session.get(url, params=params) as response: - res = await response.json() - else: - async with self.aiosession.get(url, params=params) as response: - res = await response.json() - - return res - - def get_params(self, query: str) -> Dict[str, str]: - """Get parameters for SerpAPI.""" - _params = { - "api_key": self.serpapi_api_key, - "q": query, - } - params = {**self.params, **_params} - return params - - @staticmethod - def _process_response(res: dict, as_string: bool) -> str: - """Process response from SerpAPI.""" - # logger.debug(res) - focus = ["title", "snippet", "link"] - get_focused = lambda x: {i: j for i, j in x.items() if i in focus} - - if "error" in res.keys(): - raise ValueError(f"Got error from SerpAPI: {res['error']}") - if "answer_box" in res.keys() and "answer" in res["answer_box"].keys(): - toret = res["answer_box"]["answer"] - elif "answer_box" in res.keys() and "snippet" in res["answer_box"].keys(): - toret = res["answer_box"]["snippet"] - elif "answer_box" in res.keys() and "snippet_highlighted_words" in res["answer_box"].keys(): - toret = res["answer_box"]["snippet_highlighted_words"][0] - elif "sports_results" in res.keys() and "game_spotlight" in res["sports_results"].keys(): - toret = res["sports_results"]["game_spotlight"] - elif "knowledge_graph" in res.keys() and "description" in res["knowledge_graph"].keys(): - toret = res["knowledge_graph"]["description"] - elif "snippet" in res["organic_results"][0].keys(): - toret = res["organic_results"][0]["snippet"] - else: - toret = "No good search result found" - - toret_l = [] - if "answer_box" in res.keys() and "snippet" in res["answer_box"].keys(): - toret_l += [get_focused(res["answer_box"])] - if res.get("organic_results"): - toret_l += [get_focused(i) for i in res.get("organic_results")] - - return str(toret) + "\n" + str(toret_l) if as_string else toret_l - - -if __name__ == "__main__": - import fire - - fire.Fire(SerpAPIWrapper().run) diff --git a/spaces/wouaf/WOUAF-Text-to-Image/attribution.py b/spaces/wouaf/WOUAF-Text-to-Image/attribution.py deleted file mode 100644 index 6975880cd40239d1b4c1fa96a4ece988c4f53228..0000000000000000000000000000000000000000 --- a/spaces/wouaf/WOUAF-Text-to-Image/attribution.py +++ /dev/null @@ -1,190 +0,0 @@ -import torch -import numpy as np -from torch_utils.ops import bias_act -from torch_utils import misc - - - -def normalize_2nd_moment(x, dim=1, eps=1e-8): - return x * (x.square().mean(dim=dim, keepdim=True) + eps).rsqrt() - - -class FullyConnectedLayer_normal(torch.nn.Module): - def __init__(self, - in_features, # Number of input features. - out_features, # Number of output features. - bias = True, # Apply additive bias before the activation function? 
- bias_init = 0, # Initial value for the additive bias. - ): - super().__init__() - self.fc = torch.nn.Linear(in_features, out_features, bias=bias) - if bias: - with torch.no_grad(): - self.fc.bias.fill_(bias_init) - - def forward(self, x): - output = self.fc(x) - return output - - -class MappingNetwork_normal(torch.nn.Module): - def __init__(self, - in_features, # Number of input features. - int_dim, - num_layers = 8, # Number of mapping layers. - mapping_normalization = False #2nd normalization - ): - super().__init__() - layers = [torch.nn.Linear(in_features, int_dim), torch.nn.LeakyReLU(0.2)] - for i in range(1, num_layers): - layers.append(torch.nn.Linear(int_dim, int_dim)) - layers.append(torch.nn.LeakyReLU(0.2)) - - self.net = torch.nn.Sequential(*layers) - self.normalization = mapping_normalization - - def forward(self, x): - if self.normalization: - x = normalize_2nd_moment(x) - output = self.net(x) - return output - - -class DecodingNetwork(torch.nn.Module): - def __init__(self, - in_features, # Number of input features. - out_dim, - num_layers = 8, # Number of mapping layers. - ): - super().__init__() - layers = [] - for i in range(num_layers-1): - layers.append(torch.nn.Linear(in_features, in_features)) - layers.append(torch.nn.ReLU()) - - layers.append(torch.nn.Linear(in_features, out_dim)) - - self.net = torch.nn.Sequential(*layers) - - def forward(self, x): - x = torch.nn.functional.normalize(x, dim=1) - output = self.net(x) - return output - - -class FullyConnectedLayer(torch.nn.Module): - def __init__(self, - in_features, # Number of input features. - out_features, # Number of output features. - bias = True, # Apply additive bias before the activation function? - activation = 'linear', # Activation function: 'relu', 'lrelu', etc. - lr_multiplier = 1, # Learning rate multiplier. - bias_init = 0, # Initial value for the additive bias. - ): - super().__init__() - self.activation = activation - self.weight = torch.nn.Parameter(torch.randn([out_features, in_features]) / lr_multiplier) - self.bias = torch.nn.Parameter(torch.full([out_features], np.float32(bias_init))) if bias else None - self.weight_gain = lr_multiplier / np.sqrt(in_features) - self.bias_gain = lr_multiplier - - def forward(self, x): - w = self.weight.to(x.dtype) * self.weight_gain - b = self.bias - if b is not None: - b = b.to(x.dtype) - if self.bias_gain != 1: - b = b * self.bias_gain - - if self.activation == 'linear' and b is not None: - x = torch.addmm(b.unsqueeze(0), x, w.t()) - else: - x = x.matmul(w.t()) - x = bias_act.bias_act(x, b, act=self.activation) - return x - - -class MappingNetwork(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality, 0 = no latent. - c_dim, # Conditioning label (C) dimensionality, 0 = no label. - w_dim, # Intermediate latent (W) dimensionality. - num_ws, # Number of intermediate latents to output, None = do not broadcast. - num_layers = 8, # Number of mapping layers. - embed_features = None, # Label embedding dimensionality, None = same as w_dim. - layer_features = None, # Number of intermediate features in the mapping layers, None = same as w_dim. - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - lr_multiplier = 0.01, # Learning rate multiplier for the mapping layers. - w_avg_beta = 0.995, # Decay for tracking the moving average of W during training, None = do not track. 
- normalization = None # Normalization input using normalize_2nd_moment - ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.num_ws = num_ws - self.num_layers = num_layers - self.w_avg_beta = w_avg_beta - self.normalization = normalization - - if embed_features is None: - embed_features = w_dim - if c_dim == 0: - embed_features = 0 - if layer_features is None: - layer_features = w_dim - features_list = [z_dim + embed_features] + [layer_features] * (num_layers - 1) + [w_dim] - - if c_dim > 0: - self.embed = FullyConnectedLayer(c_dim, embed_features) - for idx in range(num_layers): - in_features = features_list[idx] - out_features = features_list[idx + 1] - layer = FullyConnectedLayer(in_features, out_features, activation=activation, lr_multiplier=lr_multiplier) - setattr(self, f'fc{idx}', layer) - - if num_ws is not None and w_avg_beta is not None: - self.register_buffer('w_avg', torch.zeros([w_dim])) - - def forward(self, z, c=None, truncation_psi=1, truncation_cutoff=None, skip_w_avg_update=False): - # Embed, normalize, and concat inputs. - x = None - with torch.autograd.profiler.record_function('input'): - if self.z_dim > 0: - misc.assert_shape(z, [None, self.z_dim]) - if self.normalization: - x = normalize_2nd_moment(z.to(torch.float32)) - else: - x = z - x = z.to(torch.float32) - if self.c_dim > 0: - raise ValueError("This implementation does not need class index") - misc.assert_shape(c, [None, self.c_dim]) - y = normalize_2nd_moment(self.embed(c.to(torch.float32))) - y = self.embed(c.to(torch.float32)) - x = torch.cat([x, y], dim=1) if x is not None else y - - # Main layers. - for idx in range(self.num_layers): - layer = getattr(self, f'fc{idx}') - x = layer(x) - - # Update moving average of W. - if self.w_avg_beta is not None and self.training and not skip_w_avg_update: - with torch.autograd.profiler.record_function('update_w_avg'): - self.w_avg.copy_(x.detach().mean(dim=0).lerp(self.w_avg, self.w_avg_beta)) - - # Broadcast. - if self.num_ws is not None: - with torch.autograd.profiler.record_function('broadcast'): - x = x.unsqueeze(1).repeat([1, self.num_ws, 1]) - - # Apply truncation. 
- if truncation_psi != 1: - with torch.autograd.profiler.record_function('truncate'): - assert self.w_avg_beta is not None - if self.num_ws is None or truncation_cutoff is None: - x = self.w_avg.lerp(x, truncation_psi) - else: - x[:, :truncation_cutoff] = self.w_avg.lerp(x[:, :truncation_cutoff], truncation_psi) - return x diff --git a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/language/vlpencoder.py b/spaces/xdecoder/Instruct-X-Decoder/xdecoder/language/vlpencoder.py deleted file mode 100644 index ce6fd4709255e8869749d7401babb373b187d697..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/language/vlpencoder.py +++ /dev/null @@ -1,168 +0,0 @@ - -import torch -from torch import nn -from torch.nn import functional as F - -from timm.models.layers import trunc_normal_ - -from .registry import register_model -from ..utils import configurable -from .LangEncoder import build_tokenizer, build_lang_encoder -from utils.misc import prompt_engineering, get_prompt_templates - - -class LanguageEncoder(nn.Module): - - @configurable - def __init__( - self, - tokenizer, - tokenizer_type, - lang_encoder, - lang_projection, - max_token_num, - ): - super().__init__() - self.tokenizer = tokenizer - self.tokenizer_type = tokenizer_type - self.lang_encoder = lang_encoder - self.lang_proj = lang_projection - self.max_token_num = max_token_num - self.logit_scale = nn.Parameter(torch.ones([])) - - @classmethod - def from_config(cls, cfg): - tokenizer = build_tokenizer(cfg['MODEL']['TEXT']) - tokenizer_type = cfg['MODEL']['TEXT']['TOKENIZER'] - lang_encoder = build_lang_encoder(cfg['MODEL']['TEXT'], tokenizer, cfg['VERBOSE']) - max_token_num = cfg['MODEL']['TEXT']['CONTEXT_LENGTH'] - - dim_lang = cfg['MODEL']['TEXT']['WIDTH'] - dim_projection = cfg['MODEL']['DIM_PROJ'] - lang_projection = nn.Parameter(torch.empty(dim_lang, dim_projection)) - trunc_normal_(lang_projection, std=.02) - - return { - "tokenizer": tokenizer, - "tokenizer_type": tokenizer_type, - "lang_encoder": lang_encoder, - "lang_projection": lang_projection, - "max_token_num": max_token_num, - } - - def get_text_embeddings(self, class_names, name='default', is_eval=False, add_bgd=False, prompt=True, norm=True): - if not is_eval: - if prompt: - # randomly sample one template - arbitary_concepts = [ - prompt_engineering(class_names[label].replace('-other','').replace('-merged','').replace('-stuff',''), topk=10000, suffix='.') \ - for label in range(len(class_names)) - ] - if add_bgd: - arbitary_concepts.append("A background in coco.") - else: - arbitary_concepts = class_names - - input_ids = [] - attention_masks = [] - for txt in arbitary_concepts: - tokens = self.tokenizer( - txt, padding='max_length', truncation=True, max_length=self.max_token_num, return_tensors='pt' - ) - tokens['input_ids'].squeeze_() - tokens['attention_mask'].squeeze_() - - input_ids.append(tokens['input_ids']) - attention_masks.append(tokens['attention_mask']) - - arbitary_tokens = torch.stack(input_ids) - arbitary_attention_masks = torch.stack(attention_masks) - - text_emb = self.forward_language((arbitary_tokens.cuda(), arbitary_attention_masks.cuda()), norm=norm) - setattr(self, '{}_text_embeddings'.format(name), text_emb) - else: - with torch.no_grad(): - def extract_mean_emb(txts): - tokens = self.tokenizer( - txts, padding='max_length', truncation=True, max_length=self.max_token_num, return_tensors='pt' - ) - clss_embedding = self.forward_language((tokens['input_ids'].cuda(), tokens['attention_mask'].cuda()), norm=norm) - 
clss_embedding = clss_embedding.mean(dim=0) - clss_embedding /= clss_embedding.norm() - return clss_embedding - - templates = get_prompt_templates() - clss_embeddings = [] - if prompt: - for clss in class_names: - txts = [template.format(clss.replace('-other','').replace('-merged','').replace('-stuff','')) for template in templates] - clss_embeddings.append(extract_mean_emb(txts)) - else: - clss_embeddings.append(extract_mean_emb(class_names)) - - if add_bgd: - txts = ["A background in coco."] - clss_embeddings.append(extract_mean_emb(txts)) - - text_emb = torch.stack(clss_embeddings, dim=0) - setattr(self, '{}_text_embeddings'.format(name), text_emb) - - def get_text_token_embeddings(self, txts, name='default', token=False, norm=False): - if not token: - tokens = self.tokenizer( - txts, padding='max_length', truncation=True, max_length=self.max_token_num, return_tensors='pt' - ) - tokens = {key: value.cuda() for key, value in tokens.items()} - else: - tokens = txts - token_emb, class_emb = self.forward_language_token((tokens['input_ids'], tokens['attention_mask']), norm=norm) - ret = {"tokens": tokens, - "token_emb": token_emb, - "class_emb": class_emb,} - setattr(self, '{}_token_embeddings'.format(name), ret) - return ret - - def forward_language(self, texts, norm=True): - x = self.lang_encoder(*texts) - x = x['last_hidden_state'] - - if self.tokenizer_type == 'clip': - x = x[torch.arange(x.size(0)), texts[0].argmax(dim=-1)] - else: - x = x[:, 0] - - x = x @ self.lang_proj - if norm: - x = x / (x.norm(dim=-1, keepdim=True) + 1e-7) - return x - - def forward_language_token(self, texts, norm=False): - x = self.lang_encoder(*texts) - token_x = x['last_hidden_state'] - - if self.tokenizer_type == 'clip': - class_x = token_x[torch.arange(token_x.size(0)), texts[0].argmax(dim=-1)] - else: - class_x = token_x[:, 0] - - class_x = class_x @ self.lang_proj - token_x = token_x @ self.lang_proj - - if norm: - class_x = class_x / (class_x.norm(dim=-1, keepdim=True) + 1e-7) - token_x = token_x / (token_x.norm(dim=-1, keepdim=True) + 1e-7) - - return token_x, class_x - - def compute_similarity(self, v_emb, name='default', fake=False): - if fake: - return None - v_emb = v_emb / (v_emb.norm(dim=-1, keepdim=True) + 1e-7) - t_emb = getattr(self, '{}_text_embeddings'.format(name)) - output = self.logit_scale.exp() * v_emb @ t_emb.unsqueeze(0).transpose(1, 2) - return output - - -@register_model -def get_language_model(cfg, **kwargs): - return LanguageEncoder(cfg) \ No newline at end of file diff --git a/spaces/xswu/HPSv2/evaluate.py b/spaces/xswu/HPSv2/evaluate.py deleted file mode 100644 index f17e81bfa16ad20a6a9bf4b421fac7ddcde7b29c..0000000000000000000000000000000000000000 --- a/spaces/xswu/HPSv2/evaluate.py +++ /dev/null @@ -1,220 +0,0 @@ -from cProfile import label -import os -import json -import numpy as np -from tqdm import tqdm -from argparse import ArgumentParser -from PIL import Image - -import torch -from torch.utils.data import Dataset, DataLoader - -from src.open_clip import create_model_and_transforms, get_tokenizer -from src.training.train import calc_ImageReward, inversion_score -from src.training.data import ImageRewardDataset, collate_rank, RankingDataset - - -parser = ArgumentParser() -parser.add_argument('--data-type', type=str, choices=['benchmark', 'test', 'ImageReward', 'drawbench']) -parser.add_argument('--data-path', type=str, help='path to dataset') -parser.add_argument('--image-path', type=str, help='path to image files') -parser.add_argument('--checkpoint', type=str, help='path 
to checkpoint') -parser.add_argument('--batch-size', type=int, default=20) -args = parser.parse_args() - -batch_size = args.batch_size -args.model = "ViT-H-14" -args.precision = 'amp' -print(args.model) -device = 'cuda' if torch.cuda.is_available() else 'cpu' -model, preprocess_train, preprocess_val = create_model_and_transforms( - args.model, - 'laion2B-s32B-b79K', - precision=args.precision, - device=device, - jit=False, - force_quick_gelu=False, - force_custom_text=False, - force_patch_dropout=False, - force_image_size=None, - pretrained_image=False, - image_mean=None, - image_std=None, - light_augmentation=True, - aug_cfg={}, - output_dict=True, - with_score_predictor=False, - with_region_predictor=False -) - -checkpoint = torch.load(args.checkpoint) -model.load_state_dict(checkpoint['state_dict']) -tokenizer = get_tokenizer(args.model) -model.eval() - -class BenchmarkDataset(Dataset): - def __init__(self, meta_file, image_folder,transforms, tokenizer): - self.transforms = transforms - self.image_folder = image_folder - self.tokenizer = tokenizer - self.open_image = Image.open - with open(meta_file, 'r') as f: - self.annotations = json.load(f) - - def __len__(self): - return len(self.annotations) - - def __getitem__(self, idx): - try: - img_path = os.path.join(self.image_folder, f'{idx:05d}.jpg') - images = self.transforms(self.open_image(os.path.join(img_path))) - caption = self.tokenizer(self.annotations[idx]) - return images, caption - except: - print('file not exist') - return self.__getitem__((idx + 1) % len(self)) - -def evaluate_IR(data_path, image_folder, model): - meta_file = data_path + '/ImageReward_test.json' - dataset = ImageRewardDataset(meta_file, image_folder, preprocess_val, tokenizer) - dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=False, num_workers=4, collate_fn=collate_rank) - - score = 0 - total = len(dataset) - with torch.no_grad(): - for batch in tqdm(dataloader): - images, num_images, labels, texts = batch - images = images.to(device=device, non_blocking=True) - texts = texts.to(device=device, non_blocking=True) - num_images = num_images.to(device=device, non_blocking=True) - labels = labels.to(device=device, non_blocking=True) - - with torch.cuda.amp.autocast(): - outputs = model(images, texts) - image_features, text_features, logit_scale = outputs["image_features"], outputs["text_features"], outputs["logit_scale"] - logits_per_image = logit_scale * image_features @ text_features.T - paired_logits_list = [logit[:,i] for i, logit in enumerate(logits_per_image.split(num_images.tolist()))] - - predicted = [torch.argsort(-k) for k in paired_logits_list] - hps_ranking = [[predicted[i].tolist().index(j) for j in range(n)] for i,n in enumerate(num_images)] - labels = [label for label in labels.split(num_images.tolist())] - score +=sum([calc_ImageReward(paired_logits_list[i].tolist(), labels[i]) for i in range(len(hps_ranking))]) - print('ImageReward:', score/total) - -def evaluate_rank(data_path, image_folder, model): - meta_file = data_path + '/test.json' - dataset = RankingDataset(meta_file, image_folder, preprocess_val, tokenizer) - dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=False, num_workers=4, collate_fn=collate_rank) - - score = 0 - total = len(dataset) - all_rankings = [] - with torch.no_grad(): - for batch in tqdm(dataloader): - images, num_images, labels, texts = batch - images = images.to(device=device, non_blocking=True) - texts = texts.to(device=device, non_blocking=True) - num_images = num_images.to(device=device, 
non_blocking=True) - labels = labels.to(device=device, non_blocking=True) - - with torch.cuda.amp.autocast(): - outputs = model(images, texts) - image_features, text_features, logit_scale = outputs["image_features"], outputs["text_features"], outputs["logit_scale"] - logits_per_image = logit_scale * image_features @ text_features.T - paired_logits_list = [logit[:,i] for i, logit in enumerate(logits_per_image.split(num_images.tolist()))] - - predicted = [torch.argsort(-k) for k in paired_logits_list] - hps_ranking = [[predicted[i].tolist().index(j) for j in range(n)] for i,n in enumerate(num_images)] - labels = [label for label in labels.split(num_images.tolist())] - all_rankings.extend(hps_ranking) - score += sum([inversion_score(hps_ranking[i], labels[i]) for i in range(len(hps_ranking))]) - print('ranking_acc:', score/total) - with open('logs/hps_rank.json', 'w') as f: - json.dump(all_rankings, f) - -def collate_eval(batch): - images = torch.stack([sample[0] for sample in batch]) - captions = torch.cat([sample[1] for sample in batch]) - return images, captions - - -def evaluate_benchmark(data_path, root_dir, model): - meta_dir = data_path - model_list = os.listdir(root_dir) - style_list = os.listdir(os.path.join(root_dir, model_list[0])) - - score = {} - for model_id in model_list: - score[model_id]={} - for style in style_list: - # score[model_id][style] = [0] * 10 - score[model_id][style] = [] - image_folder = os.path.join(root_dir, model_id, style) - meta_file = os.path.join(meta_dir, f'{style}.json') - dataset = BenchmarkDataset(meta_file, image_folder, preprocess_val, tokenizer) - dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=False, collate_fn=collate_eval) - - with torch.no_grad(): - for i, batch in enumerate(dataloader): - images, texts = batch - images = images.to(device=device, non_blocking=True) - texts = texts.to(device=device, non_blocking=True) - - with torch.cuda.amp.autocast(): - outputs = model(images, texts) - image_features, text_features = outputs["image_features"], outputs["text_features"] - logits_per_image = image_features @ text_features.T - # score[model_id][style][i] = torch.sum(torch.diagonal(logits_per_image)).cpu().item() / 80 - score[model_id][style].extend(torch.diagonal(logits_per_image).cpu().tolist()) - print('-----------benchmark score ---------------- ') - for model_id, data in score.items(): - for style , res in data.items(): - avg_score = [np.mean(res[i:i+80]) for i in range(0, 800, 80)] - print(model_id, '\t', style, '\t', np.mean(avg_score), '\t', np.std(avg_score)) - - -def evaluate_benchmark_DB(data_path, root_dir, model): - meta_file = data_path + '/drawbench.json' - model_list = os.listdir(root_dir) - - - score = {} - for model_id in model_list: - image_folder = os.path.join(root_dir, model_id) - dataset = BenchmarkDataset(meta_file, image_folder, preprocess_val, tokenizer) - dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=False, num_workers=4, collate_fn=collate_eval) - score[model_id] = 0 - with torch.no_grad(): - for batch in tqdm(dataloader): - images, texts = batch - images = images.to(device=device, non_blocking=True) - texts = texts.to(device=device, non_blocking=True) - - with torch.cuda.amp.autocast(): - outputs = model(images, texts) - image_features, text_features = outputs["image_features"], outputs["text_features"] - logits_per_image = image_features @ text_features.T - diag = torch.diagonal(logits_per_image) - score[model_id] += torch.sum(diag).cpu().item() - score[model_id] = score[model_id] / 
len(dataset) - # with open('logs/benchmark_score_DB.json', 'w') as f: - # json.dump(score, f) - print('-----------drawbench score ---------------- ') - for model, data in score.items(): - print(model, '\t', '\t', np.mean(data)) - - -if args.data_type == 'ImageReward': - evaluate_IR(args.data_path, args.image_path, model) -elif args.data_type == 'test': - evaluate_rank(args.data_path, args.image_path, model) -elif args.data_type == 'benchmark': - evaluate_benchmark(args.data_path, args.image_path, model) -elif args.data_type == 'drawbench': - evaluate_benchmark_DB(args.data_path, args.image_path, model) -else: - raise NotImplementedError - - - - diff --git a/spaces/yangban/catordog/app.py b/spaces/yangban/catordog/app.py deleted file mode 100644 index 21323c8a25b045e0700cd1dfc4a6a79d4dbeef87..0000000000000000000000000000000000000000 --- a/spaces/yangban/catordog/app.py +++ /dev/null @@ -1,21 +0,0 @@ - - -from fastai.vision.all import * -import gradio as gr - -def is_cat(x): return x[0].isupper() - -learn = load_learner('model.pkl') - -categories = ('Dog', 'Cat') - -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories, map(float, probs))) - -image = gr.inputs.Image(shape=(192, 192)) -label = gr.outputs.Label() -examples = ['dog.jpg', 'cat.jpg', 'dunno.jpg'] - -intf = gr.Interface(fn=classify_image, inputs=image, outputs=label,examples=examples) -intf.launch(inline=False) \ No newline at end of file diff --git a/spaces/yerfor/SyntaSpeech/utils/metrics/diagonal_metrics.py b/spaces/yerfor/SyntaSpeech/utils/metrics/diagonal_metrics.py deleted file mode 100644 index ba9807c1a594b38632c4731391e2d4fa3289037b..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/utils/metrics/diagonal_metrics.py +++ /dev/null @@ -1,74 +0,0 @@ -import torch - - -def get_focus_rate(attn, src_padding_mask=None, tgt_padding_mask=None): - ''' - attn: bs x L_t x L_s - ''' - if src_padding_mask is not None: - attn = attn * (1 - src_padding_mask.float())[:, None, :] - - if tgt_padding_mask is not None: - attn = attn * (1 - tgt_padding_mask.float())[:, :, None] - - focus_rate = attn.max(-1).values.sum(-1) - focus_rate = focus_rate / attn.sum(-1).sum(-1) - return focus_rate - - -def get_phone_coverage_rate(attn, src_padding_mask=None, src_seg_mask=None, tgt_padding_mask=None): - ''' - attn: bs x L_t x L_s - ''' - src_mask = attn.new(attn.size(0), attn.size(-1)).bool().fill_(False) - if src_padding_mask is not None: - src_mask |= src_padding_mask - if src_seg_mask is not None: - src_mask |= src_seg_mask - - attn = attn * (1 - src_mask.float())[:, None, :] - if tgt_padding_mask is not None: - attn = attn * (1 - tgt_padding_mask.float())[:, :, None] - - phone_coverage_rate = attn.max(1).values.sum(-1) - # phone_coverage_rate = phone_coverage_rate / attn.sum(-1).sum(-1) - phone_coverage_rate = phone_coverage_rate / (1 - src_mask.float()).sum(-1) - return phone_coverage_rate - - -def get_diagonal_focus_rate(attn, attn_ks, target_len, src_padding_mask=None, tgt_padding_mask=None, - band_mask_factor=5, band_width=50): - ''' - attn: bx x L_t x L_s - attn_ks: shape: tensor with shape [batch_size], input_lens/output_lens - - diagonal: y=k*x (k=attn_ks, x:output, y:input) - 1 0 0 - 0 1 0 - 0 0 1 - y>=k*(x-width) and y<=k*(x+width):1 - else:0 - ''' - # width = min(target_len/band_mask_factor, 50) - width1 = target_len / band_mask_factor - width2 = target_len.new(target_len.size()).fill_(band_width) - width = torch.where(width1 < width2, width1, width2).float() - base = 
torch.ones(attn.size()).to(attn.device) - zero = torch.zeros(attn.size()).to(attn.device) - x = torch.arange(0, attn.size(1)).to(attn.device)[None, :, None].float() * base - y = torch.arange(0, attn.size(2)).to(attn.device)[None, None, :].float() * base - cond = (y - attn_ks[:, None, None] * x) - cond1 = cond + attn_ks[:, None, None] * width[:, None, None] - cond2 = cond - attn_ks[:, None, None] * width[:, None, None] - mask1 = torch.where(cond1 < 0, zero, base) - mask2 = torch.where(cond2 > 0, zero, base) - mask = mask1 * mask2 - - if src_padding_mask is not None: - attn = attn * (1 - src_padding_mask.float())[:, None, :] - if tgt_padding_mask is not None: - attn = attn * (1 - tgt_padding_mask.float())[:, :, None] - - diagonal_attn = attn * mask - diagonal_focus_rate = diagonal_attn.sum(-1).sum(-1) / attn.sum(-1).sum(-1) - return diagonal_focus_rate, mask diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/GroundingDINO/groundingdino/version.py b/spaces/yizhangliu/Grounded-Segment-Anything/GroundingDINO/groundingdino/version.py deleted file mode 100644 index b794fd409a5e3b3b65ad76a43d6a01a318877640..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/GroundingDINO/groundingdino/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = '0.1.0' diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convnextv2/configuration_convnextv2.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convnextv2/configuration_convnextv2.py deleted file mode 100644 index 14dfcf85124e7f8b150b0e418718ee2a5eeccbfb..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convnextv2/configuration_convnextv2.py +++ /dev/null @@ -1,115 +0,0 @@ -# coding=utf-8 -# Copyright 2023 Meta Platforms, Inc. and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" ConvNeXTV2 model configuration""" - - -from ...configuration_utils import PretrainedConfig -from ...utils import logging -from ...utils.backbone_utils import BackboneConfigMixin, get_aligned_output_features_output_indices - - -logger = logging.get_logger(__name__) - -CONVNEXTV2_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "facebook/convnextv2-tiny-1k-224": "https://huggingface.co/facebook/convnextv2-tiny-1k-224/resolve/main/config.json", -} - - -class ConvNextV2Config(BackboneConfigMixin, PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`ConvNextV2Model`]. It is used to instantiate an - ConvNeXTV2 model according to the specified arguments, defining the model architecture. Instantiating a - configuration with the defaults will yield a similar configuration to that of the ConvNeXTV2 - [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. 
Read the - documentation from [`PretrainedConfig`] for more information. - - Args: - num_channels (`int`, *optional*, defaults to 3): - The number of input channels. - patch_size (`int`, optional, defaults to 4): - Patch size to use in the patch embedding layer. - num_stages (`int`, optional, defaults to 4): - The number of stages in the model. - hidden_sizes (`List[int]`, *optional*, defaults to `[96, 192, 384, 768]`): - Dimensionality (hidden size) at each stage. - depths (`List[int]`, *optional*, defaults to `[3, 3, 9, 3]`): - Depth (number of blocks) for each stage. - hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): - The non-linear activation function (function or string) in each block. If string, `"gelu"`, `"relu"`, - `"selu"` and `"gelu_new"` are supported. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - layer_norm_eps (`float`, *optional*, defaults to 1e-12): - The epsilon used by the layer normalization layers. - drop_path_rate (`float`, *optional*, defaults to 0.0): - The drop rate for stochastic depth. - out_features (`List[str]`, *optional*): - If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc. - (depending on how many stages the model has). If unset and `out_indices` is set, will default to the - corresponding stages. If unset and `out_indices` is unset, will default to the last stage. - out_indices (`List[int]`, *optional*): - If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how - many stages the model has). If unset and `out_features` is set, will default to the corresponding stages. - If unset and `out_features` is unset, will default to the last stage. 
- - Example: - ```python - >>> from transformers import ConvNeXTV2Config, ConvNextV2Model - - >>> # Initializing a ConvNeXTV2 convnextv2-tiny-1k-224 style configuration - >>> configuration = ConvNeXTV2Config() - - >>> # Initializing a model (with random weights) from the convnextv2-tiny-1k-224 style configuration - >>> model = ConvNextV2Model(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - model_type = "convnextv2" - - def __init__( - self, - num_channels=3, - patch_size=4, - num_stages=4, - hidden_sizes=None, - depths=None, - hidden_act="gelu", - initializer_range=0.02, - layer_norm_eps=1e-12, - drop_path_rate=0.0, - image_size=224, - out_features=None, - out_indices=None, - **kwargs, - ): - super().__init__(**kwargs) - - self.num_channels = num_channels - self.patch_size = patch_size - self.num_stages = num_stages - self.hidden_sizes = [96, 192, 384, 768] if hidden_sizes is None else hidden_sizes - self.depths = [3, 3, 9, 3] if depths is None else depths - self.hidden_act = hidden_act - self.initializer_range = initializer_range - self.layer_norm_eps = layer_norm_eps - self.drop_path_rate = drop_path_rate - self.image_size = image_size - self.stage_names = ["stem"] + [f"stage{idx}" for idx in range(1, len(self.depths) + 1)] - self._out_features, self._out_indices = get_aligned_output_features_output_indices( - out_features=out_features, out_indices=out_indices, stage_names=self.stage_names - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gptsan_japanese/configuration_gptsan_japanese.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gptsan_japanese/configuration_gptsan_japanese.py deleted file mode 100644 index d20b79daacfd1713aa1efc2f192ae600ec3789f2..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gptsan_japanese/configuration_gptsan_japanese.py +++ /dev/null @@ -1,158 +0,0 @@ -# coding=utf-8 -# Copyright 2023, HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" GPTSAN-japanese model configuration""" -from ...configuration_utils import PretrainedConfig -from ...utils import logging - - -logger = logging.get_logger(__name__) - -GPTSAN_JAPANESE_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "tanreinama/GPTSAN-2.8B-spout_is_uniform": ( - "https://huggingface.co/tanreinama/GPTSAN-2.8B-spout_is_uniform/resolve/main/config.json" - ), -} - - -class GPTSanJapaneseConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`GPTSanJapaneseModel`]. It is used to instantiate - a GPTSANJapanese model according to the specified arguments, defining the model architecture. Instantiating a - configuration with the defaults will yield a similar configuration to that of the GPTSANJapanese - [Tanrei/GPTSAN-japanese](https://huggingface.co/Tanrei/GPTSAN-japanese) architecture. 
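As a quick orientation, a default configuration can be built and inspected like this (a minimal sketch assuming `GPTSanJapaneseConfig` is exported from transformers; the values noted in comments are the documented defaults):

```python
>>> from transformers import GPTSanJapaneseConfig

>>> # Defaults approximate the Tanrei/GPTSAN-japanese architecture.
>>> config = GPTSanJapaneseConfig()
>>> config.num_layers   # num_switch_layers + num_ext_layers, i.e. 10 + 0 by default
>>> config.num_experts  # 16 experts per Switch Transformer layer by default
```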
- - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - Arguments: - vocab_size (`int`, *optional*, defaults to 36000): - Vocabulary size of the GPTSANJapanese model. Defines the number of different tokens that can be represented - by the `inputs_ids` passed when calling [`GPTSanJapaneseModel`]. - max_position_embeddings (`int`, *optional*, defaults to 1280): - The maximum sequence length that this model might ever be used with. Defaults set this to 1280. - d_model (`int`, *optional*, defaults to 1024): - Size of the encoder layers and the pooler layer. - d_ff (`int`, *optional*, defaults to 8192): - Size of the intermediate feed forward layer in each `SwitchTransformersBlock`. - d_ext (`int`, *optional*, defaults to 4096): - Size of the intermediate feed forward layer in each Extra-layers. - d_spout (`int`, *optional*, defaults to 128): - Size of the `spout` vector. - num_switch_layers (`int`, *optional*, defaults to 10): - Number of layers in the Switch Transformer layer. - num_ext_layers (`int`, *optional*, defaults to 0): - Number of layers in the Extra-layers. - num_heads (`int`, *optional*, defaults to 16): - Number of attention heads for each attention layer in the Transformer encoder. - num_experts (`int`, *optional*, defaults to 16): - Number of experts for each SwitchTransformer layer. - expert_capacity (`int`, *optional*, defaults to 128): - Number of tokens that can be stored in each expert. If set to 1, the model will behave like a regular - Transformer. - dropout_rate (`float`, *optional*, defaults to 0.0): - The ratio for all dropout layers. - layer_norm_eps (`float`, *optional*, defaults to 1e-5): - The epsilon used by the layer normalization layers. - router_bias (`bool`, *optional*, defaults to `False`): - Whether to add a bias to the router. - router_jitter_noise (`float`, *optional*, defaults to 0.0): - Amount of noise to add to the router. Set it to 0.0 during prediction or set small value (usually 1e-2) - during training. - router_dtype (`str`, *optional*, default to `"float32"`): - The `dtype` used for the routers. It is preferable to keep the `dtype` to `"float32"` as specified in the - *selective precision* discussion in [the paper](https://arxiv.org/abs/2101.03961). - router_ignore_padding_tokens (`bool`, *optional*, defaults to `False`): - Whether to ignore padding tokens when routing. - output_hidden_states (`bool`, *optional*, default to `False`): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - output_attentions (`bool`, *optional*, defaults to `False`): - Whether or not to return the attentions tensors of all attention layers. - initializer_factor (`float`, *optional*, defaults to 0.002): - A factor for initializing all weight matrices. - output_router_logits (`bool`, *optional*, default to `False`): - Whether or not to return the router logits of all experts. 
- use_cache (`bool`, *optional*, defaults to `True`): - Whether or not the model should return the last key/values attentions (not used by all models) - """ - model_type = "gptsan-japanese" - keys_to_ignore_at_inference = [ - "past_key_values", - ] - attribute_map = { - "hidden_size": "d_model", - "num_attention_heads": "num_heads", - "num_hidden_layers": "num_layers", - } - - def __init__( - self, - vocab_size=36000, - max_position_embeddings=1280, - d_model=1024, - d_ff=8192, - d_ext=4096, - d_spout=128, - num_switch_layers=10, - num_ext_layers=0, - num_heads=16, - num_experts=16, - expert_capacity=128, - dropout_rate=0.0, - layer_norm_epsilon=1e-5, - router_bias=False, - router_jitter_noise=0.0, - router_dtype="float32", - router_ignore_padding_tokens=False, - output_hidden_states=False, - output_attentions=False, - initializer_factor=0.002, - output_router_logits=False, - use_cache=True, - separator_token_id=35998, - pad_token_id=35995, - eos_token_id=35999, - **kwargs, - ): - self.vocab_size = vocab_size - self.max_position_embeddings = max_position_embeddings - self.d_model = d_model - self.d_ff = d_ff - self.d_ext = d_ext - self.d_spout = d_spout - self.num_switch_layers = num_switch_layers - self.num_ext_layers = num_ext_layers - self.num_layers = num_switch_layers + num_ext_layers - self.num_heads = num_heads - self.num_experts = num_experts - self.expert_capacity = expert_capacity - self.dropout_rate = dropout_rate - self.layer_norm_epsilon = layer_norm_epsilon - self.router_bias = router_bias - self.router_jitter_noise = router_jitter_noise - self.router_dtype = router_dtype - self.router_ignore_padding_tokens = router_ignore_padding_tokens - self.output_hidden_states = output_hidden_states - self.output_attentions = output_attentions - self.initializer_factor = initializer_factor - self.output_router_logits = output_router_logits - self.use_cache = use_cache - - super().__init__( - separator_token_id=separator_token_id, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - **kwargs, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/maskformer/modeling_maskformer.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/maskformer/modeling_maskformer.py deleted file mode 100644 index 87b91ed64b62d32cdc7feaa8f7232e559ecd06d5..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/maskformer/modeling_maskformer.py +++ /dev/null @@ -1,1971 +0,0 @@ -# coding=utf-8 -# Copyright 2022 Meta Platforms, Inc.s and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch MaskFormer model.""" - -import math -from dataclasses import dataclass -from numbers import Number -from typing import Dict, List, Optional, Tuple - -import numpy as np -import torch -from torch import Tensor, nn - -from ... 
import AutoBackbone -from ...activations import ACT2FN -from ...modeling_outputs import BaseModelOutputWithCrossAttentions -from ...modeling_utils import PreTrainedModel -from ...utils import ( - ModelOutput, - add_start_docstrings, - add_start_docstrings_to_model_forward, - is_scipy_available, - logging, - replace_return_docstrings, - requires_backends, -) -from ..detr import DetrConfig -from .configuration_maskformer import MaskFormerConfig -from .configuration_maskformer_swin import MaskFormerSwinConfig - - -if is_scipy_available(): - from scipy.optimize import linear_sum_assignment - -logger = logging.get_logger(__name__) - - -_CONFIG_FOR_DOC = "MaskFormerConfig" -_CHECKPOINT_FOR_DOC = "facebook/maskformer-swin-base-ade" - -MASKFORMER_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "facebook/maskformer-swin-base-ade", - # See all MaskFormer models at https://huggingface.co/models?filter=maskformer -] - - -@dataclass -# Copied from transformers.models.detr.modeling_detr.DetrDecoderOutput -class DetrDecoderOutput(BaseModelOutputWithCrossAttentions): - """ - Base class for outputs of the DETR decoder. This class adds one attribute to BaseModelOutputWithCrossAttentions, - namely an optional stack of intermediate decoder activations, i.e. the output of each decoder layer, each of them - gone through a layernorm. This is useful when training the model with auxiliary decoding losses. - - Args: - last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of - shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer - plus the initial embedding outputs. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in - the self-attention heads. - cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` and `config.add_cross_attention=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the attention softmax, - used to compute the weighted average in the cross-attention heads. - intermediate_hidden_states (`torch.FloatTensor` of shape `(config.decoder_layers, batch_size, num_queries, hidden_size)`, *optional*, returned when `config.auxiliary_loss=True`): - Intermediate decoder activations, i.e. the output of each decoder layer, each of them gone through a - layernorm. - """ - - intermediate_hidden_states: Optional[torch.FloatTensor] = None - - -@dataclass -class MaskFormerPixelLevelModuleOutput(ModelOutput): - """ - MaskFormer's pixel level module output. It returns both the last and (optionally) the hidden states from the - `encoder` and `decoder`. By default, the `encoder` is a MaskFormerSwin Transformer and the `decoder` is a Feature - Pyramid Network (FPN). 
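A rough end-to-end sketch of this pixel-level path (randomly initialized weights and an illustrative input size; it assumes `MaskFormerConfig` and `MaskFormerModel` are importable from transformers):

```python
>>> import torch
>>> from transformers import MaskFormerConfig, MaskFormerModel

>>> model = MaskFormerModel(MaskFormerConfig())  # Swin backbone + FPN pixel decoder by default
>>> outputs = model(torch.randn(1, 3, 384, 384))
>>> outputs.encoder_last_hidden_state.shape        # backbone "image features"
>>> outputs.pixel_decoder_last_hidden_state.shape  # FPN "pixel embeddings"
```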
- - The `encoder_last_hidden_state` are referred on the paper as **images features**, while `decoder_last_hidden_state` - as **pixel embeddings** - - Args: - encoder_last_hidden_state (`torch.FloatTensor` of shape`(batch_size, num_channels, height, width)`): - Last hidden states (final feature map) of the last stage of the encoder. - encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of - shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the model at - the output of each stage. - decoder_last_hidden_state (`torch.FloatTensor` of shape`(batch_size, num_channels, height, width)`): - Last hidden states (final feature map) of the last stage of the decoder. - decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of - shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the model at - the output of each stage. - """ - - encoder_last_hidden_state: Optional[torch.FloatTensor] = None - decoder_last_hidden_state: Optional[torch.FloatTensor] = None - encoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None - decoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None - - -@dataclass -class MaskFormerPixelDecoderOutput(ModelOutput): - """ - MaskFormer's pixel decoder module output, practically a Feature Pyramid Network. It returns the last hidden state - and (optionally) the hidden states. - - Args: - last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Last hidden states (final feature map) of the last stage of the model. - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of - shape `(batch_size, num_channels, height, width)`. Hidden-states of the model at the output of each layer - plus the initial embedding outputs. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. Attentions weights from Detr's decoder after the attention softmax, used to compute the - weighted average in the self-attention heads. - """ - - last_hidden_state: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -@dataclass -class MaskFormerModelOutput(ModelOutput): - """ - Class for outputs of [`MaskFormerModel`]. This class returns all the needed hidden states to compute the logits. - - Args: - encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Last hidden states (final feature map) of the last stage of the encoder model (backbone). 
- pixel_decoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Last hidden states (final feature map) of the last stage of the pixel decoder model (FPN). - transformer_decoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): - Last hidden states (final feature map) of the last stage of the transformer decoder model. - encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of - shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the encoder - model at the output of each stage. - pixel_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of - shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the pixel - decoder model at the output of each stage. - transformer_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of - shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called feature maps) of the - transformer decoder at the output of each stage. - hidden_states `tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` containing `encoder_hidden_states`, `pixel_decoder_hidden_states` and - `decoder_hidden_states` - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. Attentions weights from Detr's decoder after the attention softmax, used to compute the - weighted average in the self-attention heads. - """ - - encoder_last_hidden_state: Optional[torch.FloatTensor] = None - pixel_decoder_last_hidden_state: Optional[torch.FloatTensor] = None - transformer_decoder_last_hidden_state: Optional[torch.FloatTensor] = None - encoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None - pixel_decoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None - transformer_decoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -@dataclass -class MaskFormerForInstanceSegmentationOutput(ModelOutput): - """ - Class for outputs of [`MaskFormerForInstanceSegmentation`]. - - This output can be directly passed to [`~MaskFormerImageProcessor.post_process_semantic_segmentation`] or or - [`~MaskFormerImageProcessor.post_process_instance_segmentation`] or - [`~MaskFormerImageProcessor.post_process_panoptic_segmentation`] depending on the task. Please, see - [`~MaskFormerImageProcessor] for details regarding usage. - - Args: - loss (`torch.Tensor`, *optional*): - The computed loss, returned when labels are present. 
- class_queries_logits (`torch.FloatTensor`): - A tensor of shape `(batch_size, num_queries, num_labels + 1)` representing the proposed classes for each - query. Note the `+ 1` is needed because we incorporate the null class. - masks_queries_logits (`torch.FloatTensor`): - A tensor of shape `(batch_size, num_queries, height, width)` representing the proposed masks for each - query. - encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Last hidden states (final feature map) of the last stage of the encoder model (backbone). - pixel_decoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Last hidden states (final feature map) of the last stage of the pixel decoder model (FPN). - transformer_decoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): - Last hidden states (final feature map) of the last stage of the transformer decoder model. - encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of - shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the encoder - model at the output of each stage. - pixel_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of - shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the pixel - decoder model at the output of each stage. - transformer_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of - shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the transformer decoder at the output - of each stage. - hidden_states `tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` containing `encoder_hidden_states`, `pixel_decoder_hidden_states` and - `decoder_hidden_states`. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. Attentions weights from Detr's decoder after the attention softmax, used to compute the - weighted average in the self-attention heads. 
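To make the post-processing note above concrete, here is a hedged sketch with random weights and a toy input (it assumes `MaskFormerForInstanceSegmentation` and `MaskFormerImageProcessor` are available with this 4.35-era API):

```python
>>> import torch
>>> from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation, MaskFormerImageProcessor

>>> model = MaskFormerForInstanceSegmentation(MaskFormerConfig(num_labels=10))
>>> image_processor = MaskFormerImageProcessor()
>>> outputs = model(torch.randn(1, 3, 384, 384))
>>> # one (height, width) class-id map per image in the batch
>>> semantic_maps = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[(384, 384)])
```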
- """ - - loss: Optional[torch.FloatTensor] = None - class_queries_logits: torch.FloatTensor = None - masks_queries_logits: torch.FloatTensor = None - auxiliary_logits: torch.FloatTensor = None - encoder_last_hidden_state: Optional[torch.FloatTensor] = None - pixel_decoder_last_hidden_state: Optional[torch.FloatTensor] = None - transformer_decoder_last_hidden_state: Optional[torch.FloatTensor] = None - encoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None - pixel_decoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None - transformer_decoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -def upsample_like(pixel_values: Tensor, like: Tensor, mode: str = "bilinear") -> Tensor: - """ - An utility function that upsamples `pixel_values` to match the dimension of `like`. - - Args: - pixel_values (`torch.Tensor`): - The tensor we wish to upsample. - like (`torch.Tensor`): - The tensor we wish to use as size target. - mode (str, *optional*, defaults to `"bilinear"`): - The interpolation mode. - - Returns: - `torch.Tensor`: The upsampled tensor - """ - _, _, height, width = like.shape - upsampled = nn.functional.interpolate(pixel_values, size=(height, width), mode=mode, align_corners=False) - return upsampled - - -# refactored from original implementation -def dice_loss(inputs: Tensor, labels: Tensor, num_masks: int) -> Tensor: - r""" - Compute the DICE loss, similar to generalized IOU for masks as follows: - - $$ \mathcal{L}_{\text{dice}(x, y) = 1 - \frac{2 * x \cap y }{x \cup y + 1}} $$ - - In practice, since `labels` is a binary mask, (only 0s and 1s), dice can be computed as follow - - $$ \mathcal{L}_{\text{dice}(x, y) = 1 - \frac{2 * x * y }{x + y + 1}} $$ - - Args: - inputs (`torch.Tensor`): - A tensor representing a mask. - labels (`torch.Tensor`): - A tensor with the same shape as inputs. Stores the binary classification labels for each element in inputs - (0 for the negative class and 1 for the positive class). - num_masks (`int`): - The number of masks present in the current batch, used for normalization. - - Returns: - `torch.Tensor`: The computed loss. - """ - probs = inputs.sigmoid().flatten(1) - numerator = 2 * (probs * labels).sum(-1) - denominator = probs.sum(-1) + labels.sum(-1) - loss = 1 - (numerator + 1) / (denominator + 1) - loss = loss.sum() / num_masks - return loss - - -# refactored from original implementation -def sigmoid_focal_loss( - inputs: Tensor, labels: Tensor, num_masks: int, alpha: float = 0.25, gamma: float = 2 -) -> Tensor: - r""" - Focal loss proposed in [Focal Loss for Dense Object Detection](https://arxiv.org/abs/1708.02002) originally used in - RetinaNet. The loss is computed as follows: - - $$ \mathcal{L}_{\text{focal loss} = -(1 - p_t)^{\gamma}\log{(p_t)} $$ - - where \\(CE(p_t) = -\log{(p_t)}}\\), CE is the standard Cross Entropy Loss - - Please refer to equation (1,2,3) of the paper for a better understanding. - - Args: - inputs (`torch.Tensor`): - A float tensor of arbitrary shape. - labels (`torch.Tensor`): - A tensor with the same shape as inputs. Stores the binary classification labels for each element in inputs - (0 for the negative class and 1 for the positive class). - num_masks (`int`): - The number of masks present in the current batch, used for normalization. - alpha (float, *optional*, defaults to 0.25): - Weighting factor in range (0,1) to balance positive vs negative examples. 
- gamma (float, *optional*, defaults to 2.0): - Exponent of the modulating factor \\(1 - p_t\\) to balance easy vs hard examples. - - Returns: - `torch.Tensor`: The computed loss. - """ - criterion = nn.BCEWithLogitsLoss(reduction="none") - probs = inputs.sigmoid() - cross_entropy_loss = criterion(inputs, labels) - p_t = probs * labels + (1 - probs) * (1 - labels) - loss = cross_entropy_loss * ((1 - p_t) ** gamma) - - if alpha >= 0: - alpha_t = alpha * labels + (1 - alpha) * (1 - labels) - loss = alpha_t * loss - - loss = loss.mean(1).sum() / num_masks - return loss - - -# refactored from original implementation -def pair_wise_dice_loss(inputs: Tensor, labels: Tensor) -> Tensor: - """ - A pair wise version of the dice loss, see `dice_loss` for usage. - - Args: - inputs (`torch.Tensor`): - A tensor representing a mask - labels (`torch.Tensor`): - A tensor with the same shape as inputs. Stores the binary classification labels for each element in inputs - (0 for the negative class and 1 for the positive class). - - Returns: - `torch.Tensor`: The computed loss between each pairs. - """ - inputs = inputs.sigmoid().flatten(1) - numerator = 2 * torch.matmul(inputs, labels.T) - # using broadcasting to get a [num_queries, NUM_CLASSES] matrix - denominator = inputs.sum(-1)[:, None] + labels.sum(-1)[None, :] - loss = 1 - (numerator + 1) / (denominator + 1) - return loss - - -# refactored from original implementation -def pair_wise_sigmoid_focal_loss(inputs: Tensor, labels: Tensor, alpha: float = 0.25, gamma: float = 2.0) -> Tensor: - r""" - A pair wise version of the focal loss, see `sigmoid_focal_loss` for usage. - - Args: - inputs (`torch.Tensor`): - A tensor representing a mask. - labels (`torch.Tensor`): - A tensor with the same shape as inputs. Stores the binary classification labels for each element in inputs - (0 for the negative class and 1 for the positive class). - alpha (float, *optional*, defaults to 0.25): - Weighting factor in range (0,1) to balance positive vs negative examples. - gamma (float, *optional*, defaults to 2.0): - Exponent of the modulating factor \\(1 - p_t\\) to balance easy vs hard examples. - - Returns: - `torch.Tensor`: The computed loss between each pairs. - """ - if alpha < 0: - raise ValueError("alpha must be positive") - - height_and_width = inputs.shape[1] - - criterion = nn.BCEWithLogitsLoss(reduction="none") - prob = inputs.sigmoid() - cross_entropy_loss_pos = criterion(inputs, torch.ones_like(inputs)) - focal_pos = ((1 - prob) ** gamma) * cross_entropy_loss_pos - focal_pos *= alpha - - cross_entropy_loss_neg = criterion(inputs, torch.zeros_like(inputs)) - - focal_neg = (prob**gamma) * cross_entropy_loss_neg - focal_neg *= 1 - alpha - - loss = torch.matmul(focal_pos, labels.T) + torch.matmul(focal_neg, (1 - labels).T) - - return loss / height_and_width - - -# Copied from transformers.models.detr.modeling_detr.DetrAttention -class DetrAttention(nn.Module): - """ - Multi-headed attention from 'Attention Is All You Need' paper. - - Here, we add position embeddings to the queries and keys (as explained in the DETR paper). - """ - - def __init__( - self, - embed_dim: int, - num_heads: int, - dropout: float = 0.0, - bias: bool = True, - ): - super().__init__() - self.embed_dim = embed_dim - self.num_heads = num_heads - self.dropout = dropout - self.head_dim = embed_dim // num_heads - if self.head_dim * num_heads != self.embed_dim: - raise ValueError( - f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:" - f" {num_heads})." 
- ) - self.scaling = self.head_dim**-0.5 - - self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - - def _shape(self, tensor: torch.Tensor, seq_len: int, batch_size: int): - return tensor.view(batch_size, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() - - def with_pos_embed(self, tensor: torch.Tensor, object_queries: Optional[Tensor], **kwargs): - position_embeddings = kwargs.pop("position_embeddings", None) - - if kwargs: - raise ValueError(f"Unexpected arguments {kwargs.keys()}") - - if position_embeddings is not None and object_queries is not None: - raise ValueError( - "Cannot specify both position_embeddings and object_queries. Please use just object_queries" - ) - - if position_embeddings is not None: - logger.warning_once( - "position_embeddings has been deprecated and will be removed in v4.34. Please use object_queries instead" - ) - object_queries = position_embeddings - - return tensor if object_queries is None else tensor + object_queries - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - object_queries: Optional[torch.Tensor] = None, - key_value_states: Optional[torch.Tensor] = None, - spatial_position_embeddings: Optional[torch.Tensor] = None, - output_attentions: bool = False, - **kwargs, - ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - """Input shape: Batch x Time x Channel""" - - position_embeddings = kwargs.pop("position_ebmeddings", None) - key_value_position_embeddings = kwargs.pop("key_value_position_embeddings", None) - - if kwargs: - raise ValueError(f"Unexpected arguments {kwargs.keys()}") - - if position_embeddings is not None and object_queries is not None: - raise ValueError( - "Cannot specify both position_embeddings and object_queries. Please use just object_queries" - ) - - if key_value_position_embeddings is not None and spatial_position_embeddings is not None: - raise ValueError( - "Cannot specify both key_value_position_embeddings and spatial_position_embeddings. Please use just spatial_position_embeddings" - ) - - if position_embeddings is not None: - logger.warning_once( - "position_embeddings has been deprecated and will be removed in v4.34. Please use object_queries instead" - ) - object_queries = position_embeddings - - if key_value_position_embeddings is not None: - logger.warning_once( - "key_value_position_embeddings has been deprecated and will be removed in v4.34. 
Please use spatial_position_embeddings instead" - ) - spatial_position_embeddings = key_value_position_embeddings - - # if key_value_states are provided this layer is used as a cross-attention layer - # for the decoder - is_cross_attention = key_value_states is not None - batch_size, target_len, embed_dim = hidden_states.size() - - # add position embeddings to the hidden states before projecting to queries and keys - if object_queries is not None: - hidden_states_original = hidden_states - hidden_states = self.with_pos_embed(hidden_states, object_queries) - - # add key-value position embeddings to the key value states - if spatial_position_embeddings is not None: - key_value_states_original = key_value_states - key_value_states = self.with_pos_embed(key_value_states, spatial_position_embeddings) - - # get query proj - query_states = self.q_proj(hidden_states) * self.scaling - # get key, value proj - if is_cross_attention: - # cross_attentions - key_states = self._shape(self.k_proj(key_value_states), -1, batch_size) - value_states = self._shape(self.v_proj(key_value_states_original), -1, batch_size) - else: - # self_attention - key_states = self._shape(self.k_proj(hidden_states), -1, batch_size) - value_states = self._shape(self.v_proj(hidden_states_original), -1, batch_size) - - proj_shape = (batch_size * self.num_heads, -1, self.head_dim) - query_states = self._shape(query_states, target_len, batch_size).view(*proj_shape) - key_states = key_states.view(*proj_shape) - value_states = value_states.view(*proj_shape) - - source_len = key_states.size(1) - - attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) - - if attn_weights.size() != (batch_size * self.num_heads, target_len, source_len): - raise ValueError( - f"Attention weights should be of size {(batch_size * self.num_heads, target_len, source_len)}, but is" - f" {attn_weights.size()}" - ) - - if attention_mask is not None: - if attention_mask.size() != (batch_size, 1, target_len, source_len): - raise ValueError( - f"Attention mask should be of size {(batch_size, 1, target_len, source_len)}, but is" - f" {attention_mask.size()}" - ) - attn_weights = attn_weights.view(batch_size, self.num_heads, target_len, source_len) + attention_mask - attn_weights = attn_weights.view(batch_size * self.num_heads, target_len, source_len) - - attn_weights = nn.functional.softmax(attn_weights, dim=-1) - - if output_attentions: - # this operation is a bit awkward, but it's required to - # make sure that attn_weights keeps its gradient. 
- # In order to do so, attn_weights have to reshaped - # twice and have to be reused in the following - attn_weights_reshaped = attn_weights.view(batch_size, self.num_heads, target_len, source_len) - attn_weights = attn_weights_reshaped.view(batch_size * self.num_heads, target_len, source_len) - else: - attn_weights_reshaped = None - - attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - - attn_output = torch.bmm(attn_probs, value_states) - - if attn_output.size() != (batch_size * self.num_heads, target_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(batch_size, self.num_heads, target_len, self.head_dim)}, but is" - f" {attn_output.size()}" - ) - - attn_output = attn_output.view(batch_size, self.num_heads, target_len, self.head_dim) - attn_output = attn_output.transpose(1, 2) - attn_output = attn_output.reshape(batch_size, target_len, embed_dim) - - attn_output = self.out_proj(attn_output) - - return attn_output, attn_weights_reshaped - - -# Copied from transformers.models.detr.modeling_detr.DetrDecoderLayer -class DetrDecoderLayer(nn.Module): - def __init__(self, config: DetrConfig): - super().__init__() - self.embed_dim = config.d_model - - self.self_attn = DetrAttention( - embed_dim=self.embed_dim, - num_heads=config.decoder_attention_heads, - dropout=config.attention_dropout, - ) - self.dropout = config.dropout - self.activation_fn = ACT2FN[config.activation_function] - self.activation_dropout = config.activation_dropout - - self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim) - self.encoder_attn = DetrAttention( - self.embed_dim, - config.decoder_attention_heads, - dropout=config.attention_dropout, - ) - self.encoder_attn_layer_norm = nn.LayerNorm(self.embed_dim) - self.fc1 = nn.Linear(self.embed_dim, config.decoder_ffn_dim) - self.fc2 = nn.Linear(config.decoder_ffn_dim, self.embed_dim) - self.final_layer_norm = nn.LayerNorm(self.embed_dim) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - object_queries: Optional[torch.Tensor] = None, - query_position_embeddings: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = False, - **kwargs, - ): - """ - Args: - hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` - attention_mask (`torch.FloatTensor`): attention mask of size - `(batch, 1, target_len, source_len)` where padding elements are indicated by very large negative - values. - object_queries (`torch.FloatTensor`, *optional*): - object_queries that are added to the hidden states - in the cross-attention layer. - query_position_embeddings (`torch.FloatTensor`, *optional*): - position embeddings that are added to the queries and keys - in the self-attention layer. - encoder_hidden_states (`torch.FloatTensor`): - cross attention input to the layer of shape `(batch, seq_len, embed_dim)` - encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size - `(batch, 1, target_len, source_len)` where padding elements are indicated by very large negative - values. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. 
- """ - position_embeddings = kwargs.pop("position_embeddings", None) - - if kwargs: - raise ValueError(f"Unexpected arguments {kwargs.keys()}") - - if position_embeddings is not None and object_queries is not None: - raise ValueError( - "Cannot specify both position_embeddings and object_queries. Please use just object_queries" - ) - - if position_embeddings is not None: - logger.warning_once( - "position_embeddings has been deprecated and will be removed in v4.34. Please use object_queries instead" - ) - object_queries = position_embeddings - - residual = hidden_states - - # Self Attention - hidden_states, self_attn_weights = self.self_attn( - hidden_states=hidden_states, - object_queries=query_position_embeddings, - attention_mask=attention_mask, - output_attentions=output_attentions, - ) - - hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - hidden_states = residual + hidden_states - hidden_states = self.self_attn_layer_norm(hidden_states) - - # Cross-Attention Block - cross_attn_weights = None - if encoder_hidden_states is not None: - residual = hidden_states - - hidden_states, cross_attn_weights = self.encoder_attn( - hidden_states=hidden_states, - object_queries=query_position_embeddings, - key_value_states=encoder_hidden_states, - attention_mask=encoder_attention_mask, - spatial_position_embeddings=object_queries, - output_attentions=output_attentions, - ) - - hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - hidden_states = residual + hidden_states - hidden_states = self.encoder_attn_layer_norm(hidden_states) - - # Fully Connected - residual = hidden_states - hidden_states = self.activation_fn(self.fc1(hidden_states)) - hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training) - hidden_states = self.fc2(hidden_states) - hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - hidden_states = residual + hidden_states - hidden_states = self.final_layer_norm(hidden_states) - - outputs = (hidden_states,) - - if output_attentions: - outputs += (self_attn_weights, cross_attn_weights) - - return outputs - - -# Copied from transformers.models.detr.modeling_detr._expand_mask -def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, target_len: Optional[int] = None): - """ - Expands attention_mask from `[batch_size, seq_len]` to `[batch_size, 1, target_seq_len, source_seq_len]`. - """ - batch_size, source_len = mask.size() - target_len = target_len if target_len is not None else source_len - - expanded_mask = mask[:, None, None, :].expand(batch_size, 1, target_len, source_len).to(dtype) - - inverted_mask = 1.0 - expanded_mask - - return inverted_mask.masked_fill(inverted_mask.bool(), torch.finfo(dtype).min) - - -class DetrDecoder(nn.Module): - """ - Transformer decoder consisting of *config.decoder_layers* layers. Each layer is a [`DetrDecoderLayer`]. - - The decoder updates the query embeddings through multiple self-attention and cross-attention layers. - - Some small tweaks for DETR: - - - object_queries and query_position_embeddings are added to the forward pass. - - if self.config.auxiliary_loss is set to True, also returns a stack of activations from all decoding layers. 
- - Args: - config: DetrConfig - """ - - def __init__(self, config: DetrConfig): - super().__init__() - self.config = config - self.dropout = config.dropout - self.layerdrop = config.decoder_layerdrop - - self.layers = nn.ModuleList([DetrDecoderLayer(config) for _ in range(config.decoder_layers)]) - # in DETR, the decoder uses layernorm after the last decoder layer output - self.layernorm = nn.LayerNorm(config.d_model) - - self.gradient_checkpointing = False - - def forward( - self, - inputs_embeds=None, - attention_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - object_queries=None, - query_position_embeddings=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - **kwargs, - ): - r""" - Args: - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): - The query embeddings that are passed into the decoder. - - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on certain queries. Mask values selected in `[0, 1]`: - - - 1 for queries that are **not masked**, - - 0 for queries that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention - of the decoder. - encoder_attention_mask (`torch.LongTensor` of shape `(batch_size, encoder_sequence_length)`, *optional*): - Mask to avoid performing cross-attention on padding pixel_values of the encoder. Mask values selected - in `[0, 1]`: - - - 1 for pixels that are real (i.e. **not masked**), - - 0 for pixels that are padding (i.e. **masked**). - - object_queries (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Position embeddings that are added to the queries and keys in each cross-attention layer. - query_position_embeddings (`torch.FloatTensor` of shape `(batch_size, num_queries, hidden_size)`): - , *optional*): Position embeddings that are added to the queries and keys in each self-attention layer. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. - """ - position_embeddings = kwargs.pop("position_embeddings", None) - if kwargs: - raise ValueError(f"Unexpected arguments {kwargs.keys()}") - - if position_embeddings is not None and object_queries is not None: - raise ValueError( - "Cannot specify both position_embeddings and object_queries. Please use just object_queries" - ) - - if position_embeddings is not None: - logger.warning_once( - "position_embeddings has been deprecated and will be removed in v4.34. 
Please use object_queries instead" - ) - object_queries = position_embeddings - - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if inputs_embeds is not None: - hidden_states = inputs_embeds - input_shape = inputs_embeds.size()[:-1] - - combined_attention_mask = None - - if attention_mask is not None and combined_attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - combined_attention_mask = combined_attention_mask + _expand_mask( - attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1] - ) - - # expand encoder attention mask - if encoder_hidden_states is not None and encoder_attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - encoder_attention_mask = _expand_mask(encoder_attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]) - - # optional intermediate hidden states - intermediate = () if self.config.auxiliary_loss else None - - # decoder layers - all_hidden_states = () if output_hidden_states else None - all_self_attns = () if output_attentions else None - all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - - for idx, decoder_layer in enumerate(self.layers): - # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) - if output_hidden_states: - all_hidden_states += (hidden_states,) - if self.training: - dropout_probability = torch.rand([]) - if dropout_probability < self.layerdrop: - continue - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(decoder_layer), - hidden_states, - combined_attention_mask, - encoder_hidden_states, - encoder_attention_mask, - None, - ) - else: - layer_outputs = decoder_layer( - hidden_states, - attention_mask=combined_attention_mask, - object_queries=object_queries, - query_position_embeddings=query_position_embeddings, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - ) - - hidden_states = layer_outputs[0] - - if self.config.auxiliary_loss: - hidden_states = self.layernorm(hidden_states) - intermediate += (hidden_states,) - - if output_attentions: - all_self_attns += (layer_outputs[1],) - - if encoder_hidden_states is not None: - all_cross_attentions += (layer_outputs[2],) - - # finally, apply layernorm - hidden_states = self.layernorm(hidden_states) - - # add hidden states from the last decoder layer - if output_hidden_states: - all_hidden_states += (hidden_states,) - - # stack intermediate decoder activations - if self.config.auxiliary_loss: - intermediate = torch.stack(intermediate) - - if not return_dict: - return tuple( - v - for v in [hidden_states, all_hidden_states, all_self_attns, all_cross_attentions, intermediate] - if v is not None - ) - return DetrDecoderOutput( - last_hidden_state=hidden_states, - hidden_states=all_hidden_states, - attentions=all_self_attns, - cross_attentions=all_cross_attentions, - intermediate_hidden_states=intermediate, - ) - - -# refactored from original implementation -class 
MaskFormerHungarianMatcher(nn.Module): - """This class computes an assignment between the labels and the predictions of the network. - - For efficiency reasons, the labels don't include the no_object. Because of this, in general, there are more - predictions than labels. In this case, we do a 1-to-1 matching of the best predictions, while the others are - un-matched (and thus treated as non-objects). - """ - - def __init__(self, cost_class: float = 1.0, cost_mask: float = 1.0, cost_dice: float = 1.0): - """Creates the matcher - - Params: - cost_class (float, *optional*, defaults to 1.0): - This is the relative weight of the classification error in the matching cost. - cost_mask (float, *optional*, defaults to 1.0): - This is the relative weight of the focal loss of the binary mask in the matching cost. - cost_dice (float, *optional*, defaults to 1.0): - This is the relative weight of the dice loss of the binary mask in the matching cost - """ - super().__init__() - if cost_class == 0 and cost_mask == 0 and cost_dice == 0: - raise ValueError("All costs cant be 0") - self.cost_class = cost_class - self.cost_mask = cost_mask - self.cost_dice = cost_dice - - @torch.no_grad() - def forward(self, masks_queries_logits, class_queries_logits, mask_labels, class_labels) -> List[Tuple[Tensor]]: - """Performs the matching - - Params: - masks_queries_logits (`torch.Tensor`): - A tensor` of dim `batch_size, num_queries, num_labels` with the - classification logits. - class_queries_logits (`torch.Tensor`): - A tensor` of dim `batch_size, num_queries, height, width` with the - predicted masks. - - class_labels (`torch.Tensor`): - A tensor` of dim `num_target_boxes` (where num_target_boxes is the number - of ground-truth objects in the target) containing the class labels. - mask_labels (`torch.Tensor`): - A tensor` of dim `num_target_boxes, height, width` containing the target - masks. - - Returns: - `List[Tuple[Tensor]]`: A list of size batch_size, containing tuples of (index_i, index_j) where: - - index_i is the indices of the selected predictions (in order) - - index_j is the indices of the corresponding selected labels (in order) - For each batch element, it holds: - len(index_i) = len(index_j) = min(num_queries, num_target_boxes). - """ - indices: List[Tuple[np.array]] = [] - - preds_masks = masks_queries_logits - preds_probs = class_queries_logits - # iterate through batch size - for pred_probs, pred_mask, target_mask, labels in zip(preds_probs, preds_masks, mask_labels, class_labels): - # downsample the target mask, save memory - target_mask = nn.functional.interpolate(target_mask[:, None], size=pred_mask.shape[-2:], mode="nearest") - pred_probs = pred_probs.softmax(-1) - # Compute the classification cost. Contrary to the loss, we don't use the NLL, - # but approximate it in 1 - proba[target class]. - # The 1 is a constant that doesn't change the matching, it can be ommitted. 
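# After the softmax, `pred_probs` has shape (num_queries, num_labels + 1) and `labels` holds one
# class id per ground-truth mask, so the indexing below yields a (num_queries, num_targets)
# matrix of negated target-class probabilities.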
- cost_class = -pred_probs[:, labels] - # flatten spatial dimension "q h w -> q (h w)" - pred_mask_flat = pred_mask.flatten(1) # [num_queries, height*width] - # same for target_mask "c h w -> c (h w)" - target_mask_flat = target_mask[:, 0].flatten(1) # [num_total_labels, height*width] - # compute the focal loss between each mask pairs -> shape (num_queries, num_labels) - cost_mask = pair_wise_sigmoid_focal_loss(pred_mask_flat, target_mask_flat) - # Compute the dice loss betwen each mask pairs -> shape (num_queries, num_labels) - cost_dice = pair_wise_dice_loss(pred_mask_flat, target_mask_flat) - # final cost matrix - cost_matrix = self.cost_mask * cost_mask + self.cost_class * cost_class + self.cost_dice * cost_dice - # do the assigmented using the hungarian algorithm in scipy - assigned_indices: Tuple[np.array] = linear_sum_assignment(cost_matrix.cpu()) - indices.append(assigned_indices) - - # It could be stacked in one tensor - matched_indices = [ - (torch.as_tensor(i, dtype=torch.int64), torch.as_tensor(j, dtype=torch.int64)) for i, j in indices - ] - return matched_indices - - def __repr__(self): - head = "Matcher " + self.__class__.__name__ - body = [ - f"cost_class: {self.cost_class}", - f"cost_mask: {self.cost_mask}", - f"cost_dice: {self.cost_dice}", - ] - _repr_indent = 4 - lines = [head] + [" " * _repr_indent + line for line in body] - return "\n".join(lines) - - -# copied and adapted from original implementation -class MaskFormerLoss(nn.Module): - def __init__( - self, - num_labels: int, - matcher: MaskFormerHungarianMatcher, - weight_dict: Dict[str, float], - eos_coef: float, - ): - """ - The MaskFormer Loss. The loss is computed very similar to DETR. The process happens in two steps: 1) we compute - hungarian assignment between ground truth masks and the outputs of the model 2) we supervise each pair of - matched ground-truth / prediction (supervise class and mask) - - Args: - num_labels (`int`): - The number of classes. - matcher (`MaskFormerHungarianMatcher`): - A torch module that computes the assigments between the predictions and labels. - weight_dict (`Dict[str, float]`): - A dictionary of weights to be applied to the different losses. - eos_coef (`float`): - Weight to apply to the null class. 
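The effect of `eos_coef` can be seen in isolation with a few lines (hypothetical numbers; this mirrors the `empty_weight` buffer registered in `__init__` below):

```python
>>> import torch

>>> num_labels, eos_coef = 3, 0.1
>>> empty_weight = torch.ones(num_labels + 1)
>>> empty_weight[-1] = eos_coef  # down-weight the appended "no object" class
>>> criterion = torch.nn.CrossEntropyLoss(weight=empty_weight)
```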
- """ - - super().__init__() - requires_backends(self, ["scipy"]) - self.num_labels = num_labels - self.matcher = matcher - self.weight_dict = weight_dict - self.eos_coef = eos_coef - empty_weight = torch.ones(self.num_labels + 1) - empty_weight[-1] = self.eos_coef - self.register_buffer("empty_weight", empty_weight) - - def _max_by_axis(self, the_list: List[List[int]]) -> List[int]: - maxes = the_list[0] - for sublist in the_list[1:]: - for index, item in enumerate(sublist): - maxes[index] = max(maxes[index], item) - return maxes - - def _pad_images_to_max_in_batch(self, tensors: List[Tensor]) -> Tuple[Tensor, Tensor]: - # get the maximum size in the batch - max_size = self._max_by_axis([list(tensor.shape) for tensor in tensors]) - batch_size = len(tensors) - # compute finel size - batch_shape = [batch_size] + max_size - b, _, h, w = batch_shape - # get metadata - dtype = tensors[0].dtype - device = tensors[0].device - padded_tensors = torch.zeros(batch_shape, dtype=dtype, device=device) - padding_masks = torch.ones((b, h, w), dtype=torch.bool, device=device) - # pad the tensors to the size of the biggest one - for tensor, padded_tensor, padding_mask in zip(tensors, padded_tensors, padding_masks): - padded_tensor[: tensor.shape[0], : tensor.shape[1], : tensor.shape[2]].copy_(tensor) - padding_mask[: tensor.shape[1], : tensor.shape[2]] = False - - return padded_tensors, padding_masks - - def loss_labels( - self, class_queries_logits: Tensor, class_labels: List[Tensor], indices: Tuple[np.array] - ) -> Dict[str, Tensor]: - """Compute the losses related to the labels using cross entropy. - - Args: - class_queries_logits (`torch.Tensor`): - A tensor of shape `batch_size, num_queries, num_labels` - class_labels (`List[torch.Tensor]`): - List of class labels of shape `(labels)`. - indices (`Tuple[np.array])`: - The indices computed by the Hungarian matcher. - - Returns: - `Dict[str, Tensor]`: A dict of `torch.Tensor` containing the following key: - - **loss_cross_entropy** -- The loss computed using cross entropy on the predicted and ground truth labels. - """ - - pred_logits = class_queries_logits - batch_size, num_queries, _ = pred_logits.shape - criterion = nn.CrossEntropyLoss(weight=self.empty_weight) - idx = self._get_predictions_permutation_indices(indices) - # shape = (batch_size, num_queries) - target_classes_o = torch.cat([target[j] for target, (_, j) in zip(class_labels, indices)]) - # shape = (batch_size, num_queries) - target_classes = torch.full( - (batch_size, num_queries), fill_value=self.num_labels, dtype=torch.int64, device=pred_logits.device - ) - target_classes[idx] = target_classes_o - # target_classes is a (batch_size, num_labels, num_queries), we need to permute pred_logits "b q c -> b c q" - pred_logits_transposed = pred_logits.transpose(1, 2) - loss_ce = criterion(pred_logits_transposed, target_classes) - losses = {"loss_cross_entropy": loss_ce} - return losses - - def loss_masks( - self, masks_queries_logits: Tensor, mask_labels: List[Tensor], indices: Tuple[np.array], num_masks: int - ) -> Dict[str, Tensor]: - """Compute the losses related to the masks using focal and dice loss. - - Args: - masks_queries_logits (`torch.Tensor`): - A tensor of shape `batch_size, num_queries, height, width` - mask_labels (`torch.Tensor`): - List of mask labels of shape `(labels, height, width)`. - indices (`Tuple[np.array])`: - The indices computed by the Hungarian matcher. - num_masks (`int)`: - The number of masks, used for normalization. 
- - Returns: - `Dict[str, Tensor]`: A dict of `torch.Tensor` containing two keys: - - **loss_mask** -- The loss computed using sigmoid focal loss on the predicted and ground truth masks. - - **loss_dice** -- The loss computed using dice loss on the predicted on the predicted and ground truth - masks. - """ - src_idx = self._get_predictions_permutation_indices(indices) - tgt_idx = self._get_targets_permutation_indices(indices) - # shape (batch_size * num_queries, height, width) - pred_masks = masks_queries_logits[src_idx] - # shape (batch_size, num_queries, height, width) - # pad all and stack the targets to the num_labels dimension - target_masks, _ = self._pad_images_to_max_in_batch(mask_labels) - target_masks = target_masks[tgt_idx] - # upsample predictions to the target size, we have to add one dim to use interpolate - pred_masks = nn.functional.interpolate( - pred_masks[:, None], size=target_masks.shape[-2:], mode="bilinear", align_corners=False - ) - pred_masks = pred_masks[:, 0].flatten(1) - - target_masks = target_masks.flatten(1) - losses = { - "loss_mask": sigmoid_focal_loss(pred_masks, target_masks, num_masks), - "loss_dice": dice_loss(pred_masks, target_masks, num_masks), - } - return losses - - def _get_predictions_permutation_indices(self, indices): - # permute predictions following indices - batch_indices = torch.cat([torch.full_like(src, i) for i, (src, _) in enumerate(indices)]) - predictions_indices = torch.cat([src for (src, _) in indices]) - return batch_indices, predictions_indices - - def _get_targets_permutation_indices(self, indices): - # permute labels following indices - batch_indices = torch.cat([torch.full_like(tgt, i) for i, (_, tgt) in enumerate(indices)]) - target_indices = torch.cat([tgt for (_, tgt) in indices]) - return batch_indices, target_indices - - def forward( - self, - masks_queries_logits: Tensor, - class_queries_logits: Tensor, - mask_labels: List[Tensor], - class_labels: List[Tensor], - auxiliary_predictions: Optional[Dict[str, Tensor]] = None, - ) -> Dict[str, Tensor]: - """ - This performs the loss computation. - - Args: - masks_queries_logits (`torch.Tensor`): - A tensor of shape `batch_size, num_queries, height, width` - class_queries_logits (`torch.Tensor`): - A tensor of shape `batch_size, num_queries, num_labels` - mask_labels (`torch.Tensor`): - List of mask labels of shape `(labels, height, width)`. - class_labels (`List[torch.Tensor]`): - List of class labels of shape `(labels)`. - auxiliary_predictions (`Dict[str, torch.Tensor]`, *optional*): - if `use_auxiliary_loss` was set to `true` in [`MaskFormerConfig`], then it contains the logits from the - inner layers of the Detr's Decoder. - - Returns: - `Dict[str, Tensor]`: A dict of `torch.Tensor` containing two keys: - - **loss_cross_entropy** -- The loss computed using cross entropy on the predicted and ground truth labels. - - **loss_mask** -- The loss computed using sigmoid focal loss on the predicted and ground truth masks. - - **loss_dice** -- The loss computed using dice loss on the predicted on the predicted and ground truth - masks. - if `use_auxiliary_loss` was set to `true` in [`MaskFormerConfig`], the dictionary contains addional losses - for each auxiliary predictions. 
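The mask side of the loss pairs a sigmoid focal loss with a dice loss over the matched, flattened masks; the `sigmoid_focal_loss` and `dice_loss` helpers it calls are defined or imported earlier in this file and are not reproduced here. As a rough, purely illustrative stand-in, a dice loss of this kind can be sketched as:

import torch

def dice_loss_sketch(pred_logits: torch.Tensor, targets: torch.Tensor, num_masks: int) -> torch.Tensor:
    """Illustrative dice loss on flattened masks of shape (num_masks, height * width)."""
    probs = pred_logits.sigmoid()
    numerator = 2 * (probs * targets).sum(-1)
    denominator = probs.sum(-1) + targets.sum(-1)
    loss = 1 - (numerator + 1) / (denominator + 1)
    return loss.sum() / num_masks

pred = torch.randn(3, 16 * 16)                      # 3 predicted masks, flattened
target = torch.randint(0, 2, (3, 16 * 16)).float()  # 3 binary ground-truth masks
print(dice_loss_sketch(pred, target, num_masks=3))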
- """ - - # retrieve the matching between the outputs of the last layer and the labels - indices = self.matcher(masks_queries_logits, class_queries_logits, mask_labels, class_labels) - # compute the average number of target masks for normalization purposes - num_masks: Number = self.get_num_masks(class_labels, device=class_labels[0].device) - # get all the losses - losses: Dict[str, Tensor] = { - **self.loss_masks(masks_queries_logits, mask_labels, indices, num_masks), - **self.loss_labels(class_queries_logits, class_labels, indices), - } - # in case of auxiliary losses, we repeat this process with the output of each intermediate layer. - if auxiliary_predictions is not None: - for idx, aux_outputs in enumerate(auxiliary_predictions): - masks_queries_logits = aux_outputs["masks_queries_logits"] - class_queries_logits = aux_outputs["class_queries_logits"] - loss_dict = self.forward(masks_queries_logits, class_queries_logits, mask_labels, class_labels) - loss_dict = {f"{key}_{idx}": value for key, value in loss_dict.items()} - losses.update(loss_dict) - - return losses - - def get_num_masks(self, class_labels: torch.Tensor, device: torch.device) -> torch.Tensor: - """ - Computes the average number of target masks across the batch, for normalization purposes. - """ - num_masks = sum([len(classes) for classes in class_labels]) - num_masks_pt = torch.as_tensor([num_masks], dtype=torch.float, device=device) - return num_masks_pt - - -class MaskFormerFPNConvLayer(nn.Module): - def __init__(self, in_features: int, out_features: int, kernel_size: int = 3, padding: int = 1): - """ - A basic module that executes conv - norm - in sequence used in MaskFormer. - - Args: - in_features (`int`): - The number of input features (channels). - out_features (`int`): - The number of outputs features (channels). - """ - super().__init__() - self.layers = [ - nn.Conv2d(in_features, out_features, kernel_size=kernel_size, padding=padding, bias=False), - nn.GroupNorm(32, out_features), - nn.ReLU(inplace=True), - ] - for i, layer in enumerate(self.layers): - # Provide backwards compatibility from when the class inherited from nn.Sequential - # In nn.Sequential subclasses, the name given to the layer is its index in the sequence. - # In nn.Module subclasses they derived from the instance attribute they are assigned to e.g. - # self.my_layer_name = Layer() - # We can't give instance attributes integer names i.e. self.0 is not permitted and so need to register - # explicitly - self.add_module(str(i), layer) - - def forward(self, input: Tensor) -> Tensor: - hidden_state = input - for layer in self.layers: - hidden_state = layer(hidden_state) - return hidden_state - - -class MaskFormerFPNLayer(nn.Module): - def __init__(self, in_features: int, lateral_features: int): - """ - A Feature Pyramid Network Layer (FPN) layer. It creates a feature map by aggregating features from the previous - and backbone layer. Due to the spatial mismatch, the tensor coming from the previous layer is upsampled. - - Args: - in_features (`int`): - The number of input features (channels). - lateral_features (`int`): - The number of lateral features (channels). 
- """ - super().__init__() - self.proj = nn.Sequential( - nn.Conv2d(lateral_features, in_features, kernel_size=1, padding=0, bias=False), - nn.GroupNorm(32, in_features), - ) - - self.block = MaskFormerFPNConvLayer(in_features, in_features) - - def forward(self, down: Tensor, left: Tensor) -> Tensor: - left = self.proj(left) - down = nn.functional.interpolate(down, size=left.shape[-2:], mode="nearest") - down += left - down = self.block(down) - return down - - -class MaskFormerFPNModel(nn.Module): - def __init__(self, in_features: int, lateral_widths: List[int], feature_size: int = 256): - """ - Feature Pyramid Network, given an input tensor and a set of feature map of different feature/spatial size, it - creates a list of feature maps with the same feature size. - - Args: - in_features (`int`): - The number of input features (channels). - lateral_widths (`List[int]`): - A list with the features (channels) size of each lateral connection. - feature_size (int, *optional*, defaults to 256): - The features (channels) of the resulting feature maps. - """ - super().__init__() - self.stem = MaskFormerFPNConvLayer(in_features, feature_size) - self.layers = nn.Sequential( - *[MaskFormerFPNLayer(feature_size, lateral_width) for lateral_width in lateral_widths[::-1]] - ) - - def forward(self, features: List[Tensor]) -> List[Tensor]: - fpn_features = [] - last_feature = features[-1] - other_features = features[:-1] - output = self.stem(last_feature) - for layer, left in zip(self.layers, other_features[::-1]): - output = layer(output, left) - fpn_features.append(output) - return fpn_features - - -class MaskFormerPixelDecoder(nn.Module): - def __init__(self, *args, feature_size: int = 256, mask_feature_size: int = 256, **kwargs): - r""" - Pixel Decoder Module proposed in [Per-Pixel Classification is Not All You Need for Semantic - Segmentation](https://arxiv.org/abs/2107.06278). It first runs the backbone's features into a Feature Pyramid - Network creating a list of feature maps. Then, it projects the last one to the correct `mask_size`. - - Args: - feature_size (`int`, *optional*, defaults to 256): - The feature size (channel dimension) of the FPN feature maps. - mask_feature_size (`int`, *optional*, defaults to 256): - The features (channels) of the target masks size \\(C_{\epsilon}\\) in the paper. - """ - super().__init__() - - self.fpn = MaskFormerFPNModel(*args, feature_size=feature_size, **kwargs) - self.mask_projection = nn.Conv2d(feature_size, mask_feature_size, kernel_size=3, padding=1) - - def forward( - self, features: List[Tensor], output_hidden_states: bool = False, return_dict: bool = True - ) -> MaskFormerPixelDecoderOutput: - fpn_features = self.fpn(features) - # we use the last feature map - last_feature_projected = self.mask_projection(fpn_features[-1]) - - if not return_dict: - return (last_feature_projected, tuple(fpn_features)) if output_hidden_states else (last_feature_projected,) - - return MaskFormerPixelDecoderOutput( - last_hidden_state=last_feature_projected, hidden_states=tuple(fpn_features) if output_hidden_states else () - ) - - -# copied and adapted from original implementation, also practically equal to DetrSinePositionEmbedding -class MaskFormerSinePositionEmbedding(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one used by the Attention is all you - need paper, generalized to work on images. 
- """ - - def __init__( - self, num_pos_feats: int = 64, temperature: int = 10000, normalize: bool = False, scale: Optional[float] = None - ): - super().__init__() - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - self.num_pos_feats = num_pos_feats - self.temperature = temperature - self.normalize = normalize - self.scale = 2 * math.pi if scale is None else scale - - def forward(self, x: Tensor, mask: Optional[Tensor] = None) -> Tensor: - if mask is None: - mask = torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool) - not_mask = (~mask).to(x.dtype) - y_embed = not_mask.cumsum(1) - x_embed = not_mask.cumsum(2) - if self.normalize: - eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale - - dim_t = torch.arange(self.num_pos_feats, dtype=x.dtype, device=x.device) - dim_t = self.temperature ** (2 * torch.div(dim_t, 2, rounding_mode="floor") / self.num_pos_feats) - - pos_x = x_embed[:, :, :, None] / dim_t - pos_y = y_embed[:, :, :, None] / dim_t - pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3) - pos_y = torch.stack((pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - return pos - - -class PredictionBlock(nn.Module): - def __init__(self, in_dim: int, out_dim: int, activation: nn.Module) -> None: - super().__init__() - self.layers = [nn.Linear(in_dim, out_dim), activation] - # Maintain submodule indexing as if part of a Sequential block - for i, layer in enumerate(self.layers): - self.add_module(str(i), layer) - - def forward(self, input: Tensor) -> Tensor: - hidden_state = input - for layer in self.layers: - hidden_state = layer(hidden_state) - return hidden_state - - -class MaskformerMLPPredictionHead(nn.Module): - def __init__(self, input_dim: int, hidden_dim: int, output_dim: int, num_layers: int = 3): - """ - A classic Multi Layer Perceptron (MLP). - - Args: - input_dim (`int`): - The input dimensions. - hidden_dim (`int`): - The hidden dimensions. - output_dim (`int`): - The output dimensions. - num_layers (int, *optional*, defaults to 3): - The number of layers. - """ - super().__init__() - in_dims = [input_dim] + [hidden_dim] * (num_layers - 1) - out_dims = [hidden_dim] * (num_layers - 1) + [output_dim] - - self.layers = [] - for i, (in_dim, out_dim) in enumerate(zip(in_dims, out_dims)): - activation = nn.ReLU() if i < num_layers - 1 else nn.Identity() - layer = PredictionBlock(in_dim, out_dim, activation=activation) - self.layers.append(layer) - # Provide backwards compatibility from when the class inherited from nn.Sequential - # In nn.Sequential subclasses, the name given to the layer is its index in the sequence. - # In nn.Module subclasses they derived from the instance attribute they are assigned to e.g. - # self.my_layer_name = Layer() - # We can't give instance attributes integer names i.e. 
self.0 is not permitted and so need to register - # explicitly - self.add_module(str(i), layer) - - def forward(self, input: Tensor) -> Tensor: - hidden_state = input - for layer in self.layers: - hidden_state = layer(hidden_state) - return hidden_state - - -class MaskFormerPixelLevelModule(nn.Module): - def __init__(self, config: MaskFormerConfig): - """ - Pixel Level Module proposed in [Per-Pixel Classification is Not All You Need for Semantic - Segmentation](https://arxiv.org/abs/2107.06278). It runs the input image through a backbone and a pixel - decoder, generating an image feature map and pixel embeddings. - - Args: - config ([`MaskFormerConfig`]): - The configuration used to instantiate this model. - """ - super().__init__() - - # TODD: add method to load pretrained weights of backbone - backbone_config = config.backbone_config - if backbone_config.model_type == "swin": - # for backwards compatibility - backbone_config = MaskFormerSwinConfig.from_dict(backbone_config.to_dict()) - backbone_config.out_features = ["stage1", "stage2", "stage3", "stage4"] - self.encoder = AutoBackbone.from_config(backbone_config) - - feature_channels = self.encoder.channels - self.decoder = MaskFormerPixelDecoder( - in_features=feature_channels[-1], - feature_size=config.fpn_feature_size, - mask_feature_size=config.mask_feature_size, - lateral_widths=feature_channels[:-1], - ) - - def forward( - self, pixel_values: Tensor, output_hidden_states: bool = False, return_dict: bool = True - ) -> MaskFormerPixelLevelModuleOutput: - features = self.encoder(pixel_values).feature_maps - decoder_output = self.decoder(features, output_hidden_states, return_dict=return_dict) - - if not return_dict: - last_hidden_state = decoder_output[0] - outputs = (features[-1], last_hidden_state) - if output_hidden_states: - hidden_states = decoder_output[1] - outputs = outputs + (tuple(features),) + (hidden_states,) - return outputs - - return MaskFormerPixelLevelModuleOutput( - # the last feature is actually the output from the last layer - encoder_last_hidden_state=features[-1], - decoder_last_hidden_state=decoder_output.last_hidden_state, - encoder_hidden_states=tuple(features) if output_hidden_states else (), - decoder_hidden_states=decoder_output.hidden_states if output_hidden_states else (), - ) - - -class MaskFormerTransformerModule(nn.Module): - """ - The MaskFormer's transformer module. 
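The `MaskFormerSinePositionEmbedding` defined a little earlier, and used by this transformer module, interleaves sines and cosines of scaled coordinates, separately for the x and y axes of the feature map. For intuition, here is a tiny 1-D version of the same construction; it is only an illustration, not the module's exact output:

import torch

def sine_embed_1d(length: int, dim: int, temperature: float = 10000.0) -> torch.Tensor:
    """Illustrative 1-D sinusoidal embedding of shape (length, dim)."""
    position = torch.arange(length, dtype=torch.float32)[:, None]              # (length, 1)
    dim_t = torch.arange(dim, dtype=torch.float32)
    dim_t = temperature ** (2 * torch.div(dim_t, 2, rounding_mode="floor") / dim)
    angles = position / dim_t                                                  # (length, dim)
    out = torch.zeros(length, dim)
    out[:, 0::2] = angles[:, 0::2].sin()   # even dimensions use sine
    out[:, 1::2] = angles[:, 1::2].cos()   # odd dimensions use cosine
    return out

print(sine_embed_1d(4, 8).shape)  # torch.Size([4, 8])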
- """ - - def __init__(self, in_features: int, config: MaskFormerConfig): - super().__init__() - hidden_size = config.decoder_config.hidden_size - should_project = in_features != hidden_size - self.position_embedder = MaskFormerSinePositionEmbedding(num_pos_feats=hidden_size // 2, normalize=True) - self.queries_embedder = nn.Embedding(config.decoder_config.num_queries, hidden_size) - self.input_projection = nn.Conv2d(in_features, hidden_size, kernel_size=1) if should_project else None - self.decoder = DetrDecoder(config=config.decoder_config) - - def forward( - self, - image_features: Tensor, - output_hidden_states: bool = False, - output_attentions: bool = False, - return_dict: Optional[bool] = None, - ) -> DetrDecoderOutput: - if self.input_projection is not None: - image_features = self.input_projection(image_features) - object_queries = self.position_embedder(image_features) - # repeat the queries "q c -> b q c" - batch_size = image_features.shape[0] - queries_embeddings = self.queries_embedder.weight.unsqueeze(0).repeat(batch_size, 1, 1) - inputs_embeds = torch.zeros_like(queries_embeddings, requires_grad=True) - - batch_size, num_channels, height, width = image_features.shape - # rearrange both image_features and object_queries "b c h w -> b (h w) c" - image_features = image_features.view(batch_size, num_channels, height * width).permute(0, 2, 1) - object_queries = object_queries.view(batch_size, num_channels, height * width).permute(0, 2, 1) - - decoder_output: DetrDecoderOutput = self.decoder( - inputs_embeds=inputs_embeds, - attention_mask=None, - encoder_hidden_states=image_features, - encoder_attention_mask=None, - object_queries=object_queries, - query_position_embeddings=queries_embeddings, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - return decoder_output - - -MASKFORMER_START_DOCSTRING = r""" - This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use - it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and - behavior. - - Parameters: - config ([`MaskFormerConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -MASKFORMER_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See - [`MaskFormerImageProcessor.__call__`] for details. - pixel_mask (`torch.LongTensor` of shape `(batch_size, height, width)`, *optional*): - Mask to avoid performing attention on padding pixel values. Mask values selected in `[0, 1]`: - - - 1 for pixels that are real (i.e. **not masked**), - - 0 for pixels that are padding (i.e. **masked**). - - [What are attention masks?](../glossary#attention-mask) - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of Detr's decoder attention layers. - return_dict (`bool`, *optional*): - Whether or not to return a [`~MaskFormerModelOutput`] instead of a plain tuple. 
-""" - - -class MaskFormerPreTrainedModel(PreTrainedModel): - config_class = MaskFormerConfig - base_model_prefix = "model" - main_input_name = "pixel_values" - - def _init_weights(self, module: nn.Module): - xavier_std = self.config.init_xavier_std - std = self.config.init_std - if isinstance(module, MaskFormerTransformerModule): - if module.input_projection is not None: - nn.init.xavier_uniform_(module.input_projection.weight, gain=xavier_std) - nn.init.constant_(module.input_projection.bias, 0) - # FPN - elif isinstance(module, MaskFormerFPNModel): - nn.init.xavier_uniform_(module.stem.get_submodule("0").weight, gain=xavier_std) - - elif isinstance(module, MaskFormerFPNLayer): - nn.init.xavier_uniform_(module.proj[0].weight, gain=xavier_std) - - elif isinstance(module, MaskFormerFPNConvLayer): - nn.init.xavier_uniform_(module.get_submodule("0").weight, gain=xavier_std) - # The MLP head - elif isinstance(module, MaskformerMLPPredictionHead): - # I was not able to find the correct initializer in the original implementation - # we'll use xavier - for submodule in module.modules(): - if isinstance(submodule, nn.Linear): - nn.init.xavier_uniform_(submodule.weight, gain=xavier_std) - nn.init.constant_(submodule.bias, 0) - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - # copied from DETR - if isinstance(module, (nn.Linear, nn.Conv2d, nn.BatchNorm2d)): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=std) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=std) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, MaskFormerPixelLevelModule): - module.encoder.gradient_checkpointing = value - if isinstance(module, DetrDecoder): - module.gradient_checkpointing = value - - -@add_start_docstrings( - "The bare MaskFormer Model outputting raw hidden-states without any specific head on top.", - MASKFORMER_START_DOCSTRING, -) -class MaskFormerModel(MaskFormerPreTrainedModel): - def __init__(self, config: MaskFormerConfig): - super().__init__(config) - self.pixel_level_module = MaskFormerPixelLevelModule(config) - self.transformer_module = MaskFormerTransformerModule( - in_features=self.pixel_level_module.encoder.channels[-1], config=config - ) - - self.post_init() - - @add_start_docstrings_to_model_forward(MASKFORMER_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=MaskFormerModelOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - pixel_values: Tensor, - pixel_mask: Optional[Tensor] = None, - output_hidden_states: Optional[bool] = None, - output_attentions: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> MaskFormerModelOutput: - r""" - Returns: - - Examples: - - ```python - >>> from transformers import AutoImageProcessor, MaskFormerModel - >>> from PIL import Image - >>> import requests - - >>> # load MaskFormer fine-tuned on ADE20k semantic segmentation - >>> image_processor = AutoImageProcessor.from_pretrained("facebook/maskformer-swin-base-ade") - >>> model = MaskFormerModel.from_pretrained("facebook/maskformer-swin-base-ade") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> 
inputs = image_processor(image, return_tensors="pt") - - >>> # forward pass - >>> outputs = model(**inputs) - - >>> # the decoder of MaskFormer outputs hidden states of shape (batch_size, num_queries, hidden_size) - >>> transformer_decoder_last_hidden_state = outputs.transformer_decoder_last_hidden_state - >>> list(transformer_decoder_last_hidden_state.shape) - [1, 100, 256] - ```""" - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - batch_size, _, height, width = pixel_values.shape - - if pixel_mask is None: - pixel_mask = torch.ones((batch_size, height, width), device=pixel_values.device) - - pixel_level_module_output = self.pixel_level_module( - pixel_values, output_hidden_states, return_dict=return_dict - ) - image_features = pixel_level_module_output[0] - pixel_embeddings = pixel_level_module_output[1] - - transformer_module_output = self.transformer_module(image_features, output_hidden_states, output_attentions) - queries = transformer_module_output.last_hidden_state - - encoder_hidden_states = None - pixel_decoder_hidden_states = None - transformer_decoder_hidden_states = None - hidden_states = None - - if output_hidden_states: - encoder_hidden_states = pixel_level_module_output[2] - pixel_decoder_hidden_states = pixel_level_module_output[3] - transformer_decoder_hidden_states = transformer_module_output[1] - hidden_states = encoder_hidden_states + pixel_decoder_hidden_states + transformer_decoder_hidden_states - - output = MaskFormerModelOutput( - encoder_last_hidden_state=image_features, - pixel_decoder_last_hidden_state=pixel_embeddings, - transformer_decoder_last_hidden_state=queries, - encoder_hidden_states=encoder_hidden_states, - pixel_decoder_hidden_states=pixel_decoder_hidden_states, - transformer_decoder_hidden_states=transformer_decoder_hidden_states, - hidden_states=hidden_states, - attentions=transformer_module_output.attentions, - ) - - if not return_dict: - output = tuple(v for v in output.values()) - - return output - - -class MaskFormerForInstanceSegmentation(MaskFormerPreTrainedModel): - def __init__(self, config: MaskFormerConfig): - super().__init__(config) - self.model = MaskFormerModel(config) - hidden_size = config.decoder_config.hidden_size - # + 1 because we add the "null" class - self.class_predictor = nn.Linear(hidden_size, config.num_labels + 1) - self.mask_embedder = MaskformerMLPPredictionHead(hidden_size, hidden_size, config.mask_feature_size) - - self.matcher = MaskFormerHungarianMatcher( - cost_class=1.0, cost_dice=config.dice_weight, cost_mask=config.mask_weight - ) - - self.weight_dict: Dict[str, float] = { - "loss_cross_entropy": config.cross_entropy_weight, - "loss_mask": config.mask_weight, - "loss_dice": config.dice_weight, - } - - self.criterion = MaskFormerLoss( - config.num_labels, - matcher=self.matcher, - weight_dict=self.weight_dict, - eos_coef=config.no_object_weight, - ) - - self.post_init() - - def get_loss_dict( - self, - masks_queries_logits: Tensor, - class_queries_logits: Tensor, - mask_labels: Tensor, - class_labels: Tensor, - auxiliary_logits: Dict[str, Tensor], - ) -> Dict[str, Tensor]: - loss_dict: Dict[str, Tensor] = self.criterion( - masks_queries_logits, 
class_queries_logits, mask_labels, class_labels, auxiliary_logits - ) - # weight each loss by `self.weight_dict[]` including auxiliary losses - for key, weight in self.weight_dict.items(): - for loss_key, loss in loss_dict.items(): - if key in loss_key: - loss *= weight - - return loss_dict - - def get_loss(self, loss_dict: Dict[str, Tensor]) -> Tensor: - return sum(loss_dict.values()) - - def get_logits(self, outputs: MaskFormerModelOutput) -> Tuple[Tensor, Tensor, Dict[str, Tensor]]: - pixel_embeddings = outputs.pixel_decoder_last_hidden_state - # get the auxiliary predictions (one for each decoder's layer) - auxiliary_logits: List[str, Tensor] = [] - # This code is a little bit cumbersome, an improvement can be to return a list of predictions. If we have auxiliary loss then we are going to return more than one element in the list - if self.config.use_auxiliary_loss: - stacked_transformer_decoder_outputs = torch.stack(outputs.transformer_decoder_hidden_states) - classes = self.class_predictor(stacked_transformer_decoder_outputs) - class_queries_logits = classes[-1] - # get the masks - mask_embeddings = self.mask_embedder(stacked_transformer_decoder_outputs) - - # Equivalent to einsum('lbqc, bchw -> lbqhw') but jit friendly - num_embeddings, batch_size, num_queries, num_channels = mask_embeddings.shape - _, _, height, width = pixel_embeddings.shape - binaries_masks = torch.zeros( - (num_embeddings, batch_size, num_queries, height, width), device=mask_embeddings.device - ) - for c in range(num_channels): - binaries_masks += mask_embeddings[..., c][..., None, None] * pixel_embeddings[None, :, None, c] - - masks_queries_logits = binaries_masks[-1] - # go til [:-1] because the last one is always used - for aux_binary_masks, aux_classes in zip(binaries_masks[:-1], classes[:-1]): - auxiliary_logits.append( - {"masks_queries_logits": aux_binary_masks, "class_queries_logits": aux_classes} - ) - - else: - transformer_decoder_hidden_states = outputs.transformer_decoder_last_hidden_state - classes = self.class_predictor(transformer_decoder_hidden_states) - class_queries_logits = classes - # get the masks - mask_embeddings = self.mask_embedder(transformer_decoder_hidden_states) - # sum up over the channels - - # Equivalent to einsum('bqc, bchw -> bqhw') but jit friendly - batch_size, num_queries, num_channels = mask_embeddings.shape - _, _, height, width = pixel_embeddings.shape - masks_queries_logits = torch.zeros((batch_size, num_queries, height, width), device=mask_embeddings.device) - for c in range(num_channels): - masks_queries_logits += mask_embeddings[..., c][..., None, None] * pixel_embeddings[:, None, c] - - return class_queries_logits, masks_queries_logits, auxiliary_logits - - @add_start_docstrings_to_model_forward(MASKFORMER_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=MaskFormerForInstanceSegmentationOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - pixel_values: Tensor, - mask_labels: Optional[List[Tensor]] = None, - class_labels: Optional[List[Tensor]] = None, - pixel_mask: Optional[Tensor] = None, - output_auxiliary_logits: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - output_attentions: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> MaskFormerForInstanceSegmentationOutput: - r""" - mask_labels (`List[torch.Tensor]`, *optional*): - List of mask labels of shape `(num_labels, height, width)` to be fed to a model - class_labels (`List[torch.LongTensor]`, *optional*): - list of target class labels of shape 
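As the comments in `get_logits` above note, the per-channel loops are just a jit-friendly spelling of an einsum. A quick self-contained check of that equivalence with random tensors (all shapes made up):

import torch

batch_size, num_queries, num_channels, height, width = 2, 3, 4, 5, 6
mask_embeddings = torch.randn(batch_size, num_queries, num_channels)
pixel_embeddings = torch.randn(batch_size, num_channels, height, width)

# loop form, as in get_logits
masks_loop = torch.zeros(batch_size, num_queries, height, width)
for c in range(num_channels):
    masks_loop += mask_embeddings[..., c][..., None, None] * pixel_embeddings[:, None, c]

# einsum form
masks_einsum = torch.einsum("bqc,bchw->bqhw", mask_embeddings, pixel_embeddings)
print(torch.allclose(masks_loop, masks_einsum, atol=1e-6))  # True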
`(num_labels, height, width)` to be fed to a model. They identify the - labels of `mask_labels`, e.g. the label of `mask_labels[i][j]` if `class_labels[i][j]`. - - Returns: - - Examples: - - Semantic segmentation example: - - ```python - >>> from transformers import AutoImageProcessor, MaskFormerForInstanceSegmentation - >>> from PIL import Image - >>> import requests - - >>> # load MaskFormer fine-tuned on ADE20k semantic segmentation - >>> image_processor = AutoImageProcessor.from_pretrained("facebook/maskformer-swin-base-ade") - >>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade") - - >>> url = ( - ... "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg" - ... ) - >>> image = Image.open(requests.get(url, stream=True).raw) - >>> inputs = image_processor(images=image, return_tensors="pt") - - >>> outputs = model(**inputs) - >>> # model predicts class_queries_logits of shape `(batch_size, num_queries)` - >>> # and masks_queries_logits of shape `(batch_size, num_queries, height, width)` - >>> class_queries_logits = outputs.class_queries_logits - >>> masks_queries_logits = outputs.masks_queries_logits - - >>> # you can pass them to image_processor for postprocessing - >>> predicted_semantic_map = image_processor.post_process_semantic_segmentation( - ... outputs, target_sizes=[image.size[::-1]] - ... )[0] - - >>> # we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs) - >>> list(predicted_semantic_map.shape) - [512, 683] - ``` - - Panoptic segmentation example: - - ```python - >>> from transformers import AutoImageProcessor, MaskFormerForInstanceSegmentation - >>> from PIL import Image - >>> import requests - - >>> # load MaskFormer fine-tuned on COCO panoptic segmentation - >>> image_processor = AutoImageProcessor.from_pretrained("facebook/maskformer-swin-base-coco") - >>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - >>> inputs = image_processor(images=image, return_tensors="pt") - - >>> outputs = model(**inputs) - >>> # model predicts class_queries_logits of shape `(batch_size, num_queries)` - >>> # and masks_queries_logits of shape `(batch_size, num_queries, height, width)` - >>> class_queries_logits = outputs.class_queries_logits - >>> masks_queries_logits = outputs.masks_queries_logits - - >>> # you can pass them to image_processor for postprocessing - >>> result = image_processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] - - >>> # we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs) - >>> predicted_panoptic_map = result["segmentation"] - >>> list(predicted_panoptic_map.shape) - [480, 640] - ``` - """ - - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - raw_outputs = self.model( - pixel_values, - pixel_mask, - output_hidden_states=output_hidden_states or self.config.use_auxiliary_loss, - return_dict=return_dict, - output_attentions=output_attentions, - ) - # We need to have raw_outputs optionally be returned as a 
dict to use torch.compile. For backwards - # compatibility we convert to a dataclass for the rest of the model logic - outputs = MaskFormerModelOutput( - encoder_last_hidden_state=raw_outputs[0], - pixel_decoder_last_hidden_state=raw_outputs[1], - transformer_decoder_last_hidden_state=raw_outputs[2], - encoder_hidden_states=raw_outputs[3] if output_hidden_states else None, - pixel_decoder_hidden_states=raw_outputs[4] if output_hidden_states else None, - transformer_decoder_hidden_states=raw_outputs[5] if output_hidden_states else None, - hidden_states=raw_outputs[6] if output_hidden_states else None, - attentions=raw_outputs[-1] if output_attentions else None, - ) - - loss, loss_dict, auxiliary_logits = None, None, None - - class_queries_logits, masks_queries_logits, auxiliary_logits = self.get_logits(outputs) - - if mask_labels is not None and class_labels is not None: - loss_dict: Dict[str, Tensor] = self.get_loss_dict( - masks_queries_logits, class_queries_logits, mask_labels, class_labels, auxiliary_logits - ) - loss = self.get_loss(loss_dict) - - output_auxiliary_logits = ( - self.config.output_auxiliary_logits if output_auxiliary_logits is None else output_auxiliary_logits - ) - if not output_auxiliary_logits: - auxiliary_logits = None - - if not return_dict: - output = tuple( - v - for v in (loss, class_queries_logits, masks_queries_logits, auxiliary_logits, *outputs.values()) - if v is not None - ) - return output - - return MaskFormerForInstanceSegmentationOutput( - loss=loss, - **outputs, - class_queries_logits=class_queries_logits, - masks_queries_logits=masks_queries_logits, - auxiliary_logits=auxiliary_logits, - ) diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/diffusion/solver.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/diffusion/solver.py deleted file mode 100644 index aaf0b21591b42fa903424f8d44fef88d7d791e57..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/diffusion/solver.py +++ /dev/null @@ -1,195 +0,0 @@ -import os -import time -import numpy as np -import torch -import librosa -from diffusion.logger.saver import Saver -from diffusion.logger import utils -from torch import autocast -from torch.cuda.amp import GradScaler - -def test(args, model, vocoder, loader_test, saver): - print(' [*] testing...') - model.eval() - - # losses - test_loss = 0. 
- - # intialization - num_batches = len(loader_test) - rtf_all = [] - - # run - with torch.no_grad(): - for bidx, data in enumerate(loader_test): - fn = data['name'][0].split("/")[-1] - speaker = data['name'][0].split("/")[-2] - print('--------') - print('{}/{} - {}'.format(bidx, num_batches, fn)) - - # unpack data - for k in data.keys(): - if not k.startswith('name'): - data[k] = data[k].to(args.device) - print('>>', data['name'][0]) - - # forward - st_time = time.time() - mel = model( - data['units'], - data['f0'], - data['volume'], - data['spk_id'], - gt_spec=None, - infer=True, - infer_speedup=args.infer.speedup, - method=args.infer.method) - signal = vocoder.infer(mel, data['f0']) - ed_time = time.time() - - # RTF - run_time = ed_time - st_time - song_time = signal.shape[-1] / args.data.sampling_rate - rtf = run_time / song_time - print('RTF: {} | {} / {}'.format(rtf, run_time, song_time)) - rtf_all.append(rtf) - - # loss - for i in range(args.train.batch_size): - loss = model( - data['units'], - data['f0'], - data['volume'], - data['spk_id'], - gt_spec=data['mel'], - infer=False) - test_loss += loss.item() - - # log mel - saver.log_spec(f"{speaker}_{fn}.wav", data['mel'], mel) - - # log audi - path_audio = data['name_ext'][0] - audio, sr = librosa.load(path_audio, sr=args.data.sampling_rate) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio) - audio = torch.from_numpy(audio).unsqueeze(0).to(signal) - saver.log_audio({f"{speaker}_{fn}_gt.wav": audio,f"{speaker}_{fn}_pred.wav": signal}) - # report - test_loss /= args.train.batch_size - test_loss /= num_batches - - # check - print(' [test_loss] test_loss:', test_loss) - print(' Real Time Factor', np.mean(rtf_all)) - return test_loss - - -def train(args, initial_global_step, model, optimizer, scheduler, vocoder, loader_train, loader_test): - # saver - saver = Saver(args, initial_global_step=initial_global_step) - - # model size - params_count = utils.get_network_paras_amount({'model': model}) - saver.log_info('--- model size ---') - saver.log_info(params_count) - - # run - num_batches = len(loader_train) - model.train() - saver.log_info('======= start training =======') - scaler = GradScaler() - if args.train.amp_dtype == 'fp32': - dtype = torch.float32 - elif args.train.amp_dtype == 'fp16': - dtype = torch.float16 - elif args.train.amp_dtype == 'bf16': - dtype = torch.bfloat16 - else: - raise ValueError(' [x] Unknown amp_dtype: ' + args.train.amp_dtype) - saver.log_info("epoch|batch_idx/num_batches|output_dir|batch/s|lr|time|step") - for epoch in range(args.train.epochs): - for batch_idx, data in enumerate(loader_train): - saver.global_step_increment() - optimizer.zero_grad() - - # unpack data - for k in data.keys(): - if not k.startswith('name'): - data[k] = data[k].to(args.device) - - # forward - if dtype == torch.float32: - loss = model(data['units'].float(), data['f0'], data['volume'], data['spk_id'], - aug_shift = data['aug_shift'], gt_spec=data['mel'].float(), infer=False) - else: - with autocast(device_type=args.device, dtype=dtype): - loss = model(data['units'], data['f0'], data['volume'], data['spk_id'], - aug_shift = data['aug_shift'], gt_spec=data['mel'], infer=False) - - # handle nan loss - if torch.isnan(loss): - raise ValueError(' [x] nan loss ') - else: - # backpropagate - if dtype == torch.float32: - loss.backward() - optimizer.step() - else: - scaler.scale(loss).backward() - scaler.step(optimizer) - scaler.update() - scheduler.step() - - # log loss - if saver.global_step % args.train.interval_log == 0: - 
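The training loop above wraps the forward pass in `autocast` and routes the backward pass through a `GradScaler` whenever a half-precision dtype is selected. A stripped-down sketch of that mixed-precision update on a toy model (assumes a CUDA device is available; the model, data, and sizes are invented):

import torch
from torch import nn
from torch.cuda.amp import GradScaler, autocast

model = nn.Linear(10, 1).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = GradScaler()

x = torch.randn(8, 10, device="cuda")
y = torch.randn(8, 1, device="cuda")

optimizer.zero_grad()
with autocast(dtype=torch.float16):          # forward pass in fp16
    loss = nn.functional.mse_loss(model(x), y)
scaler.scale(loss).backward()                # scale the loss to avoid fp16 gradient underflow
scaler.step(optimizer)                       # unscales gradients, then steps the optimizer
scaler.update()                              # adjusts the scale factor for the next iteration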
current_lr = optimizer.param_groups[0]['lr'] - saver.log_info( - 'epoch: {} | {:3d}/{:3d} | {} | batch/s: {:.2f} | lr: {:.6} | loss: {:.3f} | time: {} | step: {}'.format( - epoch, - batch_idx, - num_batches, - args.env.expdir, - args.train.interval_log/saver.get_interval_time(), - current_lr, - loss.item(), - saver.get_total_time(), - saver.global_step - ) - ) - - saver.log_value({ - 'train/loss': loss.item() - }) - - saver.log_value({ - 'train/lr': current_lr - }) - - # validation - if saver.global_step % args.train.interval_val == 0: - optimizer_save = optimizer if args.train.save_opt else None - - # save latest - saver.save_model(model, optimizer_save, postfix=f'{saver.global_step}') - last_val_step = saver.global_step - args.train.interval_val - if last_val_step % args.train.interval_force_save != 0: - saver.delete_model(postfix=f'{last_val_step}') - - # run testing set - test_loss = test(args, model, vocoder, loader_test, saver) - - # log loss - saver.log_info( - ' --- --- \nloss: {:.3f}. '.format( - test_loss, - ) - ) - - saver.log_value({ - 'validation/loss': test_loss - }) - - model.train() - - diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/onnxexport/model_onnx_speaker_mix.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/onnxexport/model_onnx_speaker_mix.py deleted file mode 100644 index 355e590da30a4651925ffb24938b8c2af558c098..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/onnxexport/model_onnx_speaker_mix.py +++ /dev/null @@ -1,350 +0,0 @@ -import torch -from torch import nn -from torch.nn import functional as F -import modules.attentions as attentions -import modules.commons as commons -import modules.modules as modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -import utils -from modules.commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = 
nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - kernel_size, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.gin_channels = gin_channels - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_mask, f0=None, z=None): - x = x + self.f0_emb(f0).transpose(1, 2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + z * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, 
fmap - - -class F0Decoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=0): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.spk_channels = spk_channels - - self.prenet = nn.Conv1d(hidden_channels, hidden_channels, 3, padding=1) - self.decoder = attentions.FFT( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.f0_prenet = nn.Conv1d(1, hidden_channels, 3, padding=1) - self.cond = nn.Conv1d(spk_channels, hidden_channels, 1) - - def forward(self, x, norm_f0, x_mask, spk_emb=None): - x = torch.detach(x) - if spk_emb is not None: - x = x + self.cond(spk_emb) - x += self.f0_prenet(norm_f0) - x = self.prenet(x) * x_mask - x = self.decoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - sampling_rate=44100, - **kwargs): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.pre = nn.Conv1d(ssl_dim, hidden_channels, kernel_size=5, padding=2) - - self.enc_p = TextEncoder( - inter_channels, - hidden_channels, - filter_channels=filter_channels, - n_heads=n_heads, - n_layers=n_layers, - kernel_size=kernel_size, - p_dropout=p_dropout - ) - hps = { - "sampling_rate": sampling_rate, - "inter_channels": inter_channels, - "resblock": resblock, - "resblock_kernel_sizes": resblock_kernel_sizes, - "resblock_dilation_sizes": resblock_dilation_sizes, - "upsample_rates": upsample_rates, - "upsample_initial_channel": upsample_initial_channel, - "upsample_kernel_sizes": upsample_kernel_sizes, - "gin_channels": gin_channels, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - self.f0_decoder = F0Decoder( - 1, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=gin_channels - ) - self.emb_uv = nn.Embedding(2, hidden_channels) - self.predict_f0 = False - self.speaker_map = [] - self.export_mix = False - - def export_chara_mix(self, n_speakers_mix): - self.speaker_map = 
torch.zeros((n_speakers_mix, 1, 1, self.gin_channels)) - for i in range(n_speakers_mix): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - self.export_mix = True - - def forward(self, c, f0, mel2ph, uv, noise=None, g=None, cluster_infer_ratio=0.1): - decoder_inp = F.pad(c, [0, 0, 1, 0]) - mel2ph_ = mel2ph.unsqueeze(2).repeat([1, 1, c.shape[-1]]) - c = torch.gather(decoder_inp, 1, mel2ph_).transpose(1, 2) # [B, T, H] - - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - - if self.export_mix: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype) - x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1, 2) - - if self.predict_f0: - lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500 - norm_lf0 = utils.normalize_f0(lf0, x_mask, uv, random_scale=False) - pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g) - f0 = (700 * (torch.pow(10, pred_lf0 * 500 / 2595) - 1)).squeeze(1) - - z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), z=noise) - z = self.flow(z_p, c_mask, g=g, reverse=True) - o = self.dec(z * c_mask, g=g, f0=f0) - return o diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/vencoder/whisper/__init__.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/vencoder/whisper/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/lvis_v1_categories.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/lvis_v1_categories.py deleted file mode 100644 index 7374e6968bb006f5d8c49e75d9d3b31ea3d77d05..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/lvis_v1_categories.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# Autogen with -# with open("lvis_v1_val.json", "r") as f: -# a = json.load(f) -# c = a["categories"] -# for x in c: -# del x["image_count"] -# del x["instance_count"] -# LVIS_CATEGORIES = repr(c) + " # noqa" -# with open("/tmp/lvis_categories.py", "wt") as f: -# f.write(f"LVIS_CATEGORIES = {LVIS_CATEGORIES}") -# Then paste the contents of that file below - -# fmt: off -LVIS_CATEGORIES = [{'frequency': 'c', 'synset': 'aerosol.n.02', 'synonyms': ['aerosol_can', 'spray_can'], 'id': 1, 'def': 'a dispenser that holds a substance under pressure', 'name': 'aerosol_can'}, {'frequency': 'f', 'synset': 'air_conditioner.n.01', 'synonyms': ['air_conditioner'], 'id': 2, 'def': 'a machine that keeps air cool and dry', 'name': 'air_conditioner'}, {'frequency': 'f', 'synset': 'airplane.n.01', 'synonyms': ['airplane', 'aeroplane'], 'id': 3, 'def': 'an aircraft that has a fixed wing and is powered by propellers or jets', 'name': 'airplane'}, {'frequency': 'f', 'synset': 'alarm_clock.n.01', 'synonyms': ['alarm_clock'], 'id': 4, 'def': 'a clock that wakes a sleeper at some preset time', 'name': 'alarm_clock'}, {'frequency': 'c', 'synset': 'alcohol.n.01', 'synonyms': ['alcohol', 'alcoholic_beverage'], 'id': 5, 'def': 'a liquor or brew containing alcohol as the active agent', 'name': 'alcohol'}, {'frequency': 'c', 'synset': 'alligator.n.02', 'synonyms': ['alligator', 'gator'], 'id': 6, 'def': 'amphibious reptiles related to crocodiles but with shorter broader snouts', 'name': 'alligator'}, {'frequency': 'c', 'synset': 'almond.n.02', 'synonyms': ['almond'], 'id': 7, 'def': 'oval-shaped edible seed of the almond tree', 'name': 'almond'}, {'frequency': 'c', 'synset': 'ambulance.n.01', 'synonyms': ['ambulance'], 'id': 8, 'def': 'a vehicle that takes people to and from hospitals', 'name': 'ambulance'}, {'frequency': 'c', 'synset': 'amplifier.n.01', 'synonyms': ['amplifier'], 'id': 9, 'def': 'electronic equipment that increases strength of signals', 'name': 'amplifier'}, {'frequency': 'c', 'synset': 'anklet.n.03', 'synonyms': ['anklet', 'ankle_bracelet'], 'id': 10, 'def': 'an ornament worn around the ankle', 'name': 'anklet'}, {'frequency': 'f', 'synset': 'antenna.n.01', 'synonyms': ['antenna', 'aerial', 'transmitting_aerial'], 'id': 11, 'def': 'an electrical device that sends or receives radio or television signals', 'name': 'antenna'}, {'frequency': 'f', 'synset': 'apple.n.01', 'synonyms': ['apple'], 'id': 12, 'def': 'fruit with red or yellow or green skin and sweet to tart crisp whitish flesh', 'name': 'apple'}, {'frequency': 'r', 'synset': 'applesauce.n.01', 'synonyms': ['applesauce'], 'id': 13, 'def': 'puree of stewed apples usually sweetened and spiced', 'name': 'applesauce'}, {'frequency': 'r', 'synset': 'apricot.n.02', 'synonyms': ['apricot'], 'id': 14, 'def': 'downy yellow to rosy-colored fruit resembling a small peach', 'name': 'apricot'}, {'frequency': 'f', 'synset': 'apron.n.01', 'synonyms': ['apron'], 'id': 15, 'def': 'a garment of cloth that is tied about the waist and worn to protect clothing', 'name': 'apron'}, {'frequency': 'c', 'synset': 'aquarium.n.01', 'synonyms': ['aquarium', 'fish_tank'], 'id': 16, 'def': 'a tank/pool/bowl filled with water for keeping live fish and underwater animals', 'name': 'aquarium'}, {'frequency': 'r', 'synset': 'arctic.n.02', 'synonyms': ['arctic_(type_of_shoe)', 'galosh', 'golosh', 'rubber_(type_of_shoe)', 'gumshoe'], 'id': 17, 'def': 'a waterproof overshoe that protects shoes from water or snow', 'name': 'arctic_(type_of_shoe)'}, {'frequency': 'c', 'synset': 
'armband.n.02', 'synonyms': ['armband'], 'id': 18, 'def': 'a band worn around the upper arm', 'name': 'armband'}, {'frequency': 'f', 'synset': 'armchair.n.01', 'synonyms': ['armchair'], 'id': 19, 'def': 'chair with a support on each side for arms', 'name': 'armchair'}, {'frequency': 'r', 'synset': 'armoire.n.01', 'synonyms': ['armoire'], 'id': 20, 'def': 'a large wardrobe or cabinet', 'name': 'armoire'}, {'frequency': 'r', 'synset': 'armor.n.01', 'synonyms': ['armor', 'armour'], 'id': 21, 'def': 'protective covering made of metal and used in combat', 'name': 'armor'}, {'frequency': 'c', 'synset': 'artichoke.n.02', 'synonyms': ['artichoke'], 'id': 22, 'def': 'a thistlelike flower head with edible fleshy leaves and heart', 'name': 'artichoke'}, {'frequency': 'f', 'synset': 'ashcan.n.01', 'synonyms': ['trash_can', 'garbage_can', 'wastebin', 'dustbin', 'trash_barrel', 'trash_bin'], 'id': 23, 'def': 'a bin that holds rubbish until it is collected', 'name': 'trash_can'}, {'frequency': 'c', 'synset': 'ashtray.n.01', 'synonyms': ['ashtray'], 'id': 24, 'def': "a receptacle for the ash from smokers' cigars or cigarettes", 'name': 'ashtray'}, {'frequency': 'c', 'synset': 'asparagus.n.02', 'synonyms': ['asparagus'], 'id': 25, 'def': 'edible young shoots of the asparagus plant', 'name': 'asparagus'}, {'frequency': 'c', 'synset': 'atomizer.n.01', 'synonyms': ['atomizer', 'atomiser', 'spray', 'sprayer', 'nebulizer', 'nebuliser'], 'id': 26, 'def': 'a dispenser that turns a liquid (such as perfume) into a fine mist', 'name': 'atomizer'}, {'frequency': 'f', 'synset': 'avocado.n.01', 'synonyms': ['avocado'], 'id': 27, 'def': 'a pear-shaped fruit with green or blackish skin and rich yellowish pulp enclosing a single large seed', 'name': 'avocado'}, {'frequency': 'c', 'synset': 'award.n.02', 'synonyms': ['award', 'accolade'], 'id': 28, 'def': 'a tangible symbol signifying approval or distinction', 'name': 'award'}, {'frequency': 'f', 'synset': 'awning.n.01', 'synonyms': ['awning'], 'id': 29, 'def': 'a canopy made of canvas to shelter people or things from rain or sun', 'name': 'awning'}, {'frequency': 'r', 'synset': 'ax.n.01', 'synonyms': ['ax', 'axe'], 'id': 30, 'def': 'an edge tool with a heavy bladed head mounted across a handle', 'name': 'ax'}, {'frequency': 'r', 'synset': 'baboon.n.01', 'synonyms': ['baboon'], 'id': 31, 'def': 'large terrestrial monkeys having doglike muzzles', 'name': 'baboon'}, {'frequency': 'f', 'synset': 'baby_buggy.n.01', 'synonyms': ['baby_buggy', 'baby_carriage', 'perambulator', 'pram', 'stroller'], 'id': 32, 'def': 'a small vehicle with four wheels in which a baby or child is pushed around', 'name': 'baby_buggy'}, {'frequency': 'c', 'synset': 'backboard.n.01', 'synonyms': ['basketball_backboard'], 'id': 33, 'def': 'a raised vertical board with basket attached; used to play basketball', 'name': 'basketball_backboard'}, {'frequency': 'f', 'synset': 'backpack.n.01', 'synonyms': ['backpack', 'knapsack', 'packsack', 'rucksack', 'haversack'], 'id': 34, 'def': 'a bag carried by a strap on your back or shoulder', 'name': 'backpack'}, {'frequency': 'f', 'synset': 'bag.n.04', 'synonyms': ['handbag', 'purse', 'pocketbook'], 'id': 35, 'def': 'a container used for carrying money and small personal items or accessories', 'name': 'handbag'}, {'frequency': 'f', 'synset': 'bag.n.06', 'synonyms': ['suitcase', 'baggage', 'luggage'], 'id': 36, 'def': 'cases used to carry belongings when traveling', 'name': 'suitcase'}, {'frequency': 'c', 'synset': 'bagel.n.01', 'synonyms': ['bagel', 'beigel'], 'id': 
37, 'def': 'glazed yeast-raised doughnut-shaped roll with hard crust', 'name': 'bagel'}, {'frequency': 'r', 'synset': 'bagpipe.n.01', 'synonyms': ['bagpipe'], 'id': 38, 'def': 'a tubular wind instrument; the player blows air into a bag and squeezes it out', 'name': 'bagpipe'}, {'frequency': 'r', 'synset': 'baguet.n.01', 'synonyms': ['baguet', 'baguette'], 'id': 39, 'def': 'narrow French stick loaf', 'name': 'baguet'}, {'frequency': 'r', 'synset': 'bait.n.02', 'synonyms': ['bait', 'lure'], 'id': 40, 'def': 'something used to lure fish or other animals into danger so they can be trapped or killed', 'name': 'bait'}, {'frequency': 'f', 'synset': 'ball.n.06', 'synonyms': ['ball'], 'id': 41, 'def': 'a spherical object used as a plaything', 'name': 'ball'}, {'frequency': 'r', 'synset': 'ballet_skirt.n.01', 'synonyms': ['ballet_skirt', 'tutu'], 'id': 42, 'def': 'very short skirt worn by ballerinas', 'name': 'ballet_skirt'}, {'frequency': 'f', 'synset': 'balloon.n.01', 'synonyms': ['balloon'], 'id': 43, 'def': 'large tough nonrigid bag filled with gas or heated air', 'name': 'balloon'}, {'frequency': 'c', 'synset': 'bamboo.n.02', 'synonyms': ['bamboo'], 'id': 44, 'def': 'woody tropical grass having hollow woody stems', 'name': 'bamboo'}, {'frequency': 'f', 'synset': 'banana.n.02', 'synonyms': ['banana'], 'id': 45, 'def': 'elongated crescent-shaped yellow fruit with soft sweet flesh', 'name': 'banana'}, {'frequency': 'c', 'synset': 'band_aid.n.01', 'synonyms': ['Band_Aid'], 'id': 46, 'def': 'trade name for an adhesive bandage to cover small cuts or blisters', 'name': 'Band_Aid'}, {'frequency': 'c', 'synset': 'bandage.n.01', 'synonyms': ['bandage'], 'id': 47, 'def': 'a piece of soft material that covers and protects an injured part of the body', 'name': 'bandage'}, {'frequency': 'f', 'synset': 'bandanna.n.01', 'synonyms': ['bandanna', 'bandana'], 'id': 48, 'def': 'large and brightly colored handkerchief; often used as a neckerchief', 'name': 'bandanna'}, {'frequency': 'r', 'synset': 'banjo.n.01', 'synonyms': ['banjo'], 'id': 49, 'def': 'a stringed instrument of the guitar family with a long neck and circular body', 'name': 'banjo'}, {'frequency': 'f', 'synset': 'banner.n.01', 'synonyms': ['banner', 'streamer'], 'id': 50, 'def': 'long strip of cloth or paper used for decoration or advertising', 'name': 'banner'}, {'frequency': 'r', 'synset': 'barbell.n.01', 'synonyms': ['barbell'], 'id': 51, 'def': 'a bar to which heavy discs are attached at each end; used in weightlifting', 'name': 'barbell'}, {'frequency': 'r', 'synset': 'barge.n.01', 'synonyms': ['barge'], 'id': 52, 'def': 'a flatbottom boat for carrying heavy loads (especially on canals)', 'name': 'barge'}, {'frequency': 'f', 'synset': 'barrel.n.02', 'synonyms': ['barrel', 'cask'], 'id': 53, 'def': 'a cylindrical container that holds liquids', 'name': 'barrel'}, {'frequency': 'c', 'synset': 'barrette.n.01', 'synonyms': ['barrette'], 'id': 54, 'def': "a pin for holding women's hair in place", 'name': 'barrette'}, {'frequency': 'c', 'synset': 'barrow.n.03', 'synonyms': ['barrow', 'garden_cart', 'lawn_cart', 'wheelbarrow'], 'id': 55, 'def': 'a cart for carrying small loads; has handles and one or more wheels', 'name': 'barrow'}, {'frequency': 'f', 'synset': 'base.n.03', 'synonyms': ['baseball_base'], 'id': 56, 'def': 'a place that the runner must touch before scoring', 'name': 'baseball_base'}, {'frequency': 'f', 'synset': 'baseball.n.02', 'synonyms': ['baseball'], 'id': 57, 'def': 'a ball used in playing baseball', 'name': 'baseball'}, {'frequency': 
'f', 'synset': 'baseball_bat.n.01', 'synonyms': ['baseball_bat'], 'id': 58, 'def': 'an implement used in baseball by the batter', 'name': 'baseball_bat'}, {'frequency': 'f', 'synset': 'baseball_cap.n.01', 'synonyms': ['baseball_cap', 'jockey_cap', 'golf_cap'], 'id': 59, 'def': 'a cap with a bill', 'name': 'baseball_cap'}, {'frequency': 'f', 'synset': 'baseball_glove.n.01', 'synonyms': ['baseball_glove', 'baseball_mitt'], 'id': 60, 'def': 'the handwear used by fielders in playing baseball', 'name': 'baseball_glove'}, {'frequency': 'f', 'synset': 'basket.n.01', 'synonyms': ['basket', 'handbasket'], 'id': 61, 'def': 'a container that is usually woven and has handles', 'name': 'basket'}, {'frequency': 'c', 'synset': 'basketball.n.02', 'synonyms': ['basketball'], 'id': 62, 'def': 'an inflated ball used in playing basketball', 'name': 'basketball'}, {'frequency': 'r', 'synset': 'bass_horn.n.01', 'synonyms': ['bass_horn', 'sousaphone', 'tuba'], 'id': 63, 'def': 'the lowest brass wind instrument', 'name': 'bass_horn'}, {'frequency': 'c', 'synset': 'bat.n.01', 'synonyms': ['bat_(animal)'], 'id': 64, 'def': 'nocturnal mouselike mammal with forelimbs modified to form membranous wings', 'name': 'bat_(animal)'}, {'frequency': 'f', 'synset': 'bath_mat.n.01', 'synonyms': ['bath_mat'], 'id': 65, 'def': 'a heavy towel or mat to stand on while drying yourself after a bath', 'name': 'bath_mat'}, {'frequency': 'f', 'synset': 'bath_towel.n.01', 'synonyms': ['bath_towel'], 'id': 66, 'def': 'a large towel; to dry yourself after a bath', 'name': 'bath_towel'}, {'frequency': 'c', 'synset': 'bathrobe.n.01', 'synonyms': ['bathrobe'], 'id': 67, 'def': 'a loose-fitting robe of towelling; worn after a bath or swim', 'name': 'bathrobe'}, {'frequency': 'f', 'synset': 'bathtub.n.01', 'synonyms': ['bathtub', 'bathing_tub'], 'id': 68, 'def': 'a large open container that you fill with water and use to wash the body', 'name': 'bathtub'}, {'frequency': 'r', 'synset': 'batter.n.02', 'synonyms': ['batter_(food)'], 'id': 69, 'def': 'a liquid or semiliquid mixture, as of flour, eggs, and milk, used in cooking', 'name': 'batter_(food)'}, {'frequency': 'c', 'synset': 'battery.n.02', 'synonyms': ['battery'], 'id': 70, 'def': 'a portable device that produces electricity', 'name': 'battery'}, {'frequency': 'r', 'synset': 'beach_ball.n.01', 'synonyms': ['beachball'], 'id': 71, 'def': 'large and light ball; for play at the seaside', 'name': 'beachball'}, {'frequency': 'c', 'synset': 'bead.n.01', 'synonyms': ['bead'], 'id': 72, 'def': 'a small ball with a hole through the middle used for ornamentation, jewellery, etc.', 'name': 'bead'}, {'frequency': 'c', 'synset': 'bean_curd.n.01', 'synonyms': ['bean_curd', 'tofu'], 'id': 73, 'def': 'cheeselike food made of curdled soybean milk', 'name': 'bean_curd'}, {'frequency': 'c', 'synset': 'beanbag.n.01', 'synonyms': ['beanbag'], 'id': 74, 'def': 'a bag filled with dried beans or similar items; used in games or to sit on', 'name': 'beanbag'}, {'frequency': 'f', 'synset': 'beanie.n.01', 'synonyms': ['beanie', 'beany'], 'id': 75, 'def': 'a small skullcap; formerly worn by schoolboys and college freshmen', 'name': 'beanie'}, {'frequency': 'f', 'synset': 'bear.n.01', 'synonyms': ['bear'], 'id': 76, 'def': 'large carnivorous or omnivorous mammals with shaggy coats and claws', 'name': 'bear'}, {'frequency': 'f', 'synset': 'bed.n.01', 'synonyms': ['bed'], 'id': 77, 'def': 'a piece of furniture that provides a place to sleep', 'name': 'bed'}, {'frequency': 'r', 'synset': 'bedpan.n.01', 'synonyms': 
['bedpan'], 'id': 78, 'def': 'a shallow vessel used by a bedridden patient for defecation and urination', 'name': 'bedpan'}, {'frequency': 'f', 'synset': 'bedspread.n.01', 'synonyms': ['bedspread', 'bedcover', 'bed_covering', 'counterpane', 'spread'], 'id': 79, 'def': 'decorative cover for a bed', 'name': 'bedspread'}, {'frequency': 'f', 'synset': 'beef.n.01', 'synonyms': ['cow'], 'id': 80, 'def': 'cattle/cow', 'name': 'cow'}, {'frequency': 'f', 'synset': 'beef.n.02', 'synonyms': ['beef_(food)', 'boeuf_(food)'], 'id': 81, 'def': 'meat from an adult domestic bovine', 'name': 'beef_(food)'}, {'frequency': 'r', 'synset': 'beeper.n.01', 'synonyms': ['beeper', 'pager'], 'id': 82, 'def': 'an device that beeps when the person carrying it is being paged', 'name': 'beeper'}, {'frequency': 'f', 'synset': 'beer_bottle.n.01', 'synonyms': ['beer_bottle'], 'id': 83, 'def': 'a bottle that holds beer', 'name': 'beer_bottle'}, {'frequency': 'c', 'synset': 'beer_can.n.01', 'synonyms': ['beer_can'], 'id': 84, 'def': 'a can that holds beer', 'name': 'beer_can'}, {'frequency': 'r', 'synset': 'beetle.n.01', 'synonyms': ['beetle'], 'id': 85, 'def': 'insect with hard wing covers', 'name': 'beetle'}, {'frequency': 'f', 'synset': 'bell.n.01', 'synonyms': ['bell'], 'id': 86, 'def': 'a hollow device made of metal that makes a ringing sound when struck', 'name': 'bell'}, {'frequency': 'f', 'synset': 'bell_pepper.n.02', 'synonyms': ['bell_pepper', 'capsicum'], 'id': 87, 'def': 'large bell-shaped sweet pepper in green or red or yellow or orange or black varieties', 'name': 'bell_pepper'}, {'frequency': 'f', 'synset': 'belt.n.02', 'synonyms': ['belt'], 'id': 88, 'def': 'a band to tie or buckle around the body (usually at the waist)', 'name': 'belt'}, {'frequency': 'f', 'synset': 'belt_buckle.n.01', 'synonyms': ['belt_buckle'], 'id': 89, 'def': 'the buckle used to fasten a belt', 'name': 'belt_buckle'}, {'frequency': 'f', 'synset': 'bench.n.01', 'synonyms': ['bench'], 'id': 90, 'def': 'a long seat for more than one person', 'name': 'bench'}, {'frequency': 'c', 'synset': 'beret.n.01', 'synonyms': ['beret'], 'id': 91, 'def': 'a cap with no brim or bill; made of soft cloth', 'name': 'beret'}, {'frequency': 'c', 'synset': 'bib.n.02', 'synonyms': ['bib'], 'id': 92, 'def': 'a napkin tied under the chin of a child while eating', 'name': 'bib'}, {'frequency': 'r', 'synset': 'bible.n.01', 'synonyms': ['Bible'], 'id': 93, 'def': 'the sacred writings of the Christian religions', 'name': 'Bible'}, {'frequency': 'f', 'synset': 'bicycle.n.01', 'synonyms': ['bicycle', 'bike_(bicycle)'], 'id': 94, 'def': 'a wheeled vehicle that has two wheels and is moved by foot pedals', 'name': 'bicycle'}, {'frequency': 'f', 'synset': 'bill.n.09', 'synonyms': ['visor', 'vizor'], 'id': 95, 'def': 'a brim that projects to the front to shade the eyes', 'name': 'visor'}, {'frequency': 'f', 'synset': 'billboard.n.01', 'synonyms': ['billboard'], 'id': 96, 'def': 'large outdoor signboard', 'name': 'billboard'}, {'frequency': 'c', 'synset': 'binder.n.03', 'synonyms': ['binder', 'ring-binder'], 'id': 97, 'def': 'holds loose papers or magazines', 'name': 'binder'}, {'frequency': 'c', 'synset': 'binoculars.n.01', 'synonyms': ['binoculars', 'field_glasses', 'opera_glasses'], 'id': 98, 'def': 'an optical instrument designed for simultaneous use by both eyes', 'name': 'binoculars'}, {'frequency': 'f', 'synset': 'bird.n.01', 'synonyms': ['bird'], 'id': 99, 'def': 'animal characterized by feathers and wings', 'name': 'bird'}, {'frequency': 'c', 'synset': 
'bird_feeder.n.01', 'synonyms': ['birdfeeder'], 'id': 100, 'def': 'an outdoor device that supplies food for wild birds', 'name': 'birdfeeder'}, {'frequency': 'c', 'synset': 'birdbath.n.01', 'synonyms': ['birdbath'], 'id': 101, 'def': 'an ornamental basin (usually in a garden) for birds to bathe in', 'name': 'birdbath'}, {'frequency': 'c', 'synset': 'birdcage.n.01', 'synonyms': ['birdcage'], 'id': 102, 'def': 'a cage in which a bird can be kept', 'name': 'birdcage'}, {'frequency': 'c', 'synset': 'birdhouse.n.01', 'synonyms': ['birdhouse'], 'id': 103, 'def': 'a shelter for birds', 'name': 'birdhouse'}, {'frequency': 'f', 'synset': 'birthday_cake.n.01', 'synonyms': ['birthday_cake'], 'id': 104, 'def': 'decorated cake served at a birthday party', 'name': 'birthday_cake'}, {'frequency': 'r', 'synset': 'birthday_card.n.01', 'synonyms': ['birthday_card'], 'id': 105, 'def': 'a card expressing a birthday greeting', 'name': 'birthday_card'}, {'frequency': 'r', 'synset': 'black_flag.n.01', 'synonyms': ['pirate_flag'], 'id': 106, 'def': 'a flag usually bearing a white skull and crossbones on a black background', 'name': 'pirate_flag'}, {'frequency': 'c', 'synset': 'black_sheep.n.02', 'synonyms': ['black_sheep'], 'id': 107, 'def': 'sheep with a black coat', 'name': 'black_sheep'}, {'frequency': 'c', 'synset': 'blackberry.n.01', 'synonyms': ['blackberry'], 'id': 108, 'def': 'large sweet black or very dark purple edible aggregate fruit', 'name': 'blackberry'}, {'frequency': 'f', 'synset': 'blackboard.n.01', 'synonyms': ['blackboard', 'chalkboard'], 'id': 109, 'def': 'sheet of slate; for writing with chalk', 'name': 'blackboard'}, {'frequency': 'f', 'synset': 'blanket.n.01', 'synonyms': ['blanket'], 'id': 110, 'def': 'bedding that keeps a person warm in bed', 'name': 'blanket'}, {'frequency': 'c', 'synset': 'blazer.n.01', 'synonyms': ['blazer', 'sport_jacket', 'sport_coat', 'sports_jacket', 'sports_coat'], 'id': 111, 'def': 'lightweight jacket; often striped in the colors of a club or school', 'name': 'blazer'}, {'frequency': 'f', 'synset': 'blender.n.01', 'synonyms': ['blender', 'liquidizer', 'liquidiser'], 'id': 112, 'def': 'an electrically powered mixer that mix or chop or liquefy foods', 'name': 'blender'}, {'frequency': 'r', 'synset': 'blimp.n.02', 'synonyms': ['blimp'], 'id': 113, 'def': 'a small nonrigid airship used for observation or as a barrage balloon', 'name': 'blimp'}, {'frequency': 'f', 'synset': 'blinker.n.01', 'synonyms': ['blinker', 'flasher'], 'id': 114, 'def': 'a light that flashes on and off; used as a signal or to send messages', 'name': 'blinker'}, {'frequency': 'f', 'synset': 'blouse.n.01', 'synonyms': ['blouse'], 'id': 115, 'def': 'a top worn by women', 'name': 'blouse'}, {'frequency': 'f', 'synset': 'blueberry.n.02', 'synonyms': ['blueberry'], 'id': 116, 'def': 'sweet edible dark-blue berries of blueberry plants', 'name': 'blueberry'}, {'frequency': 'r', 'synset': 'board.n.09', 'synonyms': ['gameboard'], 'id': 117, 'def': 'a flat portable surface (usually rectangular) designed for board games', 'name': 'gameboard'}, {'frequency': 'f', 'synset': 'boat.n.01', 'synonyms': ['boat', 'ship_(boat)'], 'id': 118, 'def': 'a vessel for travel on water', 'name': 'boat'}, {'frequency': 'r', 'synset': 'bob.n.05', 'synonyms': ['bob', 'bobber', 'bobfloat'], 'id': 119, 'def': 'a small float usually made of cork; attached to a fishing line', 'name': 'bob'}, {'frequency': 'c', 'synset': 'bobbin.n.01', 'synonyms': ['bobbin', 'spool', 'reel'], 'id': 120, 'def': 'a thing around which thread/tape/film 
or other flexible materials can be wound', 'name': 'bobbin'}, {'frequency': 'c', 'synset': 'bobby_pin.n.01', 'synonyms': ['bobby_pin', 'hairgrip'], 'id': 121, 'def': 'a flat wire hairpin used to hold bobbed hair in place', 'name': 'bobby_pin'}, {'frequency': 'c', 'synset': 'boiled_egg.n.01', 'synonyms': ['boiled_egg', 'coddled_egg'], 'id': 122, 'def': 'egg cooked briefly in the shell in gently boiling water', 'name': 'boiled_egg'}, {'frequency': 'r', 'synset': 'bolo_tie.n.01', 'synonyms': ['bolo_tie', 'bolo', 'bola_tie', 'bola'], 'id': 123, 'def': 'a cord fastened around the neck with an ornamental clasp and worn as a necktie', 'name': 'bolo_tie'}, {'frequency': 'c', 'synset': 'bolt.n.03', 'synonyms': ['deadbolt'], 'id': 124, 'def': 'the part of a lock that is engaged or withdrawn with a key', 'name': 'deadbolt'}, {'frequency': 'f', 'synset': 'bolt.n.06', 'synonyms': ['bolt'], 'id': 125, 'def': 'a screw that screws into a nut to form a fastener', 'name': 'bolt'}, {'frequency': 'r', 'synset': 'bonnet.n.01', 'synonyms': ['bonnet'], 'id': 126, 'def': 'a hat tied under the chin', 'name': 'bonnet'}, {'frequency': 'f', 'synset': 'book.n.01', 'synonyms': ['book'], 'id': 127, 'def': 'a written work or composition that has been published', 'name': 'book'}, {'frequency': 'c', 'synset': 'bookcase.n.01', 'synonyms': ['bookcase'], 'id': 128, 'def': 'a piece of furniture with shelves for storing books', 'name': 'bookcase'}, {'frequency': 'c', 'synset': 'booklet.n.01', 'synonyms': ['booklet', 'brochure', 'leaflet', 'pamphlet'], 'id': 129, 'def': 'a small book usually having a paper cover', 'name': 'booklet'}, {'frequency': 'r', 'synset': 'bookmark.n.01', 'synonyms': ['bookmark', 'bookmarker'], 'id': 130, 'def': 'a marker (a piece of paper or ribbon) placed between the pages of a book', 'name': 'bookmark'}, {'frequency': 'r', 'synset': 'boom.n.04', 'synonyms': ['boom_microphone', 'microphone_boom'], 'id': 131, 'def': 'a pole carrying an overhead microphone projected over a film or tv set', 'name': 'boom_microphone'}, {'frequency': 'f', 'synset': 'boot.n.01', 'synonyms': ['boot'], 'id': 132, 'def': 'footwear that covers the whole foot and lower leg', 'name': 'boot'}, {'frequency': 'f', 'synset': 'bottle.n.01', 'synonyms': ['bottle'], 'id': 133, 'def': 'a glass or plastic vessel used for storing drinks or other liquids', 'name': 'bottle'}, {'frequency': 'c', 'synset': 'bottle_opener.n.01', 'synonyms': ['bottle_opener'], 'id': 134, 'def': 'an opener for removing caps or corks from bottles', 'name': 'bottle_opener'}, {'frequency': 'c', 'synset': 'bouquet.n.01', 'synonyms': ['bouquet'], 'id': 135, 'def': 'an arrangement of flowers that is usually given as a present', 'name': 'bouquet'}, {'frequency': 'r', 'synset': 'bow.n.04', 'synonyms': ['bow_(weapon)'], 'id': 136, 'def': 'a weapon for shooting arrows', 'name': 'bow_(weapon)'}, {'frequency': 'f', 'synset': 'bow.n.08', 'synonyms': ['bow_(decorative_ribbons)'], 'id': 137, 'def': 'a decorative interlacing of ribbons', 'name': 'bow_(decorative_ribbons)'}, {'frequency': 'f', 'synset': 'bow_tie.n.01', 'synonyms': ['bow-tie', 'bowtie'], 'id': 138, 'def': "a man's tie that ties in a bow", 'name': 'bow-tie'}, {'frequency': 'f', 'synset': 'bowl.n.03', 'synonyms': ['bowl'], 'id': 139, 'def': 'a dish that is round and open at the top for serving foods', 'name': 'bowl'}, {'frequency': 'r', 'synset': 'bowl.n.08', 'synonyms': ['pipe_bowl'], 'id': 140, 'def': 'a small round container that is open at the top for holding tobacco', 'name': 'pipe_bowl'}, {'frequency': 'c', 
'synset': 'bowler_hat.n.01', 'synonyms': ['bowler_hat', 'bowler', 'derby_hat', 'derby', 'plug_hat'], 'id': 141, 'def': 'a felt hat that is round and hard with a narrow brim', 'name': 'bowler_hat'}, {'frequency': 'r', 'synset': 'bowling_ball.n.01', 'synonyms': ['bowling_ball'], 'id': 142, 'def': 'a large ball with finger holes used in the sport of bowling', 'name': 'bowling_ball'}, {'frequency': 'f', 'synset': 'box.n.01', 'synonyms': ['box'], 'id': 143, 'def': 'a (usually rectangular) container; may have a lid', 'name': 'box'}, {'frequency': 'r', 'synset': 'boxing_glove.n.01', 'synonyms': ['boxing_glove'], 'id': 144, 'def': 'large glove coverings the fists of a fighter worn for the sport of boxing', 'name': 'boxing_glove'}, {'frequency': 'c', 'synset': 'brace.n.06', 'synonyms': ['suspenders'], 'id': 145, 'def': 'elastic straps that hold trousers up (usually used in the plural)', 'name': 'suspenders'}, {'frequency': 'f', 'synset': 'bracelet.n.02', 'synonyms': ['bracelet', 'bangle'], 'id': 146, 'def': 'jewelry worn around the wrist for decoration', 'name': 'bracelet'}, {'frequency': 'r', 'synset': 'brass.n.07', 'synonyms': ['brass_plaque'], 'id': 147, 'def': 'a memorial made of brass', 'name': 'brass_plaque'}, {'frequency': 'c', 'synset': 'brassiere.n.01', 'synonyms': ['brassiere', 'bra', 'bandeau'], 'id': 148, 'def': 'an undergarment worn by women to support their breasts', 'name': 'brassiere'}, {'frequency': 'c', 'synset': 'bread-bin.n.01', 'synonyms': ['bread-bin', 'breadbox'], 'id': 149, 'def': 'a container used to keep bread or cake in', 'name': 'bread-bin'}, {'frequency': 'f', 'synset': 'bread.n.01', 'synonyms': ['bread'], 'id': 150, 'def': 'food made from dough of flour or meal and usually raised with yeast or baking powder and then baked', 'name': 'bread'}, {'frequency': 'r', 'synset': 'breechcloth.n.01', 'synonyms': ['breechcloth', 'breechclout', 'loincloth'], 'id': 151, 'def': 'a garment that provides covering for the loins', 'name': 'breechcloth'}, {'frequency': 'f', 'synset': 'bridal_gown.n.01', 'synonyms': ['bridal_gown', 'wedding_gown', 'wedding_dress'], 'id': 152, 'def': 'a gown worn by the bride at a wedding', 'name': 'bridal_gown'}, {'frequency': 'c', 'synset': 'briefcase.n.01', 'synonyms': ['briefcase'], 'id': 153, 'def': 'a case with a handle; for carrying papers or files or books', 'name': 'briefcase'}, {'frequency': 'f', 'synset': 'broccoli.n.01', 'synonyms': ['broccoli'], 'id': 154, 'def': 'plant with dense clusters of tight green flower buds', 'name': 'broccoli'}, {'frequency': 'r', 'synset': 'brooch.n.01', 'synonyms': ['broach'], 'id': 155, 'def': 'a decorative pin worn by women', 'name': 'broach'}, {'frequency': 'c', 'synset': 'broom.n.01', 'synonyms': ['broom'], 'id': 156, 'def': 'bundle of straws or twigs attached to a long handle; used for cleaning', 'name': 'broom'}, {'frequency': 'c', 'synset': 'brownie.n.03', 'synonyms': ['brownie'], 'id': 157, 'def': 'square or bar of very rich chocolate cake usually with nuts', 'name': 'brownie'}, {'frequency': 'c', 'synset': 'brussels_sprouts.n.01', 'synonyms': ['brussels_sprouts'], 'id': 158, 'def': 'the small edible cabbage-like buds growing along a stalk', 'name': 'brussels_sprouts'}, {'frequency': 'r', 'synset': 'bubble_gum.n.01', 'synonyms': ['bubble_gum'], 'id': 159, 'def': 'a kind of chewing gum that can be blown into bubbles', 'name': 'bubble_gum'}, {'frequency': 'f', 'synset': 'bucket.n.01', 'synonyms': ['bucket', 'pail'], 'id': 160, 'def': 'a roughly cylindrical vessel that is open at the top', 'name': 'bucket'}, 
{'frequency': 'r', 'synset': 'buggy.n.01', 'synonyms': ['horse_buggy'], 'id': 161, 'def': 'a small lightweight carriage; drawn by a single horse', 'name': 'horse_buggy'}, {'frequency': 'c', 'synset': 'bull.n.11', 'synonyms': ['horned_cow'], 'id': 162, 'def': 'a cow with horns', 'name': 'bull'}, {'frequency': 'c', 'synset': 'bulldog.n.01', 'synonyms': ['bulldog'], 'id': 163, 'def': 'a thickset short-haired dog with a large head and strong undershot lower jaw', 'name': 'bulldog'}, {'frequency': 'r', 'synset': 'bulldozer.n.01', 'synonyms': ['bulldozer', 'dozer'], 'id': 164, 'def': 'large powerful tractor; a large blade in front flattens areas of ground', 'name': 'bulldozer'}, {'frequency': 'c', 'synset': 'bullet_train.n.01', 'synonyms': ['bullet_train'], 'id': 165, 'def': 'a high-speed passenger train', 'name': 'bullet_train'}, {'frequency': 'c', 'synset': 'bulletin_board.n.02', 'synonyms': ['bulletin_board', 'notice_board'], 'id': 166, 'def': 'a board that hangs on a wall; displays announcements', 'name': 'bulletin_board'}, {'frequency': 'r', 'synset': 'bulletproof_vest.n.01', 'synonyms': ['bulletproof_vest'], 'id': 167, 'def': 'a vest capable of resisting the impact of a bullet', 'name': 'bulletproof_vest'}, {'frequency': 'c', 'synset': 'bullhorn.n.01', 'synonyms': ['bullhorn', 'megaphone'], 'id': 168, 'def': 'a portable loudspeaker with built-in microphone and amplifier', 'name': 'bullhorn'}, {'frequency': 'f', 'synset': 'bun.n.01', 'synonyms': ['bun', 'roll'], 'id': 169, 'def': 'small rounded bread either plain or sweet', 'name': 'bun'}, {'frequency': 'c', 'synset': 'bunk_bed.n.01', 'synonyms': ['bunk_bed'], 'id': 170, 'def': 'beds built one above the other', 'name': 'bunk_bed'}, {'frequency': 'f', 'synset': 'buoy.n.01', 'synonyms': ['buoy'], 'id': 171, 'def': 'a float attached by rope to the seabed to mark channels in a harbor or underwater hazards', 'name': 'buoy'}, {'frequency': 'r', 'synset': 'burrito.n.01', 'synonyms': ['burrito'], 'id': 172, 'def': 'a flour tortilla folded around a filling', 'name': 'burrito'}, {'frequency': 'f', 'synset': 'bus.n.01', 'synonyms': ['bus_(vehicle)', 'autobus', 'charabanc', 'double-decker', 'motorbus', 'motorcoach'], 'id': 173, 'def': 'a vehicle carrying many passengers; used for public transport', 'name': 'bus_(vehicle)'}, {'frequency': 'c', 'synset': 'business_card.n.01', 'synonyms': ['business_card'], 'id': 174, 'def': "a card on which are printed the person's name and business affiliation", 'name': 'business_card'}, {'frequency': 'f', 'synset': 'butter.n.01', 'synonyms': ['butter'], 'id': 175, 'def': 'an edible emulsion of fat globules made by churning milk or cream; for cooking and table use', 'name': 'butter'}, {'frequency': 'c', 'synset': 'butterfly.n.01', 'synonyms': ['butterfly'], 'id': 176, 'def': 'insect typically having a slender body with knobbed antennae and broad colorful wings', 'name': 'butterfly'}, {'frequency': 'f', 'synset': 'button.n.01', 'synonyms': ['button'], 'id': 177, 'def': 'a round fastener sewn to shirts and coats etc to fit through buttonholes', 'name': 'button'}, {'frequency': 'f', 'synset': 'cab.n.03', 'synonyms': ['cab_(taxi)', 'taxi', 'taxicab'], 'id': 178, 'def': 'a car that takes passengers where they want to go in exchange for money', 'name': 'cab_(taxi)'}, {'frequency': 'r', 'synset': 'cabana.n.01', 'synonyms': ['cabana'], 'id': 179, 'def': 'a small tent used as a dressing room beside the sea or a swimming pool', 'name': 'cabana'}, {'frequency': 'c', 'synset': 'cabin_car.n.01', 'synonyms': ['cabin_car', 'caboose'], 
'id': 180, 'def': 'a car on a freight train for use of the train crew; usually the last car on the train', 'name': 'cabin_car'}, {'frequency': 'f', 'synset': 'cabinet.n.01', 'synonyms': ['cabinet'], 'id': 181, 'def': 'a piece of furniture resembling a cupboard with doors and shelves and drawers', 'name': 'cabinet'}, {'frequency': 'r', 'synset': 'cabinet.n.03', 'synonyms': ['locker', 'storage_locker'], 'id': 182, 'def': 'a storage compartment for clothes and valuables; usually it has a lock', 'name': 'locker'}, {'frequency': 'f', 'synset': 'cake.n.03', 'synonyms': ['cake'], 'id': 183, 'def': 'baked goods made from or based on a mixture of flour, sugar, eggs, and fat', 'name': 'cake'}, {'frequency': 'c', 'synset': 'calculator.n.02', 'synonyms': ['calculator'], 'id': 184, 'def': 'a small machine that is used for mathematical calculations', 'name': 'calculator'}, {'frequency': 'f', 'synset': 'calendar.n.02', 'synonyms': ['calendar'], 'id': 185, 'def': 'a list or register of events (appointments/social events/court cases, etc)', 'name': 'calendar'}, {'frequency': 'c', 'synset': 'calf.n.01', 'synonyms': ['calf'], 'id': 186, 'def': 'young of domestic cattle', 'name': 'calf'}, {'frequency': 'c', 'synset': 'camcorder.n.01', 'synonyms': ['camcorder'], 'id': 187, 'def': 'a portable television camera and videocassette recorder', 'name': 'camcorder'}, {'frequency': 'c', 'synset': 'camel.n.01', 'synonyms': ['camel'], 'id': 188, 'def': 'cud-chewing mammal used as a draft or saddle animal in desert regions', 'name': 'camel'}, {'frequency': 'f', 'synset': 'camera.n.01', 'synonyms': ['camera'], 'id': 189, 'def': 'equipment for taking photographs', 'name': 'camera'}, {'frequency': 'c', 'synset': 'camera_lens.n.01', 'synonyms': ['camera_lens'], 'id': 190, 'def': 'a lens that focuses the image in a camera', 'name': 'camera_lens'}, {'frequency': 'c', 'synset': 'camper.n.02', 'synonyms': ['camper_(vehicle)', 'camping_bus', 'motor_home'], 'id': 191, 'def': 'a recreational vehicle equipped for camping out while traveling', 'name': 'camper_(vehicle)'}, {'frequency': 'f', 'synset': 'can.n.01', 'synonyms': ['can', 'tin_can'], 'id': 192, 'def': 'airtight sealed metal container for food or drink or paint etc.', 'name': 'can'}, {'frequency': 'c', 'synset': 'can_opener.n.01', 'synonyms': ['can_opener', 'tin_opener'], 'id': 193, 'def': 'a device for cutting cans open', 'name': 'can_opener'}, {'frequency': 'f', 'synset': 'candle.n.01', 'synonyms': ['candle', 'candlestick'], 'id': 194, 'def': 'stick of wax with a wick in the middle', 'name': 'candle'}, {'frequency': 'f', 'synset': 'candlestick.n.01', 'synonyms': ['candle_holder'], 'id': 195, 'def': 'a holder with sockets for candles', 'name': 'candle_holder'}, {'frequency': 'r', 'synset': 'candy_bar.n.01', 'synonyms': ['candy_bar'], 'id': 196, 'def': 'a candy shaped as a bar', 'name': 'candy_bar'}, {'frequency': 'c', 'synset': 'candy_cane.n.01', 'synonyms': ['candy_cane'], 'id': 197, 'def': 'a hard candy in the shape of a rod (usually with stripes)', 'name': 'candy_cane'}, {'frequency': 'c', 'synset': 'cane.n.01', 'synonyms': ['walking_cane'], 'id': 198, 'def': 'a stick that people can lean on to help them walk', 'name': 'walking_cane'}, {'frequency': 'c', 'synset': 'canister.n.02', 'synonyms': ['canister', 'cannister'], 'id': 199, 'def': 'metal container for storing dry foods such as tea or flour', 'name': 'canister'}, {'frequency': 'c', 'synset': 'canoe.n.01', 'synonyms': ['canoe'], 'id': 200, 'def': 'small and light boat; pointed at both ends; propelled with a paddle', 
'name': 'canoe'}, {'frequency': 'c', 'synset': 'cantaloup.n.02', 'synonyms': ['cantaloup', 'cantaloupe'], 'id': 201, 'def': 'the fruit of a cantaloup vine; small to medium-sized melon with yellowish flesh', 'name': 'cantaloup'}, {'frequency': 'r', 'synset': 'canteen.n.01', 'synonyms': ['canteen'], 'id': 202, 'def': 'a flask for carrying water; used by soldiers or travelers', 'name': 'canteen'}, {'frequency': 'f', 'synset': 'cap.n.01', 'synonyms': ['cap_(headwear)'], 'id': 203, 'def': 'a tight-fitting headwear', 'name': 'cap_(headwear)'}, {'frequency': 'f', 'synset': 'cap.n.02', 'synonyms': ['bottle_cap', 'cap_(container_lid)'], 'id': 204, 'def': 'a top (as for a bottle)', 'name': 'bottle_cap'}, {'frequency': 'c', 'synset': 'cape.n.02', 'synonyms': ['cape'], 'id': 205, 'def': 'a sleeveless garment like a cloak but shorter', 'name': 'cape'}, {'frequency': 'c', 'synset': 'cappuccino.n.01', 'synonyms': ['cappuccino', 'coffee_cappuccino'], 'id': 206, 'def': 'equal parts of espresso and steamed milk', 'name': 'cappuccino'}, {'frequency': 'f', 'synset': 'car.n.01', 'synonyms': ['car_(automobile)', 'auto_(automobile)', 'automobile'], 'id': 207, 'def': 'a motor vehicle with four wheels', 'name': 'car_(automobile)'}, {'frequency': 'f', 'synset': 'car.n.02', 'synonyms': ['railcar_(part_of_a_train)', 'railway_car_(part_of_a_train)', 'railroad_car_(part_of_a_train)'], 'id': 208, 'def': 'a wheeled vehicle adapted to the rails of railroad (mark each individual railcar separately)', 'name': 'railcar_(part_of_a_train)'}, {'frequency': 'r', 'synset': 'car.n.04', 'synonyms': ['elevator_car'], 'id': 209, 'def': 'where passengers ride up and down', 'name': 'elevator_car'}, {'frequency': 'r', 'synset': 'car_battery.n.01', 'synonyms': ['car_battery', 'automobile_battery'], 'id': 210, 'def': 'a battery in a motor vehicle', 'name': 'car_battery'}, {'frequency': 'c', 'synset': 'card.n.02', 'synonyms': ['identity_card'], 'id': 211, 'def': 'a card certifying the identity of the bearer', 'name': 'identity_card'}, {'frequency': 'c', 'synset': 'card.n.03', 'synonyms': ['card'], 'id': 212, 'def': 'a rectangular piece of paper used to send messages (e.g. 
greetings or pictures)', 'name': 'card'}, {'frequency': 'c', 'synset': 'cardigan.n.01', 'synonyms': ['cardigan'], 'id': 213, 'def': 'knitted jacket that is fastened up the front with buttons or a zipper', 'name': 'cardigan'}, {'frequency': 'r', 'synset': 'cargo_ship.n.01', 'synonyms': ['cargo_ship', 'cargo_vessel'], 'id': 214, 'def': 'a ship designed to carry cargo', 'name': 'cargo_ship'}, {'frequency': 'r', 'synset': 'carnation.n.01', 'synonyms': ['carnation'], 'id': 215, 'def': 'plant with pink to purple-red spice-scented usually double flowers', 'name': 'carnation'}, {'frequency': 'c', 'synset': 'carriage.n.02', 'synonyms': ['horse_carriage'], 'id': 216, 'def': 'a vehicle with wheels drawn by one or more horses', 'name': 'horse_carriage'}, {'frequency': 'f', 'synset': 'carrot.n.01', 'synonyms': ['carrot'], 'id': 217, 'def': 'deep orange edible root of the cultivated carrot plant', 'name': 'carrot'}, {'frequency': 'f', 'synset': 'carryall.n.01', 'synonyms': ['tote_bag'], 'id': 218, 'def': 'a capacious bag or basket', 'name': 'tote_bag'}, {'frequency': 'c', 'synset': 'cart.n.01', 'synonyms': ['cart'], 'id': 219, 'def': 'a heavy open wagon usually having two wheels and drawn by an animal', 'name': 'cart'}, {'frequency': 'c', 'synset': 'carton.n.02', 'synonyms': ['carton'], 'id': 220, 'def': 'a container made of cardboard for holding food or drink', 'name': 'carton'}, {'frequency': 'c', 'synset': 'cash_register.n.01', 'synonyms': ['cash_register', 'register_(for_cash_transactions)'], 'id': 221, 'def': 'a cashbox with an adding machine to register transactions', 'name': 'cash_register'}, {'frequency': 'r', 'synset': 'casserole.n.01', 'synonyms': ['casserole'], 'id': 222, 'def': 'food cooked and served in a casserole', 'name': 'casserole'}, {'frequency': 'r', 'synset': 'cassette.n.01', 'synonyms': ['cassette'], 'id': 223, 'def': 'a container that holds a magnetic tape used for recording or playing sound or video', 'name': 'cassette'}, {'frequency': 'c', 'synset': 'cast.n.05', 'synonyms': ['cast', 'plaster_cast', 'plaster_bandage'], 'id': 224, 'def': 'bandage consisting of a firm covering that immobilizes broken bones while they heal', 'name': 'cast'}, {'frequency': 'f', 'synset': 'cat.n.01', 'synonyms': ['cat'], 'id': 225, 'def': 'a domestic house cat', 'name': 'cat'}, {'frequency': 'f', 'synset': 'cauliflower.n.02', 'synonyms': ['cauliflower'], 'id': 226, 'def': 'edible compact head of white undeveloped flowers', 'name': 'cauliflower'}, {'frequency': 'c', 'synset': 'cayenne.n.02', 'synonyms': ['cayenne_(spice)', 'cayenne_pepper_(spice)', 'red_pepper_(spice)'], 'id': 227, 'def': 'ground pods and seeds of pungent red peppers of the genus Capsicum', 'name': 'cayenne_(spice)'}, {'frequency': 'c', 'synset': 'cd_player.n.01', 'synonyms': ['CD_player'], 'id': 228, 'def': 'electronic equipment for playing compact discs (CDs)', 'name': 'CD_player'}, {'frequency': 'f', 'synset': 'celery.n.01', 'synonyms': ['celery'], 'id': 229, 'def': 'widely cultivated herb with aromatic leaf stalks that are eaten raw or cooked', 'name': 'celery'}, {'frequency': 'f', 'synset': 'cellular_telephone.n.01', 'synonyms': ['cellular_telephone', 'cellular_phone', 'cellphone', 'mobile_phone', 'smart_phone'], 'id': 230, 'def': 'a hand-held mobile telephone', 'name': 'cellular_telephone'}, {'frequency': 'r', 'synset': 'chain_mail.n.01', 'synonyms': ['chain_mail', 'ring_mail', 'chain_armor', 'chain_armour', 'ring_armor', 'ring_armour'], 'id': 231, 'def': '(Middle Ages) flexible armor made of interlinked metal rings', 'name': 
'chain_mail'}, {'frequency': 'f', 'synset': 'chair.n.01', 'synonyms': ['chair'], 'id': 232, 'def': 'a seat for one person, with a support for the back', 'name': 'chair'}, {'frequency': 'r', 'synset': 'chaise_longue.n.01', 'synonyms': ['chaise_longue', 'chaise', 'daybed'], 'id': 233, 'def': 'a long chair; for reclining', 'name': 'chaise_longue'}, {'frequency': 'r', 'synset': 'chalice.n.01', 'synonyms': ['chalice'], 'id': 234, 'def': 'a bowl-shaped drinking vessel; especially the Eucharistic cup', 'name': 'chalice'}, {'frequency': 'f', 'synset': 'chandelier.n.01', 'synonyms': ['chandelier'], 'id': 235, 'def': 'branched lighting fixture; often ornate; hangs from the ceiling', 'name': 'chandelier'}, {'frequency': 'r', 'synset': 'chap.n.04', 'synonyms': ['chap'], 'id': 236, 'def': 'leather leggings without a seat; worn over trousers by cowboys to protect their legs', 'name': 'chap'}, {'frequency': 'r', 'synset': 'checkbook.n.01', 'synonyms': ['checkbook', 'chequebook'], 'id': 237, 'def': 'a book issued to holders of checking accounts', 'name': 'checkbook'}, {'frequency': 'r', 'synset': 'checkerboard.n.01', 'synonyms': ['checkerboard'], 'id': 238, 'def': 'a board having 64 squares of two alternating colors', 'name': 'checkerboard'}, {'frequency': 'c', 'synset': 'cherry.n.03', 'synonyms': ['cherry'], 'id': 239, 'def': 'a red fruit with a single hard stone', 'name': 'cherry'}, {'frequency': 'r', 'synset': 'chessboard.n.01', 'synonyms': ['chessboard'], 'id': 240, 'def': 'a checkerboard used to play chess', 'name': 'chessboard'}, {'frequency': 'c', 'synset': 'chicken.n.02', 'synonyms': ['chicken_(animal)'], 'id': 241, 'def': 'a domestic fowl bred for flesh or eggs', 'name': 'chicken_(animal)'}, {'frequency': 'c', 'synset': 'chickpea.n.01', 'synonyms': ['chickpea', 'garbanzo'], 'id': 242, 'def': 'the seed of the chickpea plant; usually dried', 'name': 'chickpea'}, {'frequency': 'c', 'synset': 'chili.n.02', 'synonyms': ['chili_(vegetable)', 'chili_pepper_(vegetable)', 'chilli_(vegetable)', 'chilly_(vegetable)', 'chile_(vegetable)'], 'id': 243, 'def': 'very hot and finely tapering pepper of special pungency', 'name': 'chili_(vegetable)'}, {'frequency': 'r', 'synset': 'chime.n.01', 'synonyms': ['chime', 'gong'], 'id': 244, 'def': 'an instrument consisting of a set of bells that are struck with a hammer', 'name': 'chime'}, {'frequency': 'r', 'synset': 'chinaware.n.01', 'synonyms': ['chinaware'], 'id': 245, 'def': 'dishware made of high quality porcelain', 'name': 'chinaware'}, {'frequency': 'c', 'synset': 'chip.n.04', 'synonyms': ['crisp_(potato_chip)', 'potato_chip'], 'id': 246, 'def': 'a thin crisp slice of potato fried in deep fat', 'name': 'crisp_(potato_chip)'}, {'frequency': 'r', 'synset': 'chip.n.06', 'synonyms': ['poker_chip'], 'id': 247, 'def': 'a small disk-shaped counter used to represent money when gambling', 'name': 'poker_chip'}, {'frequency': 'c', 'synset': 'chocolate_bar.n.01', 'synonyms': ['chocolate_bar'], 'id': 248, 'def': 'a bar of chocolate candy', 'name': 'chocolate_bar'}, {'frequency': 'c', 'synset': 'chocolate_cake.n.01', 'synonyms': ['chocolate_cake'], 'id': 249, 'def': 'cake containing chocolate', 'name': 'chocolate_cake'}, {'frequency': 'r', 'synset': 'chocolate_milk.n.01', 'synonyms': ['chocolate_milk'], 'id': 250, 'def': 'milk flavored with chocolate syrup', 'name': 'chocolate_milk'}, {'frequency': 'r', 'synset': 'chocolate_mousse.n.01', 'synonyms': ['chocolate_mousse'], 'id': 251, 'def': 'dessert mousse made with chocolate', 'name': 'chocolate_mousse'}, {'frequency': 'f', 
'synset': 'choker.n.03', 'synonyms': ['choker', 'collar', 'neckband'], 'id': 252, 'def': 'shirt collar, animal collar, or tight-fitting necklace', 'name': 'choker'}, {'frequency': 'f', 'synset': 'chopping_board.n.01', 'synonyms': ['chopping_board', 'cutting_board', 'chopping_block'], 'id': 253, 'def': 'a wooden board where meats or vegetables can be cut', 'name': 'chopping_board'}, {'frequency': 'f', 'synset': 'chopstick.n.01', 'synonyms': ['chopstick'], 'id': 254, 'def': 'one of a pair of slender sticks used as oriental tableware to eat food with', 'name': 'chopstick'}, {'frequency': 'f', 'synset': 'christmas_tree.n.05', 'synonyms': ['Christmas_tree'], 'id': 255, 'def': 'an ornamented evergreen used as a Christmas decoration', 'name': 'Christmas_tree'}, {'frequency': 'c', 'synset': 'chute.n.02', 'synonyms': ['slide'], 'id': 256, 'def': 'sloping channel through which things can descend', 'name': 'slide'}, {'frequency': 'r', 'synset': 'cider.n.01', 'synonyms': ['cider', 'cyder'], 'id': 257, 'def': 'a beverage made from juice pressed from apples', 'name': 'cider'}, {'frequency': 'r', 'synset': 'cigar_box.n.01', 'synonyms': ['cigar_box'], 'id': 258, 'def': 'a box for holding cigars', 'name': 'cigar_box'}, {'frequency': 'f', 'synset': 'cigarette.n.01', 'synonyms': ['cigarette'], 'id': 259, 'def': 'finely ground tobacco wrapped in paper; for smoking', 'name': 'cigarette'}, {'frequency': 'c', 'synset': 'cigarette_case.n.01', 'synonyms': ['cigarette_case', 'cigarette_pack'], 'id': 260, 'def': 'a small flat case for holding cigarettes', 'name': 'cigarette_case'}, {'frequency': 'f', 'synset': 'cistern.n.02', 'synonyms': ['cistern', 'water_tank'], 'id': 261, 'def': 'a tank that holds the water used to flush a toilet', 'name': 'cistern'}, {'frequency': 'r', 'synset': 'clarinet.n.01', 'synonyms': ['clarinet'], 'id': 262, 'def': 'a single-reed instrument with a straight tube', 'name': 'clarinet'}, {'frequency': 'c', 'synset': 'clasp.n.01', 'synonyms': ['clasp'], 'id': 263, 'def': 'a fastener (as a buckle or hook) that is used to hold two things together', 'name': 'clasp'}, {'frequency': 'c', 'synset': 'cleansing_agent.n.01', 'synonyms': ['cleansing_agent', 'cleanser', 'cleaner'], 'id': 264, 'def': 'a preparation used in cleaning something', 'name': 'cleansing_agent'}, {'frequency': 'r', 'synset': 'cleat.n.02', 'synonyms': ['cleat_(for_securing_rope)'], 'id': 265, 'def': 'a fastener (usually with two projecting horns) around which a rope can be secured', 'name': 'cleat_(for_securing_rope)'}, {'frequency': 'r', 'synset': 'clementine.n.01', 'synonyms': ['clementine'], 'id': 266, 'def': 'a variety of mandarin orange', 'name': 'clementine'}, {'frequency': 'c', 'synset': 'clip.n.03', 'synonyms': ['clip'], 'id': 267, 'def': 'any of various small fasteners used to hold loose articles together', 'name': 'clip'}, {'frequency': 'c', 'synset': 'clipboard.n.01', 'synonyms': ['clipboard'], 'id': 268, 'def': 'a small writing board with a clip at the top for holding papers', 'name': 'clipboard'}, {'frequency': 'r', 'synset': 'clipper.n.03', 'synonyms': ['clippers_(for_plants)'], 'id': 269, 'def': 'shears for cutting grass or shrubbery (often used in the plural)', 'name': 'clippers_(for_plants)'}, {'frequency': 'r', 'synset': 'cloak.n.02', 'synonyms': ['cloak'], 'id': 270, 'def': 'a loose outer garment', 'name': 'cloak'}, {'frequency': 'f', 'synset': 'clock.n.01', 'synonyms': ['clock', 'timepiece', 'timekeeper'], 'id': 271, 'def': 'a timepiece that shows the time of day', 'name': 'clock'}, {'frequency': 'f', 'synset': 
'clock_tower.n.01', 'synonyms': ['clock_tower'], 'id': 272, 'def': 'a tower with a large clock visible high up on an outside face', 'name': 'clock_tower'}, {'frequency': 'c', 'synset': 'clothes_hamper.n.01', 'synonyms': ['clothes_hamper', 'laundry_basket', 'clothes_basket'], 'id': 273, 'def': 'a hamper that holds dirty clothes to be washed or wet clothes to be dried', 'name': 'clothes_hamper'}, {'frequency': 'c', 'synset': 'clothespin.n.01', 'synonyms': ['clothespin', 'clothes_peg'], 'id': 274, 'def': 'wood or plastic fastener; for holding clothes on a clothesline', 'name': 'clothespin'}, {'frequency': 'r', 'synset': 'clutch_bag.n.01', 'synonyms': ['clutch_bag'], 'id': 275, 'def': "a woman's strapless purse that is carried in the hand", 'name': 'clutch_bag'}, {'frequency': 'f', 'synset': 'coaster.n.03', 'synonyms': ['coaster'], 'id': 276, 'def': 'a covering (plate or mat) that protects the surface of a table', 'name': 'coaster'}, {'frequency': 'f', 'synset': 'coat.n.01', 'synonyms': ['coat'], 'id': 277, 'def': 'an outer garment that has sleeves and covers the body from shoulder down', 'name': 'coat'}, {'frequency': 'c', 'synset': 'coat_hanger.n.01', 'synonyms': ['coat_hanger', 'clothes_hanger', 'dress_hanger'], 'id': 278, 'def': "a hanger that is shaped like a person's shoulders", 'name': 'coat_hanger'}, {'frequency': 'c', 'synset': 'coatrack.n.01', 'synonyms': ['coatrack', 'hatrack'], 'id': 279, 'def': 'a rack with hooks for temporarily holding coats and hats', 'name': 'coatrack'}, {'frequency': 'c', 'synset': 'cock.n.04', 'synonyms': ['cock', 'rooster'], 'id': 280, 'def': 'adult male chicken', 'name': 'cock'}, {'frequency': 'r', 'synset': 'cockroach.n.01', 'synonyms': ['cockroach'], 'id': 281, 'def': 'any of numerous chiefly nocturnal insects; some are domestic pests', 'name': 'cockroach'}, {'frequency': 'r', 'synset': 'cocoa.n.01', 'synonyms': ['cocoa_(beverage)', 'hot_chocolate_(beverage)', 'drinking_chocolate'], 'id': 282, 'def': 'a beverage made from cocoa powder and milk and sugar; usually drunk hot', 'name': 'cocoa_(beverage)'}, {'frequency': 'c', 'synset': 'coconut.n.02', 'synonyms': ['coconut', 'cocoanut'], 'id': 283, 'def': 'large hard-shelled brown oval nut with a fibrous husk', 'name': 'coconut'}, {'frequency': 'f', 'synset': 'coffee_maker.n.01', 'synonyms': ['coffee_maker', 'coffee_machine'], 'id': 284, 'def': 'a kitchen appliance for brewing coffee automatically', 'name': 'coffee_maker'}, {'frequency': 'f', 'synset': 'coffee_table.n.01', 'synonyms': ['coffee_table', 'cocktail_table'], 'id': 285, 'def': 'low table where magazines can be placed and coffee or cocktails are served', 'name': 'coffee_table'}, {'frequency': 'c', 'synset': 'coffeepot.n.01', 'synonyms': ['coffeepot'], 'id': 286, 'def': 'tall pot in which coffee is brewed', 'name': 'coffeepot'}, {'frequency': 'r', 'synset': 'coil.n.05', 'synonyms': ['coil'], 'id': 287, 'def': 'tubing that is wound in a spiral', 'name': 'coil'}, {'frequency': 'c', 'synset': 'coin.n.01', 'synonyms': ['coin'], 'id': 288, 'def': 'a flat metal piece (usually a disc) used as money', 'name': 'coin'}, {'frequency': 'c', 'synset': 'colander.n.01', 'synonyms': ['colander', 'cullender'], 'id': 289, 'def': 'bowl-shaped strainer; used to wash or drain foods', 'name': 'colander'}, {'frequency': 'c', 'synset': 'coleslaw.n.01', 'synonyms': ['coleslaw', 'slaw'], 'id': 290, 'def': 'basically shredded cabbage', 'name': 'coleslaw'}, {'frequency': 'r', 'synset': 'coloring_material.n.01', 'synonyms': ['coloring_material', 'colouring_material'], 'id': 291, 
'def': 'any material used for its color', 'name': 'coloring_material'}, {'frequency': 'r', 'synset': 'combination_lock.n.01', 'synonyms': ['combination_lock'], 'id': 292, 'def': 'lock that can be opened only by turning dials in a special sequence', 'name': 'combination_lock'}, {'frequency': 'c', 'synset': 'comforter.n.04', 'synonyms': ['pacifier', 'teething_ring'], 'id': 293, 'def': 'device used for an infant to suck or bite on', 'name': 'pacifier'}, {'frequency': 'r', 'synset': 'comic_book.n.01', 'synonyms': ['comic_book'], 'id': 294, 'def': 'a magazine devoted to comic strips', 'name': 'comic_book'}, {'frequency': 'r', 'synset': 'compass.n.01', 'synonyms': ['compass'], 'id': 295, 'def': 'navigational instrument for finding directions', 'name': 'compass'}, {'frequency': 'f', 'synset': 'computer_keyboard.n.01', 'synonyms': ['computer_keyboard', 'keyboard_(computer)'], 'id': 296, 'def': 'a keyboard that is a data input device for computers', 'name': 'computer_keyboard'}, {'frequency': 'f', 'synset': 'condiment.n.01', 'synonyms': ['condiment'], 'id': 297, 'def': 'a preparation (a sauce or relish or spice) to enhance flavor or enjoyment', 'name': 'condiment'}, {'frequency': 'f', 'synset': 'cone.n.01', 'synonyms': ['cone', 'traffic_cone'], 'id': 298, 'def': 'a cone-shaped object used to direct traffic', 'name': 'cone'}, {'frequency': 'f', 'synset': 'control.n.09', 'synonyms': ['control', 'controller'], 'id': 299, 'def': 'a mechanism that controls the operation of a machine', 'name': 'control'}, {'frequency': 'r', 'synset': 'convertible.n.01', 'synonyms': ['convertible_(automobile)'], 'id': 300, 'def': 'a car that has top that can be folded or removed', 'name': 'convertible_(automobile)'}, {'frequency': 'r', 'synset': 'convertible.n.03', 'synonyms': ['sofa_bed'], 'id': 301, 'def': 'a sofa that can be converted into a bed', 'name': 'sofa_bed'}, {'frequency': 'r', 'synset': 'cooker.n.01', 'synonyms': ['cooker'], 'id': 302, 'def': 'a utensil for cooking', 'name': 'cooker'}, {'frequency': 'f', 'synset': 'cookie.n.01', 'synonyms': ['cookie', 'cooky', 'biscuit_(cookie)'], 'id': 303, 'def': "any of various small flat sweet cakes (`biscuit' is the British term)", 'name': 'cookie'}, {'frequency': 'r', 'synset': 'cooking_utensil.n.01', 'synonyms': ['cooking_utensil'], 'id': 304, 'def': 'a kitchen utensil made of material that does not melt easily; used for cooking', 'name': 'cooking_utensil'}, {'frequency': 'f', 'synset': 'cooler.n.01', 'synonyms': ['cooler_(for_food)', 'ice_chest'], 'id': 305, 'def': 'an insulated box for storing food often with ice', 'name': 'cooler_(for_food)'}, {'frequency': 'f', 'synset': 'cork.n.04', 'synonyms': ['cork_(bottle_plug)', 'bottle_cork'], 'id': 306, 'def': 'the plug in the mouth of a bottle (especially a wine bottle)', 'name': 'cork_(bottle_plug)'}, {'frequency': 'r', 'synset': 'corkboard.n.01', 'synonyms': ['corkboard'], 'id': 307, 'def': 'a sheet consisting of cork granules', 'name': 'corkboard'}, {'frequency': 'c', 'synset': 'corkscrew.n.01', 'synonyms': ['corkscrew', 'bottle_screw'], 'id': 308, 'def': 'a bottle opener that pulls corks', 'name': 'corkscrew'}, {'frequency': 'f', 'synset': 'corn.n.03', 'synonyms': ['edible_corn', 'corn', 'maize'], 'id': 309, 'def': 'ears or kernels of corn that can be prepared and served for human food (only mark individual ears or kernels)', 'name': 'edible_corn'}, {'frequency': 'r', 'synset': 'cornbread.n.01', 'synonyms': ['cornbread'], 'id': 310, 'def': 'bread made primarily of cornmeal', 'name': 'cornbread'}, {'frequency': 'c', 
'synset': 'cornet.n.01', 'synonyms': ['cornet', 'horn', 'trumpet'], 'id': 311, 'def': 'a brass musical instrument with a narrow tube and a flared bell and many valves', 'name': 'cornet'}, {'frequency': 'c', 'synset': 'cornice.n.01', 'synonyms': ['cornice', 'valance', 'valance_board', 'pelmet'], 'id': 312, 'def': 'a decorative framework to conceal curtain fixtures at the top of a window casing', 'name': 'cornice'}, {'frequency': 'r', 'synset': 'cornmeal.n.01', 'synonyms': ['cornmeal'], 'id': 313, 'def': 'coarsely ground corn', 'name': 'cornmeal'}, {'frequency': 'c', 'synset': 'corset.n.01', 'synonyms': ['corset', 'girdle'], 'id': 314, 'def': "a woman's close-fitting foundation garment", 'name': 'corset'}, {'frequency': 'c', 'synset': 'costume.n.04', 'synonyms': ['costume'], 'id': 315, 'def': 'the attire characteristic of a country or a time or a social class', 'name': 'costume'}, {'frequency': 'r', 'synset': 'cougar.n.01', 'synonyms': ['cougar', 'puma', 'catamount', 'mountain_lion', 'panther'], 'id': 316, 'def': 'large American feline resembling a lion', 'name': 'cougar'}, {'frequency': 'r', 'synset': 'coverall.n.01', 'synonyms': ['coverall'], 'id': 317, 'def': 'a loose-fitting protective garment that is worn over other clothing', 'name': 'coverall'}, {'frequency': 'c', 'synset': 'cowbell.n.01', 'synonyms': ['cowbell'], 'id': 318, 'def': 'a bell hung around the neck of cow so that the cow can be easily located', 'name': 'cowbell'}, {'frequency': 'f', 'synset': 'cowboy_hat.n.01', 'synonyms': ['cowboy_hat', 'ten-gallon_hat'], 'id': 319, 'def': 'a hat with a wide brim and a soft crown; worn by American ranch hands', 'name': 'cowboy_hat'}, {'frequency': 'c', 'synset': 'crab.n.01', 'synonyms': ['crab_(animal)'], 'id': 320, 'def': 'decapod having eyes on short stalks and a broad flattened shell and pincers', 'name': 'crab_(animal)'}, {'frequency': 'r', 'synset': 'crab.n.05', 'synonyms': ['crabmeat'], 'id': 321, 'def': 'the edible flesh of any of various crabs', 'name': 'crabmeat'}, {'frequency': 'c', 'synset': 'cracker.n.01', 'synonyms': ['cracker'], 'id': 322, 'def': 'a thin crisp wafer', 'name': 'cracker'}, {'frequency': 'r', 'synset': 'crape.n.01', 'synonyms': ['crape', 'crepe', 'French_pancake'], 'id': 323, 'def': 'small very thin pancake', 'name': 'crape'}, {'frequency': 'f', 'synset': 'crate.n.01', 'synonyms': ['crate'], 'id': 324, 'def': 'a rugged box (usually made of wood); used for shipping', 'name': 'crate'}, {'frequency': 'c', 'synset': 'crayon.n.01', 'synonyms': ['crayon', 'wax_crayon'], 'id': 325, 'def': 'writing or drawing implement made of a colored stick of composition wax', 'name': 'crayon'}, {'frequency': 'r', 'synset': 'cream_pitcher.n.01', 'synonyms': ['cream_pitcher'], 'id': 326, 'def': 'a small pitcher for serving cream', 'name': 'cream_pitcher'}, {'frequency': 'c', 'synset': 'crescent_roll.n.01', 'synonyms': ['crescent_roll', 'croissant'], 'id': 327, 'def': 'very rich flaky crescent-shaped roll', 'name': 'crescent_roll'}, {'frequency': 'c', 'synset': 'crib.n.01', 'synonyms': ['crib', 'cot'], 'id': 328, 'def': 'baby bed with high sides made of slats', 'name': 'crib'}, {'frequency': 'c', 'synset': 'crock.n.03', 'synonyms': ['crock_pot', 'earthenware_jar'], 'id': 329, 'def': 'an earthen jar (made of baked clay) or a modern electric crockpot', 'name': 'crock_pot'}, {'frequency': 'f', 'synset': 'crossbar.n.01', 'synonyms': ['crossbar'], 'id': 330, 'def': 'a horizontal bar that goes across something', 'name': 'crossbar'}, {'frequency': 'r', 'synset': 'crouton.n.01', 'synonyms': 
['crouton'], 'id': 331, 'def': 'a small piece of toasted or fried bread; served in soup or salads', 'name': 'crouton'}, {'frequency': 'c', 'synset': 'crow.n.01', 'synonyms': ['crow'], 'id': 332, 'def': 'black birds having a raucous call', 'name': 'crow'}, {'frequency': 'r', 'synset': 'crowbar.n.01', 'synonyms': ['crowbar', 'wrecking_bar', 'pry_bar'], 'id': 333, 'def': 'a heavy iron lever with one end forged into a wedge', 'name': 'crowbar'}, {'frequency': 'c', 'synset': 'crown.n.04', 'synonyms': ['crown'], 'id': 334, 'def': 'an ornamental jeweled headdress signifying sovereignty', 'name': 'crown'}, {'frequency': 'c', 'synset': 'crucifix.n.01', 'synonyms': ['crucifix'], 'id': 335, 'def': 'representation of the cross on which Jesus died', 'name': 'crucifix'}, {'frequency': 'c', 'synset': 'cruise_ship.n.01', 'synonyms': ['cruise_ship', 'cruise_liner'], 'id': 336, 'def': 'a passenger ship used commercially for pleasure cruises', 'name': 'cruise_ship'}, {'frequency': 'c', 'synset': 'cruiser.n.01', 'synonyms': ['police_cruiser', 'patrol_car', 'police_car', 'squad_car'], 'id': 337, 'def': 'a car in which policemen cruise the streets', 'name': 'police_cruiser'}, {'frequency': 'f', 'synset': 'crumb.n.03', 'synonyms': ['crumb'], 'id': 338, 'def': 'small piece of e.g. bread or cake', 'name': 'crumb'}, {'frequency': 'c', 'synset': 'crutch.n.01', 'synonyms': ['crutch'], 'id': 339, 'def': 'a wooden or metal staff that fits under the armpit and reaches to the ground', 'name': 'crutch'}, {'frequency': 'c', 'synset': 'cub.n.03', 'synonyms': ['cub_(animal)'], 'id': 340, 'def': 'the young of certain carnivorous mammals such as the bear or wolf or lion', 'name': 'cub_(animal)'}, {'frequency': 'c', 'synset': 'cube.n.05', 'synonyms': ['cube', 'square_block'], 'id': 341, 'def': 'a block in the (approximate) shape of a cube', 'name': 'cube'}, {'frequency': 'f', 'synset': 'cucumber.n.02', 'synonyms': ['cucumber', 'cuke'], 'id': 342, 'def': 'cylindrical green fruit with thin green rind and white flesh eaten as a vegetable', 'name': 'cucumber'}, {'frequency': 'c', 'synset': 'cufflink.n.01', 'synonyms': ['cufflink'], 'id': 343, 'def': 'jewelry consisting of linked buttons used to fasten the cuffs of a shirt', 'name': 'cufflink'}, {'frequency': 'f', 'synset': 'cup.n.01', 'synonyms': ['cup'], 'id': 344, 'def': 'a small open container usually used for drinking; usually has a handle', 'name': 'cup'}, {'frequency': 'c', 'synset': 'cup.n.08', 'synonyms': ['trophy_cup'], 'id': 345, 'def': 'a metal award or cup-shaped vessel with handles that is awarded as a trophy to a competition winner', 'name': 'trophy_cup'}, {'frequency': 'f', 'synset': 'cupboard.n.01', 'synonyms': ['cupboard', 'closet'], 'id': 346, 'def': 'a small room (or recess) or cabinet used for storage space', 'name': 'cupboard'}, {'frequency': 'f', 'synset': 'cupcake.n.01', 'synonyms': ['cupcake'], 'id': 347, 'def': 'small cake baked in a muffin tin', 'name': 'cupcake'}, {'frequency': 'r', 'synset': 'curler.n.01', 'synonyms': ['hair_curler', 'hair_roller', 'hair_crimper'], 'id': 348, 'def': 'a cylindrical tube around which the hair is wound to curl it', 'name': 'hair_curler'}, {'frequency': 'r', 'synset': 'curling_iron.n.01', 'synonyms': ['curling_iron'], 'id': 349, 'def': 'a cylindrical home appliance that heats hair that has been curled around it', 'name': 'curling_iron'}, {'frequency': 'f', 'synset': 'curtain.n.01', 'synonyms': ['curtain', 'drapery'], 'id': 350, 'def': 'hanging cloth used as a blind (especially for a window)', 'name': 'curtain'}, 
{'frequency': 'f', 'synset': 'cushion.n.03', 'synonyms': ['cushion'], 'id': 351, 'def': 'a soft bag filled with air or padding such as feathers or foam rubber', 'name': 'cushion'}, {'frequency': 'r', 'synset': 'cylinder.n.04', 'synonyms': ['cylinder'], 'id': 352, 'def': 'a cylindrical container', 'name': 'cylinder'}, {'frequency': 'r', 'synset': 'cymbal.n.01', 'synonyms': ['cymbal'], 'id': 353, 'def': 'a percussion instrument consisting of a concave brass disk', 'name': 'cymbal'}, {'frequency': 'r', 'synset': 'dagger.n.01', 'synonyms': ['dagger'], 'id': 354, 'def': 'a short knife with a pointed blade used for piercing or stabbing', 'name': 'dagger'}, {'frequency': 'r', 'synset': 'dalmatian.n.02', 'synonyms': ['dalmatian'], 'id': 355, 'def': 'a large breed having a smooth white coat with black or brown spots', 'name': 'dalmatian'}, {'frequency': 'c', 'synset': 'dartboard.n.01', 'synonyms': ['dartboard'], 'id': 356, 'def': 'a circular board of wood or cork used as the target in the game of darts', 'name': 'dartboard'}, {'frequency': 'r', 'synset': 'date.n.08', 'synonyms': ['date_(fruit)'], 'id': 357, 'def': 'sweet edible fruit of the date palm with a single long woody seed', 'name': 'date_(fruit)'}, {'frequency': 'f', 'synset': 'deck_chair.n.01', 'synonyms': ['deck_chair', 'beach_chair'], 'id': 358, 'def': 'a folding chair for use outdoors; a wooden frame supports a length of canvas', 'name': 'deck_chair'}, {'frequency': 'c', 'synset': 'deer.n.01', 'synonyms': ['deer', 'cervid'], 'id': 359, 'def': "distinguished from Bovidae by the male's having solid deciduous antlers", 'name': 'deer'}, {'frequency': 'c', 'synset': 'dental_floss.n.01', 'synonyms': ['dental_floss', 'floss'], 'id': 360, 'def': 'a soft thread for cleaning the spaces between the teeth', 'name': 'dental_floss'}, {'frequency': 'f', 'synset': 'desk.n.01', 'synonyms': ['desk'], 'id': 361, 'def': 'a piece of furniture with a writing surface and usually drawers or other compartments', 'name': 'desk'}, {'frequency': 'r', 'synset': 'detergent.n.01', 'synonyms': ['detergent'], 'id': 362, 'def': 'a surface-active chemical widely used in industry and laundering', 'name': 'detergent'}, {'frequency': 'c', 'synset': 'diaper.n.01', 'synonyms': ['diaper'], 'id': 363, 'def': 'garment consisting of a folded cloth drawn up between the legs and fastened at the waist', 'name': 'diaper'}, {'frequency': 'r', 'synset': 'diary.n.01', 'synonyms': ['diary', 'journal'], 'id': 364, 'def': 'yearly planner book', 'name': 'diary'}, {'frequency': 'r', 'synset': 'die.n.01', 'synonyms': ['die', 'dice'], 'id': 365, 'def': 'a small cube with 1 to 6 spots on the six faces; used in gambling', 'name': 'die'}, {'frequency': 'r', 'synset': 'dinghy.n.01', 'synonyms': ['dinghy', 'dory', 'rowboat'], 'id': 366, 'def': 'a small boat of shallow draft with seats and oars with which it is propelled', 'name': 'dinghy'}, {'frequency': 'f', 'synset': 'dining_table.n.01', 'synonyms': ['dining_table'], 'id': 367, 'def': 'a table at which meals are served', 'name': 'dining_table'}, {'frequency': 'r', 'synset': 'dinner_jacket.n.01', 'synonyms': ['tux', 'tuxedo'], 'id': 368, 'def': 'semiformal evening dress for men', 'name': 'tux'}, {'frequency': 'f', 'synset': 'dish.n.01', 'synonyms': ['dish'], 'id': 369, 'def': 'a piece of dishware normally used as a container for holding or serving food', 'name': 'dish'}, {'frequency': 'c', 'synset': 'dish.n.05', 'synonyms': ['dish_antenna'], 'id': 370, 'def': 'directional antenna consisting of a parabolic reflector', 'name': 'dish_antenna'}, 
{'frequency': 'c', 'synset': 'dishrag.n.01', 'synonyms': ['dishrag', 'dishcloth'], 'id': 371, 'def': 'a cloth for washing dishes or cleaning in general', 'name': 'dishrag'}, {'frequency': 'f', 'synset': 'dishtowel.n.01', 'synonyms': ['dishtowel', 'tea_towel'], 'id': 372, 'def': 'a towel for drying dishes', 'name': 'dishtowel'}, {'frequency': 'f', 'synset': 'dishwasher.n.01', 'synonyms': ['dishwasher', 'dishwashing_machine'], 'id': 373, 'def': 'a machine for washing dishes', 'name': 'dishwasher'}, {'frequency': 'r', 'synset': 'dishwasher_detergent.n.01', 'synonyms': ['dishwasher_detergent', 'dishwashing_detergent', 'dishwashing_liquid', 'dishsoap'], 'id': 374, 'def': 'dishsoap or dish detergent designed for use in dishwashers', 'name': 'dishwasher_detergent'}, {'frequency': 'f', 'synset': 'dispenser.n.01', 'synonyms': ['dispenser'], 'id': 375, 'def': 'a container so designed that the contents can be used in prescribed amounts', 'name': 'dispenser'}, {'frequency': 'r', 'synset': 'diving_board.n.01', 'synonyms': ['diving_board'], 'id': 376, 'def': 'a springboard from which swimmers can dive', 'name': 'diving_board'}, {'frequency': 'f', 'synset': 'dixie_cup.n.01', 'synonyms': ['Dixie_cup', 'paper_cup'], 'id': 377, 'def': 'a disposable cup made of paper; for holding drinks', 'name': 'Dixie_cup'}, {'frequency': 'f', 'synset': 'dog.n.01', 'synonyms': ['dog'], 'id': 378, 'def': 'a common domesticated dog', 'name': 'dog'}, {'frequency': 'f', 'synset': 'dog_collar.n.01', 'synonyms': ['dog_collar'], 'id': 379, 'def': 'a collar for a dog', 'name': 'dog_collar'}, {'frequency': 'f', 'synset': 'doll.n.01', 'synonyms': ['doll'], 'id': 380, 'def': 'a toy replica of a HUMAN (NOT AN ANIMAL)', 'name': 'doll'}, {'frequency': 'r', 'synset': 'dollar.n.02', 'synonyms': ['dollar', 'dollar_bill', 'one_dollar_bill'], 'id': 381, 'def': 'a piece of paper money worth one dollar', 'name': 'dollar'}, {'frequency': 'r', 'synset': 'dollhouse.n.01', 'synonyms': ['dollhouse', "doll's_house"], 'id': 382, 'def': "a house so small that it is likened to a child's plaything", 'name': 'dollhouse'}, {'frequency': 'c', 'synset': 'dolphin.n.02', 'synonyms': ['dolphin'], 'id': 383, 'def': 'any of various small toothed whales with a beaklike snout; larger than porpoises', 'name': 'dolphin'}, {'frequency': 'c', 'synset': 'domestic_ass.n.01', 'synonyms': ['domestic_ass', 'donkey'], 'id': 384, 'def': 'domestic beast of burden descended from the African wild ass; patient but stubborn', 'name': 'domestic_ass'}, {'frequency': 'f', 'synset': 'doorknob.n.01', 'synonyms': ['doorknob', 'doorhandle'], 'id': 385, 'def': "a knob used to open a door (often called `doorhandle' in Great Britain)", 'name': 'doorknob'}, {'frequency': 'c', 'synset': 'doormat.n.02', 'synonyms': ['doormat', 'welcome_mat'], 'id': 386, 'def': 'a mat placed outside an exterior door for wiping the shoes before entering', 'name': 'doormat'}, {'frequency': 'f', 'synset': 'doughnut.n.02', 'synonyms': ['doughnut', 'donut'], 'id': 387, 'def': 'a small ring-shaped friedcake', 'name': 'doughnut'}, {'frequency': 'r', 'synset': 'dove.n.01', 'synonyms': ['dove'], 'id': 388, 'def': 'any of numerous small pigeons', 'name': 'dove'}, {'frequency': 'r', 'synset': 'dragonfly.n.01', 'synonyms': ['dragonfly'], 'id': 389, 'def': 'slender-bodied non-stinging insect having iridescent wings that are outspread at rest', 'name': 'dragonfly'}, {'frequency': 'f', 'synset': 'drawer.n.01', 'synonyms': ['drawer'], 'id': 390, 'def': 'a boxlike container in a piece of furniture; made so as to slide in and 
out', 'name': 'drawer'}, {'frequency': 'c', 'synset': 'drawers.n.01', 'synonyms': ['underdrawers', 'boxers', 'boxershorts'], 'id': 391, 'def': 'underpants worn by men', 'name': 'underdrawers'}, {'frequency': 'f', 'synset': 'dress.n.01', 'synonyms': ['dress', 'frock'], 'id': 392, 'def': 'a one-piece garment for a woman; has skirt and bodice', 'name': 'dress'}, {'frequency': 'c', 'synset': 'dress_hat.n.01', 'synonyms': ['dress_hat', 'high_hat', 'opera_hat', 'silk_hat', 'top_hat'], 'id': 393, 'def': "a man's hat with a tall crown; usually covered with silk or with beaver fur", 'name': 'dress_hat'}, {'frequency': 'f', 'synset': 'dress_suit.n.01', 'synonyms': ['dress_suit'], 'id': 394, 'def': 'formalwear consisting of full evening dress for men', 'name': 'dress_suit'}, {'frequency': 'f', 'synset': 'dresser.n.05', 'synonyms': ['dresser'], 'id': 395, 'def': 'a cabinet with shelves', 'name': 'dresser'}, {'frequency': 'c', 'synset': 'drill.n.01', 'synonyms': ['drill'], 'id': 396, 'def': 'a tool with a sharp rotating point for making holes in hard materials', 'name': 'drill'}, {'frequency': 'r', 'synset': 'drone.n.04', 'synonyms': ['drone'], 'id': 397, 'def': 'an aircraft without a pilot that is operated by remote control', 'name': 'drone'}, {'frequency': 'r', 'synset': 'dropper.n.01', 'synonyms': ['dropper', 'eye_dropper'], 'id': 398, 'def': 'pipet consisting of a small tube with a vacuum bulb at one end for drawing liquid in and releasing it a drop at a time', 'name': 'dropper'}, {'frequency': 'c', 'synset': 'drum.n.01', 'synonyms': ['drum_(musical_instrument)'], 'id': 399, 'def': 'a musical percussion instrument; usually consists of a hollow cylinder with a membrane stretched across each end', 'name': 'drum_(musical_instrument)'}, {'frequency': 'r', 'synset': 'drumstick.n.02', 'synonyms': ['drumstick'], 'id': 400, 'def': 'a stick used for playing a drum', 'name': 'drumstick'}, {'frequency': 'f', 'synset': 'duck.n.01', 'synonyms': ['duck'], 'id': 401, 'def': 'small web-footed broad-billed swimming bird', 'name': 'duck'}, {'frequency': 'c', 'synset': 'duckling.n.02', 'synonyms': ['duckling'], 'id': 402, 'def': 'young duck', 'name': 'duckling'}, {'frequency': 'c', 'synset': 'duct_tape.n.01', 'synonyms': ['duct_tape'], 'id': 403, 'def': 'a wide silvery adhesive tape', 'name': 'duct_tape'}, {'frequency': 'f', 'synset': 'duffel_bag.n.01', 'synonyms': ['duffel_bag', 'duffle_bag', 'duffel', 'duffle'], 'id': 404, 'def': 'a large cylindrical bag of heavy cloth (does not include suitcases)', 'name': 'duffel_bag'}, {'frequency': 'r', 'synset': 'dumbbell.n.01', 'synonyms': ['dumbbell'], 'id': 405, 'def': 'an exercising weight with two ball-like ends connected by a short handle', 'name': 'dumbbell'}, {'frequency': 'c', 'synset': 'dumpster.n.01', 'synonyms': ['dumpster'], 'id': 406, 'def': 'a container designed to receive and transport and dump waste', 'name': 'dumpster'}, {'frequency': 'r', 'synset': 'dustpan.n.02', 'synonyms': ['dustpan'], 'id': 407, 'def': 'a short-handled receptacle into which dust can be swept', 'name': 'dustpan'}, {'frequency': 'c', 'synset': 'eagle.n.01', 'synonyms': ['eagle'], 'id': 408, 'def': 'large birds of prey noted for their broad wings and strong soaring flight', 'name': 'eagle'}, {'frequency': 'f', 'synset': 'earphone.n.01', 'synonyms': ['earphone', 'earpiece', 'headphone'], 'id': 409, 'def': 'device for listening to audio that is held over or inserted into the ear', 'name': 'earphone'}, {'frequency': 'r', 'synset': 'earplug.n.01', 'synonyms': ['earplug'], 'id': 410, 'def': 'a 
soft plug that is inserted into the ear canal to block sound', 'name': 'earplug'}, {'frequency': 'f', 'synset': 'earring.n.01', 'synonyms': ['earring'], 'id': 411, 'def': 'jewelry to ornament the ear', 'name': 'earring'}, {'frequency': 'c', 'synset': 'easel.n.01', 'synonyms': ['easel'], 'id': 412, 'def': "an upright tripod for displaying something (usually an artist's canvas)", 'name': 'easel'}, {'frequency': 'r', 'synset': 'eclair.n.01', 'synonyms': ['eclair'], 'id': 413, 'def': 'oblong cream puff', 'name': 'eclair'}, {'frequency': 'r', 'synset': 'eel.n.01', 'synonyms': ['eel'], 'id': 414, 'def': 'an elongate fish with fatty flesh', 'name': 'eel'}, {'frequency': 'f', 'synset': 'egg.n.02', 'synonyms': ['egg', 'eggs'], 'id': 415, 'def': 'oval reproductive body of a fowl (especially a hen) used as food', 'name': 'egg'}, {'frequency': 'r', 'synset': 'egg_roll.n.01', 'synonyms': ['egg_roll', 'spring_roll'], 'id': 416, 'def': 'minced vegetables and meat wrapped in a pancake and fried', 'name': 'egg_roll'}, {'frequency': 'c', 'synset': 'egg_yolk.n.01', 'synonyms': ['egg_yolk', 'yolk_(egg)'], 'id': 417, 'def': 'the yellow spherical part of an egg', 'name': 'egg_yolk'}, {'frequency': 'c', 'synset': 'eggbeater.n.02', 'synonyms': ['eggbeater', 'eggwhisk'], 'id': 418, 'def': 'a mixer for beating eggs or whipping cream', 'name': 'eggbeater'}, {'frequency': 'c', 'synset': 'eggplant.n.01', 'synonyms': ['eggplant', 'aubergine'], 'id': 419, 'def': 'egg-shaped vegetable having a shiny skin typically dark purple', 'name': 'eggplant'}, {'frequency': 'r', 'synset': 'electric_chair.n.01', 'synonyms': ['electric_chair'], 'id': 420, 'def': 'a chair-shaped instrument of execution by electrocution', 'name': 'electric_chair'}, {'frequency': 'f', 'synset': 'electric_refrigerator.n.01', 'synonyms': ['refrigerator'], 'id': 421, 'def': 'a refrigerator in which the coolant is pumped around by an electric motor', 'name': 'refrigerator'}, {'frequency': 'f', 'synset': 'elephant.n.01', 'synonyms': ['elephant'], 'id': 422, 'def': 'a common elephant', 'name': 'elephant'}, {'frequency': 'c', 'synset': 'elk.n.01', 'synonyms': ['elk', 'moose'], 'id': 423, 'def': 'large northern deer with enormous flattened antlers in the male', 'name': 'elk'}, {'frequency': 'c', 'synset': 'envelope.n.01', 'synonyms': ['envelope'], 'id': 424, 'def': 'a flat (usually rectangular) container for a letter, thin package, etc.', 'name': 'envelope'}, {'frequency': 'c', 'synset': 'eraser.n.01', 'synonyms': ['eraser'], 'id': 425, 'def': 'an implement used to erase something', 'name': 'eraser'}, {'frequency': 'r', 'synset': 'escargot.n.01', 'synonyms': ['escargot'], 'id': 426, 'def': 'edible snail usually served in the shell with a sauce of melted butter and garlic', 'name': 'escargot'}, {'frequency': 'r', 'synset': 'eyepatch.n.01', 'synonyms': ['eyepatch'], 'id': 427, 'def': 'a protective cloth covering for an injured eye', 'name': 'eyepatch'}, {'frequency': 'r', 'synset': 'falcon.n.01', 'synonyms': ['falcon'], 'id': 428, 'def': 'birds of prey having long pointed powerful wings adapted for swift flight', 'name': 'falcon'}, {'frequency': 'f', 'synset': 'fan.n.01', 'synonyms': ['fan'], 'id': 429, 'def': 'a device for creating a current of air by movement of a surface or surfaces', 'name': 'fan'}, {'frequency': 'f', 'synset': 'faucet.n.01', 'synonyms': ['faucet', 'spigot', 'tap'], 'id': 430, 'def': 'a regulator for controlling the flow of a liquid from a reservoir', 'name': 'faucet'}, {'frequency': 'r', 'synset': 'fedora.n.01', 'synonyms': ['fedora'], 'id': 
431, 'def': 'a hat made of felt with a creased crown', 'name': 'fedora'}, {'frequency': 'r', 'synset': 'ferret.n.02', 'synonyms': ['ferret'], 'id': 432, 'def': 'domesticated albino variety of the European polecat bred for hunting rats and rabbits', 'name': 'ferret'}, {'frequency': 'c', 'synset': 'ferris_wheel.n.01', 'synonyms': ['Ferris_wheel'], 'id': 433, 'def': 'a large wheel with suspended seats that remain upright as the wheel rotates', 'name': 'Ferris_wheel'}, {'frequency': 'c', 'synset': 'ferry.n.01', 'synonyms': ['ferry', 'ferryboat'], 'id': 434, 'def': 'a boat that transports people or vehicles across a body of water and operates on a regular schedule', 'name': 'ferry'}, {'frequency': 'r', 'synset': 'fig.n.04', 'synonyms': ['fig_(fruit)'], 'id': 435, 'def': 'fleshy sweet pear-shaped yellowish or purple fruit eaten fresh or preserved or dried', 'name': 'fig_(fruit)'}, {'frequency': 'c', 'synset': 'fighter.n.02', 'synonyms': ['fighter_jet', 'fighter_aircraft', 'attack_aircraft'], 'id': 436, 'def': 'a high-speed military or naval airplane designed to destroy enemy targets', 'name': 'fighter_jet'}, {'frequency': 'f', 'synset': 'figurine.n.01', 'synonyms': ['figurine'], 'id': 437, 'def': 'a small carved or molded figure', 'name': 'figurine'}, {'frequency': 'c', 'synset': 'file.n.03', 'synonyms': ['file_cabinet', 'filing_cabinet'], 'id': 438, 'def': 'office furniture consisting of a container for keeping papers in order', 'name': 'file_cabinet'}, {'frequency': 'r', 'synset': 'file.n.04', 'synonyms': ['file_(tool)'], 'id': 439, 'def': 'a steel hand tool with small sharp teeth on some or all of its surfaces; used for smoothing wood or metal', 'name': 'file_(tool)'}, {'frequency': 'f', 'synset': 'fire_alarm.n.02', 'synonyms': ['fire_alarm', 'smoke_alarm'], 'id': 440, 'def': 'an alarm that is tripped off by fire or smoke', 'name': 'fire_alarm'}, {'frequency': 'f', 'synset': 'fire_engine.n.01', 'synonyms': ['fire_engine', 'fire_truck'], 'id': 441, 'def': 'large trucks that carry firefighters and equipment to the site of a fire', 'name': 'fire_engine'}, {'frequency': 'f', 'synset': 'fire_extinguisher.n.01', 'synonyms': ['fire_extinguisher', 'extinguisher'], 'id': 442, 'def': 'a manually operated device for extinguishing small fires', 'name': 'fire_extinguisher'}, {'frequency': 'c', 'synset': 'fire_hose.n.01', 'synonyms': ['fire_hose'], 'id': 443, 'def': 'a large hose that carries water from a fire hydrant to the site of the fire', 'name': 'fire_hose'}, {'frequency': 'f', 'synset': 'fireplace.n.01', 'synonyms': ['fireplace'], 'id': 444, 'def': 'an open recess in a wall at the base of a chimney where a fire can be built', 'name': 'fireplace'}, {'frequency': 'f', 'synset': 'fireplug.n.01', 'synonyms': ['fireplug', 'fire_hydrant', 'hydrant'], 'id': 445, 'def': 'an upright hydrant for drawing water to use in fighting a fire', 'name': 'fireplug'}, {'frequency': 'r', 'synset': 'first-aid_kit.n.01', 'synonyms': ['first-aid_kit'], 'id': 446, 'def': 'kit consisting of a set of bandages and medicines for giving first aid', 'name': 'first-aid_kit'}, {'frequency': 'f', 'synset': 'fish.n.01', 'synonyms': ['fish'], 'id': 447, 'def': 'any of various mostly cold-blooded aquatic vertebrates usually having scales and breathing through gills', 'name': 'fish'}, {'frequency': 'c', 'synset': 'fish.n.02', 'synonyms': ['fish_(food)'], 'id': 448, 'def': 'the flesh of fish used as food', 'name': 'fish_(food)'}, {'frequency': 'r', 'synset': 'fishbowl.n.02', 'synonyms': ['fishbowl', 'goldfish_bowl'], 'id': 449, 'def': 'a 
transparent bowl in which small fish are kept', 'name': 'fishbowl'}, {'frequency': 'c', 'synset': 'fishing_rod.n.01', 'synonyms': ['fishing_rod', 'fishing_pole'], 'id': 450, 'def': 'a rod that is used in fishing to extend the fishing line', 'name': 'fishing_rod'}, {'frequency': 'f', 'synset': 'flag.n.01', 'synonyms': ['flag'], 'id': 451, 'def': 'emblem usually consisting of a rectangular piece of cloth of distinctive design (do not include pole)', 'name': 'flag'}, {'frequency': 'f', 'synset': 'flagpole.n.02', 'synonyms': ['flagpole', 'flagstaff'], 'id': 452, 'def': 'a tall staff or pole on which a flag is raised', 'name': 'flagpole'}, {'frequency': 'c', 'synset': 'flamingo.n.01', 'synonyms': ['flamingo'], 'id': 453, 'def': 'large pink web-footed bird with down-bent bill', 'name': 'flamingo'}, {'frequency': 'c', 'synset': 'flannel.n.01', 'synonyms': ['flannel'], 'id': 454, 'def': 'a soft light woolen fabric; used for clothing', 'name': 'flannel'}, {'frequency': 'c', 'synset': 'flap.n.01', 'synonyms': ['flap'], 'id': 455, 'def': 'any broad thin covering attached at one edge, such as a mud flap next to a wheel or a flap on an airplane wing', 'name': 'flap'}, {'frequency': 'r', 'synset': 'flash.n.10', 'synonyms': ['flash', 'flashbulb'], 'id': 456, 'def': 'a lamp for providing momentary light to take a photograph', 'name': 'flash'}, {'frequency': 'c', 'synset': 'flashlight.n.01', 'synonyms': ['flashlight', 'torch'], 'id': 457, 'def': 'a small portable battery-powered electric lamp', 'name': 'flashlight'}, {'frequency': 'r', 'synset': 'fleece.n.03', 'synonyms': ['fleece'], 'id': 458, 'def': 'a soft bulky fabric with deep pile; used chiefly for clothing', 'name': 'fleece'}, {'frequency': 'f', 'synset': 'flip-flop.n.02', 'synonyms': ['flip-flop_(sandal)'], 'id': 459, 'def': 'a backless sandal held to the foot by a thong between two toes', 'name': 'flip-flop_(sandal)'}, {'frequency': 'c', 'synset': 'flipper.n.01', 'synonyms': ['flipper_(footwear)', 'fin_(footwear)'], 'id': 460, 'def': 'a shoe to aid a person in swimming', 'name': 'flipper_(footwear)'}, {'frequency': 'f', 'synset': 'flower_arrangement.n.01', 'synonyms': ['flower_arrangement', 'floral_arrangement'], 'id': 461, 'def': 'a decorative arrangement of flowers', 'name': 'flower_arrangement'}, {'frequency': 'c', 'synset': 'flute.n.02', 'synonyms': ['flute_glass', 'champagne_flute'], 'id': 462, 'def': 'a tall narrow wineglass', 'name': 'flute_glass'}, {'frequency': 'c', 'synset': 'foal.n.01', 'synonyms': ['foal'], 'id': 463, 'def': 'a young horse', 'name': 'foal'}, {'frequency': 'c', 'synset': 'folding_chair.n.01', 'synonyms': ['folding_chair'], 'id': 464, 'def': 'a chair that can be folded flat for storage', 'name': 'folding_chair'}, {'frequency': 'c', 'synset': 'food_processor.n.01', 'synonyms': ['food_processor'], 'id': 465, 'def': 'a kitchen appliance for shredding, blending, chopping, or slicing food', 'name': 'food_processor'}, {'frequency': 'c', 'synset': 'football.n.02', 'synonyms': ['football_(American)'], 'id': 466, 'def': 'the inflated oblong ball used in playing American football', 'name': 'football_(American)'}, {'frequency': 'r', 'synset': 'football_helmet.n.01', 'synonyms': ['football_helmet'], 'id': 467, 'def': 'a padded helmet with a face mask to protect the head of football players', 'name': 'football_helmet'}, {'frequency': 'c', 'synset': 'footstool.n.01', 'synonyms': ['footstool', 'footrest'], 'id': 468, 'def': 'a low seat or a stool to rest the feet of a seated person', 'name': 'footstool'}, {'frequency': 'f', 'synset': 
'fork.n.01', 'synonyms': ['fork'], 'id': 469, 'def': 'cutlery used for serving and eating food', 'name': 'fork'}, {'frequency': 'c', 'synset': 'forklift.n.01', 'synonyms': ['forklift'], 'id': 470, 'def': 'an industrial vehicle with a power operated fork in front that can be inserted under loads to lift and move them', 'name': 'forklift'}, {'frequency': 'c', 'synset': 'freight_car.n.01', 'synonyms': ['freight_car'], 'id': 471, 'def': 'a railway car that carries freight', 'name': 'freight_car'}, {'frequency': 'c', 'synset': 'french_toast.n.01', 'synonyms': ['French_toast'], 'id': 472, 'def': 'bread slice dipped in egg and milk and fried', 'name': 'French_toast'}, {'frequency': 'c', 'synset': 'freshener.n.01', 'synonyms': ['freshener', 'air_freshener'], 'id': 473, 'def': 'anything that freshens air by removing or covering odor', 'name': 'freshener'}, {'frequency': 'f', 'synset': 'frisbee.n.01', 'synonyms': ['frisbee'], 'id': 474, 'def': 'a light, plastic disk propelled with a flip of the wrist for recreation or competition', 'name': 'frisbee'}, {'frequency': 'c', 'synset': 'frog.n.01', 'synonyms': ['frog', 'toad', 'toad_frog'], 'id': 475, 'def': 'a tailless stout-bodied amphibians with long hind limbs for leaping', 'name': 'frog'}, {'frequency': 'c', 'synset': 'fruit_juice.n.01', 'synonyms': ['fruit_juice'], 'id': 476, 'def': 'drink produced by squeezing or crushing fruit', 'name': 'fruit_juice'}, {'frequency': 'f', 'synset': 'frying_pan.n.01', 'synonyms': ['frying_pan', 'frypan', 'skillet'], 'id': 477, 'def': 'a pan used for frying foods', 'name': 'frying_pan'}, {'frequency': 'r', 'synset': 'fudge.n.01', 'synonyms': ['fudge'], 'id': 478, 'def': 'soft creamy candy', 'name': 'fudge'}, {'frequency': 'r', 'synset': 'funnel.n.02', 'synonyms': ['funnel'], 'id': 479, 'def': 'a cone-shaped utensil used to channel a substance into a container with a small mouth', 'name': 'funnel'}, {'frequency': 'r', 'synset': 'futon.n.01', 'synonyms': ['futon'], 'id': 480, 'def': 'a pad that is used for sleeping on the floor or on a raised frame', 'name': 'futon'}, {'frequency': 'r', 'synset': 'gag.n.02', 'synonyms': ['gag', 'muzzle'], 'id': 481, 'def': "restraint put into a person's mouth to prevent speaking or shouting", 'name': 'gag'}, {'frequency': 'r', 'synset': 'garbage.n.03', 'synonyms': ['garbage'], 'id': 482, 'def': 'a receptacle where waste can be discarded', 'name': 'garbage'}, {'frequency': 'c', 'synset': 'garbage_truck.n.01', 'synonyms': ['garbage_truck'], 'id': 483, 'def': 'a truck for collecting domestic refuse', 'name': 'garbage_truck'}, {'frequency': 'c', 'synset': 'garden_hose.n.01', 'synonyms': ['garden_hose'], 'id': 484, 'def': 'a hose used for watering a lawn or garden', 'name': 'garden_hose'}, {'frequency': 'c', 'synset': 'gargle.n.01', 'synonyms': ['gargle', 'mouthwash'], 'id': 485, 'def': 'a medicated solution used for gargling and rinsing the mouth', 'name': 'gargle'}, {'frequency': 'r', 'synset': 'gargoyle.n.02', 'synonyms': ['gargoyle'], 'id': 486, 'def': 'an ornament consisting of a grotesquely carved figure of a person or animal', 'name': 'gargoyle'}, {'frequency': 'c', 'synset': 'garlic.n.02', 'synonyms': ['garlic', 'ail'], 'id': 487, 'def': 'aromatic bulb used as seasoning', 'name': 'garlic'}, {'frequency': 'r', 'synset': 'gasmask.n.01', 'synonyms': ['gasmask', 'respirator', 'gas_helmet'], 'id': 488, 'def': 'a protective face mask with a filter', 'name': 'gasmask'}, {'frequency': 'c', 'synset': 'gazelle.n.01', 'synonyms': ['gazelle'], 'id': 489, 'def': 'small swift graceful antelope of 
Africa and Asia having lustrous eyes', 'name': 'gazelle'}, {'frequency': 'c', 'synset': 'gelatin.n.02', 'synonyms': ['gelatin', 'jelly'], 'id': 490, 'def': 'an edible jelly made with gelatin and used as a dessert or salad base or a coating for foods', 'name': 'gelatin'}, {'frequency': 'r', 'synset': 'gem.n.02', 'synonyms': ['gemstone'], 'id': 491, 'def': 'a crystalline rock that can be cut and polished for jewelry', 'name': 'gemstone'}, {'frequency': 'r', 'synset': 'generator.n.02', 'synonyms': ['generator'], 'id': 492, 'def': 'engine that converts mechanical energy into electrical energy by electromagnetic induction', 'name': 'generator'}, {'frequency': 'c', 'synset': 'giant_panda.n.01', 'synonyms': ['giant_panda', 'panda', 'panda_bear'], 'id': 493, 'def': 'large black-and-white herbivorous mammal of bamboo forests of China and Tibet', 'name': 'giant_panda'}, {'frequency': 'c', 'synset': 'gift_wrap.n.01', 'synonyms': ['gift_wrap'], 'id': 494, 'def': 'attractive wrapping paper suitable for wrapping gifts', 'name': 'gift_wrap'}, {'frequency': 'c', 'synset': 'ginger.n.03', 'synonyms': ['ginger', 'gingerroot'], 'id': 495, 'def': 'the root of the common ginger plant; used fresh as a seasoning', 'name': 'ginger'}, {'frequency': 'f', 'synset': 'giraffe.n.01', 'synonyms': ['giraffe'], 'id': 496, 'def': 'tall animal having a spotted coat and small horns and very long neck and legs', 'name': 'giraffe'}, {'frequency': 'c', 'synset': 'girdle.n.02', 'synonyms': ['cincture', 'sash', 'waistband', 'waistcloth'], 'id': 497, 'def': 'a band of material around the waist that strengthens a skirt or trousers', 'name': 'cincture'}, {'frequency': 'f', 'synset': 'glass.n.02', 'synonyms': ['glass_(drink_container)', 'drinking_glass'], 'id': 498, 'def': 'a container for holding liquids while drinking', 'name': 'glass_(drink_container)'}, {'frequency': 'c', 'synset': 'globe.n.03', 'synonyms': ['globe'], 'id': 499, 'def': 'a sphere on which a map (especially of the earth) is represented', 'name': 'globe'}, {'frequency': 'f', 'synset': 'glove.n.02', 'synonyms': ['glove'], 'id': 500, 'def': 'handwear covering the hand', 'name': 'glove'}, {'frequency': 'c', 'synset': 'goat.n.01', 'synonyms': ['goat'], 'id': 501, 'def': 'a common goat', 'name': 'goat'}, {'frequency': 'f', 'synset': 'goggles.n.01', 'synonyms': ['goggles'], 'id': 502, 'def': 'tight-fitting spectacles worn to protect the eyes', 'name': 'goggles'}, {'frequency': 'r', 'synset': 'goldfish.n.01', 'synonyms': ['goldfish'], 'id': 503, 'def': 'small golden or orange-red freshwater fishes used as pond or aquarium pets', 'name': 'goldfish'}, {'frequency': 'c', 'synset': 'golf_club.n.02', 'synonyms': ['golf_club', 'golf-club'], 'id': 504, 'def': 'golf equipment used by a golfer to hit a golf ball', 'name': 'golf_club'}, {'frequency': 'c', 'synset': 'golfcart.n.01', 'synonyms': ['golfcart'], 'id': 505, 'def': 'a small motor vehicle in which golfers can ride between shots', 'name': 'golfcart'}, {'frequency': 'r', 'synset': 'gondola.n.02', 'synonyms': ['gondola_(boat)'], 'id': 506, 'def': 'long narrow flat-bottomed boat propelled by sculling; traditionally used on canals of Venice', 'name': 'gondola_(boat)'}, {'frequency': 'c', 'synset': 'goose.n.01', 'synonyms': ['goose'], 'id': 507, 'def': 'loud, web-footed long-necked aquatic birds usually larger than ducks', 'name': 'goose'}, {'frequency': 'r', 'synset': 'gorilla.n.01', 'synonyms': ['gorilla'], 'id': 508, 'def': 'largest ape', 'name': 'gorilla'}, {'frequency': 'r', 'synset': 'gourd.n.02', 'synonyms': ['gourd'], 
'id': 509, 'def': 'any of numerous inedible fruits with hard rinds', 'name': 'gourd'}, {'frequency': 'f', 'synset': 'grape.n.01', 'synonyms': ['grape'], 'id': 510, 'def': 'any of various juicy fruit with green or purple skins; grow in clusters', 'name': 'grape'}, {'frequency': 'c', 'synset': 'grater.n.01', 'synonyms': ['grater'], 'id': 511, 'def': 'utensil with sharp perforations for shredding foods (as vegetables or cheese)', 'name': 'grater'}, {'frequency': 'c', 'synset': 'gravestone.n.01', 'synonyms': ['gravestone', 'headstone', 'tombstone'], 'id': 512, 'def': 'a stone that is used to mark a grave', 'name': 'gravestone'}, {'frequency': 'r', 'synset': 'gravy_boat.n.01', 'synonyms': ['gravy_boat', 'gravy_holder'], 'id': 513, 'def': 'a dish (often boat-shaped) for serving gravy or sauce', 'name': 'gravy_boat'}, {'frequency': 'f', 'synset': 'green_bean.n.02', 'synonyms': ['green_bean'], 'id': 514, 'def': 'a common bean plant cultivated for its slender green edible pods', 'name': 'green_bean'}, {'frequency': 'f', 'synset': 'green_onion.n.01', 'synonyms': ['green_onion', 'spring_onion', 'scallion'], 'id': 515, 'def': 'a young onion before the bulb has enlarged', 'name': 'green_onion'}, {'frequency': 'r', 'synset': 'griddle.n.01', 'synonyms': ['griddle'], 'id': 516, 'def': 'cooking utensil consisting of a flat heated surface on which food is cooked', 'name': 'griddle'}, {'frequency': 'f', 'synset': 'grill.n.02', 'synonyms': ['grill', 'grille', 'grillwork', 'radiator_grille'], 'id': 517, 'def': 'a framework of metal bars used as a partition or a grate', 'name': 'grill'}, {'frequency': 'r', 'synset': 'grits.n.01', 'synonyms': ['grits', 'hominy_grits'], 'id': 518, 'def': 'coarsely ground corn boiled as a breakfast dish', 'name': 'grits'}, {'frequency': 'c', 'synset': 'grizzly.n.01', 'synonyms': ['grizzly', 'grizzly_bear'], 'id': 519, 'def': 'powerful brownish-yellow bear of the uplands of western North America', 'name': 'grizzly'}, {'frequency': 'c', 'synset': 'grocery_bag.n.01', 'synonyms': ['grocery_bag'], 'id': 520, 'def': "a sack for holding customer's groceries", 'name': 'grocery_bag'}, {'frequency': 'f', 'synset': 'guitar.n.01', 'synonyms': ['guitar'], 'id': 521, 'def': 'a stringed instrument usually having six strings; played by strumming or plucking', 'name': 'guitar'}, {'frequency': 'c', 'synset': 'gull.n.02', 'synonyms': ['gull', 'seagull'], 'id': 522, 'def': 'mostly white aquatic bird having long pointed wings and short legs', 'name': 'gull'}, {'frequency': 'c', 'synset': 'gun.n.01', 'synonyms': ['gun'], 'id': 523, 'def': 'a weapon that discharges a bullet at high velocity from a metal tube', 'name': 'gun'}, {'frequency': 'f', 'synset': 'hairbrush.n.01', 'synonyms': ['hairbrush'], 'id': 524, 'def': "a brush used to groom a person's hair", 'name': 'hairbrush'}, {'frequency': 'c', 'synset': 'hairnet.n.01', 'synonyms': ['hairnet'], 'id': 525, 'def': 'a small net that someone wears over their hair to keep it in place', 'name': 'hairnet'}, {'frequency': 'c', 'synset': 'hairpin.n.01', 'synonyms': ['hairpin'], 'id': 526, 'def': "a double pronged pin used to hold women's hair in place", 'name': 'hairpin'}, {'frequency': 'r', 'synset': 'halter.n.03', 'synonyms': ['halter_top'], 'id': 527, 'def': "a woman's top that fastens behind the back and neck leaving the back and arms uncovered", 'name': 'halter_top'}, {'frequency': 'f', 'synset': 'ham.n.01', 'synonyms': ['ham', 'jambon', 'gammon'], 'id': 528, 'def': 'meat cut from the thigh of a hog (usually smoked)', 'name': 'ham'}, {'frequency': 'c', 
'synset': 'hamburger.n.01', 'synonyms': ['hamburger', 'beefburger', 'burger'], 'id': 529, 'def': 'a sandwich consisting of a patty of minced beef served on a bun', 'name': 'hamburger'}, {'frequency': 'c', 'synset': 'hammer.n.02', 'synonyms': ['hammer'], 'id': 530, 'def': 'a hand tool with a heavy head and a handle; used to deliver an impulsive force by striking', 'name': 'hammer'}, {'frequency': 'c', 'synset': 'hammock.n.02', 'synonyms': ['hammock'], 'id': 531, 'def': 'a hanging bed of canvas or rope netting (usually suspended between two trees)', 'name': 'hammock'}, {'frequency': 'r', 'synset': 'hamper.n.02', 'synonyms': ['hamper'], 'id': 532, 'def': 'a basket usually with a cover', 'name': 'hamper'}, {'frequency': 'c', 'synset': 'hamster.n.01', 'synonyms': ['hamster'], 'id': 533, 'def': 'short-tailed burrowing rodent with large cheek pouches', 'name': 'hamster'}, {'frequency': 'f', 'synset': 'hand_blower.n.01', 'synonyms': ['hair_dryer'], 'id': 534, 'def': 'a hand-held electric blower that can blow warm air onto the hair', 'name': 'hair_dryer'}, {'frequency': 'r', 'synset': 'hand_glass.n.01', 'synonyms': ['hand_glass', 'hand_mirror'], 'id': 535, 'def': 'a mirror intended to be held in the hand', 'name': 'hand_glass'}, {'frequency': 'f', 'synset': 'hand_towel.n.01', 'synonyms': ['hand_towel', 'face_towel'], 'id': 536, 'def': 'a small towel used to dry the hands or face', 'name': 'hand_towel'}, {'frequency': 'c', 'synset': 'handcart.n.01', 'synonyms': ['handcart', 'pushcart', 'hand_truck'], 'id': 537, 'def': 'wheeled vehicle that can be pushed by a person', 'name': 'handcart'}, {'frequency': 'r', 'synset': 'handcuff.n.01', 'synonyms': ['handcuff'], 'id': 538, 'def': 'shackle that consists of a metal loop that can be locked around the wrist', 'name': 'handcuff'}, {'frequency': 'c', 'synset': 'handkerchief.n.01', 'synonyms': ['handkerchief'], 'id': 539, 'def': 'a square piece of cloth used for wiping the eyes or nose or as a costume accessory', 'name': 'handkerchief'}, {'frequency': 'f', 'synset': 'handle.n.01', 'synonyms': ['handle', 'grip', 'handgrip'], 'id': 540, 'def': 'the appendage to an object that is designed to be held in order to use or move it', 'name': 'handle'}, {'frequency': 'r', 'synset': 'handsaw.n.01', 'synonyms': ['handsaw', "carpenter's_saw"], 'id': 541, 'def': 'a saw used with one hand for cutting wood', 'name': 'handsaw'}, {'frequency': 'r', 'synset': 'hardback.n.01', 'synonyms': ['hardback_book', 'hardcover_book'], 'id': 542, 'def': 'a book with cardboard or cloth or leather covers', 'name': 'hardback_book'}, {'frequency': 'r', 'synset': 'harmonium.n.01', 'synonyms': ['harmonium', 'organ_(musical_instrument)', 'reed_organ_(musical_instrument)'], 'id': 543, 'def': 'a free-reed instrument in which air is forced through the reeds by bellows', 'name': 'harmonium'}, {'frequency': 'f', 'synset': 'hat.n.01', 'synonyms': ['hat'], 'id': 544, 'def': 'headwear that protects the head from bad weather, sun, or worn for fashion', 'name': 'hat'}, {'frequency': 'r', 'synset': 'hatbox.n.01', 'synonyms': ['hatbox'], 'id': 545, 'def': 'a round piece of luggage for carrying hats', 'name': 'hatbox'}, {'frequency': 'c', 'synset': 'head_covering.n.01', 'synonyms': ['veil'], 'id': 546, 'def': 'a garment that covers the head OR face', 'name': 'veil'}, {'frequency': 'f', 'synset': 'headband.n.01', 'synonyms': ['headband'], 'id': 547, 'def': 'a band worn around or over the head', 'name': 'headband'}, {'frequency': 'f', 'synset': 'headboard.n.01', 'synonyms': ['headboard'], 'id': 548, 'def': 'a 
vertical board or panel forming the head of a bedstead', 'name': 'headboard'}, {'frequency': 'f', 'synset': 'headlight.n.01', 'synonyms': ['headlight', 'headlamp'], 'id': 549, 'def': 'a powerful light with reflector; attached to the front of an automobile or locomotive', 'name': 'headlight'}, {'frequency': 'c', 'synset': 'headscarf.n.01', 'synonyms': ['headscarf'], 'id': 550, 'def': 'a kerchief worn over the head and tied under the chin', 'name': 'headscarf'}, {'frequency': 'r', 'synset': 'headset.n.01', 'synonyms': ['headset'], 'id': 551, 'def': 'receiver consisting of a pair of headphones', 'name': 'headset'}, {'frequency': 'c', 'synset': 'headstall.n.01', 'synonyms': ['headstall_(for_horses)', 'headpiece_(for_horses)'], 'id': 552, 'def': "the band that is the part of a bridle that fits around a horse's head", 'name': 'headstall_(for_horses)'}, {'frequency': 'c', 'synset': 'heart.n.02', 'synonyms': ['heart'], 'id': 553, 'def': 'a muscular organ; its contractions move the blood through the body', 'name': 'heart'}, {'frequency': 'c', 'synset': 'heater.n.01', 'synonyms': ['heater', 'warmer'], 'id': 554, 'def': 'device that heats water or supplies warmth to a room', 'name': 'heater'}, {'frequency': 'c', 'synset': 'helicopter.n.01', 'synonyms': ['helicopter'], 'id': 555, 'def': 'an aircraft without wings that obtains its lift from the rotation of overhead blades', 'name': 'helicopter'}, {'frequency': 'f', 'synset': 'helmet.n.02', 'synonyms': ['helmet'], 'id': 556, 'def': 'a protective headgear made of hard material to resist blows', 'name': 'helmet'}, {'frequency': 'r', 'synset': 'heron.n.02', 'synonyms': ['heron'], 'id': 557, 'def': 'grey or white wading bird with long neck and long legs and (usually) long bill', 'name': 'heron'}, {'frequency': 'c', 'synset': 'highchair.n.01', 'synonyms': ['highchair', 'feeding_chair'], 'id': 558, 'def': 'a chair for feeding a very young child', 'name': 'highchair'}, {'frequency': 'f', 'synset': 'hinge.n.01', 'synonyms': ['hinge'], 'id': 559, 'def': 'a joint that holds two parts together so that one can swing relative to the other', 'name': 'hinge'}, {'frequency': 'r', 'synset': 'hippopotamus.n.01', 'synonyms': ['hippopotamus'], 'id': 560, 'def': 'massive thick-skinned animal living in or around rivers of tropical Africa', 'name': 'hippopotamus'}, {'frequency': 'r', 'synset': 'hockey_stick.n.01', 'synonyms': ['hockey_stick'], 'id': 561, 'def': 'sports implement consisting of a stick used by hockey players to move the puck', 'name': 'hockey_stick'}, {'frequency': 'c', 'synset': 'hog.n.03', 'synonyms': ['hog', 'pig'], 'id': 562, 'def': 'domestic swine', 'name': 'hog'}, {'frequency': 'f', 'synset': 'home_plate.n.01', 'synonyms': ['home_plate_(baseball)', 'home_base_(baseball)'], 'id': 563, 'def': '(baseball) a rubber slab where the batter stands; it must be touched by a base runner in order to score', 'name': 'home_plate_(baseball)'}, {'frequency': 'c', 'synset': 'honey.n.01', 'synonyms': ['honey'], 'id': 564, 'def': 'a sweet yellow liquid produced by bees', 'name': 'honey'}, {'frequency': 'f', 'synset': 'hood.n.06', 'synonyms': ['fume_hood', 'exhaust_hood'], 'id': 565, 'def': 'metal covering leading to a vent that exhausts smoke or fumes', 'name': 'fume_hood'}, {'frequency': 'f', 'synset': 'hook.n.05', 'synonyms': ['hook'], 'id': 566, 'def': 'a curved or bent implement for suspending or pulling something', 'name': 'hook'}, {'frequency': 'r', 'synset': 'hookah.n.01', 'synonyms': ['hookah', 'narghile', 'nargileh', 'sheesha', 'shisha', 'water_pipe'], 'id': 567, 
'def': 'a tobacco pipe with a long flexible tube connected to a container where the smoke is cooled by passing through water', 'name': 'hookah'}, {'frequency': 'r', 'synset': 'hornet.n.01', 'synonyms': ['hornet'], 'id': 568, 'def': 'large stinging wasp', 'name': 'hornet'}, {'frequency': 'f', 'synset': 'horse.n.01', 'synonyms': ['horse'], 'id': 569, 'def': 'a common horse', 'name': 'horse'}, {'frequency': 'f', 'synset': 'hose.n.03', 'synonyms': ['hose', 'hosepipe'], 'id': 570, 'def': 'a flexible pipe for conveying a liquid or gas', 'name': 'hose'}, {'frequency': 'r', 'synset': 'hot-air_balloon.n.01', 'synonyms': ['hot-air_balloon'], 'id': 571, 'def': 'balloon for travel through the air in a basket suspended below a large bag of heated air', 'name': 'hot-air_balloon'}, {'frequency': 'r', 'synset': 'hot_plate.n.01', 'synonyms': ['hotplate'], 'id': 572, 'def': 'a portable electric appliance for heating or cooking or keeping food warm', 'name': 'hotplate'}, {'frequency': 'c', 'synset': 'hot_sauce.n.01', 'synonyms': ['hot_sauce'], 'id': 573, 'def': 'a pungent peppery sauce', 'name': 'hot_sauce'}, {'frequency': 'r', 'synset': 'hourglass.n.01', 'synonyms': ['hourglass'], 'id': 574, 'def': 'a sandglass timer that runs for sixty minutes', 'name': 'hourglass'}, {'frequency': 'r', 'synset': 'houseboat.n.01', 'synonyms': ['houseboat'], 'id': 575, 'def': 'a barge that is designed and equipped for use as a dwelling', 'name': 'houseboat'}, {'frequency': 'c', 'synset': 'hummingbird.n.01', 'synonyms': ['hummingbird'], 'id': 576, 'def': 'tiny American bird having brilliant iridescent plumage and long slender bills', 'name': 'hummingbird'}, {'frequency': 'r', 'synset': 'hummus.n.01', 'synonyms': ['hummus', 'humus', 'hommos', 'hoummos', 'humous'], 'id': 577, 'def': 'a thick spread made from mashed chickpeas', 'name': 'hummus'}, {'frequency': 'f', 'synset': 'ice_bear.n.01', 'synonyms': ['polar_bear'], 'id': 578, 'def': 'white bear of Arctic regions', 'name': 'polar_bear'}, {'frequency': 'c', 'synset': 'ice_cream.n.01', 'synonyms': ['icecream'], 'id': 579, 'def': 'frozen dessert containing cream and sugar and flavoring', 'name': 'icecream'}, {'frequency': 'r', 'synset': 'ice_lolly.n.01', 'synonyms': ['popsicle'], 'id': 580, 'def': 'ice cream or water ice on a small wooden stick', 'name': 'popsicle'}, {'frequency': 'c', 'synset': 'ice_maker.n.01', 'synonyms': ['ice_maker'], 'id': 581, 'def': 'an appliance included in some electric refrigerators for making ice cubes', 'name': 'ice_maker'}, {'frequency': 'r', 'synset': 'ice_pack.n.01', 'synonyms': ['ice_pack', 'ice_bag'], 'id': 582, 'def': 'a waterproof bag filled with ice: applied to the body (especially the head) to cool or reduce swelling', 'name': 'ice_pack'}, {'frequency': 'r', 'synset': 'ice_skate.n.01', 'synonyms': ['ice_skate'], 'id': 583, 'def': 'skate consisting of a boot with a steel blade fitted to the sole', 'name': 'ice_skate'}, {'frequency': 'c', 'synset': 'igniter.n.01', 'synonyms': ['igniter', 'ignitor', 'lighter'], 'id': 584, 'def': 'a substance or device used to start a fire', 'name': 'igniter'}, {'frequency': 'r', 'synset': 'inhaler.n.01', 'synonyms': ['inhaler', 'inhalator'], 'id': 585, 'def': 'a dispenser that produces a chemical vapor to be inhaled through mouth or nose', 'name': 'inhaler'}, {'frequency': 'f', 'synset': 'ipod.n.01', 'synonyms': ['iPod'], 'id': 586, 'def': 'a pocket-sized device used to play music files', 'name': 'iPod'}, {'frequency': 'c', 'synset': 'iron.n.04', 'synonyms': ['iron_(for_clothing)', 
'smoothing_iron_(for_clothing)'], 'id': 587, 'def': 'home appliance consisting of a flat metal base that is heated and used to smooth cloth', 'name': 'iron_(for_clothing)'}, {'frequency': 'c', 'synset': 'ironing_board.n.01', 'synonyms': ['ironing_board'], 'id': 588, 'def': 'narrow padded board on collapsible supports; used for ironing clothes', 'name': 'ironing_board'}, {'frequency': 'f', 'synset': 'jacket.n.01', 'synonyms': ['jacket'], 'id': 589, 'def': 'a waist-length coat', 'name': 'jacket'}, {'frequency': 'c', 'synset': 'jam.n.01', 'synonyms': ['jam'], 'id': 590, 'def': 'preserve of crushed fruit', 'name': 'jam'}, {'frequency': 'f', 'synset': 'jar.n.01', 'synonyms': ['jar'], 'id': 591, 'def': 'a vessel (usually cylindrical) with a wide mouth and without handles', 'name': 'jar'}, {'frequency': 'f', 'synset': 'jean.n.01', 'synonyms': ['jean', 'blue_jean', 'denim'], 'id': 592, 'def': '(usually plural) close-fitting trousers of heavy denim for manual work or casual wear', 'name': 'jean'}, {'frequency': 'c', 'synset': 'jeep.n.01', 'synonyms': ['jeep', 'landrover'], 'id': 593, 'def': 'a car suitable for traveling over rough terrain', 'name': 'jeep'}, {'frequency': 'r', 'synset': 'jelly_bean.n.01', 'synonyms': ['jelly_bean', 'jelly_egg'], 'id': 594, 'def': 'sugar-glazed jellied candy', 'name': 'jelly_bean'}, {'frequency': 'f', 'synset': 'jersey.n.03', 'synonyms': ['jersey', 'T-shirt', 'tee_shirt'], 'id': 595, 'def': 'a close-fitting pullover shirt', 'name': 'jersey'}, {'frequency': 'c', 'synset': 'jet.n.01', 'synonyms': ['jet_plane', 'jet-propelled_plane'], 'id': 596, 'def': 'an airplane powered by one or more jet engines', 'name': 'jet_plane'}, {'frequency': 'r', 'synset': 'jewel.n.01', 'synonyms': ['jewel', 'gem', 'precious_stone'], 'id': 597, 'def': 'a precious or semiprecious stone incorporated into a piece of jewelry', 'name': 'jewel'}, {'frequency': 'c', 'synset': 'jewelry.n.01', 'synonyms': ['jewelry', 'jewellery'], 'id': 598, 'def': 'an adornment (as a bracelet or ring or necklace) made of precious metals and set with gems (or imitation gems)', 'name': 'jewelry'}, {'frequency': 'r', 'synset': 'joystick.n.02', 'synonyms': ['joystick'], 'id': 599, 'def': 'a control device for computers consisting of a vertical handle that can move freely in two directions', 'name': 'joystick'}, {'frequency': 'c', 'synset': 'jump_suit.n.01', 'synonyms': ['jumpsuit'], 'id': 600, 'def': "one-piece garment fashioned after a parachutist's uniform", 'name': 'jumpsuit'}, {'frequency': 'c', 'synset': 'kayak.n.01', 'synonyms': ['kayak'], 'id': 601, 'def': 'a small canoe consisting of a light frame made watertight with animal skins', 'name': 'kayak'}, {'frequency': 'r', 'synset': 'keg.n.02', 'synonyms': ['keg'], 'id': 602, 'def': 'small cask or barrel', 'name': 'keg'}, {'frequency': 'r', 'synset': 'kennel.n.01', 'synonyms': ['kennel', 'doghouse'], 'id': 603, 'def': 'outbuilding that serves as a shelter for a dog', 'name': 'kennel'}, {'frequency': 'c', 'synset': 'kettle.n.01', 'synonyms': ['kettle', 'boiler'], 'id': 604, 'def': 'a metal pot for stewing or boiling; usually has a lid', 'name': 'kettle'}, {'frequency': 'f', 'synset': 'key.n.01', 'synonyms': ['key'], 'id': 605, 'def': 'metal instrument used to unlock a lock', 'name': 'key'}, {'frequency': 'r', 'synset': 'keycard.n.01', 'synonyms': ['keycard'], 'id': 606, 'def': 'a plastic card used to gain access typically to a door', 'name': 'keycard'}, {'frequency': 'c', 'synset': 'kilt.n.01', 'synonyms': ['kilt'], 'id': 607, 'def': 'a knee-length pleated tartan 
skirt worn by men as part of the traditional dress in the Highlands of northern Scotland', 'name': 'kilt'}, {'frequency': 'c', 'synset': 'kimono.n.01', 'synonyms': ['kimono'], 'id': 608, 'def': 'a loose robe; imitated from robes originally worn by Japanese', 'name': 'kimono'}, {'frequency': 'f', 'synset': 'kitchen_sink.n.01', 'synonyms': ['kitchen_sink'], 'id': 609, 'def': 'a sink in a kitchen', 'name': 'kitchen_sink'}, {'frequency': 'r', 'synset': 'kitchen_table.n.01', 'synonyms': ['kitchen_table'], 'id': 610, 'def': 'a table in the kitchen', 'name': 'kitchen_table'}, {'frequency': 'f', 'synset': 'kite.n.03', 'synonyms': ['kite'], 'id': 611, 'def': 'plaything consisting of a light frame covered with tissue paper; flown in wind at end of a string', 'name': 'kite'}, {'frequency': 'c', 'synset': 'kitten.n.01', 'synonyms': ['kitten', 'kitty'], 'id': 612, 'def': 'young domestic cat', 'name': 'kitten'}, {'frequency': 'c', 'synset': 'kiwi.n.03', 'synonyms': ['kiwi_fruit'], 'id': 613, 'def': 'fuzzy brown egg-shaped fruit with slightly tart green flesh', 'name': 'kiwi_fruit'}, {'frequency': 'f', 'synset': 'knee_pad.n.01', 'synonyms': ['knee_pad'], 'id': 614, 'def': 'protective garment consisting of a pad worn by football or baseball or hockey players', 'name': 'knee_pad'}, {'frequency': 'f', 'synset': 'knife.n.01', 'synonyms': ['knife'], 'id': 615, 'def': 'tool with a blade and point used as a cutting instrument', 'name': 'knife'}, {'frequency': 'r', 'synset': 'knitting_needle.n.01', 'synonyms': ['knitting_needle'], 'id': 616, 'def': 'needle consisting of a slender rod with pointed ends; usually used in pairs', 'name': 'knitting_needle'}, {'frequency': 'f', 'synset': 'knob.n.02', 'synonyms': ['knob'], 'id': 617, 'def': 'a round handle often found on a door', 'name': 'knob'}, {'frequency': 'r', 'synset': 'knocker.n.05', 'synonyms': ['knocker_(on_a_door)', 'doorknocker'], 'id': 618, 'def': 'a device (usually metal and ornamental) attached by a hinge to a door', 'name': 'knocker_(on_a_door)'}, {'frequency': 'r', 'synset': 'koala.n.01', 'synonyms': ['koala', 'koala_bear'], 'id': 619, 'def': 'sluggish tailless Australian marsupial with grey furry ears and coat', 'name': 'koala'}, {'frequency': 'r', 'synset': 'lab_coat.n.01', 'synonyms': ['lab_coat', 'laboratory_coat'], 'id': 620, 'def': 'a light coat worn to protect clothing from substances used while working in a laboratory', 'name': 'lab_coat'}, {'frequency': 'f', 'synset': 'ladder.n.01', 'synonyms': ['ladder'], 'id': 621, 'def': 'steps consisting of two parallel members connected by rungs', 'name': 'ladder'}, {'frequency': 'c', 'synset': 'ladle.n.01', 'synonyms': ['ladle'], 'id': 622, 'def': 'a spoon-shaped vessel with a long handle frequently used to transfer liquids', 'name': 'ladle'}, {'frequency': 'c', 'synset': 'ladybug.n.01', 'synonyms': ['ladybug', 'ladybeetle', 'ladybird_beetle'], 'id': 623, 'def': 'small round bright-colored and spotted beetle, typically red and black', 'name': 'ladybug'}, {'frequency': 'f', 'synset': 'lamb.n.01', 'synonyms': ['lamb_(animal)'], 'id': 624, 'def': 'young sheep', 'name': 'lamb_(animal)'}, {'frequency': 'r', 'synset': 'lamb_chop.n.01', 'synonyms': ['lamb-chop', 'lambchop'], 'id': 625, 'def': 'chop cut from a lamb', 'name': 'lamb-chop'}, {'frequency': 'f', 'synset': 'lamp.n.02', 'synonyms': ['lamp'], 'id': 626, 'def': 'a piece of furniture holding one or more electric light bulbs', 'name': 'lamp'}, {'frequency': 'f', 'synset': 'lamppost.n.01', 'synonyms': ['lamppost'], 'id': 627, 'def': 'a metal post supporting 
an outdoor lamp (such as a streetlight)', 'name': 'lamppost'}, {'frequency': 'f', 'synset': 'lampshade.n.01', 'synonyms': ['lampshade'], 'id': 628, 'def': 'a protective ornamental shade used to screen a light bulb from direct view', 'name': 'lampshade'}, {'frequency': 'c', 'synset': 'lantern.n.01', 'synonyms': ['lantern'], 'id': 629, 'def': 'light in a transparent protective case', 'name': 'lantern'}, {'frequency': 'f', 'synset': 'lanyard.n.02', 'synonyms': ['lanyard', 'laniard'], 'id': 630, 'def': 'a cord worn around the neck to hold a knife or whistle, etc.', 'name': 'lanyard'}, {'frequency': 'f', 'synset': 'laptop.n.01', 'synonyms': ['laptop_computer', 'notebook_computer'], 'id': 631, 'def': 'a portable computer small enough to use in your lap', 'name': 'laptop_computer'}, {'frequency': 'r', 'synset': 'lasagna.n.01', 'synonyms': ['lasagna', 'lasagne'], 'id': 632, 'def': 'baked dish of layers of lasagna pasta with sauce and cheese and meat or vegetables', 'name': 'lasagna'}, {'frequency': 'f', 'synset': 'latch.n.02', 'synonyms': ['latch'], 'id': 633, 'def': 'a bar that can be lowered or slid into a groove to fasten a door or gate', 'name': 'latch'}, {'frequency': 'r', 'synset': 'lawn_mower.n.01', 'synonyms': ['lawn_mower'], 'id': 634, 'def': 'garden tool for mowing grass on lawns', 'name': 'lawn_mower'}, {'frequency': 'r', 'synset': 'leather.n.01', 'synonyms': ['leather'], 'id': 635, 'def': 'an animal skin made smooth and flexible by removing the hair and then tanning', 'name': 'leather'}, {'frequency': 'c', 'synset': 'legging.n.01', 'synonyms': ['legging_(clothing)', 'leging_(clothing)', 'leg_covering'], 'id': 636, 'def': 'a garment covering the leg (usually extending from the knee to the ankle)', 'name': 'legging_(clothing)'}, {'frequency': 'c', 'synset': 'lego.n.01', 'synonyms': ['Lego', 'Lego_set'], 'id': 637, 'def': "a child's plastic construction set for making models from blocks", 'name': 'Lego'}, {'frequency': 'r', 'synset': 'legume.n.02', 'synonyms': ['legume'], 'id': 638, 'def': 'the fruit or seed of bean or pea plants', 'name': 'legume'}, {'frequency': 'f', 'synset': 'lemon.n.01', 'synonyms': ['lemon'], 'id': 639, 'def': 'yellow oval fruit with juicy acidic flesh', 'name': 'lemon'}, {'frequency': 'r', 'synset': 'lemonade.n.01', 'synonyms': ['lemonade'], 'id': 640, 'def': 'sweetened beverage of diluted lemon juice', 'name': 'lemonade'}, {'frequency': 'f', 'synset': 'lettuce.n.02', 'synonyms': ['lettuce'], 'id': 641, 'def': 'leafy plant commonly eaten in salad or on sandwiches', 'name': 'lettuce'}, {'frequency': 'f', 'synset': 'license_plate.n.01', 'synonyms': ['license_plate', 'numberplate'], 'id': 642, 'def': "a plate mounted on the front and back of car and bearing the car's registration number", 'name': 'license_plate'}, {'frequency': 'f', 'synset': 'life_buoy.n.01', 'synonyms': ['life_buoy', 'lifesaver', 'life_belt', 'life_ring'], 'id': 643, 'def': 'a ring-shaped life preserver used to prevent drowning (NOT a life-jacket or vest)', 'name': 'life_buoy'}, {'frequency': 'f', 'synset': 'life_jacket.n.01', 'synonyms': ['life_jacket', 'life_vest'], 'id': 644, 'def': 'life preserver consisting of a sleeveless jacket of buoyant or inflatable design', 'name': 'life_jacket'}, {'frequency': 'f', 'synset': 'light_bulb.n.01', 'synonyms': ['lightbulb'], 'id': 645, 'def': 'lightblub/source of light', 'name': 'lightbulb'}, {'frequency': 'r', 'synset': 'lightning_rod.n.02', 'synonyms': ['lightning_rod', 'lightning_conductor'], 'id': 646, 'def': 'a metallic conductor that is attached to a 
high point and leads to the ground', 'name': 'lightning_rod'}, {'frequency': 'f', 'synset': 'lime.n.06', 'synonyms': ['lime'], 'id': 647, 'def': 'the green acidic fruit of any of various lime trees', 'name': 'lime'}, {'frequency': 'r', 'synset': 'limousine.n.01', 'synonyms': ['limousine'], 'id': 648, 'def': 'long luxurious car; usually driven by a chauffeur', 'name': 'limousine'}, {'frequency': 'c', 'synset': 'lion.n.01', 'synonyms': ['lion'], 'id': 649, 'def': 'large gregarious predatory cat of Africa and India', 'name': 'lion'}, {'frequency': 'c', 'synset': 'lip_balm.n.01', 'synonyms': ['lip_balm'], 'id': 650, 'def': 'a balm applied to the lips', 'name': 'lip_balm'}, {'frequency': 'r', 'synset': 'liquor.n.01', 'synonyms': ['liquor', 'spirits', 'hard_liquor', 'liqueur', 'cordial'], 'id': 651, 'def': 'liquor or beer', 'name': 'liquor'}, {'frequency': 'c', 'synset': 'lizard.n.01', 'synonyms': ['lizard'], 'id': 652, 'def': 'a reptile with usually two pairs of legs and a tapering tail', 'name': 'lizard'}, {'frequency': 'f', 'synset': 'log.n.01', 'synonyms': ['log'], 'id': 653, 'def': 'a segment of the trunk of a tree when stripped of branches', 'name': 'log'}, {'frequency': 'c', 'synset': 'lollipop.n.02', 'synonyms': ['lollipop'], 'id': 654, 'def': 'hard candy on a stick', 'name': 'lollipop'}, {'frequency': 'f', 'synset': 'loudspeaker.n.01', 'synonyms': ['speaker_(stero_equipment)'], 'id': 655, 'def': 'electronic device that produces sound often as part of a stereo system', 'name': 'speaker_(stero_equipment)'}, {'frequency': 'c', 'synset': 'love_seat.n.01', 'synonyms': ['loveseat'], 'id': 656, 'def': 'small sofa that seats two people', 'name': 'loveseat'}, {'frequency': 'r', 'synset': 'machine_gun.n.01', 'synonyms': ['machine_gun'], 'id': 657, 'def': 'a rapidly firing automatic gun', 'name': 'machine_gun'}, {'frequency': 'f', 'synset': 'magazine.n.02', 'synonyms': ['magazine'], 'id': 658, 'def': 'a paperback periodic publication', 'name': 'magazine'}, {'frequency': 'f', 'synset': 'magnet.n.01', 'synonyms': ['magnet'], 'id': 659, 'def': 'a device that attracts iron and produces a magnetic field', 'name': 'magnet'}, {'frequency': 'c', 'synset': 'mail_slot.n.01', 'synonyms': ['mail_slot'], 'id': 660, 'def': 'a slot (usually in a door) through which mail can be delivered', 'name': 'mail_slot'}, {'frequency': 'f', 'synset': 'mailbox.n.01', 'synonyms': ['mailbox_(at_home)', 'letter_box_(at_home)'], 'id': 661, 'def': 'a private box for delivery of mail', 'name': 'mailbox_(at_home)'}, {'frequency': 'r', 'synset': 'mallard.n.01', 'synonyms': ['mallard'], 'id': 662, 'def': 'wild dabbling duck from which domestic ducks are descended', 'name': 'mallard'}, {'frequency': 'r', 'synset': 'mallet.n.01', 'synonyms': ['mallet'], 'id': 663, 'def': 'a sports implement with a long handle and a hammer-like head used to hit a ball', 'name': 'mallet'}, {'frequency': 'r', 'synset': 'mammoth.n.01', 'synonyms': ['mammoth'], 'id': 664, 'def': 'any of numerous extinct elephants widely distributed in the Pleistocene', 'name': 'mammoth'}, {'frequency': 'r', 'synset': 'manatee.n.01', 'synonyms': ['manatee'], 'id': 665, 'def': 'sirenian mammal of tropical coastal waters of America', 'name': 'manatee'}, {'frequency': 'c', 'synset': 'mandarin.n.05', 'synonyms': ['mandarin_orange'], 'id': 666, 'def': 'a somewhat flat reddish-orange loose skinned citrus of China', 'name': 'mandarin_orange'}, {'frequency': 'c', 'synset': 'manger.n.01', 'synonyms': ['manger', 'trough'], 'id': 667, 'def': 'a container (usually in a barn or stable) 
from which cattle or horses feed', 'name': 'manger'}, {'frequency': 'f', 'synset': 'manhole.n.01', 'synonyms': ['manhole'], 'id': 668, 'def': 'a hole (usually with a flush cover) through which a person can gain access to an underground structure', 'name': 'manhole'}, {'frequency': 'f', 'synset': 'map.n.01', 'synonyms': ['map'], 'id': 669, 'def': "a diagrammatic representation of the earth's surface (or part of it)", 'name': 'map'}, {'frequency': 'f', 'synset': 'marker.n.03', 'synonyms': ['marker'], 'id': 670, 'def': 'a writing implement for making a mark', 'name': 'marker'}, {'frequency': 'r', 'synset': 'martini.n.01', 'synonyms': ['martini'], 'id': 671, 'def': 'a cocktail made of gin (or vodka) with dry vermouth', 'name': 'martini'}, {'frequency': 'r', 'synset': 'mascot.n.01', 'synonyms': ['mascot'], 'id': 672, 'def': 'a person or animal that is adopted by a team or other group as a symbolic figure', 'name': 'mascot'}, {'frequency': 'c', 'synset': 'mashed_potato.n.01', 'synonyms': ['mashed_potato'], 'id': 673, 'def': 'potato that has been peeled and boiled and then mashed', 'name': 'mashed_potato'}, {'frequency': 'r', 'synset': 'masher.n.02', 'synonyms': ['masher'], 'id': 674, 'def': 'a kitchen utensil used for mashing (e.g. potatoes)', 'name': 'masher'}, {'frequency': 'f', 'synset': 'mask.n.04', 'synonyms': ['mask', 'facemask'], 'id': 675, 'def': 'a protective covering worn over the face', 'name': 'mask'}, {'frequency': 'f', 'synset': 'mast.n.01', 'synonyms': ['mast'], 'id': 676, 'def': 'a vertical spar for supporting sails', 'name': 'mast'}, {'frequency': 'c', 'synset': 'mat.n.03', 'synonyms': ['mat_(gym_equipment)', 'gym_mat'], 'id': 677, 'def': 'sports equipment consisting of a piece of thick padding on the floor for gymnastics', 'name': 'mat_(gym_equipment)'}, {'frequency': 'r', 'synset': 'matchbox.n.01', 'synonyms': ['matchbox'], 'id': 678, 'def': 'a box for holding matches', 'name': 'matchbox'}, {'frequency': 'f', 'synset': 'mattress.n.01', 'synonyms': ['mattress'], 'id': 679, 'def': 'a thick pad filled with resilient material used as a bed or part of a bed', 'name': 'mattress'}, {'frequency': 'c', 'synset': 'measuring_cup.n.01', 'synonyms': ['measuring_cup'], 'id': 680, 'def': 'graduated cup used to measure liquid or granular ingredients', 'name': 'measuring_cup'}, {'frequency': 'c', 'synset': 'measuring_stick.n.01', 'synonyms': ['measuring_stick', 'ruler_(measuring_stick)', 'measuring_rod'], 'id': 681, 'def': 'measuring instrument having a sequence of marks at regular intervals', 'name': 'measuring_stick'}, {'frequency': 'c', 'synset': 'meatball.n.01', 'synonyms': ['meatball'], 'id': 682, 'def': 'ground meat formed into a ball and fried or simmered in broth', 'name': 'meatball'}, {'frequency': 'c', 'synset': 'medicine.n.02', 'synonyms': ['medicine'], 'id': 683, 'def': 'something that treats or prevents or alleviates the symptoms of disease', 'name': 'medicine'}, {'frequency': 'c', 'synset': 'melon.n.01', 'synonyms': ['melon'], 'id': 684, 'def': 'fruit of the gourd family having a hard rind and sweet juicy flesh', 'name': 'melon'}, {'frequency': 'f', 'synset': 'microphone.n.01', 'synonyms': ['microphone'], 'id': 685, 'def': 'device for converting sound waves into electrical energy', 'name': 'microphone'}, {'frequency': 'r', 'synset': 'microscope.n.01', 'synonyms': ['microscope'], 'id': 686, 'def': 'magnifier of the image of small objects', 'name': 'microscope'}, {'frequency': 'f', 'synset': 'microwave.n.02', 'synonyms': ['microwave_oven'], 'id': 687, 'def': 'kitchen appliance that 
cooks food by passing an electromagnetic wave through it', 'name': 'microwave_oven'}, {'frequency': 'r', 'synset': 'milestone.n.01', 'synonyms': ['milestone', 'milepost'], 'id': 688, 'def': 'stone post at side of a road to show distances', 'name': 'milestone'}, {'frequency': 'f', 'synset': 'milk.n.01', 'synonyms': ['milk'], 'id': 689, 'def': 'a white nutritious liquid secreted by mammals and used as food by human beings', 'name': 'milk'}, {'frequency': 'r', 'synset': 'milk_can.n.01', 'synonyms': ['milk_can'], 'id': 690, 'def': 'can for transporting milk', 'name': 'milk_can'}, {'frequency': 'r', 'synset': 'milkshake.n.01', 'synonyms': ['milkshake'], 'id': 691, 'def': 'frothy drink of milk and flavoring and sometimes fruit or ice cream', 'name': 'milkshake'}, {'frequency': 'f', 'synset': 'minivan.n.01', 'synonyms': ['minivan'], 'id': 692, 'def': 'a small box-shaped passenger van', 'name': 'minivan'}, {'frequency': 'r', 'synset': 'mint.n.05', 'synonyms': ['mint_candy'], 'id': 693, 'def': 'a candy that is flavored with a mint oil', 'name': 'mint_candy'}, {'frequency': 'f', 'synset': 'mirror.n.01', 'synonyms': ['mirror'], 'id': 694, 'def': 'polished surface that forms images by reflecting light', 'name': 'mirror'}, {'frequency': 'c', 'synset': 'mitten.n.01', 'synonyms': ['mitten'], 'id': 695, 'def': 'glove that encases the thumb separately and the other four fingers together', 'name': 'mitten'}, {'frequency': 'c', 'synset': 'mixer.n.04', 'synonyms': ['mixer_(kitchen_tool)', 'stand_mixer'], 'id': 696, 'def': 'a kitchen utensil that is used for mixing foods', 'name': 'mixer_(kitchen_tool)'}, {'frequency': 'c', 'synset': 'money.n.03', 'synonyms': ['money'], 'id': 697, 'def': 'the official currency issued by a government or national bank', 'name': 'money'}, {'frequency': 'f', 'synset': 'monitor.n.04', 'synonyms': ['monitor_(computer_equipment) computer_monitor'], 'id': 698, 'def': 'a computer monitor', 'name': 'monitor_(computer_equipment) computer_monitor'}, {'frequency': 'c', 'synset': 'monkey.n.01', 'synonyms': ['monkey'], 'id': 699, 'def': 'any of various long-tailed primates', 'name': 'monkey'}, {'frequency': 'f', 'synset': 'motor.n.01', 'synonyms': ['motor'], 'id': 700, 'def': 'machine that converts other forms of energy into mechanical energy and so imparts motion', 'name': 'motor'}, {'frequency': 'f', 'synset': 'motor_scooter.n.01', 'synonyms': ['motor_scooter', 'scooter'], 'id': 701, 'def': 'a wheeled vehicle with small wheels and a low-powered engine', 'name': 'motor_scooter'}, {'frequency': 'r', 'synset': 'motor_vehicle.n.01', 'synonyms': ['motor_vehicle', 'automotive_vehicle'], 'id': 702, 'def': 'a self-propelled wheeled vehicle that does not run on rails', 'name': 'motor_vehicle'}, {'frequency': 'f', 'synset': 'motorcycle.n.01', 'synonyms': ['motorcycle'], 'id': 703, 'def': 'a motor vehicle with two wheels and a strong frame', 'name': 'motorcycle'}, {'frequency': 'f', 'synset': 'mound.n.01', 'synonyms': ['mound_(baseball)', "pitcher's_mound"], 'id': 704, 'def': '(baseball) the slight elevation on which the pitcher stands', 'name': 'mound_(baseball)'}, {'frequency': 'f', 'synset': 'mouse.n.04', 'synonyms': ['mouse_(computer_equipment)', 'computer_mouse'], 'id': 705, 'def': 'a computer input device that controls an on-screen pointer (does not include trackpads / touchpads)', 'name': 'mouse_(computer_equipment)'}, {'frequency': 'f', 'synset': 'mousepad.n.01', 'synonyms': ['mousepad'], 'id': 706, 'def': 'a small portable pad that provides an operating surface for a computer mouse', 'name': 
'mousepad'}, {'frequency': 'c', 'synset': 'muffin.n.01', 'synonyms': ['muffin'], 'id': 707, 'def': 'a sweet quick bread baked in a cup-shaped pan', 'name': 'muffin'}, {'frequency': 'f', 'synset': 'mug.n.04', 'synonyms': ['mug'], 'id': 708, 'def': 'with handle and usually cylindrical', 'name': 'mug'}, {'frequency': 'f', 'synset': 'mushroom.n.02', 'synonyms': ['mushroom'], 'id': 709, 'def': 'a common mushroom', 'name': 'mushroom'}, {'frequency': 'r', 'synset': 'music_stool.n.01', 'synonyms': ['music_stool', 'piano_stool'], 'id': 710, 'def': 'a stool for piano players; usually adjustable in height', 'name': 'music_stool'}, {'frequency': 'c', 'synset': 'musical_instrument.n.01', 'synonyms': ['musical_instrument', 'instrument_(musical)'], 'id': 711, 'def': 'any of various devices or contrivances that can be used to produce musical tones or sounds', 'name': 'musical_instrument'}, {'frequency': 'r', 'synset': 'nailfile.n.01', 'synonyms': ['nailfile'], 'id': 712, 'def': 'a small flat file for shaping the nails', 'name': 'nailfile'}, {'frequency': 'f', 'synset': 'napkin.n.01', 'synonyms': ['napkin', 'table_napkin', 'serviette'], 'id': 713, 'def': 'a small piece of table linen or paper that is used to wipe the mouth and to cover the lap in order to protect clothing', 'name': 'napkin'}, {'frequency': 'r', 'synset': 'neckerchief.n.01', 'synonyms': ['neckerchief'], 'id': 714, 'def': 'a kerchief worn around the neck', 'name': 'neckerchief'}, {'frequency': 'f', 'synset': 'necklace.n.01', 'synonyms': ['necklace'], 'id': 715, 'def': 'jewelry consisting of a cord or chain (often bearing gems) worn about the neck as an ornament', 'name': 'necklace'}, {'frequency': 'f', 'synset': 'necktie.n.01', 'synonyms': ['necktie', 'tie_(necktie)'], 'id': 716, 'def': 'neckwear consisting of a long narrow piece of material worn under a collar and tied in knot at the front', 'name': 'necktie'}, {'frequency': 'c', 'synset': 'needle.n.03', 'synonyms': ['needle'], 'id': 717, 'def': 'a sharp pointed implement (usually metal)', 'name': 'needle'}, {'frequency': 'c', 'synset': 'nest.n.01', 'synonyms': ['nest'], 'id': 718, 'def': 'a structure in which animals lay eggs or give birth to their young', 'name': 'nest'}, {'frequency': 'f', 'synset': 'newspaper.n.01', 'synonyms': ['newspaper', 'paper_(newspaper)'], 'id': 719, 'def': 'a daily or weekly publication on folded sheets containing news, articles, and advertisements', 'name': 'newspaper'}, {'frequency': 'c', 'synset': 'newsstand.n.01', 'synonyms': ['newsstand'], 'id': 720, 'def': 'a stall where newspapers and other periodicals are sold', 'name': 'newsstand'}, {'frequency': 'c', 'synset': 'nightwear.n.01', 'synonyms': ['nightshirt', 'nightwear', 'sleepwear', 'nightclothes'], 'id': 721, 'def': 'garments designed to be worn in bed', 'name': 'nightshirt'}, {'frequency': 'r', 'synset': 'nosebag.n.01', 'synonyms': ['nosebag_(for_animals)', 'feedbag'], 'id': 722, 'def': 'a canvas bag that is used to feed an animal (such as a horse); covers the muzzle and fastens at the top of the head', 'name': 'nosebag_(for_animals)'}, {'frequency': 'c', 'synset': 'noseband.n.01', 'synonyms': ['noseband_(for_animals)', 'nosepiece_(for_animals)'], 'id': 723, 'def': "a strap that is the part of a bridle that goes over the animal's nose", 'name': 'noseband_(for_animals)'}, {'frequency': 'f', 'synset': 'notebook.n.01', 'synonyms': ['notebook'], 'id': 724, 'def': 'a book with blank pages for recording notes or memoranda', 'name': 'notebook'}, {'frequency': 'c', 'synset': 'notepad.n.01', 'synonyms': 
['notepad'], 'id': 725, 'def': 'a pad of paper for keeping notes', 'name': 'notepad'}, {'frequency': 'f', 'synset': 'nut.n.03', 'synonyms': ['nut'], 'id': 726, 'def': 'a small metal block (usually square or hexagonal) with internal screw thread to be fitted onto a bolt', 'name': 'nut'}, {'frequency': 'r', 'synset': 'nutcracker.n.01', 'synonyms': ['nutcracker'], 'id': 727, 'def': 'a hand tool used to crack nuts open', 'name': 'nutcracker'}, {'frequency': 'f', 'synset': 'oar.n.01', 'synonyms': ['oar'], 'id': 728, 'def': 'an implement used to propel or steer a boat', 'name': 'oar'}, {'frequency': 'r', 'synset': 'octopus.n.01', 'synonyms': ['octopus_(food)'], 'id': 729, 'def': 'tentacles of octopus prepared as food', 'name': 'octopus_(food)'}, {'frequency': 'r', 'synset': 'octopus.n.02', 'synonyms': ['octopus_(animal)'], 'id': 730, 'def': 'bottom-living cephalopod having a soft oval body with eight long tentacles', 'name': 'octopus_(animal)'}, {'frequency': 'c', 'synset': 'oil_lamp.n.01', 'synonyms': ['oil_lamp', 'kerosene_lamp', 'kerosine_lamp'], 'id': 731, 'def': 'a lamp that burns oil (as kerosine) for light', 'name': 'oil_lamp'}, {'frequency': 'c', 'synset': 'olive_oil.n.01', 'synonyms': ['olive_oil'], 'id': 732, 'def': 'oil from olives', 'name': 'olive_oil'}, {'frequency': 'r', 'synset': 'omelet.n.01', 'synonyms': ['omelet', 'omelette'], 'id': 733, 'def': 'beaten eggs cooked until just set; may be folded around e.g. ham or cheese or jelly', 'name': 'omelet'}, {'frequency': 'f', 'synset': 'onion.n.01', 'synonyms': ['onion'], 'id': 734, 'def': 'the bulb of an onion plant', 'name': 'onion'}, {'frequency': 'f', 'synset': 'orange.n.01', 'synonyms': ['orange_(fruit)'], 'id': 735, 'def': 'orange (FRUIT of an orange tree)', 'name': 'orange_(fruit)'}, {'frequency': 'c', 'synset': 'orange_juice.n.01', 'synonyms': ['orange_juice'], 'id': 736, 'def': 'bottled or freshly squeezed juice of oranges', 'name': 'orange_juice'}, {'frequency': 'c', 'synset': 'ostrich.n.02', 'synonyms': ['ostrich'], 'id': 737, 'def': 'fast-running African flightless bird with two-toed feet; largest living bird', 'name': 'ostrich'}, {'frequency': 'f', 'synset': 'ottoman.n.03', 'synonyms': ['ottoman', 'pouf', 'pouffe', 'hassock'], 'id': 738, 'def': 'a thick standalone cushion used as a seat or footrest, often next to a chair', 'name': 'ottoman'}, {'frequency': 'f', 'synset': 'oven.n.01', 'synonyms': ['oven'], 'id': 739, 'def': 'kitchen appliance used for baking or roasting', 'name': 'oven'}, {'frequency': 'c', 'synset': 'overall.n.01', 'synonyms': ['overalls_(clothing)'], 'id': 740, 'def': 'work clothing consisting of denim trousers usually with a bib and shoulder straps', 'name': 'overalls_(clothing)'}, {'frequency': 'c', 'synset': 'owl.n.01', 'synonyms': ['owl'], 'id': 741, 'def': 'nocturnal bird of prey with hawk-like beak and claws and large head with front-facing eyes', 'name': 'owl'}, {'frequency': 'c', 'synset': 'packet.n.03', 'synonyms': ['packet'], 'id': 742, 'def': 'a small package or bundle', 'name': 'packet'}, {'frequency': 'r', 'synset': 'pad.n.03', 'synonyms': ['inkpad', 'inking_pad', 'stamp_pad'], 'id': 743, 'def': 'absorbent material saturated with ink used to transfer ink evenly to a rubber stamp', 'name': 'inkpad'}, {'frequency': 'c', 'synset': 'pad.n.04', 'synonyms': ['pad'], 'id': 744, 'def': 'mostly arm/knee pads labeled', 'name': 'pad'}, {'frequency': 'f', 'synset': 'paddle.n.04', 'synonyms': ['paddle', 'boat_paddle'], 'id': 745, 'def': 'a short light oar used without an oarlock to propel a canoe or small 
boat', 'name': 'paddle'}, {'frequency': 'c', 'synset': 'padlock.n.01', 'synonyms': ['padlock'], 'id': 746, 'def': 'a detachable, portable lock', 'name': 'padlock'}, {'frequency': 'c', 'synset': 'paintbrush.n.01', 'synonyms': ['paintbrush'], 'id': 747, 'def': 'a brush used as an applicator to apply paint', 'name': 'paintbrush'}, {'frequency': 'f', 'synset': 'painting.n.01', 'synonyms': ['painting'], 'id': 748, 'def': 'graphic art consisting of an artistic composition made by applying paints to a surface', 'name': 'painting'}, {'frequency': 'f', 'synset': 'pajama.n.02', 'synonyms': ['pajamas', 'pyjamas'], 'id': 749, 'def': 'loose-fitting nightclothes worn for sleeping or lounging', 'name': 'pajamas'}, {'frequency': 'c', 'synset': 'palette.n.02', 'synonyms': ['palette', 'pallet'], 'id': 750, 'def': 'board that provides a flat surface on which artists mix paints and the range of colors used', 'name': 'palette'}, {'frequency': 'f', 'synset': 'pan.n.01', 'synonyms': ['pan_(for_cooking)', 'cooking_pan'], 'id': 751, 'def': 'cooking utensil consisting of a wide metal vessel', 'name': 'pan_(for_cooking)'}, {'frequency': 'r', 'synset': 'pan.n.03', 'synonyms': ['pan_(metal_container)'], 'id': 752, 'def': 'shallow container made of metal', 'name': 'pan_(metal_container)'}, {'frequency': 'c', 'synset': 'pancake.n.01', 'synonyms': ['pancake'], 'id': 753, 'def': 'a flat cake of thin batter fried on both sides on a griddle', 'name': 'pancake'}, {'frequency': 'r', 'synset': 'pantyhose.n.01', 'synonyms': ['pantyhose'], 'id': 754, 'def': "a woman's tights consisting of underpants and stockings", 'name': 'pantyhose'}, {'frequency': 'r', 'synset': 'papaya.n.02', 'synonyms': ['papaya'], 'id': 755, 'def': 'large oval melon-like tropical fruit with yellowish flesh', 'name': 'papaya'}, {'frequency': 'f', 'synset': 'paper_plate.n.01', 'synonyms': ['paper_plate'], 'id': 756, 'def': 'a disposable plate made of cardboard', 'name': 'paper_plate'}, {'frequency': 'f', 'synset': 'paper_towel.n.01', 'synonyms': ['paper_towel'], 'id': 757, 'def': 'a disposable towel made of absorbent paper', 'name': 'paper_towel'}, {'frequency': 'r', 'synset': 'paperback_book.n.01', 'synonyms': ['paperback_book', 'paper-back_book', 'softback_book', 'soft-cover_book'], 'id': 758, 'def': 'a book with paper covers', 'name': 'paperback_book'}, {'frequency': 'r', 'synset': 'paperweight.n.01', 'synonyms': ['paperweight'], 'id': 759, 'def': 'a weight used to hold down a stack of papers', 'name': 'paperweight'}, {'frequency': 'c', 'synset': 'parachute.n.01', 'synonyms': ['parachute'], 'id': 760, 'def': 'rescue equipment consisting of a device that fills with air and retards your fall', 'name': 'parachute'}, {'frequency': 'c', 'synset': 'parakeet.n.01', 'synonyms': ['parakeet', 'parrakeet', 'parroket', 'paraquet', 'paroquet', 'parroquet'], 'id': 761, 'def': 'any of numerous small slender long-tailed parrots', 'name': 'parakeet'}, {'frequency': 'c', 'synset': 'parasail.n.01', 'synonyms': ['parasail_(sports)'], 'id': 762, 'def': 'parachute that will lift a person up into the air when it is towed by a motorboat or a car', 'name': 'parasail_(sports)'}, {'frequency': 'c', 'synset': 'parasol.n.01', 'synonyms': ['parasol', 'sunshade'], 'id': 763, 'def': 'a handheld collapsible source of shade', 'name': 'parasol'}, {'frequency': 'r', 'synset': 'parchment.n.01', 'synonyms': ['parchment'], 'id': 764, 'def': 'a superior paper resembling sheepskin', 'name': 'parchment'}, {'frequency': 'c', 'synset': 'parka.n.01', 'synonyms': ['parka', 'anorak'], 'id': 765, 
'def': "a kind of heavy jacket (`windcheater' is a British term)", 'name': 'parka'}, {'frequency': 'f', 'synset': 'parking_meter.n.01', 'synonyms': ['parking_meter'], 'id': 766, 'def': 'a coin-operated timer located next to a parking space', 'name': 'parking_meter'}, {'frequency': 'c', 'synset': 'parrot.n.01', 'synonyms': ['parrot'], 'id': 767, 'def': 'usually brightly colored tropical birds with short hooked beaks and the ability to mimic sounds', 'name': 'parrot'}, {'frequency': 'c', 'synset': 'passenger_car.n.01', 'synonyms': ['passenger_car_(part_of_a_train)', 'coach_(part_of_a_train)'], 'id': 768, 'def': 'a railcar where passengers ride', 'name': 'passenger_car_(part_of_a_train)'}, {'frequency': 'r', 'synset': 'passenger_ship.n.01', 'synonyms': ['passenger_ship'], 'id': 769, 'def': 'a ship built to carry passengers', 'name': 'passenger_ship'}, {'frequency': 'c', 'synset': 'passport.n.02', 'synonyms': ['passport'], 'id': 770, 'def': 'a document issued by a country to a citizen allowing that person to travel abroad and re-enter the home country', 'name': 'passport'}, {'frequency': 'f', 'synset': 'pastry.n.02', 'synonyms': ['pastry'], 'id': 771, 'def': 'any of various baked foods made of dough or batter', 'name': 'pastry'}, {'frequency': 'r', 'synset': 'patty.n.01', 'synonyms': ['patty_(food)'], 'id': 772, 'def': 'small flat mass of chopped food', 'name': 'patty_(food)'}, {'frequency': 'c', 'synset': 'pea.n.01', 'synonyms': ['pea_(food)'], 'id': 773, 'def': 'seed of a pea plant used for food', 'name': 'pea_(food)'}, {'frequency': 'c', 'synset': 'peach.n.03', 'synonyms': ['peach'], 'id': 774, 'def': 'downy juicy fruit with sweet yellowish or whitish flesh', 'name': 'peach'}, {'frequency': 'c', 'synset': 'peanut_butter.n.01', 'synonyms': ['peanut_butter'], 'id': 775, 'def': 'a spread made from ground peanuts', 'name': 'peanut_butter'}, {'frequency': 'f', 'synset': 'pear.n.01', 'synonyms': ['pear'], 'id': 776, 'def': 'sweet juicy gritty-textured fruit available in many varieties', 'name': 'pear'}, {'frequency': 'c', 'synset': 'peeler.n.03', 'synonyms': ['peeler_(tool_for_fruit_and_vegetables)'], 'id': 777, 'def': 'a device for peeling vegetables or fruits', 'name': 'peeler_(tool_for_fruit_and_vegetables)'}, {'frequency': 'r', 'synset': 'peg.n.04', 'synonyms': ['wooden_leg', 'pegleg'], 'id': 778, 'def': 'a prosthesis that replaces a missing leg', 'name': 'wooden_leg'}, {'frequency': 'r', 'synset': 'pegboard.n.01', 'synonyms': ['pegboard'], 'id': 779, 'def': 'a board perforated with regularly spaced holes into which pegs can be fitted', 'name': 'pegboard'}, {'frequency': 'c', 'synset': 'pelican.n.01', 'synonyms': ['pelican'], 'id': 780, 'def': 'large long-winged warm-water seabird having a large bill with a distensible pouch for fish', 'name': 'pelican'}, {'frequency': 'f', 'synset': 'pen.n.01', 'synonyms': ['pen'], 'id': 781, 'def': 'a writing implement with a point from which ink flows', 'name': 'pen'}, {'frequency': 'f', 'synset': 'pencil.n.01', 'synonyms': ['pencil'], 'id': 782, 'def': 'a thin cylindrical pointed writing implement made of wood and graphite', 'name': 'pencil'}, {'frequency': 'r', 'synset': 'pencil_box.n.01', 'synonyms': ['pencil_box', 'pencil_case'], 'id': 783, 'def': 'a box for holding pencils', 'name': 'pencil_box'}, {'frequency': 'r', 'synset': 'pencil_sharpener.n.01', 'synonyms': ['pencil_sharpener'], 'id': 784, 'def': 'a rotary implement for sharpening the point on pencils', 'name': 'pencil_sharpener'}, {'frequency': 'r', 'synset': 'pendulum.n.01', 'synonyms': 
['pendulum'], 'id': 785, 'def': 'an apparatus consisting of an object mounted so that it swings freely under the influence of gravity', 'name': 'pendulum'}, {'frequency': 'c', 'synset': 'penguin.n.01', 'synonyms': ['penguin'], 'id': 786, 'def': 'short-legged flightless birds of cold southern regions having webbed feet and wings modified as flippers', 'name': 'penguin'}, {'frequency': 'r', 'synset': 'pennant.n.02', 'synonyms': ['pennant'], 'id': 787, 'def': 'a flag longer than it is wide (and often tapering)', 'name': 'pennant'}, {'frequency': 'r', 'synset': 'penny.n.02', 'synonyms': ['penny_(coin)'], 'id': 788, 'def': 'a coin worth one-hundredth of the value of the basic unit', 'name': 'penny_(coin)'}, {'frequency': 'f', 'synset': 'pepper.n.03', 'synonyms': ['pepper', 'peppercorn'], 'id': 789, 'def': 'pungent seasoning from the berry of the common pepper plant; whole or ground', 'name': 'pepper'}, {'frequency': 'c', 'synset': 'pepper_mill.n.01', 'synonyms': ['pepper_mill', 'pepper_grinder'], 'id': 790, 'def': 'a mill for grinding pepper', 'name': 'pepper_mill'}, {'frequency': 'c', 'synset': 'perfume.n.02', 'synonyms': ['perfume'], 'id': 791, 'def': 'a toiletry that emits and diffuses a fragrant odor', 'name': 'perfume'}, {'frequency': 'r', 'synset': 'persimmon.n.02', 'synonyms': ['persimmon'], 'id': 792, 'def': 'orange fruit resembling a plum; edible when fully ripe', 'name': 'persimmon'}, {'frequency': 'f', 'synset': 'person.n.01', 'synonyms': ['person', 'baby', 'child', 'boy', 'girl', 'man', 'woman', 'human'], 'id': 793, 'def': 'a human being', 'name': 'person'}, {'frequency': 'c', 'synset': 'pet.n.01', 'synonyms': ['pet'], 'id': 794, 'def': 'a domesticated animal kept for companionship or amusement', 'name': 'pet'}, {'frequency': 'c', 'synset': 'pew.n.01', 'synonyms': ['pew_(church_bench)', 'church_bench'], 'id': 795, 'def': 'long bench with backs; used in church by the congregation', 'name': 'pew_(church_bench)'}, {'frequency': 'r', 'synset': 'phonebook.n.01', 'synonyms': ['phonebook', 'telephone_book', 'telephone_directory'], 'id': 796, 'def': 'a directory containing an alphabetical list of telephone subscribers and their telephone numbers', 'name': 'phonebook'}, {'frequency': 'c', 'synset': 'phonograph_record.n.01', 'synonyms': ['phonograph_record', 'phonograph_recording', 'record_(phonograph_recording)'], 'id': 797, 'def': 'sound recording consisting of a typically black disk with a continuous groove', 'name': 'phonograph_record'}, {'frequency': 'f', 'synset': 'piano.n.01', 'synonyms': ['piano'], 'id': 798, 'def': 'a keyboard instrument that is played by depressing keys that cause hammers to strike tuned strings and produce sounds', 'name': 'piano'}, {'frequency': 'f', 'synset': 'pickle.n.01', 'synonyms': ['pickle'], 'id': 799, 'def': 'vegetables (especially cucumbers) preserved in brine or vinegar', 'name': 'pickle'}, {'frequency': 'f', 'synset': 'pickup.n.01', 'synonyms': ['pickup_truck'], 'id': 800, 'def': 'a light truck with an open body and low sides and a tailboard', 'name': 'pickup_truck'}, {'frequency': 'c', 'synset': 'pie.n.01', 'synonyms': ['pie'], 'id': 801, 'def': 'dish baked in pastry-lined pan often with a pastry top', 'name': 'pie'}, {'frequency': 'c', 'synset': 'pigeon.n.01', 'synonyms': ['pigeon'], 'id': 802, 'def': 'wild and domesticated birds having a heavy body and short legs', 'name': 'pigeon'}, {'frequency': 'r', 'synset': 'piggy_bank.n.01', 'synonyms': ['piggy_bank', 'penny_bank'], 'id': 803, 'def': "a child's coin bank (often shaped like a pig)", 'name': 
'piggy_bank'}, {'frequency': 'f', 'synset': 'pillow.n.01', 'synonyms': ['pillow'], 'id': 804, 'def': 'a cushion to support the head of a sleeping person', 'name': 'pillow'}, {'frequency': 'r', 'synset': 'pin.n.09', 'synonyms': ['pin_(non_jewelry)'], 'id': 805, 'def': 'a small slender (often pointed) piece of wood or metal used to support or fasten or attach things', 'name': 'pin_(non_jewelry)'}, {'frequency': 'f', 'synset': 'pineapple.n.02', 'synonyms': ['pineapple'], 'id': 806, 'def': 'large sweet fleshy tropical fruit with a tuft of stiff leaves', 'name': 'pineapple'}, {'frequency': 'c', 'synset': 'pinecone.n.01', 'synonyms': ['pinecone'], 'id': 807, 'def': 'the seed-producing cone of a pine tree', 'name': 'pinecone'}, {'frequency': 'r', 'synset': 'ping-pong_ball.n.01', 'synonyms': ['ping-pong_ball'], 'id': 808, 'def': 'light hollow ball used in playing table tennis', 'name': 'ping-pong_ball'}, {'frequency': 'r', 'synset': 'pinwheel.n.03', 'synonyms': ['pinwheel'], 'id': 809, 'def': 'a toy consisting of vanes of colored paper or plastic that is pinned to a stick and spins when it is pointed into the wind', 'name': 'pinwheel'}, {'frequency': 'r', 'synset': 'pipe.n.01', 'synonyms': ['tobacco_pipe'], 'id': 810, 'def': 'a tube with a small bowl at one end; used for smoking tobacco', 'name': 'tobacco_pipe'}, {'frequency': 'f', 'synset': 'pipe.n.02', 'synonyms': ['pipe', 'piping'], 'id': 811, 'def': 'a long tube made of metal or plastic that is used to carry water or oil or gas etc.', 'name': 'pipe'}, {'frequency': 'r', 'synset': 'pistol.n.01', 'synonyms': ['pistol', 'handgun'], 'id': 812, 'def': 'a firearm that is held and fired with one hand', 'name': 'pistol'}, {'frequency': 'c', 'synset': 'pita.n.01', 'synonyms': ['pita_(bread)', 'pocket_bread'], 'id': 813, 'def': 'usually small round bread that can open into a pocket for filling', 'name': 'pita_(bread)'}, {'frequency': 'f', 'synset': 'pitcher.n.02', 'synonyms': ['pitcher_(vessel_for_liquid)', 'ewer'], 'id': 814, 'def': 'an open vessel with a handle and a spout for pouring', 'name': 'pitcher_(vessel_for_liquid)'}, {'frequency': 'r', 'synset': 'pitchfork.n.01', 'synonyms': ['pitchfork'], 'id': 815, 'def': 'a long-handled hand tool with sharp widely spaced prongs for lifting and pitching hay', 'name': 'pitchfork'}, {'frequency': 'f', 'synset': 'pizza.n.01', 'synonyms': ['pizza'], 'id': 816, 'def': 'Italian open pie made of thin bread dough spread with a spiced mixture of e.g. 
tomato sauce and cheese', 'name': 'pizza'}, {'frequency': 'f', 'synset': 'place_mat.n.01', 'synonyms': ['place_mat'], 'id': 817, 'def': 'a mat placed on a table for an individual place setting', 'name': 'place_mat'}, {'frequency': 'f', 'synset': 'plate.n.04', 'synonyms': ['plate'], 'id': 818, 'def': 'dish on which food is served or from which food is eaten', 'name': 'plate'}, {'frequency': 'c', 'synset': 'platter.n.01', 'synonyms': ['platter'], 'id': 819, 'def': 'a large shallow dish used for serving food', 'name': 'platter'}, {'frequency': 'r', 'synset': 'playpen.n.01', 'synonyms': ['playpen'], 'id': 820, 'def': 'a portable enclosure in which babies may be left to play', 'name': 'playpen'}, {'frequency': 'c', 'synset': 'pliers.n.01', 'synonyms': ['pliers', 'plyers'], 'id': 821, 'def': 'a gripping hand tool with two hinged arms and (usually) serrated jaws', 'name': 'pliers'}, {'frequency': 'r', 'synset': 'plow.n.01', 'synonyms': ['plow_(farm_equipment)', 'plough_(farm_equipment)'], 'id': 822, 'def': 'a farm tool having one or more heavy blades to break the soil and cut a furrow prior to sowing', 'name': 'plow_(farm_equipment)'}, {'frequency': 'r', 'synset': 'plume.n.02', 'synonyms': ['plume'], 'id': 823, 'def': 'a feather or cluster of feathers worn as an ornament', 'name': 'plume'}, {'frequency': 'r', 'synset': 'pocket_watch.n.01', 'synonyms': ['pocket_watch'], 'id': 824, 'def': 'a watch that is carried in a small watch pocket', 'name': 'pocket_watch'}, {'frequency': 'c', 'synset': 'pocketknife.n.01', 'synonyms': ['pocketknife'], 'id': 825, 'def': 'a knife with a blade that folds into the handle; suitable for carrying in the pocket', 'name': 'pocketknife'}, {'frequency': 'c', 'synset': 'poker.n.01', 'synonyms': ['poker_(fire_stirring_tool)', 'stove_poker', 'fire_hook'], 'id': 826, 'def': 'fire iron consisting of a metal rod with a handle; used to stir a fire', 'name': 'poker_(fire_stirring_tool)'}, {'frequency': 'f', 'synset': 'pole.n.01', 'synonyms': ['pole', 'post'], 'id': 827, 'def': 'a long (usually round) rod of wood or metal or plastic', 'name': 'pole'}, {'frequency': 'f', 'synset': 'polo_shirt.n.01', 'synonyms': ['polo_shirt', 'sport_shirt'], 'id': 828, 'def': 'a shirt with short sleeves designed for comfort and casual wear', 'name': 'polo_shirt'}, {'frequency': 'r', 'synset': 'poncho.n.01', 'synonyms': ['poncho'], 'id': 829, 'def': 'a blanket-like cloak with a hole in the center for the head', 'name': 'poncho'}, {'frequency': 'c', 'synset': 'pony.n.05', 'synonyms': ['pony'], 'id': 830, 'def': 'any of various breeds of small gentle horses usually less than five feet high at the shoulder', 'name': 'pony'}, {'frequency': 'r', 'synset': 'pool_table.n.01', 'synonyms': ['pool_table', 'billiard_table', 'snooker_table'], 'id': 831, 'def': 'game equipment consisting of a heavy table on which pool is played', 'name': 'pool_table'}, {'frequency': 'f', 'synset': 'pop.n.02', 'synonyms': ['pop_(soda)', 'soda_(pop)', 'tonic', 'soft_drink'], 'id': 832, 'def': 'a sweet drink containing carbonated water and flavoring', 'name': 'pop_(soda)'}, {'frequency': 'c', 'synset': 'postbox.n.01', 'synonyms': ['postbox_(public)', 'mailbox_(public)'], 'id': 833, 'def': 'public box for deposit of mail', 'name': 'postbox_(public)'}, {'frequency': 'c', 'synset': 'postcard.n.01', 'synonyms': ['postcard', 'postal_card', 'mailing-card'], 'id': 834, 'def': 'a card for sending messages by post without an envelope', 'name': 'postcard'}, {'frequency': 'f', 'synset': 'poster.n.01', 'synonyms': ['poster', 'placard'], 'id': 
835, 'def': 'a sign posted in a public place as an advertisement', 'name': 'poster'}, {'frequency': 'f', 'synset': 'pot.n.01', 'synonyms': ['pot'], 'id': 836, 'def': 'metal or earthenware cooking vessel that is usually round and deep; often has a handle and lid', 'name': 'pot'}, {'frequency': 'f', 'synset': 'pot.n.04', 'synonyms': ['flowerpot'], 'id': 837, 'def': 'a container in which plants are cultivated', 'name': 'flowerpot'}, {'frequency': 'f', 'synset': 'potato.n.01', 'synonyms': ['potato'], 'id': 838, 'def': 'an edible tuber native to South America', 'name': 'potato'}, {'frequency': 'c', 'synset': 'potholder.n.01', 'synonyms': ['potholder'], 'id': 839, 'def': 'an insulated pad for holding hot pots', 'name': 'potholder'}, {'frequency': 'c', 'synset': 'pottery.n.01', 'synonyms': ['pottery', 'clayware'], 'id': 840, 'def': 'ceramic ware made from clay and baked in a kiln', 'name': 'pottery'}, {'frequency': 'c', 'synset': 'pouch.n.01', 'synonyms': ['pouch'], 'id': 841, 'def': 'a small or medium size container for holding or carrying things', 'name': 'pouch'}, {'frequency': 'c', 'synset': 'power_shovel.n.01', 'synonyms': ['power_shovel', 'excavator', 'digger'], 'id': 842, 'def': 'a machine for excavating', 'name': 'power_shovel'}, {'frequency': 'c', 'synset': 'prawn.n.01', 'synonyms': ['prawn', 'shrimp'], 'id': 843, 'def': 'any of various edible decapod crustaceans', 'name': 'prawn'}, {'frequency': 'c', 'synset': 'pretzel.n.01', 'synonyms': ['pretzel'], 'id': 844, 'def': 'glazed and salted cracker typically in the shape of a loose knot', 'name': 'pretzel'}, {'frequency': 'f', 'synset': 'printer.n.03', 'synonyms': ['printer', 'printing_machine'], 'id': 845, 'def': 'a machine that prints', 'name': 'printer'}, {'frequency': 'c', 'synset': 'projectile.n.01', 'synonyms': ['projectile_(weapon)', 'missile'], 'id': 846, 'def': 'a weapon that is forcibly thrown or projected at a target', 'name': 'projectile_(weapon)'}, {'frequency': 'c', 'synset': 'projector.n.02', 'synonyms': ['projector'], 'id': 847, 'def': 'an optical instrument that projects an enlarged image onto a screen', 'name': 'projector'}, {'frequency': 'f', 'synset': 'propeller.n.01', 'synonyms': ['propeller', 'propellor'], 'id': 848, 'def': 'a mechanical device that rotates to push against air or water', 'name': 'propeller'}, {'frequency': 'r', 'synset': 'prune.n.01', 'synonyms': ['prune'], 'id': 849, 'def': 'dried plum', 'name': 'prune'}, {'frequency': 'r', 'synset': 'pudding.n.01', 'synonyms': ['pudding'], 'id': 850, 'def': 'any of various soft thick unsweetened baked dishes', 'name': 'pudding'}, {'frequency': 'r', 'synset': 'puffer.n.02', 'synonyms': ['puffer_(fish)', 'pufferfish', 'blowfish', 'globefish'], 'id': 851, 'def': 'fishes whose elongated spiny body can inflate itself with water or air to form a globe', 'name': 'puffer_(fish)'}, {'frequency': 'r', 'synset': 'puffin.n.01', 'synonyms': ['puffin'], 'id': 852, 'def': 'seabirds having short necks and brightly colored compressed bills', 'name': 'puffin'}, {'frequency': 'r', 'synset': 'pug.n.01', 'synonyms': ['pug-dog'], 'id': 853, 'def': 'small compact smooth-coated breed of Asiatic origin having a tightly curled tail and broad flat wrinkled muzzle', 'name': 'pug-dog'}, {'frequency': 'c', 'synset': 'pumpkin.n.02', 'synonyms': ['pumpkin'], 'id': 854, 'def': 'usually large pulpy deep-yellow round fruit of the squash family maturing in late summer or early autumn', 'name': 'pumpkin'}, {'frequency': 'r', 'synset': 'punch.n.03', 'synonyms': ['puncher'], 'id': 855, 'def': 'a tool for 
making holes or indentations', 'name': 'puncher'}, {'frequency': 'r', 'synset': 'puppet.n.01', 'synonyms': ['puppet', 'marionette'], 'id': 856, 'def': 'a small figure of a person operated from above with strings by a puppeteer', 'name': 'puppet'}, {'frequency': 'c', 'synset': 'puppy.n.01', 'synonyms': ['puppy'], 'id': 857, 'def': 'a young dog', 'name': 'puppy'}, {'frequency': 'r', 'synset': 'quesadilla.n.01', 'synonyms': ['quesadilla'], 'id': 858, 'def': 'a tortilla that is filled with cheese and heated', 'name': 'quesadilla'}, {'frequency': 'r', 'synset': 'quiche.n.02', 'synonyms': ['quiche'], 'id': 859, 'def': 'a tart filled with rich unsweetened custard; often contains other ingredients (as cheese or ham or seafood or vegetables)', 'name': 'quiche'}, {'frequency': 'f', 'synset': 'quilt.n.01', 'synonyms': ['quilt', 'comforter'], 'id': 860, 'def': 'bedding made of two layers of cloth filled with stuffing and stitched together', 'name': 'quilt'}, {'frequency': 'c', 'synset': 'rabbit.n.01', 'synonyms': ['rabbit'], 'id': 861, 'def': 'any of various burrowing animals of the family Leporidae having long ears and short tails', 'name': 'rabbit'}, {'frequency': 'r', 'synset': 'racer.n.02', 'synonyms': ['race_car', 'racing_car'], 'id': 862, 'def': 'a fast car that competes in races', 'name': 'race_car'}, {'frequency': 'c', 'synset': 'racket.n.04', 'synonyms': ['racket', 'racquet'], 'id': 863, 'def': 'a sports implement used to strike a ball in various games', 'name': 'racket'}, {'frequency': 'r', 'synset': 'radar.n.01', 'synonyms': ['radar'], 'id': 864, 'def': 'measuring instrument in which the echo of a pulse of microwave radiation is used to detect and locate distant objects', 'name': 'radar'}, {'frequency': 'f', 'synset': 'radiator.n.03', 'synonyms': ['radiator'], 'id': 865, 'def': 'a mechanism consisting of a metal honeycomb through which hot fluids circulate', 'name': 'radiator'}, {'frequency': 'c', 'synset': 'radio_receiver.n.01', 'synonyms': ['radio_receiver', 'radio_set', 'radio', 'tuner_(radio)'], 'id': 866, 'def': 'an electronic receiver that detects and demodulates and amplifies transmitted radio signals', 'name': 'radio_receiver'}, {'frequency': 'c', 'synset': 'radish.n.03', 'synonyms': ['radish', 'daikon'], 'id': 867, 'def': 'pungent edible root of any of various cultivated radish plants', 'name': 'radish'}, {'frequency': 'c', 'synset': 'raft.n.01', 'synonyms': ['raft'], 'id': 868, 'def': 'a flat float (usually made of logs or planks) that can be used for transport or as a platform for swimmers', 'name': 'raft'}, {'frequency': 'r', 'synset': 'rag_doll.n.01', 'synonyms': ['rag_doll'], 'id': 869, 'def': 'a cloth doll that is stuffed and (usually) painted', 'name': 'rag_doll'}, {'frequency': 'c', 'synset': 'raincoat.n.01', 'synonyms': ['raincoat', 'waterproof_jacket'], 'id': 870, 'def': 'a water-resistant coat', 'name': 'raincoat'}, {'frequency': 'c', 'synset': 'ram.n.05', 'synonyms': ['ram_(animal)'], 'id': 871, 'def': 'uncastrated adult male sheep', 'name': 'ram_(animal)'}, {'frequency': 'c', 'synset': 'raspberry.n.02', 'synonyms': ['raspberry'], 'id': 872, 'def': 'red or black edible aggregate berries usually smaller than the related blackberries', 'name': 'raspberry'}, {'frequency': 'r', 'synset': 'rat.n.01', 'synonyms': ['rat'], 'id': 873, 'def': 'any of various long-tailed rodents similar to but larger than a mouse', 'name': 'rat'}, {'frequency': 'c', 'synset': 'razorblade.n.01', 'synonyms': ['razorblade'], 'id': 874, 'def': 'a blade that has very sharp edge', 'name': 
'razorblade'}, {'frequency': 'c', 'synset': 'reamer.n.01', 'synonyms': ['reamer_(juicer)', 'juicer', 'juice_reamer'], 'id': 875, 'def': 'a squeezer with a conical ridged center that is used for squeezing juice from citrus fruit', 'name': 'reamer_(juicer)'}, {'frequency': 'f', 'synset': 'rearview_mirror.n.01', 'synonyms': ['rearview_mirror'], 'id': 876, 'def': 'vehicle mirror (side or rearview)', 'name': 'rearview_mirror'}, {'frequency': 'c', 'synset': 'receipt.n.02', 'synonyms': ['receipt'], 'id': 877, 'def': 'an acknowledgment (usually tangible) that payment has been made', 'name': 'receipt'}, {'frequency': 'c', 'synset': 'recliner.n.01', 'synonyms': ['recliner', 'reclining_chair', 'lounger_(chair)'], 'id': 878, 'def': 'an armchair whose back can be lowered and foot can be raised to allow the sitter to recline in it', 'name': 'recliner'}, {'frequency': 'c', 'synset': 'record_player.n.01', 'synonyms': ['record_player', 'phonograph_(record_player)', 'turntable'], 'id': 879, 'def': 'machine in which rotating records cause a stylus to vibrate and the vibrations are amplified acoustically or electronically', 'name': 'record_player'}, {'frequency': 'f', 'synset': 'reflector.n.01', 'synonyms': ['reflector'], 'id': 880, 'def': 'device that reflects light, radiation, etc.', 'name': 'reflector'}, {'frequency': 'f', 'synset': 'remote_control.n.01', 'synonyms': ['remote_control'], 'id': 881, 'def': 'a device that can be used to control a machine or apparatus from a distance', 'name': 'remote_control'}, {'frequency': 'c', 'synset': 'rhinoceros.n.01', 'synonyms': ['rhinoceros'], 'id': 882, 'def': 'massive powerful herbivorous odd-toed ungulate of southeast Asia and Africa having very thick skin and one or two horns on the snout', 'name': 'rhinoceros'}, {'frequency': 'r', 'synset': 'rib.n.03', 'synonyms': ['rib_(food)'], 'id': 883, 'def': 'cut of meat including one or more ribs', 'name': 'rib_(food)'}, {'frequency': 'c', 'synset': 'rifle.n.01', 'synonyms': ['rifle'], 'id': 884, 'def': 'a shoulder firearm with a long barrel', 'name': 'rifle'}, {'frequency': 'f', 'synset': 'ring.n.08', 'synonyms': ['ring'], 'id': 885, 'def': 'jewelry consisting of a circlet of precious metal (often set with jewels) worn on the finger', 'name': 'ring'}, {'frequency': 'r', 'synset': 'river_boat.n.01', 'synonyms': ['river_boat'], 'id': 886, 'def': 'a boat used on rivers or to ply a river', 'name': 'river_boat'}, {'frequency': 'r', 'synset': 'road_map.n.02', 'synonyms': ['road_map'], 'id': 887, 'def': '(NOT A ROAD) a MAP showing roads (for automobile travel)', 'name': 'road_map'}, {'frequency': 'c', 'synset': 'robe.n.01', 'synonyms': ['robe'], 'id': 888, 'def': 'any loose flowing garment', 'name': 'robe'}, {'frequency': 'c', 'synset': 'rocking_chair.n.01', 'synonyms': ['rocking_chair'], 'id': 889, 'def': 'a chair mounted on rockers', 'name': 'rocking_chair'}, {'frequency': 'r', 'synset': 'rodent.n.01', 'synonyms': ['rodent'], 'id': 890, 'def': 'relatively small placental mammals having a single pair of constantly growing incisor teeth specialized for gnawing', 'name': 'rodent'}, {'frequency': 'r', 'synset': 'roller_skate.n.01', 'synonyms': ['roller_skate'], 'id': 891, 'def': 'a shoe with pairs of rollers (small hard wheels) fixed to the sole', 'name': 'roller_skate'}, {'frequency': 'r', 'synset': 'rollerblade.n.01', 'synonyms': ['Rollerblade'], 'id': 892, 'def': 'an in-line variant of a roller skate', 'name': 'Rollerblade'}, {'frequency': 'c', 'synset': 'rolling_pin.n.01', 'synonyms': ['rolling_pin'], 'id': 893, 'def': 
'utensil consisting of a cylinder (usually of wood) with a handle at each end; used to roll out dough', 'name': 'rolling_pin'}, {'frequency': 'r', 'synset': 'root_beer.n.01', 'synonyms': ['root_beer'], 'id': 894, 'def': 'carbonated drink containing extracts of roots and herbs', 'name': 'root_beer'}, {'frequency': 'c', 'synset': 'router.n.02', 'synonyms': ['router_(computer_equipment)'], 'id': 895, 'def': 'a device that forwards data packets between computer networks', 'name': 'router_(computer_equipment)'}, {'frequency': 'f', 'synset': 'rubber_band.n.01', 'synonyms': ['rubber_band', 'elastic_band'], 'id': 896, 'def': 'a narrow band of elastic rubber used to hold things (such as papers) together', 'name': 'rubber_band'}, {'frequency': 'c', 'synset': 'runner.n.08', 'synonyms': ['runner_(carpet)'], 'id': 897, 'def': 'a long narrow carpet', 'name': 'runner_(carpet)'}, {'frequency': 'f', 'synset': 'sack.n.01', 'synonyms': ['plastic_bag', 'paper_bag'], 'id': 898, 'def': "a bag made of paper or plastic for holding customer's purchases", 'name': 'plastic_bag'}, {'frequency': 'f', 'synset': 'saddle.n.01', 'synonyms': ['saddle_(on_an_animal)'], 'id': 899, 'def': 'a seat for the rider of a horse or camel', 'name': 'saddle_(on_an_animal)'}, {'frequency': 'f', 'synset': 'saddle_blanket.n.01', 'synonyms': ['saddle_blanket', 'saddlecloth', 'horse_blanket'], 'id': 900, 'def': 'stable gear consisting of a blanket placed under the saddle', 'name': 'saddle_blanket'}, {'frequency': 'c', 'synset': 'saddlebag.n.01', 'synonyms': ['saddlebag'], 'id': 901, 'def': 'a large bag (or pair of bags) hung over a saddle', 'name': 'saddlebag'}, {'frequency': 'r', 'synset': 'safety_pin.n.01', 'synonyms': ['safety_pin'], 'id': 902, 'def': 'a pin in the form of a clasp; has a guard so the point of the pin will not stick the user', 'name': 'safety_pin'}, {'frequency': 'f', 'synset': 'sail.n.01', 'synonyms': ['sail'], 'id': 903, 'def': 'a large piece of fabric by means of which wind is used to propel a sailing vessel', 'name': 'sail'}, {'frequency': 'f', 'synset': 'salad.n.01', 'synonyms': ['salad'], 'id': 904, 'def': 'food mixtures either arranged on a plate or tossed and served with a moist dressing; usually consisting of or including greens', 'name': 'salad'}, {'frequency': 'r', 'synset': 'salad_plate.n.01', 'synonyms': ['salad_plate', 'salad_bowl'], 'id': 905, 'def': 'a plate or bowl for individual servings of salad', 'name': 'salad_plate'}, {'frequency': 'c', 'synset': 'salami.n.01', 'synonyms': ['salami'], 'id': 906, 'def': 'highly seasoned fatty sausage of pork and beef usually dried', 'name': 'salami'}, {'frequency': 'c', 'synset': 'salmon.n.01', 'synonyms': ['salmon_(fish)'], 'id': 907, 'def': 'any of various large food and game fishes of northern waters', 'name': 'salmon_(fish)'}, {'frequency': 'r', 'synset': 'salmon.n.03', 'synonyms': ['salmon_(food)'], 'id': 908, 'def': 'flesh of any of various marine or freshwater fish of the family Salmonidae', 'name': 'salmon_(food)'}, {'frequency': 'c', 'synset': 'salsa.n.01', 'synonyms': ['salsa'], 'id': 909, 'def': 'spicy sauce of tomatoes and onions and chili peppers to accompany Mexican foods', 'name': 'salsa'}, {'frequency': 'f', 'synset': 'saltshaker.n.01', 'synonyms': ['saltshaker'], 'id': 910, 'def': 'a shaker with a perforated top for sprinkling salt', 'name': 'saltshaker'}, {'frequency': 'f', 'synset': 'sandal.n.01', 'synonyms': ['sandal_(type_of_shoe)'], 'id': 911, 'def': 'a shoe consisting of a sole fastened by straps to the foot', 'name': 'sandal_(type_of_shoe)'}, 
{'frequency': 'f', 'synset': 'sandwich.n.01', 'synonyms': ['sandwich'], 'id': 912, 'def': 'two (or more) slices of bread with a filling between them', 'name': 'sandwich'}, {'frequency': 'r', 'synset': 'satchel.n.01', 'synonyms': ['satchel'], 'id': 913, 'def': 'luggage consisting of a small case with a flat bottom and (usually) a shoulder strap', 'name': 'satchel'}, {'frequency': 'r', 'synset': 'saucepan.n.01', 'synonyms': ['saucepan'], 'id': 914, 'def': 'a deep pan with a handle; used for stewing or boiling', 'name': 'saucepan'}, {'frequency': 'f', 'synset': 'saucer.n.02', 'synonyms': ['saucer'], 'id': 915, 'def': 'a small shallow dish for holding a cup at the table', 'name': 'saucer'}, {'frequency': 'f', 'synset': 'sausage.n.01', 'synonyms': ['sausage'], 'id': 916, 'def': 'highly seasoned minced meat stuffed in casings', 'name': 'sausage'}, {'frequency': 'r', 'synset': 'sawhorse.n.01', 'synonyms': ['sawhorse', 'sawbuck'], 'id': 917, 'def': 'a framework for holding wood that is being sawed', 'name': 'sawhorse'}, {'frequency': 'r', 'synset': 'sax.n.02', 'synonyms': ['saxophone'], 'id': 918, 'def': "a wind instrument with a `J'-shaped form typically made of brass", 'name': 'saxophone'}, {'frequency': 'f', 'synset': 'scale.n.07', 'synonyms': ['scale_(measuring_instrument)'], 'id': 919, 'def': 'a measuring instrument for weighing; shows amount of mass', 'name': 'scale_(measuring_instrument)'}, {'frequency': 'r', 'synset': 'scarecrow.n.01', 'synonyms': ['scarecrow', 'strawman'], 'id': 920, 'def': 'an effigy in the shape of a man to frighten birds away from seeds', 'name': 'scarecrow'}, {'frequency': 'f', 'synset': 'scarf.n.01', 'synonyms': ['scarf'], 'id': 921, 'def': 'a garment worn around the head or neck or shoulders for warmth or decoration', 'name': 'scarf'}, {'frequency': 'c', 'synset': 'school_bus.n.01', 'synonyms': ['school_bus'], 'id': 922, 'def': 'a bus used to transport children to or from school', 'name': 'school_bus'}, {'frequency': 'f', 'synset': 'scissors.n.01', 'synonyms': ['scissors'], 'id': 923, 'def': 'a tool having two crossed pivoting blades with looped handles', 'name': 'scissors'}, {'frequency': 'f', 'synset': 'scoreboard.n.01', 'synonyms': ['scoreboard'], 'id': 924, 'def': 'a large board for displaying the score of a contest (and some other information)', 'name': 'scoreboard'}, {'frequency': 'r', 'synset': 'scraper.n.01', 'synonyms': ['scraper'], 'id': 925, 'def': 'any of various hand tools for scraping', 'name': 'scraper'}, {'frequency': 'c', 'synset': 'screwdriver.n.01', 'synonyms': ['screwdriver'], 'id': 926, 'def': 'a hand tool for driving screws; has a tip that fits into the head of a screw', 'name': 'screwdriver'}, {'frequency': 'f', 'synset': 'scrub_brush.n.01', 'synonyms': ['scrubbing_brush'], 'id': 927, 'def': 'a brush with short stiff bristles for heavy cleaning', 'name': 'scrubbing_brush'}, {'frequency': 'c', 'synset': 'sculpture.n.01', 'synonyms': ['sculpture'], 'id': 928, 'def': 'a three-dimensional work of art', 'name': 'sculpture'}, {'frequency': 'c', 'synset': 'seabird.n.01', 'synonyms': ['seabird', 'seafowl'], 'id': 929, 'def': 'a bird that frequents coastal waters and the open ocean: gulls; pelicans; gannets; cormorants; albatrosses; petrels; etc.', 'name': 'seabird'}, {'frequency': 'c', 'synset': 'seahorse.n.02', 'synonyms': ['seahorse'], 'id': 930, 'def': 'small fish with horse-like heads bent sharply downward and curled tails', 'name': 'seahorse'}, {'frequency': 'r', 'synset': 'seaplane.n.01', 'synonyms': ['seaplane', 'hydroplane'], 'id': 931, 'def': 
'an airplane that can land on or take off from water', 'name': 'seaplane'}, {'frequency': 'c', 'synset': 'seashell.n.01', 'synonyms': ['seashell'], 'id': 932, 'def': 'the shell of a marine organism', 'name': 'seashell'}, {'frequency': 'c', 'synset': 'sewing_machine.n.01', 'synonyms': ['sewing_machine'], 'id': 933, 'def': 'a textile machine used as a home appliance for sewing', 'name': 'sewing_machine'}, {'frequency': 'c', 'synset': 'shaker.n.03', 'synonyms': ['shaker'], 'id': 934, 'def': 'a container in which something can be shaken', 'name': 'shaker'}, {'frequency': 'c', 'synset': 'shampoo.n.01', 'synonyms': ['shampoo'], 'id': 935, 'def': 'cleansing agent consisting of soaps or detergents used for washing the hair', 'name': 'shampoo'}, {'frequency': 'c', 'synset': 'shark.n.01', 'synonyms': ['shark'], 'id': 936, 'def': 'typically large carnivorous fishes with sharp teeth', 'name': 'shark'}, {'frequency': 'r', 'synset': 'sharpener.n.01', 'synonyms': ['sharpener'], 'id': 937, 'def': 'any implement that is used to make something (an edge or a point) sharper', 'name': 'sharpener'}, {'frequency': 'r', 'synset': 'sharpie.n.03', 'synonyms': ['Sharpie'], 'id': 938, 'def': 'a pen with indelible ink that will write on any surface', 'name': 'Sharpie'}, {'frequency': 'r', 'synset': 'shaver.n.03', 'synonyms': ['shaver_(electric)', 'electric_shaver', 'electric_razor'], 'id': 939, 'def': 'a razor powered by an electric motor', 'name': 'shaver_(electric)'}, {'frequency': 'c', 'synset': 'shaving_cream.n.01', 'synonyms': ['shaving_cream', 'shaving_soap'], 'id': 940, 'def': 'toiletry that forms a rich lather for softening the beard before shaving', 'name': 'shaving_cream'}, {'frequency': 'r', 'synset': 'shawl.n.01', 'synonyms': ['shawl'], 'id': 941, 'def': 'cloak consisting of an oblong piece of cloth used to cover the head and shoulders', 'name': 'shawl'}, {'frequency': 'r', 'synset': 'shears.n.01', 'synonyms': ['shears'], 'id': 942, 'def': 'large scissors with strong blades', 'name': 'shears'}, {'frequency': 'f', 'synset': 'sheep.n.01', 'synonyms': ['sheep'], 'id': 943, 'def': 'woolly usually horned ruminant mammal related to the goat', 'name': 'sheep'}, {'frequency': 'r', 'synset': 'shepherd_dog.n.01', 'synonyms': ['shepherd_dog', 'sheepdog'], 'id': 944, 'def': 'any of various usually long-haired breeds of dog reared to herd and guard sheep', 'name': 'shepherd_dog'}, {'frequency': 'r', 'synset': 'sherbert.n.01', 'synonyms': ['sherbert', 'sherbet'], 'id': 945, 'def': 'a frozen dessert made primarily of fruit juice and sugar', 'name': 'sherbert'}, {'frequency': 'c', 'synset': 'shield.n.02', 'synonyms': ['shield'], 'id': 946, 'def': 'armor carried on the arm to intercept blows', 'name': 'shield'}, {'frequency': 'f', 'synset': 'shirt.n.01', 'synonyms': ['shirt'], 'id': 947, 'def': 'a garment worn on the upper half of the body', 'name': 'shirt'}, {'frequency': 'f', 'synset': 'shoe.n.01', 'synonyms': ['shoe', 'sneaker_(type_of_shoe)', 'tennis_shoe'], 'id': 948, 'def': 'common footwear covering the foot', 'name': 'shoe'}, {'frequency': 'f', 'synset': 'shopping_bag.n.01', 'synonyms': ['shopping_bag'], 'id': 949, 'def': 'a bag made of plastic or strong paper (often with handles); used to transport goods after shopping', 'name': 'shopping_bag'}, {'frequency': 'c', 'synset': 'shopping_cart.n.01', 'synonyms': ['shopping_cart'], 'id': 950, 'def': 'a handcart that holds groceries or other goods while shopping', 'name': 'shopping_cart'}, {'frequency': 'f', 'synset': 'short_pants.n.01', 'synonyms': 
['short_pants', 'shorts_(clothing)', 'trunks_(clothing)'], 'id': 951, 'def': 'trousers that end at or above the knee', 'name': 'short_pants'}, {'frequency': 'r', 'synset': 'shot_glass.n.01', 'synonyms': ['shot_glass'], 'id': 952, 'def': 'a small glass adequate to hold a single swallow of whiskey', 'name': 'shot_glass'}, {'frequency': 'f', 'synset': 'shoulder_bag.n.01', 'synonyms': ['shoulder_bag'], 'id': 953, 'def': 'a large handbag that can be carried by a strap looped over the shoulder', 'name': 'shoulder_bag'}, {'frequency': 'c', 'synset': 'shovel.n.01', 'synonyms': ['shovel'], 'id': 954, 'def': 'a hand tool for lifting loose material such as snow, dirt, etc.', 'name': 'shovel'}, {'frequency': 'f', 'synset': 'shower.n.01', 'synonyms': ['shower_head'], 'id': 955, 'def': 'a plumbing fixture that sprays water over you', 'name': 'shower_head'}, {'frequency': 'r', 'synset': 'shower_cap.n.01', 'synonyms': ['shower_cap'], 'id': 956, 'def': 'a tight cap worn to keep hair dry while showering', 'name': 'shower_cap'}, {'frequency': 'f', 'synset': 'shower_curtain.n.01', 'synonyms': ['shower_curtain'], 'id': 957, 'def': 'a curtain that keeps water from splashing out of the shower area', 'name': 'shower_curtain'}, {'frequency': 'r', 'synset': 'shredder.n.01', 'synonyms': ['shredder_(for_paper)'], 'id': 958, 'def': 'a device that shreds documents', 'name': 'shredder_(for_paper)'}, {'frequency': 'f', 'synset': 'signboard.n.01', 'synonyms': ['signboard'], 'id': 959, 'def': 'structure displaying a board on which advertisements can be posted', 'name': 'signboard'}, {'frequency': 'c', 'synset': 'silo.n.01', 'synonyms': ['silo'], 'id': 960, 'def': 'a cylindrical tower used for storing goods', 'name': 'silo'}, {'frequency': 'f', 'synset': 'sink.n.01', 'synonyms': ['sink'], 'id': 961, 'def': 'plumbing fixture consisting of a water basin fixed to a wall or floor and having a drainpipe', 'name': 'sink'}, {'frequency': 'f', 'synset': 'skateboard.n.01', 'synonyms': ['skateboard'], 'id': 962, 'def': 'a board with wheels that is ridden in a standing or crouching position and propelled by foot', 'name': 'skateboard'}, {'frequency': 'c', 'synset': 'skewer.n.01', 'synonyms': ['skewer'], 'id': 963, 'def': 'a long pin for holding meat in position while it is being roasted', 'name': 'skewer'}, {'frequency': 'f', 'synset': 'ski.n.01', 'synonyms': ['ski'], 'id': 964, 'def': 'sports equipment for skiing on snow', 'name': 'ski'}, {'frequency': 'f', 'synset': 'ski_boot.n.01', 'synonyms': ['ski_boot'], 'id': 965, 'def': 'a stiff boot that is fastened to a ski with a ski binding', 'name': 'ski_boot'}, {'frequency': 'f', 'synset': 'ski_parka.n.01', 'synonyms': ['ski_parka', 'ski_jacket'], 'id': 966, 'def': 'a parka to be worn while skiing', 'name': 'ski_parka'}, {'frequency': 'f', 'synset': 'ski_pole.n.01', 'synonyms': ['ski_pole'], 'id': 967, 'def': 'a pole with metal points used as an aid in skiing', 'name': 'ski_pole'}, {'frequency': 'f', 'synset': 'skirt.n.02', 'synonyms': ['skirt'], 'id': 968, 'def': 'a garment hanging from the waist; worn mainly by girls and women', 'name': 'skirt'}, {'frequency': 'r', 'synset': 'skullcap.n.01', 'synonyms': ['skullcap'], 'id': 969, 'def': 'rounded brimless cap fitting the crown of the head', 'name': 'skullcap'}, {'frequency': 'c', 'synset': 'sled.n.01', 'synonyms': ['sled', 'sledge', 'sleigh'], 'id': 970, 'def': 'a vehicle or flat object for transportation over snow by sliding or pulled by dogs, etc.', 'name': 'sled'}, {'frequency': 'c', 'synset': 'sleeping_bag.n.01', 'synonyms': 
['sleeping_bag'], 'id': 971, 'def': 'large padded bag designed to be slept in outdoors', 'name': 'sleeping_bag'}, {'frequency': 'r', 'synset': 'sling.n.05', 'synonyms': ['sling_(bandage)', 'triangular_bandage'], 'id': 972, 'def': 'bandage to support an injured forearm; slung over the shoulder or neck', 'name': 'sling_(bandage)'}, {'frequency': 'c', 'synset': 'slipper.n.01', 'synonyms': ['slipper_(footwear)', 'carpet_slipper_(footwear)'], 'id': 973, 'def': 'low footwear that can be slipped on and off easily; usually worn indoors', 'name': 'slipper_(footwear)'}, {'frequency': 'r', 'synset': 'smoothie.n.02', 'synonyms': ['smoothie'], 'id': 974, 'def': 'a thick smooth drink consisting of fresh fruit pureed with ice cream or yoghurt or milk', 'name': 'smoothie'}, {'frequency': 'r', 'synset': 'snake.n.01', 'synonyms': ['snake', 'serpent'], 'id': 975, 'def': 'limbless scaly elongate reptile; some are venomous', 'name': 'snake'}, {'frequency': 'f', 'synset': 'snowboard.n.01', 'synonyms': ['snowboard'], 'id': 976, 'def': 'a board that resembles a broad ski or a small surfboard; used in a standing position to slide down snow-covered slopes', 'name': 'snowboard'}, {'frequency': 'c', 'synset': 'snowman.n.01', 'synonyms': ['snowman'], 'id': 977, 'def': 'a figure of a person made of packed snow', 'name': 'snowman'}, {'frequency': 'c', 'synset': 'snowmobile.n.01', 'synonyms': ['snowmobile'], 'id': 978, 'def': 'tracked vehicle for travel on snow having skis in front', 'name': 'snowmobile'}, {'frequency': 'f', 'synset': 'soap.n.01', 'synonyms': ['soap'], 'id': 979, 'def': 'a cleansing agent made from the salts of vegetable or animal fats', 'name': 'soap'}, {'frequency': 'f', 'synset': 'soccer_ball.n.01', 'synonyms': ['soccer_ball'], 'id': 980, 'def': "an inflated ball used in playing soccer (called `football' outside of the United States)", 'name': 'soccer_ball'}, {'frequency': 'f', 'synset': 'sock.n.01', 'synonyms': ['sock'], 'id': 981, 'def': 'cloth covering for the foot; worn inside the shoe; reaches to between the ankle and the knee', 'name': 'sock'}, {'frequency': 'f', 'synset': 'sofa.n.01', 'synonyms': ['sofa', 'couch', 'lounge'], 'id': 982, 'def': 'an upholstered seat for more than one person', 'name': 'sofa'}, {'frequency': 'r', 'synset': 'softball.n.01', 'synonyms': ['softball'], 'id': 983, 'def': 'ball used in playing softball', 'name': 'softball'}, {'frequency': 'c', 'synset': 'solar_array.n.01', 'synonyms': ['solar_array', 'solar_battery', 'solar_panel'], 'id': 984, 'def': 'electrical device consisting of a large array of connected solar cells', 'name': 'solar_array'}, {'frequency': 'r', 'synset': 'sombrero.n.02', 'synonyms': ['sombrero'], 'id': 985, 'def': 'a straw hat with a tall crown and broad brim; worn in American southwest and in Mexico', 'name': 'sombrero'}, {'frequency': 'f', 'synset': 'soup.n.01', 'synonyms': ['soup'], 'id': 986, 'def': 'liquid food especially of meat or fish or vegetable stock often containing pieces of solid food', 'name': 'soup'}, {'frequency': 'r', 'synset': 'soup_bowl.n.01', 'synonyms': ['soup_bowl'], 'id': 987, 'def': 'a bowl for serving soup', 'name': 'soup_bowl'}, {'frequency': 'c', 'synset': 'soupspoon.n.01', 'synonyms': ['soupspoon'], 'id': 988, 'def': 'a spoon with a rounded bowl for eating soup', 'name': 'soupspoon'}, {'frequency': 'c', 'synset': 'sour_cream.n.01', 'synonyms': ['sour_cream', 'soured_cream'], 'id': 989, 'def': 'soured light cream', 'name': 'sour_cream'}, {'frequency': 'r', 'synset': 'soya_milk.n.01', 'synonyms': ['soya_milk', 
'soybean_milk', 'soymilk'], 'id': 990, 'def': 'a milk substitute containing soybean flour and water; used in some infant formulas and in making tofu', 'name': 'soya_milk'}, {'frequency': 'r', 'synset': 'space_shuttle.n.01', 'synonyms': ['space_shuttle'], 'id': 991, 'def': "a reusable spacecraft with wings for a controlled descent through the Earth's atmosphere", 'name': 'space_shuttle'}, {'frequency': 'r', 'synset': 'sparkler.n.02', 'synonyms': ['sparkler_(fireworks)'], 'id': 992, 'def': 'a firework that burns slowly and throws out a shower of sparks', 'name': 'sparkler_(fireworks)'}, {'frequency': 'f', 'synset': 'spatula.n.02', 'synonyms': ['spatula'], 'id': 993, 'def': 'a hand tool with a thin flexible blade used to mix or spread soft substances', 'name': 'spatula'}, {'frequency': 'r', 'synset': 'spear.n.01', 'synonyms': ['spear', 'lance'], 'id': 994, 'def': 'a long pointed rod used as a tool or weapon', 'name': 'spear'}, {'frequency': 'f', 'synset': 'spectacles.n.01', 'synonyms': ['spectacles', 'specs', 'eyeglasses', 'glasses'], 'id': 995, 'def': 'optical instrument consisting of a frame that holds a pair of lenses for correcting defective vision', 'name': 'spectacles'}, {'frequency': 'c', 'synset': 'spice_rack.n.01', 'synonyms': ['spice_rack'], 'id': 996, 'def': 'a rack for displaying containers filled with spices', 'name': 'spice_rack'}, {'frequency': 'c', 'synset': 'spider.n.01', 'synonyms': ['spider'], 'id': 997, 'def': 'predatory arachnid with eight legs, two poison fangs, two feelers, and usually two silk-spinning organs at the back end of the body', 'name': 'spider'}, {'frequency': 'r', 'synset': 'spiny_lobster.n.02', 'synonyms': ['crawfish', 'crayfish'], 'id': 998, 'def': 'large edible marine crustacean having a spiny carapace but lacking the large pincers of true lobsters', 'name': 'crawfish'}, {'frequency': 'c', 'synset': 'sponge.n.01', 'synonyms': ['sponge'], 'id': 999, 'def': 'a porous mass usable to absorb water typically used for cleaning', 'name': 'sponge'}, {'frequency': 'f', 'synset': 'spoon.n.01', 'synonyms': ['spoon'], 'id': 1000, 'def': 'a piece of cutlery with a shallow bowl-shaped container and a handle', 'name': 'spoon'}, {'frequency': 'c', 'synset': 'sportswear.n.01', 'synonyms': ['sportswear', 'athletic_wear', 'activewear'], 'id': 1001, 'def': 'attire worn for sport or for casual wear', 'name': 'sportswear'}, {'frequency': 'c', 'synset': 'spotlight.n.02', 'synonyms': ['spotlight'], 'id': 1002, 'def': 'a lamp that produces a strong beam of light to illuminate a restricted area; used to focus attention of a stage performer', 'name': 'spotlight'}, {'frequency': 'r', 'synset': 'squid.n.01', 'synonyms': ['squid_(food)', 'calamari', 'calamary'], 'id': 1003, 'def': '(Italian cuisine) squid prepared as food', 'name': 'squid_(food)'}, {'frequency': 'c', 'synset': 'squirrel.n.01', 'synonyms': ['squirrel'], 'id': 1004, 'def': 'a kind of arboreal rodent having a long bushy tail', 'name': 'squirrel'}, {'frequency': 'r', 'synset': 'stagecoach.n.01', 'synonyms': ['stagecoach'], 'id': 1005, 'def': 'a large coach-and-four formerly used to carry passengers and mail on regular routes between towns', 'name': 'stagecoach'}, {'frequency': 'c', 'synset': 'stapler.n.01', 'synonyms': ['stapler_(stapling_machine)'], 'id': 1006, 'def': 'a machine that inserts staples into sheets of paper in order to fasten them together', 'name': 'stapler_(stapling_machine)'}, {'frequency': 'c', 'synset': 'starfish.n.01', 'synonyms': ['starfish', 'sea_star'], 'id': 1007, 'def': 'echinoderms characterized 
by five arms extending from a central disk', 'name': 'starfish'}, {'frequency': 'f', 'synset': 'statue.n.01', 'synonyms': ['statue_(sculpture)'], 'id': 1008, 'def': 'a sculpture representing a human or animal', 'name': 'statue_(sculpture)'}, {'frequency': 'c', 'synset': 'steak.n.01', 'synonyms': ['steak_(food)'], 'id': 1009, 'def': 'a slice of meat cut from the fleshy part of an animal or large fish', 'name': 'steak_(food)'}, {'frequency': 'r', 'synset': 'steak_knife.n.01', 'synonyms': ['steak_knife'], 'id': 1010, 'def': 'a sharp table knife used in eating steak', 'name': 'steak_knife'}, {'frequency': 'f', 'synset': 'steering_wheel.n.01', 'synonyms': ['steering_wheel'], 'id': 1011, 'def': 'a handwheel that is used for steering', 'name': 'steering_wheel'}, {'frequency': 'r', 'synset': 'step_ladder.n.01', 'synonyms': ['stepladder'], 'id': 1012, 'def': 'a folding portable ladder hinged at the top', 'name': 'stepladder'}, {'frequency': 'c', 'synset': 'step_stool.n.01', 'synonyms': ['step_stool'], 'id': 1013, 'def': 'a stool that has one or two steps that fold under the seat', 'name': 'step_stool'}, {'frequency': 'c', 'synset': 'stereo.n.01', 'synonyms': ['stereo_(sound_system)'], 'id': 1014, 'def': 'electronic device for playing audio', 'name': 'stereo_(sound_system)'}, {'frequency': 'r', 'synset': 'stew.n.02', 'synonyms': ['stew'], 'id': 1015, 'def': 'food prepared by stewing especially meat or fish with vegetables', 'name': 'stew'}, {'frequency': 'r', 'synset': 'stirrer.n.02', 'synonyms': ['stirrer'], 'id': 1016, 'def': 'an implement used for stirring', 'name': 'stirrer'}, {'frequency': 'f', 'synset': 'stirrup.n.01', 'synonyms': ['stirrup'], 'id': 1017, 'def': "support consisting of metal loops into which rider's feet go", 'name': 'stirrup'}, {'frequency': 'f', 'synset': 'stool.n.01', 'synonyms': ['stool'], 'id': 1018, 'def': 'a simple seat without a back or arms', 'name': 'stool'}, {'frequency': 'f', 'synset': 'stop_sign.n.01', 'synonyms': ['stop_sign'], 'id': 1019, 'def': 'a traffic sign to notify drivers that they must come to a complete stop', 'name': 'stop_sign'}, {'frequency': 'f', 'synset': 'stoplight.n.01', 'synonyms': ['brake_light'], 'id': 1020, 'def': 'a red light on the rear of a motor vehicle that signals when the brakes are applied', 'name': 'brake_light'}, {'frequency': 'f', 'synset': 'stove.n.01', 'synonyms': ['stove', 'kitchen_stove', 'range_(kitchen_appliance)', 'kitchen_range', 'cooking_stove'], 'id': 1021, 'def': 'a kitchen appliance used for cooking food', 'name': 'stove'}, {'frequency': 'c', 'synset': 'strainer.n.01', 'synonyms': ['strainer'], 'id': 1022, 'def': 'a filter to retain larger pieces while smaller pieces and liquids pass through', 'name': 'strainer'}, {'frequency': 'f', 'synset': 'strap.n.01', 'synonyms': ['strap'], 'id': 1023, 'def': 'an elongated strip of material for binding things together or holding', 'name': 'strap'}, {'frequency': 'f', 'synset': 'straw.n.04', 'synonyms': ['straw_(for_drinking)', 'drinking_straw'], 'id': 1024, 'def': 'a thin paper or plastic tube used to suck liquids into the mouth', 'name': 'straw_(for_drinking)'}, {'frequency': 'f', 'synset': 'strawberry.n.01', 'synonyms': ['strawberry'], 'id': 1025, 'def': 'sweet fleshy red fruit', 'name': 'strawberry'}, {'frequency': 'f', 'synset': 'street_sign.n.01', 'synonyms': ['street_sign'], 'id': 1026, 'def': 'a sign visible from the street', 'name': 'street_sign'}, {'frequency': 'f', 'synset': 'streetlight.n.01', 'synonyms': ['streetlight', 'street_lamp'], 'id': 1027, 'def': 'a lamp 
supported on a lamppost; for illuminating a street', 'name': 'streetlight'}, {'frequency': 'r', 'synset': 'string_cheese.n.01', 'synonyms': ['string_cheese'], 'id': 1028, 'def': 'cheese formed in long strings twisted together', 'name': 'string_cheese'}, {'frequency': 'r', 'synset': 'stylus.n.02', 'synonyms': ['stylus'], 'id': 1029, 'def': 'a pointed tool for writing or drawing or engraving, including pens', 'name': 'stylus'}, {'frequency': 'r', 'synset': 'subwoofer.n.01', 'synonyms': ['subwoofer'], 'id': 1030, 'def': 'a loudspeaker that is designed to reproduce very low bass frequencies', 'name': 'subwoofer'}, {'frequency': 'r', 'synset': 'sugar_bowl.n.01', 'synonyms': ['sugar_bowl'], 'id': 1031, 'def': 'a dish in which sugar is served', 'name': 'sugar_bowl'}, {'frequency': 'r', 'synset': 'sugarcane.n.01', 'synonyms': ['sugarcane_(plant)'], 'id': 1032, 'def': 'juicy canes whose sap is a source of molasses and commercial sugar; fresh canes are sometimes chewed for the juice', 'name': 'sugarcane_(plant)'}, {'frequency': 'f', 'synset': 'suit.n.01', 'synonyms': ['suit_(clothing)'], 'id': 1033, 'def': 'a set of garments (usually including a jacket and trousers or skirt) for outerwear all of the same fabric and color', 'name': 'suit_(clothing)'}, {'frequency': 'c', 'synset': 'sunflower.n.01', 'synonyms': ['sunflower'], 'id': 1034, 'def': 'any plant of the genus Helianthus having large flower heads with dark disk florets and showy yellow rays', 'name': 'sunflower'}, {'frequency': 'f', 'synset': 'sunglasses.n.01', 'synonyms': ['sunglasses'], 'id': 1035, 'def': 'spectacles that are darkened or polarized to protect the eyes from the glare of the sun', 'name': 'sunglasses'}, {'frequency': 'c', 'synset': 'sunhat.n.01', 'synonyms': ['sunhat'], 'id': 1036, 'def': 'a hat with a broad brim that protects the face from direct exposure to the sun', 'name': 'sunhat'}, {'frequency': 'f', 'synset': 'surfboard.n.01', 'synonyms': ['surfboard'], 'id': 1037, 'def': 'a narrow buoyant board for riding surf', 'name': 'surfboard'}, {'frequency': 'c', 'synset': 'sushi.n.01', 'synonyms': ['sushi'], 'id': 1038, 'def': 'rice (with raw fish) wrapped in seaweed', 'name': 'sushi'}, {'frequency': 'c', 'synset': 'swab.n.02', 'synonyms': ['mop'], 'id': 1039, 'def': 'cleaning implement consisting of absorbent material fastened to a handle; for cleaning floors', 'name': 'mop'}, {'frequency': 'c', 'synset': 'sweat_pants.n.01', 'synonyms': ['sweat_pants'], 'id': 1040, 'def': 'loose-fitting trousers with elastic cuffs; worn by athletes', 'name': 'sweat_pants'}, {'frequency': 'c', 'synset': 'sweatband.n.02', 'synonyms': ['sweatband'], 'id': 1041, 'def': 'a band of material tied around the forehead or wrist to absorb sweat', 'name': 'sweatband'}, {'frequency': 'f', 'synset': 'sweater.n.01', 'synonyms': ['sweater'], 'id': 1042, 'def': 'a crocheted or knitted garment covering the upper part of the body', 'name': 'sweater'}, {'frequency': 'f', 'synset': 'sweatshirt.n.01', 'synonyms': ['sweatshirt'], 'id': 1043, 'def': 'cotton knit pullover with long sleeves worn during athletic activity', 'name': 'sweatshirt'}, {'frequency': 'c', 'synset': 'sweet_potato.n.02', 'synonyms': ['sweet_potato'], 'id': 1044, 'def': 'the edible tuberous root of the sweet potato vine', 'name': 'sweet_potato'}, {'frequency': 'f', 'synset': 'swimsuit.n.01', 'synonyms': ['swimsuit', 'swimwear', 'bathing_suit', 'swimming_costume', 'bathing_costume', 'swimming_trunks', 'bathing_trunks'], 'id': 1045, 'def': 'garment worn for swimming', 'name': 'swimsuit'}, {'frequency': 
'c', 'synset': 'sword.n.01', 'synonyms': ['sword'], 'id': 1046, 'def': 'a cutting or thrusting weapon that has a long metal blade', 'name': 'sword'}, {'frequency': 'r', 'synset': 'syringe.n.01', 'synonyms': ['syringe'], 'id': 1047, 'def': 'a medical instrument used to inject or withdraw fluids', 'name': 'syringe'}, {'frequency': 'r', 'synset': 'tabasco.n.02', 'synonyms': ['Tabasco_sauce'], 'id': 1048, 'def': 'very spicy sauce (trade name Tabasco) made from fully-aged red peppers', 'name': 'Tabasco_sauce'}, {'frequency': 'r', 'synset': 'table-tennis_table.n.01', 'synonyms': ['table-tennis_table', 'ping-pong_table'], 'id': 1049, 'def': 'a table used for playing table tennis', 'name': 'table-tennis_table'}, {'frequency': 'f', 'synset': 'table.n.02', 'synonyms': ['table'], 'id': 1050, 'def': 'a piece of furniture having a smooth flat top that is usually supported by one or more vertical legs', 'name': 'table'}, {'frequency': 'c', 'synset': 'table_lamp.n.01', 'synonyms': ['table_lamp'], 'id': 1051, 'def': 'a lamp that sits on a table', 'name': 'table_lamp'}, {'frequency': 'f', 'synset': 'tablecloth.n.01', 'synonyms': ['tablecloth'], 'id': 1052, 'def': 'a covering spread over a dining table', 'name': 'tablecloth'}, {'frequency': 'r', 'synset': 'tachometer.n.01', 'synonyms': ['tachometer'], 'id': 1053, 'def': 'measuring instrument for indicating speed of rotation', 'name': 'tachometer'}, {'frequency': 'r', 'synset': 'taco.n.02', 'synonyms': ['taco'], 'id': 1054, 'def': 'a small tortilla cupped around a filling', 'name': 'taco'}, {'frequency': 'f', 'synset': 'tag.n.02', 'synonyms': ['tag'], 'id': 1055, 'def': 'a label associated with something for the purpose of identification or information', 'name': 'tag'}, {'frequency': 'f', 'synset': 'taillight.n.01', 'synonyms': ['taillight', 'rear_light'], 'id': 1056, 'def': 'lamp (usually red) mounted at the rear of a motor vehicle', 'name': 'taillight'}, {'frequency': 'r', 'synset': 'tambourine.n.01', 'synonyms': ['tambourine'], 'id': 1057, 'def': 'a shallow drum with a single drumhead and with metallic disks in the sides', 'name': 'tambourine'}, {'frequency': 'r', 'synset': 'tank.n.01', 'synonyms': ['army_tank', 'armored_combat_vehicle', 'armoured_combat_vehicle'], 'id': 1058, 'def': 'an enclosed armored military vehicle; has a cannon and moves on caterpillar treads', 'name': 'army_tank'}, {'frequency': 'f', 'synset': 'tank.n.02', 'synonyms': ['tank_(storage_vessel)', 'storage_tank'], 'id': 1059, 'def': 'a large (usually metallic) vessel for holding gases or liquids', 'name': 'tank_(storage_vessel)'}, {'frequency': 'f', 'synset': 'tank_top.n.01', 'synonyms': ['tank_top_(clothing)'], 'id': 1060, 'def': 'a tight-fitting sleeveless shirt with wide shoulder straps and low neck and no front opening', 'name': 'tank_top_(clothing)'}, {'frequency': 'f', 'synset': 'tape.n.01', 'synonyms': ['tape_(sticky_cloth_or_paper)'], 'id': 1061, 'def': 'a long thin piece of cloth or paper as used for binding or fastening', 'name': 'tape_(sticky_cloth_or_paper)'}, {'frequency': 'c', 'synset': 'tape.n.04', 'synonyms': ['tape_measure', 'measuring_tape'], 'id': 1062, 'def': 'measuring instrument consisting of a narrow strip (cloth or metal) marked in inches or centimeters and used for measuring lengths', 'name': 'tape_measure'}, {'frequency': 'c', 'synset': 'tapestry.n.02', 'synonyms': ['tapestry'], 'id': 1063, 'def': 'a heavy textile with a woven design; used for curtains and upholstery', 'name': 'tapestry'}, {'frequency': 'f', 'synset': 'tarpaulin.n.01', 'synonyms': ['tarp'], 
'id': 1064, 'def': 'waterproofed canvas', 'name': 'tarp'}, {'frequency': 'c', 'synset': 'tartan.n.01', 'synonyms': ['tartan', 'plaid'], 'id': 1065, 'def': 'a cloth having a crisscross design', 'name': 'tartan'}, {'frequency': 'c', 'synset': 'tassel.n.01', 'synonyms': ['tassel'], 'id': 1066, 'def': 'adornment consisting of a bunch of cords fastened at one end', 'name': 'tassel'}, {'frequency': 'c', 'synset': 'tea_bag.n.01', 'synonyms': ['tea_bag'], 'id': 1067, 'def': 'a measured amount of tea in a bag for an individual serving of tea', 'name': 'tea_bag'}, {'frequency': 'c', 'synset': 'teacup.n.02', 'synonyms': ['teacup'], 'id': 1068, 'def': 'a cup from which tea is drunk', 'name': 'teacup'}, {'frequency': 'c', 'synset': 'teakettle.n.01', 'synonyms': ['teakettle'], 'id': 1069, 'def': 'kettle for boiling water to make tea', 'name': 'teakettle'}, {'frequency': 'f', 'synset': 'teapot.n.01', 'synonyms': ['teapot'], 'id': 1070, 'def': 'pot for brewing tea; usually has a spout and handle', 'name': 'teapot'}, {'frequency': 'f', 'synset': 'teddy.n.01', 'synonyms': ['teddy_bear'], 'id': 1071, 'def': "plaything consisting of a child's toy bear (usually plush and stuffed with soft materials)", 'name': 'teddy_bear'}, {'frequency': 'f', 'synset': 'telephone.n.01', 'synonyms': ['telephone', 'phone', 'telephone_set'], 'id': 1072, 'def': 'electronic device for communicating by voice over long distances (includes wired and wireless/cell phones)', 'name': 'telephone'}, {'frequency': 'c', 'synset': 'telephone_booth.n.01', 'synonyms': ['telephone_booth', 'phone_booth', 'call_box', 'telephone_box', 'telephone_kiosk'], 'id': 1073, 'def': 'booth for using a telephone', 'name': 'telephone_booth'}, {'frequency': 'f', 'synset': 'telephone_pole.n.01', 'synonyms': ['telephone_pole', 'telegraph_pole', 'telegraph_post'], 'id': 1074, 'def': 'tall pole supporting telephone wires', 'name': 'telephone_pole'}, {'frequency': 'r', 'synset': 'telephoto_lens.n.01', 'synonyms': ['telephoto_lens', 'zoom_lens'], 'id': 1075, 'def': 'a camera lens that magnifies the image', 'name': 'telephoto_lens'}, {'frequency': 'c', 'synset': 'television_camera.n.01', 'synonyms': ['television_camera', 'tv_camera'], 'id': 1076, 'def': 'television equipment for capturing and recording video', 'name': 'television_camera'}, {'frequency': 'f', 'synset': 'television_receiver.n.01', 'synonyms': ['television_set', 'tv', 'tv_set'], 'id': 1077, 'def': 'an electronic device that receives television signals and displays them on a screen', 'name': 'television_set'}, {'frequency': 'f', 'synset': 'tennis_ball.n.01', 'synonyms': ['tennis_ball'], 'id': 1078, 'def': 'ball about the size of a fist used in playing tennis', 'name': 'tennis_ball'}, {'frequency': 'f', 'synset': 'tennis_racket.n.01', 'synonyms': ['tennis_racket'], 'id': 1079, 'def': 'a racket used to play tennis', 'name': 'tennis_racket'}, {'frequency': 'r', 'synset': 'tequila.n.01', 'synonyms': ['tequila'], 'id': 1080, 'def': 'Mexican liquor made from fermented juices of an agave plant', 'name': 'tequila'}, {'frequency': 'c', 'synset': 'thermometer.n.01', 'synonyms': ['thermometer'], 'id': 1081, 'def': 'measuring instrument for measuring temperature', 'name': 'thermometer'}, {'frequency': 'c', 'synset': 'thermos.n.01', 'synonyms': ['thermos_bottle'], 'id': 1082, 'def': 'vacuum flask that preserves temperature of hot or cold drinks', 'name': 'thermos_bottle'}, {'frequency': 'f', 'synset': 'thermostat.n.01', 'synonyms': ['thermostat'], 'id': 1083, 'def': 'a regulator for automatically regulating 
temperature by starting or stopping the supply of heat', 'name': 'thermostat'}, {'frequency': 'r', 'synset': 'thimble.n.02', 'synonyms': ['thimble'], 'id': 1084, 'def': 'a small metal cap to protect the finger while sewing; can be used as a small container', 'name': 'thimble'}, {'frequency': 'c', 'synset': 'thread.n.01', 'synonyms': ['thread', 'yarn'], 'id': 1085, 'def': 'a fine cord of twisted fibers (of cotton or silk or wool or nylon etc.) used in sewing and weaving', 'name': 'thread'}, {'frequency': 'c', 'synset': 'thumbtack.n.01', 'synonyms': ['thumbtack', 'drawing_pin', 'pushpin'], 'id': 1086, 'def': 'a tack for attaching papers to a bulletin board or drawing board', 'name': 'thumbtack'}, {'frequency': 'c', 'synset': 'tiara.n.01', 'synonyms': ['tiara'], 'id': 1087, 'def': 'a jeweled headdress worn by women on formal occasions', 'name': 'tiara'}, {'frequency': 'c', 'synset': 'tiger.n.02', 'synonyms': ['tiger'], 'id': 1088, 'def': 'large feline of forests in most of Asia having a tawny coat with black stripes', 'name': 'tiger'}, {'frequency': 'c', 'synset': 'tights.n.01', 'synonyms': ['tights_(clothing)', 'leotards'], 'id': 1089, 'def': 'skintight knit hose covering the body from the waist to the feet worn by acrobats and dancers and as stockings by women and girls', 'name': 'tights_(clothing)'}, {'frequency': 'c', 'synset': 'timer.n.01', 'synonyms': ['timer', 'stopwatch'], 'id': 1090, 'def': 'a timepiece that measures a time interval and signals its end', 'name': 'timer'}, {'frequency': 'f', 'synset': 'tinfoil.n.01', 'synonyms': ['tinfoil'], 'id': 1091, 'def': 'foil made of tin or an alloy of tin and lead', 'name': 'tinfoil'}, {'frequency': 'c', 'synset': 'tinsel.n.01', 'synonyms': ['tinsel'], 'id': 1092, 'def': 'a showy decoration that is basically valueless', 'name': 'tinsel'}, {'frequency': 'f', 'synset': 'tissue.n.02', 'synonyms': ['tissue_paper'], 'id': 1093, 'def': 'a soft thin (usually translucent) paper', 'name': 'tissue_paper'}, {'frequency': 'c', 'synset': 'toast.n.01', 'synonyms': ['toast_(food)'], 'id': 1094, 'def': 'slice of bread that has been toasted', 'name': 'toast_(food)'}, {'frequency': 'f', 'synset': 'toaster.n.02', 'synonyms': ['toaster'], 'id': 1095, 'def': 'a kitchen appliance (usually electric) for toasting bread', 'name': 'toaster'}, {'frequency': 'f', 'synset': 'toaster_oven.n.01', 'synonyms': ['toaster_oven'], 'id': 1096, 'def': 'kitchen appliance consisting of a small electric oven for toasting or warming food', 'name': 'toaster_oven'}, {'frequency': 'f', 'synset': 'toilet.n.02', 'synonyms': ['toilet'], 'id': 1097, 'def': 'a plumbing fixture for defecation and urination', 'name': 'toilet'}, {'frequency': 'f', 'synset': 'toilet_tissue.n.01', 'synonyms': ['toilet_tissue', 'toilet_paper', 'bathroom_tissue'], 'id': 1098, 'def': 'a soft thin absorbent paper for use in toilets', 'name': 'toilet_tissue'}, {'frequency': 'f', 'synset': 'tomato.n.01', 'synonyms': ['tomato'], 'id': 1099, 'def': 'mildly acid red or yellow pulpy fruit eaten as a vegetable', 'name': 'tomato'}, {'frequency': 'f', 'synset': 'tongs.n.01', 'synonyms': ['tongs'], 'id': 1100, 'def': 'any of various devices for taking hold of objects; usually have two hinged legs with handles above and pointed hooks below', 'name': 'tongs'}, {'frequency': 'c', 'synset': 'toolbox.n.01', 'synonyms': ['toolbox'], 'id': 1101, 'def': 'a box or chest or cabinet for holding hand tools', 'name': 'toolbox'}, {'frequency': 'f', 'synset': 'toothbrush.n.01', 'synonyms': ['toothbrush'], 'id': 1102, 'def': 'small brush; has 
long handle; used to clean teeth', 'name': 'toothbrush'}, {'frequency': 'f', 'synset': 'toothpaste.n.01', 'synonyms': ['toothpaste'], 'id': 1103, 'def': 'a dentifrice in the form of a paste', 'name': 'toothpaste'}, {'frequency': 'f', 'synset': 'toothpick.n.01', 'synonyms': ['toothpick'], 'id': 1104, 'def': 'pick consisting of a small strip of wood or plastic; used to pick food from between the teeth', 'name': 'toothpick'}, {'frequency': 'f', 'synset': 'top.n.09', 'synonyms': ['cover'], 'id': 1105, 'def': 'covering for a hole (especially a hole in the top of a container)', 'name': 'cover'}, {'frequency': 'c', 'synset': 'tortilla.n.01', 'synonyms': ['tortilla'], 'id': 1106, 'def': 'thin unleavened pancake made from cornmeal or wheat flour', 'name': 'tortilla'}, {'frequency': 'c', 'synset': 'tow_truck.n.01', 'synonyms': ['tow_truck'], 'id': 1107, 'def': 'a truck equipped to hoist and pull wrecked cars (or to remove cars from no-parking zones)', 'name': 'tow_truck'}, {'frequency': 'f', 'synset': 'towel.n.01', 'synonyms': ['towel'], 'id': 1108, 'def': 'a rectangular piece of absorbent cloth (or paper) for drying or wiping', 'name': 'towel'}, {'frequency': 'f', 'synset': 'towel_rack.n.01', 'synonyms': ['towel_rack', 'towel_rail', 'towel_bar'], 'id': 1109, 'def': 'a rack consisting of one or more bars on which towels can be hung', 'name': 'towel_rack'}, {'frequency': 'f', 'synset': 'toy.n.03', 'synonyms': ['toy'], 'id': 1110, 'def': 'a device regarded as providing amusement', 'name': 'toy'}, {'frequency': 'c', 'synset': 'tractor.n.01', 'synonyms': ['tractor_(farm_equipment)'], 'id': 1111, 'def': 'a wheeled vehicle with large wheels; used in farming and other applications', 'name': 'tractor_(farm_equipment)'}, {'frequency': 'f', 'synset': 'traffic_light.n.01', 'synonyms': ['traffic_light'], 'id': 1112, 'def': 'a device to control vehicle traffic often consisting of three or more lights', 'name': 'traffic_light'}, {'frequency': 'c', 'synset': 'trail_bike.n.01', 'synonyms': ['dirt_bike'], 'id': 1113, 'def': 'a lightweight motorcycle equipped with rugged tires and suspension for off-road use', 'name': 'dirt_bike'}, {'frequency': 'f', 'synset': 'trailer_truck.n.01', 'synonyms': ['trailer_truck', 'tractor_trailer', 'trucking_rig', 'articulated_lorry', 'semi_truck'], 'id': 1114, 'def': 'a truck consisting of a tractor and trailer together', 'name': 'trailer_truck'}, {'frequency': 'f', 'synset': 'train.n.01', 'synonyms': ['train_(railroad_vehicle)', 'railroad_train'], 'id': 1115, 'def': 'public or private transport provided by a line of railway cars coupled together and drawn by a locomotive', 'name': 'train_(railroad_vehicle)'}, {'frequency': 'r', 'synset': 'trampoline.n.01', 'synonyms': ['trampoline'], 'id': 1116, 'def': 'gymnastic apparatus consisting of a strong canvas sheet attached with springs to a metal frame', 'name': 'trampoline'}, {'frequency': 'f', 'synset': 'tray.n.01', 'synonyms': ['tray'], 'id': 1117, 'def': 'an open receptacle for holding or displaying or serving articles or food', 'name': 'tray'}, {'frequency': 'r', 'synset': 'trench_coat.n.01', 'synonyms': ['trench_coat'], 'id': 1118, 'def': 'a military style raincoat; belted with deep pockets', 'name': 'trench_coat'}, {'frequency': 'r', 'synset': 'triangle.n.05', 'synonyms': ['triangle_(musical_instrument)'], 'id': 1119, 'def': 'a percussion instrument consisting of a metal bar bent in the shape of an open triangle', 'name': 'triangle_(musical_instrument)'}, {'frequency': 'c', 'synset': 'tricycle.n.01', 'synonyms': ['tricycle'], 'id': 
1120, 'def': 'a vehicle with three wheels that is moved by foot pedals', 'name': 'tricycle'}, {'frequency': 'f', 'synset': 'tripod.n.01', 'synonyms': ['tripod'], 'id': 1121, 'def': 'a three-legged rack used for support', 'name': 'tripod'}, {'frequency': 'f', 'synset': 'trouser.n.01', 'synonyms': ['trousers', 'pants_(clothing)'], 'id': 1122, 'def': 'a garment extending from the waist to the knee or ankle, covering each leg separately', 'name': 'trousers'}, {'frequency': 'f', 'synset': 'truck.n.01', 'synonyms': ['truck'], 'id': 1123, 'def': 'an automotive vehicle suitable for hauling', 'name': 'truck'}, {'frequency': 'r', 'synset': 'truffle.n.03', 'synonyms': ['truffle_(chocolate)', 'chocolate_truffle'], 'id': 1124, 'def': 'creamy chocolate candy', 'name': 'truffle_(chocolate)'}, {'frequency': 'c', 'synset': 'trunk.n.02', 'synonyms': ['trunk'], 'id': 1125, 'def': 'luggage consisting of a large strong case used when traveling or for storage', 'name': 'trunk'}, {'frequency': 'r', 'synset': 'tub.n.02', 'synonyms': ['vat'], 'id': 1126, 'def': 'a large vessel for holding or storing liquids', 'name': 'vat'}, {'frequency': 'c', 'synset': 'turban.n.01', 'synonyms': ['turban'], 'id': 1127, 'def': 'a traditional headdress consisting of a long scarf wrapped around the head', 'name': 'turban'}, {'frequency': 'c', 'synset': 'turkey.n.04', 'synonyms': ['turkey_(food)'], 'id': 1128, 'def': 'flesh of large domesticated fowl usually roasted', 'name': 'turkey_(food)'}, {'frequency': 'r', 'synset': 'turnip.n.01', 'synonyms': ['turnip'], 'id': 1129, 'def': 'widely cultivated plant having a large fleshy edible white or yellow root', 'name': 'turnip'}, {'frequency': 'c', 'synset': 'turtle.n.02', 'synonyms': ['turtle'], 'id': 1130, 'def': 'any of various aquatic and land reptiles having a bony shell and flipper-like limbs for swimming', 'name': 'turtle'}, {'frequency': 'c', 'synset': 'turtleneck.n.01', 'synonyms': ['turtleneck_(clothing)', 'polo-neck'], 'id': 1131, 'def': 'a sweater or jersey with a high close-fitting collar', 'name': 'turtleneck_(clothing)'}, {'frequency': 'c', 'synset': 'typewriter.n.01', 'synonyms': ['typewriter'], 'id': 1132, 'def': 'hand-operated character printer for printing written messages one character at a time', 'name': 'typewriter'}, {'frequency': 'f', 'synset': 'umbrella.n.01', 'synonyms': ['umbrella'], 'id': 1133, 'def': 'a lightweight handheld collapsible canopy', 'name': 'umbrella'}, {'frequency': 'f', 'synset': 'underwear.n.01', 'synonyms': ['underwear', 'underclothes', 'underclothing', 'underpants'], 'id': 1134, 'def': 'undergarment worn next to the skin and under the outer garments', 'name': 'underwear'}, {'frequency': 'r', 'synset': 'unicycle.n.01', 'synonyms': ['unicycle'], 'id': 1135, 'def': 'a vehicle with a single wheel that is driven by pedals', 'name': 'unicycle'}, {'frequency': 'f', 'synset': 'urinal.n.01', 'synonyms': ['urinal'], 'id': 1136, 'def': 'a plumbing fixture (usually attached to the wall) used by men to urinate', 'name': 'urinal'}, {'frequency': 'c', 'synset': 'urn.n.01', 'synonyms': ['urn'], 'id': 1137, 'def': 'a large vase that usually has a pedestal or feet', 'name': 'urn'}, {'frequency': 'c', 'synset': 'vacuum.n.04', 'synonyms': ['vacuum_cleaner'], 'id': 1138, 'def': 'an electrical home appliance that cleans by suction', 'name': 'vacuum_cleaner'}, {'frequency': 'f', 'synset': 'vase.n.01', 'synonyms': ['vase'], 'id': 1139, 'def': 'an open jar of glass or porcelain used as an ornament or to hold flowers', 'name': 'vase'}, {'frequency': 'c', 'synset': 
'vending_machine.n.01', 'synonyms': ['vending_machine'], 'id': 1140, 'def': 'a slot machine for selling goods', 'name': 'vending_machine'}, {'frequency': 'f', 'synset': 'vent.n.01', 'synonyms': ['vent', 'blowhole', 'air_vent'], 'id': 1141, 'def': 'a hole for the escape of gas or air', 'name': 'vent'}, {'frequency': 'f', 'synset': 'vest.n.01', 'synonyms': ['vest', 'waistcoat'], 'id': 1142, 'def': "a man's sleeveless garment worn underneath a coat", 'name': 'vest'}, {'frequency': 'c', 'synset': 'videotape.n.01', 'synonyms': ['videotape'], 'id': 1143, 'def': 'a video recording made on magnetic tape', 'name': 'videotape'}, {'frequency': 'r', 'synset': 'vinegar.n.01', 'synonyms': ['vinegar'], 'id': 1144, 'def': 'sour-tasting liquid produced usually by oxidation of the alcohol in wine or cider and used as a condiment or food preservative', 'name': 'vinegar'}, {'frequency': 'r', 'synset': 'violin.n.01', 'synonyms': ['violin', 'fiddle'], 'id': 1145, 'def': 'bowed stringed instrument that is the highest member of the violin family', 'name': 'violin'}, {'frequency': 'r', 'synset': 'vodka.n.01', 'synonyms': ['vodka'], 'id': 1146, 'def': 'unaged colorless liquor originating in Russia', 'name': 'vodka'}, {'frequency': 'c', 'synset': 'volleyball.n.02', 'synonyms': ['volleyball'], 'id': 1147, 'def': 'an inflated ball used in playing volleyball', 'name': 'volleyball'}, {'frequency': 'r', 'synset': 'vulture.n.01', 'synonyms': ['vulture'], 'id': 1148, 'def': 'any of various large birds of prey having naked heads and weak claws and feeding chiefly on carrion', 'name': 'vulture'}, {'frequency': 'c', 'synset': 'waffle.n.01', 'synonyms': ['waffle'], 'id': 1149, 'def': 'pancake batter baked in a waffle iron', 'name': 'waffle'}, {'frequency': 'r', 'synset': 'waffle_iron.n.01', 'synonyms': ['waffle_iron'], 'id': 1150, 'def': 'a kitchen appliance for baking waffles', 'name': 'waffle_iron'}, {'frequency': 'c', 'synset': 'wagon.n.01', 'synonyms': ['wagon'], 'id': 1151, 'def': 'any of various kinds of wheeled vehicles drawn by an animal or a tractor', 'name': 'wagon'}, {'frequency': 'c', 'synset': 'wagon_wheel.n.01', 'synonyms': ['wagon_wheel'], 'id': 1152, 'def': 'a wheel of a wagon', 'name': 'wagon_wheel'}, {'frequency': 'c', 'synset': 'walking_stick.n.01', 'synonyms': ['walking_stick'], 'id': 1153, 'def': 'a stick carried in the hand for support in walking', 'name': 'walking_stick'}, {'frequency': 'c', 'synset': 'wall_clock.n.01', 'synonyms': ['wall_clock'], 'id': 1154, 'def': 'a clock mounted on a wall', 'name': 'wall_clock'}, {'frequency': 'f', 'synset': 'wall_socket.n.01', 'synonyms': ['wall_socket', 'wall_plug', 'electric_outlet', 'electrical_outlet', 'outlet', 'electric_receptacle'], 'id': 1155, 'def': 'receptacle providing a place in a wiring system where current can be taken to run electrical devices', 'name': 'wall_socket'}, {'frequency': 'f', 'synset': 'wallet.n.01', 'synonyms': ['wallet', 'billfold'], 'id': 1156, 'def': 'a pocket-size case for holding papers and paper money', 'name': 'wallet'}, {'frequency': 'r', 'synset': 'walrus.n.01', 'synonyms': ['walrus'], 'id': 1157, 'def': 'either of two large northern marine mammals having ivory tusks and tough hide over thick blubber', 'name': 'walrus'}, {'frequency': 'r', 'synset': 'wardrobe.n.01', 'synonyms': ['wardrobe'], 'id': 1158, 'def': 'a tall piece of furniture that provides storage space for clothes; has a door and rails or hooks for hanging clothes', 'name': 'wardrobe'}, {'frequency': 'r', 'synset': 'washbasin.n.01', 'synonyms': ['washbasin', 
'basin_(for_washing)', 'washbowl', 'washstand', 'handbasin'], 'id': 1159, 'def': 'a bathroom sink that is permanently installed and connected to a water supply and drainpipe; where you can wash your hands and face', 'name': 'washbasin'}, {'frequency': 'c', 'synset': 'washer.n.03', 'synonyms': ['automatic_washer', 'washing_machine'], 'id': 1160, 'def': 'a home appliance for washing clothes and linens automatically', 'name': 'automatic_washer'}, {'frequency': 'f', 'synset': 'watch.n.01', 'synonyms': ['watch', 'wristwatch'], 'id': 1161, 'def': 'a small, portable timepiece', 'name': 'watch'}, {'frequency': 'f', 'synset': 'water_bottle.n.01', 'synonyms': ['water_bottle'], 'id': 1162, 'def': 'a bottle for holding water', 'name': 'water_bottle'}, {'frequency': 'c', 'synset': 'water_cooler.n.01', 'synonyms': ['water_cooler'], 'id': 1163, 'def': 'a device for cooling and dispensing drinking water', 'name': 'water_cooler'}, {'frequency': 'c', 'synset': 'water_faucet.n.01', 'synonyms': ['water_faucet', 'water_tap', 'tap_(water_faucet)'], 'id': 1164, 'def': 'a faucet for drawing water from a pipe or cask', 'name': 'water_faucet'}, {'frequency': 'r', 'synset': 'water_heater.n.01', 'synonyms': ['water_heater', 'hot-water_heater'], 'id': 1165, 'def': 'a heater and storage tank to supply heated water', 'name': 'water_heater'}, {'frequency': 'c', 'synset': 'water_jug.n.01', 'synonyms': ['water_jug'], 'id': 1166, 'def': 'a jug that holds water', 'name': 'water_jug'}, {'frequency': 'r', 'synset': 'water_pistol.n.01', 'synonyms': ['water_gun', 'squirt_gun'], 'id': 1167, 'def': 'plaything consisting of a toy pistol that squirts water', 'name': 'water_gun'}, {'frequency': 'c', 'synset': 'water_scooter.n.01', 'synonyms': ['water_scooter', 'sea_scooter', 'jet_ski'], 'id': 1168, 'def': 'a motorboat resembling a motor scooter (NOT A SURFBOARD OR WATER SKI)', 'name': 'water_scooter'}, {'frequency': 'c', 'synset': 'water_ski.n.01', 'synonyms': ['water_ski'], 'id': 1169, 'def': 'broad ski for skimming over water towed by a speedboat (DO NOT MARK WATER)', 'name': 'water_ski'}, {'frequency': 'c', 'synset': 'water_tower.n.01', 'synonyms': ['water_tower'], 'id': 1170, 'def': 'a large reservoir for water', 'name': 'water_tower'}, {'frequency': 'c', 'synset': 'watering_can.n.01', 'synonyms': ['watering_can'], 'id': 1171, 'def': 'a container with a handle and a spout with a perforated nozzle; used to sprinkle water over plants', 'name': 'watering_can'}, {'frequency': 'f', 'synset': 'watermelon.n.02', 'synonyms': ['watermelon'], 'id': 1172, 'def': 'large oblong or roundish melon with a hard green rind and sweet watery red or occasionally yellowish pulp', 'name': 'watermelon'}, {'frequency': 'f', 'synset': 'weathervane.n.01', 'synonyms': ['weathervane', 'vane_(weathervane)', 'wind_vane'], 'id': 1173, 'def': 'mechanical device attached to an elevated structure; rotates freely to show the direction of the wind', 'name': 'weathervane'}, {'frequency': 'c', 'synset': 'webcam.n.01', 'synonyms': ['webcam'], 'id': 1174, 'def': 'a digital camera designed to take digital photographs and transmit them over the internet', 'name': 'webcam'}, {'frequency': 'c', 'synset': 'wedding_cake.n.01', 'synonyms': ['wedding_cake', 'bridecake'], 'id': 1175, 'def': 'a rich cake with two or more tiers and covered with frosting and decorations; served at a wedding reception', 'name': 'wedding_cake'}, {'frequency': 'c', 'synset': 'wedding_ring.n.01', 'synonyms': ['wedding_ring', 'wedding_band'], 'id': 1176, 'def': 'a ring given to the bride and/or groom at 
the wedding', 'name': 'wedding_ring'}, {'frequency': 'f', 'synset': 'wet_suit.n.01', 'synonyms': ['wet_suit'], 'id': 1177, 'def': 'a close-fitting garment made of a permeable material; worn in cold water to retain body heat', 'name': 'wet_suit'}, {'frequency': 'f', 'synset': 'wheel.n.01', 'synonyms': ['wheel'], 'id': 1178, 'def': 'a circular frame with spokes (or a solid disc) that can rotate on a shaft or axle', 'name': 'wheel'}, {'frequency': 'c', 'synset': 'wheelchair.n.01', 'synonyms': ['wheelchair'], 'id': 1179, 'def': 'a movable chair mounted on large wheels', 'name': 'wheelchair'}, {'frequency': 'c', 'synset': 'whipped_cream.n.01', 'synonyms': ['whipped_cream'], 'id': 1180, 'def': 'cream that has been beaten until light and fluffy', 'name': 'whipped_cream'}, {'frequency': 'c', 'synset': 'whistle.n.03', 'synonyms': ['whistle'], 'id': 1181, 'def': 'a small wind instrument that produces a whistling sound by blowing into it', 'name': 'whistle'}, {'frequency': 'c', 'synset': 'wig.n.01', 'synonyms': ['wig'], 'id': 1182, 'def': 'hairpiece covering the head and made of real or synthetic hair', 'name': 'wig'}, {'frequency': 'c', 'synset': 'wind_chime.n.01', 'synonyms': ['wind_chime'], 'id': 1183, 'def': 'a decorative arrangement of pieces of metal or glass or pottery that hang together loosely so the wind can cause them to tinkle', 'name': 'wind_chime'}, {'frequency': 'c', 'synset': 'windmill.n.01', 'synonyms': ['windmill'], 'id': 1184, 'def': 'A mill or turbine that is powered by wind', 'name': 'windmill'}, {'frequency': 'c', 'synset': 'window_box.n.01', 'synonyms': ['window_box_(for_plants)'], 'id': 1185, 'def': 'a container for growing plants on a windowsill', 'name': 'window_box_(for_plants)'}, {'frequency': 'f', 'synset': 'windshield_wiper.n.01', 'synonyms': ['windshield_wiper', 'windscreen_wiper', 'wiper_(for_windshield/screen)'], 'id': 1186, 'def': 'a mechanical device that cleans the windshield', 'name': 'windshield_wiper'}, {'frequency': 'c', 'synset': 'windsock.n.01', 'synonyms': ['windsock', 'air_sock', 'air-sleeve', 'wind_sleeve', 'wind_cone'], 'id': 1187, 'def': 'a truncated cloth cone mounted on a mast/pole; shows wind direction', 'name': 'windsock'}, {'frequency': 'f', 'synset': 'wine_bottle.n.01', 'synonyms': ['wine_bottle'], 'id': 1188, 'def': 'a bottle for holding wine', 'name': 'wine_bottle'}, {'frequency': 'c', 'synset': 'wine_bucket.n.01', 'synonyms': ['wine_bucket', 'wine_cooler'], 'id': 1189, 'def': 'a bucket of ice used to chill a bottle of wine', 'name': 'wine_bucket'}, {'frequency': 'f', 'synset': 'wineglass.n.01', 'synonyms': ['wineglass'], 'id': 1190, 'def': 'a glass that has a stem and in which wine is served', 'name': 'wineglass'}, {'frequency': 'f', 'synset': 'winker.n.02', 'synonyms': ['blinder_(for_horses)'], 'id': 1191, 'def': 'blinds that prevent a horse from seeing something on either side', 'name': 'blinder_(for_horses)'}, {'frequency': 'c', 'synset': 'wok.n.01', 'synonyms': ['wok'], 'id': 1192, 'def': 'pan with a convex bottom; used for frying in Chinese cooking', 'name': 'wok'}, {'frequency': 'r', 'synset': 'wolf.n.01', 'synonyms': ['wolf'], 'id': 1193, 'def': 'a wild carnivorous mammal of the dog family, living and hunting in packs', 'name': 'wolf'}, {'frequency': 'c', 'synset': 'wooden_spoon.n.02', 'synonyms': ['wooden_spoon'], 'id': 1194, 'def': 'a spoon made of wood', 'name': 'wooden_spoon'}, {'frequency': 'c', 'synset': 'wreath.n.01', 'synonyms': ['wreath'], 'id': 1195, 'def': 'an arrangement of flowers, leaves, or stems fastened in a ring', 
'name': 'wreath'}, {'frequency': 'c', 'synset': 'wrench.n.03', 'synonyms': ['wrench', 'spanner'], 'id': 1196, 'def': 'a hand tool that is used to hold or twist a nut or bolt', 'name': 'wrench'}, {'frequency': 'f', 'synset': 'wristband.n.01', 'synonyms': ['wristband'], 'id': 1197, 'def': 'band consisting of a part of a sleeve that covers the wrist', 'name': 'wristband'}, {'frequency': 'f', 'synset': 'wristlet.n.01', 'synonyms': ['wristlet', 'wrist_band'], 'id': 1198, 'def': 'a band or bracelet worn around the wrist', 'name': 'wristlet'}, {'frequency': 'c', 'synset': 'yacht.n.01', 'synonyms': ['yacht'], 'id': 1199, 'def': 'an expensive vessel propelled by sail or power and used for cruising or racing', 'name': 'yacht'}, {'frequency': 'c', 'synset': 'yogurt.n.01', 'synonyms': ['yogurt', 'yoghurt', 'yoghourt'], 'id': 1200, 'def': 'a custard-like food made from curdled milk', 'name': 'yogurt'}, {'frequency': 'c', 'synset': 'yoke.n.07', 'synonyms': ['yoke_(animal_equipment)'], 'id': 1201, 'def': 'gear joining two animals at the neck; NOT egg yolk', 'name': 'yoke_(animal_equipment)'}, {'frequency': 'f', 'synset': 'zebra.n.01', 'synonyms': ['zebra'], 'id': 1202, 'def': 'any of several fleet black-and-white striped African equines', 'name': 'zebra'}, {'frequency': 'c', 'synset': 'zucchini.n.02', 'synonyms': ['zucchini', 'courgette'], 'id': 1203, 'def': 'small cucumber-shaped vegetable marrow; typically dark green', 'name': 'zucchini'}] # noqa -# fmt: on diff --git a/spaces/ysharma/testing_gradio_wheels/app.py b/spaces/ysharma/testing_gradio_wheels/app.py deleted file mode 100644 index e958fc7032e83cd27070e78d477efcce4a7be0ea..0000000000000000000000000000000000000000 --- a/spaces/ysharma/testing_gradio_wheels/app.py +++ /dev/null @@ -1,14 +0,0 @@ -import gradio as gr - -def dummy(fex): - print(fex) - return fex - -with gr.Blocks() as demo: - with gr.Row(): - fex1 = gr.FileExplorer(value="/content/test/untitled.txt", height=200) - fex2 = gr.FileExplorer() - btn=gr.Button() - btn.click(dummy, fex1, fex2) - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/yuan1615/EmpathyVC/text/__init__.py b/spaces/yuan1615/EmpathyVC/text/__init__.py deleted file mode 100644 index 32b2cf3bb973e53af0b6d319b15f10f796af424c..0000000000000000000000000000000000000000 --- a/spaces/yuan1615/EmpathyVC/text/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text.symbols import symbols -PROSODYS = ['_', 'I', 'B1', 'B2', 'B3'] \ No newline at end of file diff --git a/spaces/zeno-ml/translation-report/README.md b/spaces/zeno-ml/translation-report/README.md deleted file mode 100644 index 326764b11a5435263a3f319c5f0f0f0ee8d996c3..0000000000000000000000000000000000000000 --- a/spaces/zeno-ml/translation-report/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Zeno Translation Report -emoji: 💠 -colorFrom: red -colorTo: blue -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zhang-wei-jian/docker/node_modules/statuses/index.js b/spaces/zhang-wei-jian/docker/node_modules/statuses/index.js deleted file mode 100644 index 4df469a05d1a293ac67077f149f17b24ff49d2b1..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/statuses/index.js +++ /dev/null @@ -1,113 +0,0 @@ -/*! 
- * statuses - * Copyright(c) 2014 Jonathan Ong - * Copyright(c) 2016 Douglas Christopher Wilson - * MIT Licensed - */ - -'use strict' - -/** - * Module dependencies. - * @private - */ - -var codes = require('./codes.json') - -/** - * Module exports. - * @public - */ - -module.exports = status - -// status code to message map -status.STATUS_CODES = codes - -// array of status codes -status.codes = populateStatusesMap(status, codes) - -// status codes for redirects -status.redirect = { - 300: true, - 301: true, - 302: true, - 303: true, - 305: true, - 307: true, - 308: true -} - -// status codes for empty bodies -status.empty = { - 204: true, - 205: true, - 304: true -} - -// status codes for when you should retry the request -status.retry = { - 502: true, - 503: true, - 504: true -} - -/** - * Populate the statuses map for given codes. - * @private - */ - -function populateStatusesMap (statuses, codes) { - var arr = [] - - Object.keys(codes).forEach(function forEachCode (code) { - var message = codes[code] - var status = Number(code) - - // Populate properties - statuses[status] = message - statuses[message] = status - statuses[message.toLowerCase()] = status - - // Add to array - arr.push(status) - }) - - return arr -} - -/** - * Get the status code. - * - * Given a number, this will throw if it is not a known status - * code, otherwise the code will be returned. Given a string, - * the string will be parsed for a number and return the code - * if valid, otherwise will lookup the code assuming this is - * the status message. - * - * @param {string|number} code - * @returns {number} - * @public - */ - -function status (code) { - if (typeof code === 'number') { - if (!status[code]) throw new Error('invalid status code: ' + code) - return code - } - - if (typeof code !== 'string') { - throw new TypeError('code must be a number or string') - } - - // '403' - var n = parseInt(code, 10) - if (!isNaN(n)) { - if (!status[n]) throw new Error('invalid status code: ' + n) - return n - } - - n = status[code.toLowerCase()] - if (!n) throw new Error('invalid status message: "' + code + '"') - return n -} diff --git a/spaces/zhangliwei7758/vits-uma-genshin-honkai/README.md b/spaces/zhangliwei7758/vits-uma-genshin-honkai/README.md deleted file mode 100644 index 2fd2870bef9c579ab20b33fdd09aea238aeb1f1d..0000000000000000000000000000000000000000 --- a/spaces/zhangliwei7758/vits-uma-genshin-honkai/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -license: apache-2.0 -title: ' vits-uma-genshin-honkai' -sdk: gradio -sdk_version: 3.7 -emoji: 🐨 -colorTo: yellow -pinned: false -app_file: app.py -duplicated_from: sayashi/vits-uma-genshin-honkai ---- diff --git a/spaces/zhaoys/wfms-kuiwenc/src/components/ui/dropdown-menu.tsx b/spaces/zhaoys/wfms-kuiwenc/src/components/ui/dropdown-menu.tsx deleted file mode 100644 index 184d4e6007ef85187446362f69532ab077897fea..0000000000000000000000000000000000000000 --- a/spaces/zhaoys/wfms-kuiwenc/src/components/ui/dropdown-menu.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DropdownMenuPrimitive from '@radix-ui/react-dropdown-menu' - -import { cn } from '@/lib/utils' - -const DropdownMenu = DropdownMenuPrimitive.Root - -const DropdownMenuTrigger = DropdownMenuPrimitive.Trigger - -const DropdownMenuGroup = DropdownMenuPrimitive.Group - -const DropdownMenuPortal = DropdownMenuPrimitive.Portal - -const DropdownMenuSub = DropdownMenuPrimitive.Sub - -const DropdownMenuRadioGroup = DropdownMenuPrimitive.RadioGroup - -const 
DropdownMenuSubContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSubContent.displayName = - DropdownMenuPrimitive.SubContent.displayName - -const DropdownMenuContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - - - -)) -DropdownMenuContent.displayName = DropdownMenuPrimitive.Content.displayName - -const DropdownMenuItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuItem.displayName = DropdownMenuPrimitive.Item.displayName - -const DropdownMenuLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuLabel.displayName = DropdownMenuPrimitive.Label.displayName - -const DropdownMenuSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName - -const DropdownMenuShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -DropdownMenuShortcut.displayName = 'DropdownMenuShortcut' - -export { - DropdownMenu, - DropdownMenuTrigger, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuLabel, - DropdownMenuSeparator, - DropdownMenuShortcut, - DropdownMenuGroup, - DropdownMenuPortal, - DropdownMenuSub, - DropdownMenuSubContent, - DropdownMenuRadioGroup -} diff --git a/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/toaster.tsx b/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/toaster.tsx deleted file mode 100644 index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000 --- a/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/toaster.tsx +++ /dev/null @@ -1,3 +0,0 @@ -'use client' - -export { Toaster } from 'react-hot-toast' diff --git a/spaces/zhoupin30/zhoupin30/src/pages/api/kblob.ts b/spaces/zhoupin30/zhoupin30/src/pages/api/kblob.ts deleted file mode 100644 index 06f6f6743162be1463c5ae9a2262e8b4ad5d1631..0000000000000000000000000000000000000000 --- a/spaces/zhoupin30/zhoupin30/src/pages/api/kblob.ts +++ /dev/null @@ -1,55 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import FormData from 'form-data' -import { debug, fetch } from '@/lib/isomorphic' -import { KBlobRequest } from '@/lib/bots/bing/types' -import { createHeaders } from '@/lib/utils' - -const API_DOMAIN = 'https://www.bing.com' - -export const config = { - api: { - bodyParser: { - sizeLimit: '10mb' // Set desired value here - } - } -} - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { knowledgeRequest, imageBase64 } = req.body as KBlobRequest - - const formData = new FormData() - formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - if (imageBase64) { - formData.append('imageBase64', imageBase64) - } - - const response = await fetch(`${API_DOMAIN}/images/kblob`, - { - method: 'POST', - body: formData.getBuffer(), - headers: { - 'Referer': 'https://www.bing.com/search', - ...formData.getHeaders() - } - } - ) - - if (response.status !== 200) { - throw new Error('图片上传失败') - } - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - res.end(await response.text()) - } catch (e) { - return res.json({ - result: { - 
value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/zhsso/roop/roop/predicter.py b/spaces/zhsso/roop/roop/predicter.py deleted file mode 100644 index 7ebc2b62e4152c12ce41e55d718222ca9c8a8b7f..0000000000000000000000000000000000000000 --- a/spaces/zhsso/roop/roop/predicter.py +++ /dev/null @@ -1,25 +0,0 @@ -import numpy -import opennsfw2 -from PIL import Image - -from roop.typing import Frame - -MAX_PROBABILITY = 0.85 - - -def predict_frame(target_frame: Frame) -> bool: - image = Image.fromarray(target_frame) - image = opennsfw2.preprocess_image(image, opennsfw2.Preprocessing.YAHOO) - model = opennsfw2.make_open_nsfw_model() - views = numpy.expand_dims(image, axis=0) - _, probability = model.predict(views)[0] - return probability > MAX_PROBABILITY - - -def predict_image(target_path: str) -> bool: - return opennsfw2.predict_image(target_path) > MAX_PROBABILITY - - -def predict_video(target_path: str) -> bool: - _, probabilities = opennsfw2.predict_video_frames(video_path=target_path, frame_interval=100) - return any(probability > MAX_PROBABILITY for probability in probabilities) diff --git a/spaces/zomehwh/sovits-xiaoke/inference/infer_tool.py b/spaces/zomehwh/sovits-xiaoke/inference/infer_tool.py deleted file mode 100644 index 17781828effcb228794624e23659f83b53b239d0..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/sovits-xiaoke/inference/infer_tool.py +++ /dev/null @@ -1,327 +0,0 @@ -import hashlib -import json -import logging -import os -import time -from pathlib import Path - -import librosa -import maad -import numpy as np -# import onnxruntime -import parselmouth -import soundfile -import torch -import torchaudio - -from hubert import hubert_model -import utils -from models import SynthesizerTrn - -logging.getLogger('matplotlib').setLevel(logging.WARNING) - - -def read_temp(file_name): - if not os.path.exists(file_name): - with open(file_name, "w") as f: - f.write(json.dumps({"info": "temp_dict"})) - return {} - else: - try: - with open(file_name, "r") as f: - data = f.read() - data_dict = json.loads(data) - if os.path.getsize(file_name) > 50 * 1024 * 1024: - f_name = file_name.split("/")[-1] - print(f"clean {f_name}") - for wav_hash in list(data_dict.keys()): - if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600: - del data_dict[wav_hash] - except Exception as e: - print(e) - print(f"{file_name} error,auto rebuild file") - data_dict = {"info": "temp_dict"} - return data_dict - - -def write_temp(file_name, data): - with open(file_name, "w") as f: - f.write(json.dumps(data)) - - -def timeit(func): - def run(*args, **kwargs): - t = time.time() - res = func(*args, **kwargs) - print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t)) - return res - - return run - - -def format_wav(audio_path): - if Path(audio_path).suffix == '.wav': - return - raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None) - soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate) - - -def get_end_file(dir_path, end): - file_lists = [] - for root, dirs, files in os.walk(dir_path): - files = [f for f in files if f[0] != '.'] - dirs[:] = [d for d in dirs if d[0] != '.'] - for f_file in files: - if f_file.endswith(end): - file_lists.append(os.path.join(root, f_file).replace("\\", "/")) - return file_lists - - -def get_md5(content): - return hashlib.new("md5", content).hexdigest() - - -def resize2d_f0(x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = 
np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)), - source) - res = np.nan_to_num(target) - return res - -def get_f0(x, p_len,f0_up_key=0): - - time_step = 160 / 16000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = parselmouth.Sound(x, 16000).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - - f0 *= pow(2, f0_up_key / 12) - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0 - -def clean_pitch(input_pitch): - num_nan = np.sum(input_pitch == 1) - if num_nan / len(input_pitch) > 0.9: - input_pitch[input_pitch != 1] = 1 - return input_pitch - - -def plt_pitch(input_pitch): - input_pitch = input_pitch.astype(float) - input_pitch[input_pitch == 1] = np.nan - return input_pitch - - -def f0_to_pitch(ff): - f0_pitch = 69 + 12 * np.log2(ff / 440) - return f0_pitch - - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - - -class Svc(object): - def __init__(self, net_g_path, config_path, hubert_path="hubert/hubert-soft-0d54a1f4.pt", - onnx=False): - self.onnx = onnx - self.net_g_path = net_g_path - self.hubert_path = hubert_path - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.net_g_ms = None - self.hps_ms = utils.get_hparams_from_file(config_path) - self.target_sample = self.hps_ms.data.sampling_rate - self.hop_size = self.hps_ms.data.hop_length - self.speakers = {} - for spk, sid in self.hps_ms.spk.items(): - self.speakers[sid] = spk - self.spk2id = self.hps_ms.spk - # 加载hubert - self.hubert_soft = hubert_model.hubert_soft(hubert_path) - if torch.cuda.is_available(): - self.hubert_soft = self.hubert_soft.cuda() - self.load_model() - - def load_model(self): - # 获取模型配置 - if self.onnx: - raise NotImplementedError - # self.net_g_ms = SynthesizerTrnForONNX( - # 178, - # self.hps_ms.data.filter_length // 2 + 1, - # self.hps_ms.train.segment_size // self.hps_ms.data.hop_length, - # n_speakers=self.hps_ms.data.n_speakers, - # **self.hps_ms.model) - # _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None) - else: - self.net_g_ms = SynthesizerTrn( - self.hps_ms.data.filter_length // 2 + 1, - self.hps_ms.train.segment_size // self.hps_ms.data.hop_length, - **self.hps_ms.model) - _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None) - if "half" in self.net_g_path and torch.cuda.is_available(): - _ = self.net_g_ms.half().eval().to(self.dev) - else: - _ = self.net_g_ms.eval().to(self.dev) - - def get_units(self, source, sr): - - source = source.unsqueeze(0).to(self.dev) - with torch.inference_mode(): - start = time.time() - units = self.hubert_soft.units(source) - use_time = time.time() - start - print("hubert use time:{}".format(use_time)) - return units - - - def get_unit_pitch(self, in_path, tran): - source, sr = torchaudio.load(in_path) - source = torchaudio.functional.resample(source, sr, 16000) - if len(source.shape) 
== 2 and source.shape[1] >= 2: - source = torch.mean(source, dim=0).unsqueeze(0) - soft = self.get_units(source, sr).squeeze(0).cpu().numpy() - f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran) - f0 = resize2d_f0(f0, soft.shape[0]*3) - return soft, f0 - - def infer(self, speaker_id, tran, raw_path): - if type(speaker_id) == str: - speaker_id = self.spk2id[speaker_id] - sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0) - soft, pitch = self.get_unit_pitch(raw_path, tran) - f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.dev) - if "half" in self.net_g_path and torch.cuda.is_available(): - stn_tst = torch.HalfTensor(soft) - else: - stn_tst = torch.FloatTensor(soft) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0).to(self.dev) - start = time.time() - x_tst = torch.repeat_interleave(x_tst, repeats=3, dim=1).transpose(1, 2) - audio = self.net_g_ms.infer(x_tst, f0=f0, g=sid)[0,0].data.float() - use_time = time.time() - start - print("vits use time:{}".format(use_time)) - return audio, audio.shape[-1] - - -# class SvcONNXInferModel(object): -# def __init__(self, hubert_onnx, vits_onnx, config_path): -# self.config_path = config_path -# self.vits_onnx = vits_onnx -# self.hubert_onnx = hubert_onnx -# self.hubert_onnx_session = onnxruntime.InferenceSession(hubert_onnx, providers=['CUDAExecutionProvider', ]) -# self.inspect_onnx(self.hubert_onnx_session) -# self.vits_onnx_session = onnxruntime.InferenceSession(vits_onnx, providers=['CUDAExecutionProvider', ]) -# self.inspect_onnx(self.vits_onnx_session) -# self.hps_ms = utils.get_hparams_from_file(self.config_path) -# self.target_sample = self.hps_ms.data.sampling_rate -# self.feature_input = FeatureInput(self.hps_ms.data.sampling_rate, self.hps_ms.data.hop_length) -# -# @staticmethod -# def inspect_onnx(session): -# for i in session.get_inputs(): -# print("name:{}\tshape:{}\tdtype:{}".format(i.name, i.shape, i.type)) -# for i in session.get_outputs(): -# print("name:{}\tshape:{}\tdtype:{}".format(i.name, i.shape, i.type)) -# -# def infer(self, speaker_id, tran, raw_path): -# sid = np.array([int(speaker_id)], dtype=np.int64) -# soft, pitch = self.get_unit_pitch(raw_path, tran) -# pitch = np.expand_dims(pitch, axis=0).astype(np.int64) -# stn_tst = soft -# x_tst = np.expand_dims(stn_tst, axis=0) -# x_tst_lengths = np.array([stn_tst.shape[0]], dtype=np.int64) -# # 使用ONNX Runtime进行推理 -# start = time.time() -# audio = self.vits_onnx_session.run(output_names=["audio"], -# input_feed={ -# "hidden_unit": x_tst, -# "lengths": x_tst_lengths, -# "pitch": pitch, -# "sid": sid, -# })[0][0, 0] -# use_time = time.time() - start -# print("vits_onnx_session.run time:{}".format(use_time)) -# audio = torch.from_numpy(audio) -# return audio, audio.shape[-1] -# -# def get_units(self, source, sr): -# source = torchaudio.functional.resample(source, sr, 16000) -# if len(source.shape) == 2 and source.shape[1] >= 2: -# source = torch.mean(source, dim=0).unsqueeze(0) -# source = source.unsqueeze(0) -# # 使用ONNX Runtime进行推理 -# start = time.time() -# units = self.hubert_onnx_session.run(output_names=["embed"], -# input_feed={"source": source.numpy()})[0] -# use_time = time.time() - start -# print("hubert_onnx_session.run time:{}".format(use_time)) -# return units -# -# def transcribe(self, source, sr, length, transform): -# feature_pit = self.feature_input.compute_f0(source, sr) -# feature_pit = feature_pit * 2 ** (transform / 12) -# feature_pit = resize2d_f0(feature_pit, length) -# coarse_pit = 
self.feature_input.coarse_f0(feature_pit) -# return coarse_pit -# -# def get_unit_pitch(self, in_path, tran): -# source, sr = torchaudio.load(in_path) -# soft = self.get_units(source, sr).squeeze(0) -# input_pitch = self.transcribe(source.numpy()[0], sr, soft.shape[0], tran) -# return soft, input_pitch - - -class RealTimeVC: - def __init__(self): - self.last_chunk = None - self.last_o = None - self.chunk_len = 16000 # 区块长度 - self.pre_len = 3840 # 交叉淡化长度,640的倍数 - - """输入输出都是1维numpy 音频波形数组""" - - def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path): - audio, sr = torchaudio.load(input_wav_path) - audio = audio.cpu().numpy()[0] - temp_wav = io.BytesIO() - if self.last_chunk is None: - input_wav_path.seek(0) - audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path) - audio = audio.cpu().numpy() - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return audio[-self.chunk_len:] - else: - audio = np.concatenate([self.last_chunk, audio]) - soundfile.write(temp_wav, audio, sr, format="wav") - temp_wav.seek(0) - audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav) - audio = audio.cpu().numpy() - ret = maad.util.crossfade(self.last_o, audio, self.pre_len) - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return ret[self.chunk_len:2 * self.chunk_len] diff --git a/spaces/zxy666/bingo-chatai666/src/components/chat-attachments.tsx b/spaces/zxy666/bingo-chatai666/src/components/chat-attachments.tsx deleted file mode 100644 index ef43d4e262935d263b6099138c56f7daade5299d..0000000000000000000000000000000000000000 --- a/spaces/zxy666/bingo-chatai666/src/components/chat-attachments.tsx +++ /dev/null @@ -1,37 +0,0 @@ -import Image from 'next/image' -import ClearIcon from '@/assets/images/clear.svg' -import RefreshIcon from '@/assets/images/refresh.svg' -import { FileItem } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' -import { useBing } from '@/lib/hooks/use-bing' - -type ChatAttachmentsProps = Pick, 'attachmentList' | 'setAttachmentList' | 'uploadImage'> - -export function ChatAttachments({ attachmentList = [], setAttachmentList, uploadImage }: ChatAttachmentsProps) { - return attachmentList.length ? ( -
<div className="attachment-list">
-      {attachmentList.map(file => (
-        <div className="file-item" key={file.url}>
-          {file.status === 'loading' && (
-            <div className="loading">
-              <div className="bar" />
-            </div>
-          )
-          }
-          {file.status !== 'error' && (
-            <div className="thumbnail">
-              <div className="img" style={{ backgroundImage: `url(${file.url})` }} />
-            </div>
-          )
-          }
-          {file.status === 'error' && (
-            <div className="error">
-              <Image alt="refresh" src={RefreshIcon} onClick={() => uploadImage(file.url)} />
-            </div>
-          )}
-          <Image alt="clear" src={ClearIcon} className="dismiss" onClick={() => setAttachmentList(attachmentList.filter(item => item.url !== file.url))} />
-        </div>
-      ))}
-    </div>
-  ) : null
-}
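For context, a minimal sketch of how a component with this prop shape could be mounted. The argument-free `useBing()` call, the `ChatComposer` wrapper, and the import paths are assumptions for illustration only; the hook's return shape is inferred from the `Pick<ReturnType<typeof useBing>, ...>` prop type above, not from any deleted file.

```tsx
// Hypothetical wiring sketch (assumptions noted above): pass the attachment state
// and upload handler provided by the useBing hook straight into ChatAttachments.
import { ChatAttachments } from '@/components/chat-attachments'
import { useBing } from '@/lib/hooks/use-bing'

export function ChatComposer() {
  // attachmentList: FileItem[], setAttachmentList: state setter, uploadImage: retry/upload handler
  const { attachmentList, setAttachmentList, uploadImage } = useBing()

  return (
    <ChatAttachments
      attachmentList={attachmentList}
      setAttachmentList={setAttachmentList}
      uploadImage={uploadImage}
    />
  )
}
```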