diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Foxit PhantomPDF Business 9.0.0.29935 Crack [TechTools].md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Foxit PhantomPDF Business 9.0.0.29935 Crack [TechTools].md deleted file mode 100644 index 517b49c2ebbe161774976b44cfb6dc52555db288..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Foxit PhantomPDF Business 9.0.0.29935 Crack [TechTools].md +++ /dev/null @@ -1,226 +0,0 @@ - -

Foxit PhantomPDF Business 9.0.0.29935 Crack [TechTools]

-

If you are looking for powerful, versatile, and user-friendly PDF software, you have come to the right place.

-

Foxit PhantomPDF Business 9.0.0.29935 Crack [TechTools]


Download ✓✓✓ https://byltly.com/2uKwpv



-

In this article, we will introduce you to Foxit PhantomPDF Business 9.0.0.29935, a comprehensive PDF solution that lets you create, edit, convert, secure, protect, share, and collaborate on PDF files with ease.

-

We will also show you how to download and install Foxit PhantomPDF Business 9.0.0.29935 Crack [TechTools], a reliable source that provides you with a full version of the software for free.

-

So, without further ado, let's get started!

-

What is Foxit PhantomPDF Business?

-

Foxit PhantomPDF Business is professional PDF software that offers a complete set of features for working with PDF documents.

-

With Foxit PhantomPDF Business, you can:

-

Foxit PhantomPDF Business 9 full crack download
-How to install and activate Foxit PhantomPDF Business 9
-Foxit PhantomPDF Business 9.0.0.29935 full version free download
-Foxit PhantomPDF Business 9 crack mega link
-Foxit PhantomPDF Business 9 serial key generator
-Foxit PhantomPDF Business 9 activation code
-Foxit PhantomPDF Business 9 license key
-Foxit PhantomPDF Business 9 patch
-Foxit PhantomPDF Business 9 keygen
-Foxit PhantomPDF Business 9 portable
-Foxit PhantomPDF Business 9 review
-Foxit PhantomPDF Business 9 features
-Foxit PhantomPDF Business 9 system requirements
-Foxit PhantomPDF Business 9 tutorial
-Foxit PhantomPDF Business 9 comparison
-Foxit PhantomPDF Business 9 alternatives
-Foxit PhantomPDF Business 9 vs Adobe Acrobat Pro DC
-Foxit PhantomPDF Business 9 vs Nitro Pro
-Foxit PhantomPDF Business 9 vs PDFelement
-Foxit PhantomPDF Business 9 vs PDF-XChange Editor Plus
-Foxit PhantomPDF Business 9 vs Soda PDF
-Foxit PhantomPDF Business 9 vs Able2Extract Professional
-Foxit PhantomPDF Business 9 vs Master PDF Editor
-Foxit PhantomPDF Business 9 vs PDF Architect
-Foxit PhantomPDF Business 9 vs PDF Studio Pro
-Foxit PhantomPDF Business 9 for Windows 10
-Foxit PhantomPDF Business 9 for Mac OS X
-Foxit PhantomPDF Business 9 for Linux
-Foxit PhantomPDF Business 9 for Android
-Foxit PhantomPDF Business 9 for iOS
-Foxit PhantomPDF Business 9 online editor
-Foxit PhantomPDF Business 9 cloud service
-Foxit PhantomPDF Business 9 OCR feature
-Foxit PhantomPDF Business 9 digital signature feature
-Foxit PhantomPDF Business 9 form filling feature
-Foxit PhantomPDF Business 9 document conversion feature
-Foxit PhantomPDF Business 9 document security feature
-Foxit PhantomPDF Business 9 document collaboration feature
-Foxit PhantomPDF Business 9 document annotation feature
-Foxit PhantomPDF Business 9 document editing feature
-How to create PDF files with Foxit PhantomPDF Business 9
-How to edit PDF files with Foxit PhantomPDF Business 9
-How to convert PDF files with Foxit PhantomPDF Business 9
-How to sign PDF files with Foxit PhantomPDF Business 9
-How to fill PDF forms with Foxit PhantomPDF Business 9
-How to secure PDF files with Foxit PhantomPDF Business 9
-How to collaborate on PDF files with Foxit PhantomPDF Business 9
-How to annotate PDF files with Foxit PhantomPDF Business 9
-How to optimize PDF files with Foxit PhantomPDF Business 9

- - -

AirAttack 2 - Airplane Shooter is an excellent game for fans of arcade-style shoot 'em up games who want more quality and content than AirAttack HD. However, it also has some drawbacks that might disappoint some players. If you are looking for a more polished, varied, and balanced air combat game, you might want to look elsewhere.

-

Conclusion

-

In conclusion, we have reviewed some of the best air attack game downloads for Android users. We have looked at their features, gameplay, pros, cons, user reviews, ratings, and more. We have compared Air Attack (Ad), AirAttack HD, and AirAttack 2 - Airplane Shooter, and found that they all have their strengths and weaknesses.

-

Our recommendation for the best air combat game for Android users is AirAttack 2 - Airplane Shooter. It has the most advanced graphics, sound, gameplay, and content among the three games. It also has the most variety and challenge in terms of missions and enemies. It is a game that will keep you entertained and engaged for hours.

-

However, you might have a different preference or need than us. You might prefer a simpler or harder game, or a cheaper or more expensive game. You might also want to try other air combat games that we did not mention in this article. The choice is yours.

-

The only way to find out which game is the best for you is to download them and play them yourself. You can find them on Google Play Store by following these links:

- -

We hope you enjoyed reading this article and found it helpful. If you did, please share it with your friends and family who might also be interested in playing air combat games on their Android devices. Thank you for your time and attention.

-

FAQs

-

Here are some frequently asked questions and answers about downloading and playing air attack games:

-

Q: What are the benefits of playing air combat games on Android devices?

-

A: Playing air combat games on Android devices can have many benefits, such as:

- -

Q: What are the drawbacks of playing air combat games on Android devices?

-

A: Playing air combat games on Android devices can also have some drawbacks, such as:

- -

Q: How can I play air combat games on Android devices safely and responsibly?

-

A: Playing air combat games on Android devices can be safe and responsible if you follow some tips, such as:

- -

Q: What are some of the alternatives to air combat games on Android devices?

-

A: If you are not interested in or satisfied with air combat games on Android devices, you might want to try some of the alternatives, such as:

- -

Q: Where can I find more information and resources about air combat games on Android devices?

-

A: If you want to learn more about air combat games on Android devices, you can check out some of the following sources:

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/American Truck Simulator Mods Everything You Need to Know About ATS Modding.md b/spaces/1phancelerku/anime-remove-background/American Truck Simulator Mods Everything You Need to Know About ATS Modding.md deleted file mode 100644 index cade91380dd9860822c8431d8a098b7a2dffc157..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/American Truck Simulator Mods Everything You Need to Know About ATS Modding.md +++ /dev/null @@ -1,115 +0,0 @@ -
-

American Truck Simulator Mods: How to Enhance Your Gaming Experience

-

American Truck Simulator (ATS) is a popular simulation game that lets you drive various trucks across different states of America. You can explore scenic routes, deliver cargoes, and enjoy the realistic physics and graphics of the game. But did you know that you can make your gaming experience even better with American Truck Simulator mods?

-

Mods are modifications or additions that change or improve some aspects of the game. They are created by fans or developers who want to share their creativity and passion with other players. There are thousands of mods available for ATS, ranging from trucks and trailers to maps and sounds. In this article, we will explain what are ATS mods, why you should use them, and how to install them.

-

american truck simulator mods


Downloadhttps://jinyurl.com/2uNMxA



-

What are American Truck Simulator Mods?

-

ATS mods are files that alter or enhance the original game content. They can add new features, fix bugs, or change the gameplay. Some mods are official, meaning they are made by the game developers and released as updates or DLCs. Others are unofficial, meaning they are made by fans or third-party developers and uploaded on various websites or platforms.

-

Types of ATS Mods

-

There are many types of ATS mods, depending on what they modify or add to the game. Here are some of the most common ones:

-

Trucks

-

Trucks are the main vehicles in ATS, and there are many mods that add new trucks or improve the existing ones. You can find mods that add famous brands like Scania, Volvo, or Mercedes-Benz, or models that are not available in the base game. You can also find mods that change the appearance, performance, or sound of the trucks.

-

Trailers

-

Trailers are the cargoes that you haul in ATS, and there are also many mods that add new trailers or improve the existing ones. You can find mods that add realistic trailers from real companies, or custom trailers with unique designs or features. You can also find mods that change the weight, size, or physics of the trailers.

-

Maps

-

Maps are the areas that you explore in ATS, and there are also many mods that add new maps or improve the existing ones. You can find mods that add new states, regions, or countries to the game, or expand the existing ones with more roads, cities, or landmarks. You can also find mods that change the terrain, weather, or traffic of the maps.

-

american truck simulator mods ats
-american truck simulator mods steam
-american truck simulator mods scania
-american truck simulator mods map
-american truck simulator mods realistic
-american truck simulator mods traffic
-american truck simulator mods trailer
-american truck simulator mods sound
-american truck simulator mods kenworth
-american truck simulator mods peterbilt
-american truck simulator mods volvo
-american truck simulator mods freightliner
-american truck simulator mods mack
-american truck simulator mods international
-american truck simulator mods western star
-american truck simulator mods engine
-american truck simulator mods tuning
-american truck simulator mods interior
-american truck simulator mods skin
-american truck simulator mods lights
-american truck simulator mods weather
-american truck simulator mods physics
-american truck simulator mods multiplayer
-american truck simulator mods bus
-american truck simulator mods car
-american truck simulator mods ford
-american truck simulator mods dodge
-american truck simulator mods chevy
-american truck simulator mods gmc
-american truck simulator mods toyota
-american truck simulator mods honda
-american truck simulator mods nissan
-american truck simulator mods tesla
-american truck simulator mods jeep
-american truck simulator mods bmw
-american truck simulator mods mercedes
-american truck simulator mods audi
-american truck simulator mods porsche
-american truck simulator mods ferrari
-american truck simulator mods lamborghini
-american truck simulator mods harley davidson
-american truck simulator mods motorcycle
-american truck simulator mods helicopter
-american truck simulator mods airplane
-american truck simulator mods boat
-american truck simulator mods train
-american truck simulator mods logging
-american truck simulator mods farming

-

Sounds

-

Sounds are the noises that you hear in ATS, and there are also many mods that add new sounds or improve the existing ones. You can find mods that add realistic sounds for the trucks, trailers, engines, horns, brakes, or environment. You can also find mods that change the music, radio, or voice of the game.

-

Others

-

There are also other types of ATS mods that do not fit into the previous categories. You can find mods that add new skins, accessories, lights, graphics, physics, gameplay features, or tools to the game. You can also find mods that fix errors, bugs, or glitches in the game.

-

Why Use ATS Mods?

-

ATS mods can enhance your gaming experience in many ways. They can make your game more personalized, realistic, varied, and fun. However, they can also have some drawbacks that you should be aware of. Here are some of the pros and cons of using ATS mods:

-

Benefits of ATS Mods

-

Using ATS mods can have many benefits for your gaming experience. Here are some of them:

-

Customization

-

One of the main reasons why people use ATS mods is to customize their game according to their preferences and tastes. You can choose the trucks, trailers, maps, sounds, and other features that you like and make your game more unique and personal. You can also mix and match different mods to create your own combinations and styles.

-

Realism

-

Another reason why people use ATS mods is to make their game more realistic and immersive. You can find mods that add more details, accuracy, and authenticity to the game, such as real brands, models, companies, roads, landmarks, weather, traffic, and sounds. You can also find mods that improve the graphics, physics, and gameplay of the game, making it more challenging and rewarding.

-

Variety

-

A third reason why people use ATS mods is to add more variety and diversity to their game. You can find mods that add new content, features, or options to the game, such as new trucks, trailers, maps, sounds, skins, accessories, lights, graphics, physics, gameplay features, or tools. You can also find mods that change the content, features, or options of the game, such as different weights, sizes, colors, shapes, designs, or functions.

-

Fun

-

A fourth reason why people use ATS mods is to have more fun and enjoyment in their game. You can find mods that add humor, creativity, or novelty to the game, such as funny trucks, trailers, maps, sounds, skins, accessories, lights, graphics, physics, gameplay features, or tools. You can also find mods that make the game easier or harder, depending on your preference and skill level.

-

Drawbacks of ATS Mods

-

However, using ATS mods can also have some drawbacks for your gaming experience. Here are some of them:

-

Compatibility

-

One of the main problems with ATS mods is that they may not be compatible with each other or with the base game. Some mods may conflict or interfere with other mods or with the original game files. This can cause errors, crashes, or glitches in your game. To avoid this problem, you should always check the compatibility of the mods before installing them. You should also keep your game updated to the latest version and use a mod manager to organize your mods.

-

Quality

-

Another problem with ATS mods is that they may vary in quality and reliability. Some mods may be well-made and tested by their creators or users. Others may be poorly-made or untested by their creators or users. This can affect the performance, functionality, or appearance of your game. To avoid this problem, you should always read the reviews and ratings of the mods before downloading them. You should also backup your game files before installing any mod.

-

Safety

-

A third problem with ATS mods is that they may not be safe or secure for your computer or device. Some mods may contain viruses, malware, spyware, or other harmful software that can damage your system or steal your data. Others may contain inappropriate or illegal content that can offend you or get you in trouble.

-

How to Install ATS Mods?

-

If you want to use ATS mods, you need to know how to install them properly. The installation process may vary depending on the type and format of the mod, but here are some general steps that you can follow:

-

Downloading ATS Mods

-

The first step is to download the mod that you want to use. You can find many websites or platforms that offer ATS mods, such as Steam Workshop, American Truck Simulator Mods, ATS Mods Studio, or ModLand. You can browse through the categories, search by keywords, or filter by ratings, downloads, or updates. You can also read the descriptions, reviews, and comments of the mods to learn more about them.

-

Once you find the mod that you like, you need to download it to your computer or device. Most mods come as ZIP or RAR archives, which are compressed files that contain the mod files. Some mods may also come directly as .scs files, which is the archive format the game itself reads. You need to save the mod file in a folder that you can easily access later.

-

Installing ATS Mods

-

The next step is to install the mod that you downloaded. You need to extract the mod file from the ZIP or RAR archive using a program like WinRAR or 7-Zip. You will get one or more files with the extension .scs or .zip. These are the mod files that you need to copy or move to the game's mod folder.

-

The mod folder is not inside the game's installation directory; it lives in your Documents folder. The default location is C:\Users\<your username>\Documents\American Truck Simulator\mod. If the mod subfolder does not exist yet, you can simply create it after running the game at least once. Once you open the mod folder, paste the mod files there.
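If you prefer to script the copy step, here is a minimal Python sketch of the idea: it unpacks a downloaded .zip archive and copies any .scs files it contains into the mod folder. The archive name and folder paths are only example assumptions for illustration, and a .rar archive would still need to be extracted with WinRAR or 7-Zip first.

```python
import shutil
import zipfile
from pathlib import Path

# Example paths -- adjust them to your own system (hypothetical file names).
downloaded_archive = Path.home() / "Downloads" / "example_truck_mod.zip"
extract_dir = Path.home() / "Downloads" / "example_truck_mod_unpacked"
mod_folder = Path.home() / "Documents" / "American Truck Simulator" / "mod"

# Make sure the game's mod folder exists.
mod_folder.mkdir(parents=True, exist_ok=True)

# Unpack the .zip archive (a .rar archive needs an external extractor instead).
with zipfile.ZipFile(downloaded_archive) as archive:
    archive.extractall(extract_dir)

# Copy every .scs file from the unpacked archive into the mod folder.
for scs_file in extract_dir.rglob("*.scs"):
    shutil.copy2(scs_file, mod_folder / scs_file.name)
    print(f"Installed {scs_file.name} into {mod_folder}")
```

After the files are in place, you still enable the mod from the in-game Mod Manager, as described in the next step.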

-

Activating ATS Mods

-

The final step is to activate the mod that you installed. You need to launch your game and go to the Mod Manager menu. You will see a list of all the mods that you have in your game folder. You need to select the mod that you want to use and click on Enable. You can also change the order of the mods by dragging and dropping them. The order may affect how the mods work together, so you should follow the instructions of the mod creators.

-

Once you activate the mod, you need to confirm your changes and restart your game. You will see a message that says "Changes require game restart". Click on OK and wait for your game to load again. You can then enjoy your game with the mod that you installed.

-

Conclusion

-

ATS mods are a great way to enhance your gaming experience with American Truck Simulator. They can add new trucks, trailers, maps, sounds, and other features to your game. They can also make your game more customized, realistic, varied, and fun. However, they can also have some drawbacks, such as compatibility, quality, and safety issues. Therefore, you should always be careful and responsible when using ATS mods.

-

We hope this article has helped you understand what are ATS mods, why you should use them, and how to install them. If you have any questions or comments, please feel free to share them below. Happy trucking!

-

FAQs

-

Here are some frequently asked questions about ATS mods:

-

Where can I find more ATS mods?

-

You can find more ATS mods on various websites or platforms that offer them. Some of the most popular ones are Steam Workshop, American Truck Simulator Mods, ATS Mods Studio, and ModLand. You can also search on Google or YouTube for more sources and recommendations.

-

How can I uninstall ATS mods?

-

You can uninstall ATS mods by following the same steps as installing them, but in reverse order. You need to go to the Mod Manager menu in your game and disable the mod that you want to uninstall. Then, you need to go to the mod folder in your Documents and delete the mod file from it. Finally, you need to restart your game and confirm your changes.

-

How can I update ATS mods?

-

You can update ATS mods by downloading and installing the latest version of the mod from its source. You need to replace the old mod file with the new one in the mod folder and activate it in your Mod Manager menu.

-

How can I create my own ATS mods?

-

You can create your own ATS mods by using some tools and programs that are available for modding. Some of the most common ones are Blender, ZModeler, Photoshop, SCS Workshop Uploader, and SCS Extractor. You can also use some tutorials and guides that are available online or on YouTube to learn how to mod. However, creating your own ATS mods requires some skills, knowledge, and patience, so be prepared to spend some time and effort on it.

-

Are ATS mods legal?

-

ATS mods are legal as long as they do not violate the terms and conditions of the game or the mod source. You should always respect the rights and credits of the mod creators and users. You should not use or distribute any mod that contains illegal or inappropriate content, such as piracy, plagiarism, nudity, violence, or hate speech. You should also not use or distribute any mod that harms or exploits the game, the mod source, or other players.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Free Download Epic Conquest MOD APK - Unlimited Resources and Fun.md b/spaces/1phancelerku/anime-remove-background/Free Download Epic Conquest MOD APK - Unlimited Resources and Fun.md deleted file mode 100644 index 30b95a7ffb74121bbac1a8d258254087598f3095..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Free Download Epic Conquest MOD APK - Unlimited Resources and Fun.md +++ /dev/null @@ -1,97 +0,0 @@ - -

Free Download Epic Conquest Mod Apk: A Guide for RPG Lovers

-

If you are a fan of action RPG games and anime-style storytelling, you might want to check out Epic Conquest. It is a classic single-player RPG with a special touch in its combat and story, giving you an experience that's hard to find in other free offline RPGs. And if you want to enjoy the game without spending any money or watching any ads, you might want to try Epic Conquest Mod Apk. In this article, we will tell you what Epic Conquest is, what Epic Conquest Mod Apk is, how to download and install it, and some tips and tricks for playing it.

-

What is Epic Conquest?

-

Epic Conquest is a game created by a small indie team of 4 with a burning passion and love for action RPG games and anime. It is also the sequel to their previous mobile game (with the same title), which captivated millions of players. But don't be afraid to jump right into this game, as it has a different story and characters from the first one.

-

free download epic conquest mod apk


Download ★★★ https://jinyurl.com/2uNLd2



-

A classic single-player action RPG with a beautiful story and amazing hack and slash action

-

Epic Conquest has a fantasy romance story that will not disappoint you. It has visual novel style dialogue with character expressions, beautiful CG illustrations, and an epic ending. You can choose between four playable characters with totally different playstyles. You can also customize your character's build by distributing your stats, skills, masteries, and equipment. The combat system is intense and strategic. You have to learn the enemies' behavior and find the best time to strike. You can also use various skills and perks to enhance your performance. The game has four levels of difficulty for you to challenge yourself.

-

A game created by a small indie team of 4 with passion and love for RPG games and anime

-

Epic Conquest is a game that shows how much effort and love the developers have for RPG games and anime. They have created a game that is not pay to win, and can be played offline without any internet connection. They have also added many features and content to the game, such as costumes, side stories, mini games, and more. They have also listened to the feedback and suggestions of the players, and have improved the game accordingly. They have also provided regular updates and bug fixes to the game, making it more stable and enjoyable.

-

A game that is not pay to win and can be played offline

-

Epic Conquest is a game that respects your time and money. You don't have to spend any real money to progress in the game. You can earn everything by playing the game and completing quests, achievements, and challenges. You also don't have to watch any ads to get rewards or bonuses. The game is completely ad-free, unless you want to support the developers by watching optional ads. You can also play the game offline without any internet connection. You can enjoy the game anytime and anywhere you want.

-

epic conquest mod apk unlimited money
-epic conquest mod apk latest version
-epic conquest mod apk offline
-epic conquest mod apk android 1
-epic conquest mod apk rexdl
-epic conquest mod apk revdl
-epic conquest mod apk happymod
-epic conquest mod apk no root
-epic conquest mod apk unlimited ruby
-epic conquest mod apk unlimited skill points
-epic conquest mod apk unlimited gold
-epic conquest mod apk unlimited gems
-epic conquest mod apk unlimited everything
-epic conquest mod apk unlocked all
-epic conquest mod apk god mode
-epic conquest mod apk high damage
-epic conquest mod apk mega mod
-epic conquest mod apk premium
-epic conquest mod apk pro
-epic conquest mod apk full version
-epic conquest mod apk free shopping
-epic conquest mod apk free purchase
-epic conquest mod apk free upgrade
-epic conquest mod apk free craft
-epic conquest mod apk free items
-epic conquest mod apk free download for android
-free download game epic conquest mod apk
-free download of epic conquest mod apk
-how to download epic conquest mod apk for free
-where to download epic conquest mod apk for free
-download link for epic conquest mod apk free
-direct download link for epic conquest mod apk free
-fast download link for epic conquest mod apk free
-best site to download epic conquest mod apk free
-best app to download epic conquest mod apk free
-best way to download epic conquest mod apk free
-easy way to download epic conquest mod apk free
-safe way to download epic conquest mod apk free
-secure way to download epic conquest mod apk free
-virus-free way to download epic conquest mod apk free
-malware-free way to download epic conquest mod apk free
-ad-free way to download epic conquest mod apk free
-no survey way to download epic conquest mod apk free
-no verification way to download epic conquest mod apk free
-no password way to download epic conquest mod apk free
-no registration way to download epic conquest mod apk free
-no subscription way to download epic conquest mod apk free
-no payment way to download epic conquest mod apk free

-

What is Epic Conquest Mod Apk?

-

Epic Conquest Mod Apk is a modified version of the original game that gives you unlimited money and other benefits. It is a way to enjoy the game without spending real money or watching ads. It is also a way to unlock all the features, costumes, and characters in the game.

-

A modified version of the original game that gives you unlimited money and other benefits

-

Epic Conquest Mod Apk is a file that you can download and install on your device. It will replace the original game with a modified one that has some changes in the code. One of the changes is that you will get unlimited money in the game. You can use this money to buy anything you want in the game, such as items, equipment, skills, masteries, costumes, and more. You will also get other benefits, such as increased damage, defense, speed, and health. You will also be able to access all the premium features in the game, such as cloud save, no cooldowns, no ads, and more.

-

A way to enjoy the game without spending real money or watching ads

-

Epic Conquest Mod Apk is a way to enjoy the game without spending real money or watching ads. You don't have to worry about running out of money or resources in the game. You don't have to watch any ads to get rewards or bonuses. You don't have to wait for anything in the game. You can play the game as much as you want, without any limitations or restrictions.

-

A way to unlock all the features, costumes, and characters in the game

-

Epic Conquest Mod Apk is a way to unlock all the features, costumes, and characters in the game. You don't have to complete any quests, achievements, or challenges to unlock them. You don't have to spend any money or time to unlock them. You can access them from the start of the game. You can choose any character you want, with any costume you want. You can also switch between characters anytime you want.

-

How to download and install Epic Conquest Mod Apk?

-

Downloading and installing Epic Conquest Mod Apk is easy and safe if you follow these steps:

-

Find a reliable source that offers the latest version of the mod apk file

-

The first step is to find a reliable source that offers the latest version of the mod apk file. There are many websites that claim to provide mod apk files for various games, but not all of them are trustworthy or updated. Some of them may contain viruses or malware that can harm your device or steal your data. Some of them may not work properly or may cause errors or crashes in the game. Therefore, you need to be careful when choosing a source for downloading Epic Conquest Mod Apk.

-

One of the sources that we recommend is [Epic Conquest Mod Apk]. This website provides mod apk files for various games, including Epic Conquest. It has a simple and user-friendly interface that allows you to download might want to choose Leon. You can also switch between characters anytime you want, but you have to start from the beginning of the game with each character.

-

Upgrade your stats, skills, masteries, and equipment to match your build

-

Epic Conquest has a complex and flexible system that allows you to customize your character's build. You can upgrade your stats, skills, masteries, and equipment to match your build. Stats are the basic attributes of your character, such as strength, intelligence, agility, and vitality. You can increase them by leveling up or using stat points. Skills are the special abilities of your character, such as sword slash, fireball, shadow strike, and holy light. You can unlock and upgrade them by using skill points. Masteries are the passive bonuses of your character, such as critical chance, magic resistance, dodge rate, and healing power. You can unlock and upgrade them by using mastery points. Equipment are the items that you wear or use in the game, such as weapons, armor, accessories, and consumables. You can buy or find them in the game world.

-

You should upgrade your stats, skills, masteries, and equipment to match your build. For example, if you want to be a damage dealer, you might want to focus on increasing your strength or intelligence, depending on your character. You might also want to unlock and upgrade skills that deal high damage or have low cooldowns. You might also want to unlock and upgrade masteries that increase your critical chance or damage. You might also want to equip weapons and armor that have high attack or magic power.

-

Learn the enemies' behavior and find the best time to strike

-

Epic Conquest has a variety of enemies that have different behavior and patterns. Some of them are aggressive and will chase you down. Some of them are defensive and will block or dodge your attacks. Some of them are ranged and will shoot you from afar. Some of them are melee and will try to hit you up close. Some of them have special abilities or attacks that can stun you, poison you, or knock you back.

-

You should learn the enemies' behavior and find the best time to strike. For example, if you are facing an aggressive enemy, you might want to wait for them to attack first and then counterattack when they are vulnerable. If you are facing a defensive enemy, you might want to use skills that can break their guard or stun them. If you are facing a ranged enemy, you might want to close the distance or use skills that can reach them. If you are facing a melee enemy, you might want to keep your distance or use skills that can knock them back.

-

Explore the world map and complete quests, achievements, and challenges

-

Epic Conquest has a vast world map that is full of secrets and surprises. You can explore different areas and regions in the game world, such as forests, caves, deserts, cities, and more. You can also find hidden chests, items, enemies, and bosses in the game world. You can also complete quests, achievements, and challenges in the game world. Quests are the main missions that advance the story and reward you with money, items, and experience. Achievements are the optional goals that test your skills and knowledge and reward you with money, items, and mastery points. Challenges are the special modes that add extra difficulty and fun to the game and reward you with money, items, and skill points.

-

You should explore the world map and complete quests, achievements, and challenges in the game world. This will help you to level up your character, improve your build, discover new things, and have more fun.

-

Conclusion

-

Epic Conquest is a great game for RPG fans who love anime-style story and action. It has a beautiful story, amazing combat, and a lot of content to enjoy. Epic Conquest Mod Apk is a convenient way to enjoy the game without spending money or time. It gives you unlimited money and other benefits that make the game easier and more fun. Downloading and installing Epic Conquest Mod Apk is easy and safe if you follow the steps above. If you are looking for a free offline RPG game that will keep you entertained for hours, you should try Epic Conquest Mod Apk.

-

FAQs

-

Is Epic Conquest Mod Apk legal?

-

Epic Conquest Mod Apk is not legal, as it violates the terms and conditions of the original game. It is also not endorsed or supported by the developers of the original game. However, it is unlikely that you will face any legal consequences for using Epic Conquest Mod Apk, as long as you use it for personal use only and do not distribute or share it with others.

-

Is Epic Conquest Mod Apk safe?

-

Epic Conquest Mod Apk is safe if you download it from a reliable source, such as [Epic Conquest Mod Apk]. However, you should always be careful when downloading any mod apk file from the internet, as some of them may contain viruses or malware that can harm your device or steal your data. You should also scan the file with an antivirus or anti-malware program before installing it on your device.

-

Can I play Epic Conquest Mod Apk online?

-

Epic Conquest Mod Apk can be played online, but it is not recommended. The original game does not have any online features or multiplayer modes, so there is no point in playing it online. Moreover, playing Epic Conquest Mod Apk online may expose you to the risk of being banned or blocked by the developers of the original game. Therefore, it is better to play Epic Conquest Mod Apk offline without any internet connection.

-

Can I switch between characters in Epic Conquest Mod Apk?

-

You can switch between characters in Epic Conquest Mod Apk anytime you want. However, you have to start from the beginning of the game with each character. You cannot transfer your progress or items between characters. You can also create multiple save files for each character in Epic Conquest Mod Apk.

-

Can I update Epic Conquest Mod Apk?

-

You can update Epic Conquest Mod Apk if there is a new version available from the source that you downloaded it from. However, you should always backup your save files before updating Epic Conquest Mod Apk, as some updates may cause errors or crashes in the game. You should also check if the new version of Epic Conquest Mod Apk has the same features and benefits as the previous one.

-
-
\ No newline at end of file diff --git a/spaces/232labs/VToonify/vtoonify/model/stylegan/lpips/dist_model.py b/spaces/232labs/VToonify/vtoonify/model/stylegan/lpips/dist_model.py deleted file mode 100644 index d8a14a61ca36f2562e16feb66c9625dd2f5e0469..0000000000000000000000000000000000000000 --- a/spaces/232labs/VToonify/vtoonify/model/stylegan/lpips/dist_model.py +++ /dev/null @@ -1,284 +0,0 @@ - -from __future__ import absolute_import - -import sys -import numpy as np -import torch -from torch import nn -import os -from collections import OrderedDict -from torch.autograd import Variable -import itertools -from model.stylegan.lpips.base_model import BaseModel -from scipy.ndimage import zoom -import fractions -import functools -import skimage.transform -from tqdm import tqdm - -from IPython import embed - -from model.stylegan.lpips import networks_basic as networks -import model.stylegan.lpips as util - -class DistModel(BaseModel): - def name(self): - return self.model_name - - def initialize(self, model='net-lin', net='alex', colorspace='Lab', pnet_rand=False, pnet_tune=False, model_path=None, - use_gpu=True, printNet=False, spatial=False, - is_train=False, lr=.0001, beta1=0.5, version='0.1', gpu_ids=[0]): - ''' - INPUTS - model - ['net-lin'] for linearly calibrated network - ['net'] for off-the-shelf network - ['L2'] for L2 distance in Lab colorspace - ['SSIM'] for ssim in RGB colorspace - net - ['squeeze','alex','vgg'] - model_path - if None, will look in weights/[NET_NAME].pth - colorspace - ['Lab','RGB'] colorspace to use for L2 and SSIM - use_gpu - bool - whether or not to use a GPU - printNet - bool - whether or not to print network architecture out - spatial - bool - whether to output an array containing varying distances across spatial dimensions - spatial_shape - if given, output spatial shape. if None then spatial shape is determined automatically via spatial_factor (see below). - spatial_factor - if given, specifies upsampling factor relative to the largest spatial extent of a convolutional layer. if None then resized to size of input images. - spatial_order - spline order of filter for upsampling in spatial mode, by default 1 (bilinear). 
- is_train - bool - [True] for training mode - lr - float - initial learning rate - beta1 - float - initial momentum term for adam - version - 0.1 for latest, 0.0 was original (with a bug) - gpu_ids - int array - [0] by default, gpus to use - ''' - BaseModel.initialize(self, use_gpu=use_gpu, gpu_ids=gpu_ids) - - self.model = model - self.net = net - self.is_train = is_train - self.spatial = spatial - self.gpu_ids = gpu_ids - self.model_name = '%s [%s]'%(model,net) - - if(self.model == 'net-lin'): # pretrained net + linear layer - self.net = networks.PNetLin(pnet_rand=pnet_rand, pnet_tune=pnet_tune, pnet_type=net, - use_dropout=True, spatial=spatial, version=version, lpips=True) - kw = {} - if not use_gpu: - kw['map_location'] = 'cpu' - if(model_path is None): - import inspect - model_path = os.path.abspath(os.path.join(inspect.getfile(self.initialize), '..', 'weights/v%s/%s.pth'%(version,net))) - - if(not is_train): - print('Loading model from: %s'%model_path) - self.net.load_state_dict(torch.load(model_path, **kw), strict=False) - - elif(self.model=='net'): # pretrained network - self.net = networks.PNetLin(pnet_rand=pnet_rand, pnet_type=net, lpips=False) - elif(self.model in ['L2','l2']): - self.net = networks.L2(use_gpu=use_gpu,colorspace=colorspace) # not really a network, only for testing - self.model_name = 'L2' - elif(self.model in ['DSSIM','dssim','SSIM','ssim']): - self.net = networks.DSSIM(use_gpu=use_gpu,colorspace=colorspace) - self.model_name = 'SSIM' - else: - raise ValueError("Model [%s] not recognized." % self.model) - - self.parameters = list(self.net.parameters()) - - if self.is_train: # training mode - # extra network on top to go from distances (d0,d1) => predicted human judgment (h*) - self.rankLoss = networks.BCERankingLoss() - self.parameters += list(self.rankLoss.net.parameters()) - self.lr = lr - self.old_lr = lr - self.optimizer_net = torch.optim.Adam(self.parameters, lr=lr, betas=(beta1, 0.999)) - else: # test mode - self.net.eval() - - if(use_gpu): - self.net.to(gpu_ids[0]) - self.net = torch.nn.DataParallel(self.net, device_ids=gpu_ids) - if(self.is_train): - self.rankLoss = self.rankLoss.to(device=gpu_ids[0]) # just put this on GPU0 - - if(printNet): - print('---------- Networks initialized -------------') - networks.print_network(self.net) - print('-----------------------------------------------') - - def forward(self, in0, in1, retPerLayer=False): - ''' Function computes the distance between image patches in0 and in1 - INPUTS - in0, in1 - torch.Tensor object of shape Nx3xXxY - image patch scaled to [-1,1] - OUTPUT - computed distances between in0 and in1 - ''' - - return self.net.forward(in0, in1, retPerLayer=retPerLayer) - - # ***** TRAINING FUNCTIONS ***** - def optimize_parameters(self): - self.forward_train() - self.optimizer_net.zero_grad() - self.backward_train() - self.optimizer_net.step() - self.clamp_weights() - - def clamp_weights(self): - for module in self.net.modules(): - if(hasattr(module, 'weight') and module.kernel_size==(1,1)): - module.weight.data = torch.clamp(module.weight.data,min=0) - - def set_input(self, data): - self.input_ref = data['ref'] - self.input_p0 = data['p0'] - self.input_p1 = data['p1'] - self.input_judge = data['judge'] - - if(self.use_gpu): - self.input_ref = self.input_ref.to(device=self.gpu_ids[0]) - self.input_p0 = self.input_p0.to(device=self.gpu_ids[0]) - self.input_p1 = self.input_p1.to(device=self.gpu_ids[0]) - self.input_judge = self.input_judge.to(device=self.gpu_ids[0]) - - self.var_ref = 
Variable(self.input_ref,requires_grad=True) - self.var_p0 = Variable(self.input_p0,requires_grad=True) - self.var_p1 = Variable(self.input_p1,requires_grad=True) - - def forward_train(self): # run forward pass - # print(self.net.module.scaling_layer.shift) - # print(torch.norm(self.net.module.net.slice1[0].weight).item(), torch.norm(self.net.module.lin0.model[1].weight).item()) - - self.d0 = self.forward(self.var_ref, self.var_p0) - self.d1 = self.forward(self.var_ref, self.var_p1) - self.acc_r = self.compute_accuracy(self.d0,self.d1,self.input_judge) - - self.var_judge = Variable(1.*self.input_judge).view(self.d0.size()) - - self.loss_total = self.rankLoss.forward(self.d0, self.d1, self.var_judge*2.-1.) - - return self.loss_total - - def backward_train(self): - torch.mean(self.loss_total).backward() - - def compute_accuracy(self,d0,d1,judge): - ''' d0, d1 are Variables, judge is a Tensor ''' - d1_lt_d0 = (d1 %f' % (type,self.old_lr, lr)) - self.old_lr = lr - -def score_2afc_dataset(data_loader, func, name=''): - ''' Function computes Two Alternative Forced Choice (2AFC) score using - distance function 'func' in dataset 'data_loader' - INPUTS - data_loader - CustomDatasetDataLoader object - contains a TwoAFCDataset inside - func - callable distance function - calling d=func(in0,in1) should take 2 - pytorch tensors with shape Nx3xXxY, and return numpy array of length N - OUTPUTS - [0] - 2AFC score in [0,1], fraction of time func agrees with human evaluators - [1] - dictionary with following elements - d0s,d1s - N arrays containing distances between reference patch to perturbed patches - gts - N array in [0,1], preferred patch selected by human evaluators - (closer to "0" for left patch p0, "1" for right patch p1, - "0.6" means 60pct people preferred right patch, 40pct preferred left) - scores - N array in [0,1], corresponding to what percentage function agreed with humans - CONSTS - N - number of test triplets in data_loader - ''' - - d0s = [] - d1s = [] - gts = [] - - for data in tqdm(data_loader.load_data(), desc=name): - d0s+=func(data['ref'],data['p0']).data.cpu().numpy().flatten().tolist() - d1s+=func(data['ref'],data['p1']).data.cpu().numpy().flatten().tolist() - gts+=data['judge'].cpu().numpy().flatten().tolist() - - d0s = np.array(d0s) - d1s = np.array(d1s) - gts = np.array(gts) - scores = (d0s 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def compute_f0(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - 
pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0 - - def compute_f0_uv(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0, uv diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/extract_f0_print.py b/spaces/AI-Hobbyist/Hoyo-RVC/extract_f0_print.py deleted file mode 100644 index 8efe8a4345b942bd93306c3491588ed7edcb6c80..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/extract_f0_print.py +++ /dev/null @@ -1,160 +0,0 @@ -import os, traceback, sys, parselmouth - -now_dir = os.getcwd() -sys.path.append(now_dir) -from my_utils import load_audio -import pyworld -from scipy.io import wavfile -import numpy as np, logging - -logging.getLogger("numba").setLevel(logging.WARNING) -from multiprocessing import Process - -exp_dir = sys.argv[1] -f = open("%s/extract_f0_feature.log" % exp_dir, "a+") - - -def printt(strr): - print(strr) - f.write("%s\n" % strr) - f.flush() - - -n_p = int(sys.argv[2]) -f0method = sys.argv[3] - - -class FeatureInput(object): - def __init__(self, samplerate=16000, hop_size=160): - self.fs = samplerate - self.hop = hop_size - - self.f0_bin = 256 - self.f0_max = 1100.0 - self.f0_min = 50.0 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - - def compute_f0(self, path, f0_method): - x = load_audio(path, self.fs) - p_len = x.shape[0] // self.hop - if f0_method == "pm": - time_step = 160 / 16000 * 1000 - f0_min = 50 - f0_max = 1100 - f0 = ( - parselmouth.Sound(x, self.fs) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.fs, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop / self.fs, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.fs) - elif f0_method == "dio": - f0, t = pyworld.dio( - x.astype(np.double), - fs=self.fs, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop / self.fs, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.fs) - return f0 - - def coarse_f0(self, f0): - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * ( - self.f0_bin - 2 - ) / (self.f0_mel_max - self.f0_mel_min) + 1 - - # use 0 or 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1 - f0_coarse = np.rint(f0_mel).astype(int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, ( - f0_coarse.max(), - f0_coarse.min(), - ) - return f0_coarse - - def go(self, paths, f0_method): - if len(paths) == 0: - 
printt("no-f0-todo") - else: - printt("todo-f0-%s" % len(paths)) - n = max(len(paths) // 5, 1) # 每个进程最多打印5条 - for idx, (inp_path, opt_path1, opt_path2) in enumerate(paths): - try: - if idx % n == 0: - printt("f0ing,now-%s,all-%s,-%s" % (idx, len(paths), inp_path)) - if ( - os.path.exists(opt_path1 + ".npy") == True - and os.path.exists(opt_path2 + ".npy") == True - ): - continue - featur_pit = self.compute_f0(inp_path, f0_method) - np.save( - opt_path2, - featur_pit, - allow_pickle=False, - ) # nsf - coarse_pit = self.coarse_f0(featur_pit) - np.save( - opt_path1, - coarse_pit, - allow_pickle=False, - ) # ori - except: - printt("f0fail-%s-%s-%s" % (idx, inp_path, traceback.format_exc())) - - -if __name__ == "__main__": - # exp_dir=r"E:\codes\py39\dataset\mi-test" - # n_p=16 - # f = open("%s/log_extract_f0.log"%exp_dir, "w") - printt(sys.argv) - featureInput = FeatureInput() - paths = [] - inp_root = "%s/1_16k_wavs" % (exp_dir) - opt_root1 = "%s/2a_f0" % (exp_dir) - opt_root2 = "%s/2b-f0nsf" % (exp_dir) - - os.makedirs(opt_root1, exist_ok=True) - os.makedirs(opt_root2, exist_ok=True) - for name in sorted(list(os.listdir(inp_root))): - inp_path = "%s/%s" % (inp_root, name) - if "spec" in inp_path: - continue - opt_path1 = "%s/%s" % (opt_root1, name) - opt_path2 = "%s/%s" % (opt_root2, name) - paths.append([inp_path, opt_path1, opt_path2]) - - ps = [] - for i in range(n_p): - p = Process( - target=featureInput.go, - args=( - paths[i::n_p], - f0method, - ), - ) - ps.append(p) - p.start() - for i in range(n_p): - ps[i].join() diff --git a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/matlab_cp2tform.py b/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/matlab_cp2tform.py deleted file mode 100644 index 025b18ec2e64472bd4c0c636f9ae061526bdc8cd..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/matlab_cp2tform.py +++ /dev/null @@ -1,350 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Created on Tue Jul 11 06:54:28 2017 - -@author: zhaoyafei -""" - -import numpy as np -from numpy.linalg import inv, norm, lstsq -from numpy.linalg import matrix_rank as rank - - -class MatlabCp2tormException(Exception): - def __str__(self): - return 'In File {}:{}'.format( - __file__, super.__str__(self)) - - -def tformfwd(trans, uv): - """ - Function: - ---------- - apply affine transform 'trans' to uv - - Parameters: - ---------- - @trans: 3x3 np.array - transform matrix - @uv: Kx2 np.array - each row is a pair of coordinates (x, y) - - Returns: - ---------- - @xy: Kx2 np.array - each row is a pair of transformed coordinates (x, y) - """ - uv = np.hstack(( - uv, np.ones((uv.shape[0], 1)) - )) - xy = np.dot(uv, trans) - xy = xy[:, 0:-1] - return xy - - -def tforminv(trans, uv): - """ - Function: - ---------- - apply the inverse of affine transform 'trans' to uv - - Parameters: - ---------- - @trans: 3x3 np.array - transform matrix - @uv: Kx2 np.array - each row is a pair of coordinates (x, y) - - Returns: - ---------- - @xy: Kx2 np.array - each row is a pair of inverse-transformed coordinates (x, y) - """ - Tinv = inv(trans) - xy = tformfwd(Tinv, uv) - return xy - - -def findNonreflectiveSimilarity(uv, xy, options=None): - options = {'K': 2} - - K = options['K'] - M = xy.shape[0] - x = xy[:, 0].reshape((-1, 1)) # use reshape to keep a column vector - y = xy[:, 1].reshape((-1, 1)) # use reshape to keep a column vector - # print('--->x, y:\n', x, y - - tmp1 = np.hstack((x, y, np.ones((M, 1)), np.zeros((M, 1)))) - tmp2 = np.hstack((y, -x, np.zeros((M, 1)), 
np.ones((M, 1)))) - X = np.vstack((tmp1, tmp2)) - # print('--->X.shape: ', X.shape - # print('X:\n', X - - u = uv[:, 0].reshape((-1, 1)) # use reshape to keep a column vector - v = uv[:, 1].reshape((-1, 1)) # use reshape to keep a column vector - U = np.vstack((u, v)) - # print('--->U.shape: ', U.shape - # print('U:\n', U - - # We know that X * r = U - if rank(X) >= 2 * K: - r, _, _, _ = lstsq(X, U, rcond=None) # Make sure this is what I want - r = np.squeeze(r) - else: - raise Exception('cp2tform:twoUniquePointsReq') - - # print('--->r:\n', r - - sc = r[0] - ss = r[1] - tx = r[2] - ty = r[3] - - Tinv = np.array([ - [sc, -ss, 0], - [ss, sc, 0], - [tx, ty, 1] - ]) - - # print('--->Tinv:\n', Tinv - - T = inv(Tinv) - # print('--->T:\n', T - - T[:, 2] = np.array([0, 0, 1]) - - return T, Tinv - - -def findSimilarity(uv, xy, options=None): - options = {'K': 2} - - # uv = np.array(uv) - # xy = np.array(xy) - - # Solve for trans1 - trans1, trans1_inv = findNonreflectiveSimilarity(uv, xy, options) - - # Solve for trans2 - - # manually reflect the xy data across the Y-axis - xyR = xy - xyR[:, 0] = -1 * xyR[:, 0] - - trans2r, trans2r_inv = findNonreflectiveSimilarity(uv, xyR, options) - - # manually reflect the tform to undo the reflection done on xyR - TreflectY = np.array([ - [-1, 0, 0], - [0, 1, 0], - [0, 0, 1] - ]) - - trans2 = np.dot(trans2r, TreflectY) - - # Figure out if trans1 or trans2 is better - xy1 = tformfwd(trans1, uv) - norm1 = norm(xy1 - xy) - - xy2 = tformfwd(trans2, uv) - norm2 = norm(xy2 - xy) - - if norm1 <= norm2: - return trans1, trans1_inv - else: - trans2_inv = inv(trans2) - return trans2, trans2_inv - - -def get_similarity_transform(src_pts, dst_pts, reflective=True): - """ - Function: - ---------- - Find Similarity Transform Matrix 'trans': - u = src_pts[:, 0] - v = src_pts[:, 1] - x = dst_pts[:, 0] - y = dst_pts[:, 1] - [x, y, 1] = [u, v, 1] * trans - - Parameters: - ---------- - @src_pts: Kx2 np.array - source points, each row is a pair of coordinates (x, y) - @dst_pts: Kx2 np.array - destination points, each row is a pair of transformed - coordinates (x, y) - @reflective: True or False - if True: - use reflective similarity transform - else: - use non-reflective similarity transform - - Returns: - ---------- - @trans: 3x3 np.array - transform matrix from uv to xy - trans_inv: 3x3 np.array - inverse of trans, transform matrix from xy to uv - """ - - if reflective: - trans, trans_inv = findSimilarity(src_pts, dst_pts) - else: - trans, trans_inv = findNonreflectiveSimilarity(src_pts, dst_pts) - - return trans, trans_inv - - -def cvt_tform_mat_for_cv2(trans): - """ - Function: - ---------- - Convert Transform Matrix 'trans' into 'cv2_trans' which could be - directly used by cv2.warpAffine(): - u = src_pts[:, 0] - v = src_pts[:, 1] - x = dst_pts[:, 0] - y = dst_pts[:, 1] - [x, y].T = cv_trans * [u, v, 1].T - - Parameters: - ---------- - @trans: 3x3 np.array - transform matrix from uv to xy - - Returns: - ---------- - @cv2_trans: 2x3 np.array - transform matrix from src_pts to dst_pts, could be directly used - for cv2.warpAffine() - """ - cv2_trans = trans[:, 0:2].T - - return cv2_trans - - -def get_similarity_transform_for_cv2(src_pts, dst_pts, reflective=True): - """ - Function: - ---------- - Find Similarity Transform Matrix 'cv2_trans' which could be - directly used by cv2.warpAffine(): - u = src_pts[:, 0] - v = src_pts[:, 1] - x = dst_pts[:, 0] - y = dst_pts[:, 1] - [x, y].T = cv_trans * [u, v, 1].T - - Parameters: - ---------- - @src_pts: Kx2 np.array - source points, 
each row is a pair of coordinates (x, y) - @dst_pts: Kx2 np.array - destination points, each row is a pair of transformed - coordinates (x, y) - reflective: True or False - if True: - use reflective similarity transform - else: - use non-reflective similarity transform - - Returns: - ---------- - @cv2_trans: 2x3 np.array - transform matrix from src_pts to dst_pts, could be directly used - for cv2.warpAffine() - """ - trans, trans_inv = get_similarity_transform(src_pts, dst_pts, reflective) - cv2_trans = cvt_tform_mat_for_cv2(trans) - - return cv2_trans - - -if __name__ == '__main__': - """ - u = [0, 6, -2] - v = [0, 3, 5] - x = [-1, 0, 4] - y = [-1, -10, 4] - - # In Matlab, run: - # - # uv = [u'; v']; - # xy = [x'; y']; - # tform_sim=cp2tform(uv,xy,'similarity'); - # - # trans = tform_sim.tdata.T - # ans = - # -0.0764 -1.6190 0 - # 1.6190 -0.0764 0 - # -3.2156 0.0290 1.0000 - # trans_inv = tform_sim.tdata.Tinv - # ans = - # - # -0.0291 0.6163 0 - # -0.6163 -0.0291 0 - # -0.0756 1.9826 1.0000 - # xy_m=tformfwd(tform_sim, u,v) - # - # xy_m = - # - # -3.2156 0.0290 - # 1.1833 -9.9143 - # 5.0323 2.8853 - # uv_m=tforminv(tform_sim, x,y) - # - # uv_m = - # - # 0.5698 1.3953 - # 6.0872 2.2733 - # -2.6570 4.3314 - """ - u = [0, 6, -2] - v = [0, 3, 5] - x = [-1, 0, 4] - y = [-1, -10, 4] - - uv = np.array((u, v)).T - xy = np.array((x, y)).T - - print('\n--->uv:') - print(uv) - print('\n--->xy:') - print(xy) - - trans, trans_inv = get_similarity_transform(uv, xy) - - print('\n--->trans matrix:') - print(trans) - - print('\n--->trans_inv matrix:') - print(trans_inv) - - print('\n---> apply transform to uv') - print('\nxy_m = uv_augmented * trans') - uv_aug = np.hstack(( - uv, np.ones((uv.shape[0], 1)) - )) - xy_m = np.dot(uv_aug, trans) - print(xy_m) - - print('\nxy_m = tformfwd(trans, uv)') - xy_m = tformfwd(trans, uv) - print(xy_m) - - print('\n---> apply inverse transform to xy') - print('\nuv_m = xy_augmented * trans_inv') - xy_aug = np.hstack(( - xy, np.ones((xy.shape[0], 1)) - )) - uv_m = np.dot(xy_aug, trans_inv) - print(uv_m) - - print('\nuv_m = tformfwd(trans_inv, xy)') - uv_m = tformfwd(trans_inv, xy) - print(uv_m) - - uv_m = tforminv(trans, xy) - print('\nuv_m = tforminv(trans, xy)') - print(uv_m) diff --git a/spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/test_egl.py b/spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/test_egl.py deleted file mode 100644 index e2f4bef39e33c2794e6837b5a1bb127d8d4dba06..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/test_egl.py +++ /dev/null @@ -1,16 +0,0 @@ -# from pyrender.platforms import egl - - -def tmp_test_default_device(): - egl.get_default_device() - - -def tmp_test_query_device(): - devices = egl.query_devices() - assert len(devices) > 0 - - -def tmp_test_init_context(): - device = egl.query_devices()[0] - platform = egl.EGLPlatform(128, 128, device=device) - platform.init_context() diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/pitch_utils.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/pitch_utils.py deleted file mode 100644 index f7fd166abd3a03bac5909e498669b482447435cf..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/pitch_utils.py +++ /dev/null @@ -1,76 +0,0 @@ -######### -# world -########## -import librosa -import numpy as np -import torch - -gamma = 0 -mcepInput = 3 # 0 for dB, 3 for magnitude -alpha = 0.45 -en_floor = 10 ** (-80 / 20) -FFT_SIZE = 2048 - - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 
1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def norm_f0(f0, uv, hparams): - is_torch = isinstance(f0, torch.Tensor) - if hparams['pitch_norm'] == 'standard': - f0 = (f0 - hparams['f0_mean']) / hparams['f0_std'] - if hparams['pitch_norm'] == 'log': - f0 = torch.log2(f0) if is_torch else np.log2(f0) - if uv is not None and hparams['use_uv']: - f0[uv > 0] = 0 - return f0 - - -def norm_interp_f0(f0, hparams): - is_torch = isinstance(f0, torch.Tensor) - if is_torch: - device = f0.device - f0 = f0.data.cpu().numpy() - uv = f0 == 0 - f0 = norm_f0(f0, uv, hparams) - if sum(uv) == len(f0): - f0[uv] = 0 - elif sum(uv) > 0: - f0[uv] = np.interp(np.where(uv)[0], np.where(~uv)[0], f0[~uv]) - uv = torch.FloatTensor(uv) - f0 = torch.FloatTensor(f0) - if is_torch: - f0 = f0.to(device) - return f0, uv - - -def denorm_f0(f0, uv, hparams, pitch_padding=None, min=None, max=None): - if hparams['pitch_norm'] == 'standard': - f0 = f0 * hparams['f0_std'] + hparams['f0_mean'] - if hparams['pitch_norm'] == 'log': - f0 = 2 ** f0 - if min is not None: - f0 = f0.clamp(min=min) - if max is not None: - f0 = f0.clamp(max=max) - if uv is not None and hparams['use_uv']: - f0[uv > 0] = 0 - if pitch_padding is not None: - f0[pitch_padding] = 0 - return f0 diff --git a/spaces/AILab-CVC/SEED-LLaMA/README.md b/spaces/AILab-CVC/SEED-LLaMA/README.md deleted file mode 100644 index fd3b74bc1025c649b0bd1b6e7a99e800ea9bf2ea..0000000000000000000000000000000000000000 --- a/spaces/AILab-CVC/SEED-LLaMA/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: SEED LLaMA -emoji: 🌖 -colorFrom: yellow -colorTo: purple -sdk: docker -pinned: false -license: llama2 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ajit025/Text_to_Image_conversion/README.md b/spaces/Ajit025/Text_to_Image_conversion/README.md deleted file mode 100644 index 92dbc6cc70b86bb93c10abc960fd769e115f22f9..0000000000000000000000000000000000000000 --- a/spaces/Ajit025/Text_to_Image_conversion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text To Image Conversion -emoji: 🌍 -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AkitoP/umamusume_bert_vits2/bert_gen.py b/spaces/AkitoP/umamusume_bert_vits2/bert_gen.py deleted file mode 100644 index 997599dcc7a8a2cc86b6fa60b0c83a0cbb3ad5c2..0000000000000000000000000000000000000000 --- a/spaces/AkitoP/umamusume_bert_vits2/bert_gen.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from multiprocessing import Pool -import commons -import utils -from tqdm import tqdm -from text import cleaned_text_to_sequence, get_bert -import argparse -import torch.multiprocessing as mp - -import os -os.environ['http_proxy'] = 'http://localhost:11796' -os.environ['https_proxy'] = 'http://localhost:11796' -def process_line(line): - rank = 
mp.current_process()._identity - rank = rank[0] if len(rank) > 0 else 0 - if torch.cuda.is_available(): - gpu_id = rank % torch.cuda.device_count() - device = torch.device(f"cuda:{gpu_id}") - wav_path, _, language_str, text, phones, tone, word2ph = line.strip().split("|") - phone = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - - bert_path = wav_path.replace(".wav", ".bert.pt") - - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except Exception: - bert = get_bert(text, word2ph, language_str, device) - assert bert.shape[-1] == len(phone) - torch.save(bert, bert_path) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("-c", "--config", type=str, default="configs/config.json") - parser.add_argument("--num_processes", type=int, default=2) - args = parser.parse_args() - config_path = args.config - hps = utils.get_hparams_from_file(config_path) - lines = [] - with open(hps.data.training_files, encoding="utf-8") as f: - lines.extend(f.readlines()) - - with open(hps.data.validation_files, encoding="utf-8") as f: - lines.extend(f.readlines()) - - num_processes = args.num_processes - with Pool(processes=num_processes) as pool: - for _ in tqdm(pool.imap_unordered(process_line, lines), total=len(lines)): - pass diff --git a/spaces/AlanMars/QYL-AI-Space/run_Windows.bat b/spaces/AlanMars/QYL-AI-Space/run_Windows.bat deleted file mode 100644 index 4c18f9ccaeea0af972301ffdf48778641221f76d..0000000000000000000000000000000000000000 --- a/spaces/AlanMars/QYL-AI-Space/run_Windows.bat +++ /dev/null @@ -1,5 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" diff --git a/spaces/Allakhazam/anythingV4/README.md b/spaces/Allakhazam/anythingV4/README.md deleted file mode 100644 index 463db2100f0e4c2b7e9f255558bb76ada469b119..0000000000000000000000000000000000000000 --- a/spaces/Allakhazam/anythingV4/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AnythingV4 -emoji: 🔥 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: artistic-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/pndm/pipeline_pndm.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/pndm/pipeline_pndm.py deleted file mode 100644 index 4add91fd1a6972f2c810352f357e5ff70a841433..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/pndm/pipeline_pndm.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from typing import List, Optional, Tuple, Union - -import torch - -from ...models import UNet2DModel -from ...schedulers import PNDMScheduler -from ...utils import randn_tensor -from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput - - -class PNDMPipeline(DiffusionPipeline): - r""" - Pipeline for unconditional image generation. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). - - Parameters: - unet ([`UNet2DModel`]): - A `UNet2DModel` to denoise the encoded image latents. - scheduler ([`PNDMScheduler`]): - A `PNDMScheduler` to be used in combination with `unet` to denoise the encoded image. - """ - - unet: UNet2DModel - scheduler: PNDMScheduler - - def __init__(self, unet: UNet2DModel, scheduler: PNDMScheduler): - super().__init__() - - scheduler = PNDMScheduler.from_config(scheduler.config) - - self.register_modules(unet=unet, scheduler=scheduler) - - @torch.no_grad() - def __call__( - self, - batch_size: int = 1, - num_inference_steps: int = 50, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - **kwargs, - ) -> Union[ImagePipelineOutput, Tuple]: - r""" - The call function to the pipeline for generation. - - Args: - batch_size (`int`, `optional`, defaults to 1): - The number of images to generate. - num_inference_steps (`int`, `optional`, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - generator (`torch.Generator`, `optional`): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - output_type (`str`, `optional`, defaults to `"pil"`): - The output format of the generated image. Choose between `PIL.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`ImagePipelineOutput`] instead of a plain tuple. - - Example: - - ```py - >>> from diffusers import PNDMPipeline - - >>> # load model and scheduler - >>> pndm = PNDMPipeline.from_pretrained("google/ddpm-cifar10-32") - - >>> # run pipeline in inference (sample random noise and denoise) - >>> image = pndm().images[0] - - >>> # save image - >>> image.save("pndm_generated_image.png") - ``` - - Returns: - [`~pipelines.ImagePipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`~pipelines.ImagePipelineOutput`] is returned, otherwise a `tuple` is - returned where the first element is a list with the generated images. 
- """ - # For more information on the sampling method you can take a look at Algorithm 2 of - # the official paper: https://arxiv.org/pdf/2202.09778.pdf - - # Sample gaussian noise to begin loop - image = randn_tensor( - (batch_size, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), - generator=generator, - device=self.device, - ) - - self.scheduler.set_timesteps(num_inference_steps) - for t in self.progress_bar(self.scheduler.timesteps): - model_output = self.unet(image, t).sample - - image = self.scheduler.step(model_output, t, image).prev_sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl.py deleted file mode 100644 index 2d251a6586588cc1dd95b7b0ac3068ac10f575ab..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl.py +++ /dev/null @@ -1,691 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import copy -import unittest - -import numpy as np -import torch -from transformers import CLIPTextConfig, CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer - -from diffusers import ( - AutoencoderKL, - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerDiscreteScheduler, - HeunDiscreteScheduler, - StableDiffusionXLImg2ImgPipeline, - StableDiffusionXLPipeline, - UNet2DConditionModel, - UniPCMultistepScheduler, -) -from diffusers.utils import torch_device -from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu - -from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS -from ..test_pipelines_common import PipelineLatentTesterMixin, PipelineTesterMixin - - -enable_full_determinism() - - -class StableDiffusionXLPipelineFastTests(PipelineLatentTesterMixin, PipelineTesterMixin, unittest.TestCase): - pipeline_class = StableDiffusionXLPipeline - params = TEXT_TO_IMAGE_PARAMS - batch_params = TEXT_TO_IMAGE_BATCH_PARAMS - image_params = TEXT_TO_IMAGE_IMAGE_PARAMS - image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS - - def get_dummy_components(self): - torch.manual_seed(0) - unet = UNet2DConditionModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=4, - out_channels=4, - down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), - up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), - # SD2-specific config below - attention_head_dim=(2, 4), - use_linear_projection=True, - addition_embed_type="text_time", - addition_time_embed_dim=8, - transformer_layers_per_block=(1, 2), - projection_class_embeddings_input_dim=80, # 6 * 8 + 32 - cross_attention_dim=64, - ) - scheduler = EulerDiscreteScheduler( - beta_start=0.00085, - beta_end=0.012, - steps_offset=1, - beta_schedule="scaled_linear", - timestep_spacing="leading", - ) - torch.manual_seed(0) - vae = AutoencoderKL( - block_out_channels=[32, 64], - in_channels=3, - out_channels=3, - down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=4, - sample_size=128, - ) - torch.manual_seed(0) - text_encoder_config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - # SD2-specific config below - hidden_act="gelu", - projection_dim=32, - ) - text_encoder = CLIPTextModel(text_encoder_config) - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - text_encoder_2 = CLIPTextModelWithProjection(text_encoder_config) - tokenizer_2 = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - components = { - "unet": unet, - "scheduler": scheduler, - "vae": vae, - "text_encoder": text_encoder, - "tokenizer": tokenizer, - "text_encoder_2": text_encoder_2, - "tokenizer_2": tokenizer_2, - # "safety_checker": None, - # "feature_extractor": None, - } - return components - - def get_dummy_inputs(self, device, seed=0): - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "generator": generator, - "num_inference_steps": 2, - "guidance_scale": 5.0, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_xl_euler(self): - device = "cpu" # ensure determinism for the device-dependent 
torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionXLPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.5873, 0.6128, 0.4797, 0.5122, 0.5674, 0.4639, 0.5227, 0.5149, 0.4747]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_xl_prompt_embeds(self): - components = self.get_dummy_components() - sd_pipe = StableDiffusionXLPipeline(**components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - # forward without prompt embeds - inputs = self.get_dummy_inputs(torch_device) - inputs["prompt"] = 2 * [inputs["prompt"]] - inputs["num_images_per_prompt"] = 2 - - output = sd_pipe(**inputs) - image_slice_1 = output.images[0, -3:, -3:, -1] - - # forward with prompt embeds - inputs = self.get_dummy_inputs(torch_device) - prompt = 2 * [inputs.pop("prompt")] - - ( - prompt_embeds, - negative_prompt_embeds, - pooled_prompt_embeds, - negative_pooled_prompt_embeds, - ) = sd_pipe.encode_prompt(prompt) - - output = sd_pipe( - **inputs, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - pooled_prompt_embeds=pooled_prompt_embeds, - negative_pooled_prompt_embeds=negative_pooled_prompt_embeds, - ) - image_slice_2 = output.images[0, -3:, -3:, -1] - - # make sure that it's equal - assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4 - - def test_stable_diffusion_xl_negative_prompt_embeds(self): - components = self.get_dummy_components() - sd_pipe = StableDiffusionXLPipeline(**components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - # forward without prompt embeds - inputs = self.get_dummy_inputs(torch_device) - negative_prompt = 3 * ["this is a negative prompt"] - inputs["negative_prompt"] = negative_prompt - inputs["prompt"] = 3 * [inputs["prompt"]] - - output = sd_pipe(**inputs) - image_slice_1 = output.images[0, -3:, -3:, -1] - - # forward with prompt embeds - inputs = self.get_dummy_inputs(torch_device) - negative_prompt = 3 * ["this is a negative prompt"] - prompt = 3 * [inputs.pop("prompt")] - - ( - prompt_embeds, - negative_prompt_embeds, - pooled_prompt_embeds, - negative_pooled_prompt_embeds, - ) = sd_pipe.encode_prompt(prompt, negative_prompt=negative_prompt) - - output = sd_pipe( - **inputs, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - pooled_prompt_embeds=pooled_prompt_embeds, - negative_pooled_prompt_embeds=negative_pooled_prompt_embeds, - ) - image_slice_2 = output.images[0, -3:, -3:, -1] - - # make sure that it's equal - assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4 - - def test_attention_slicing_forward_pass(self): - super().test_attention_slicing_forward_pass(expected_max_diff=3e-3) - - def test_inference_batch_single_identical(self): - super().test_inference_batch_single_identical(expected_max_diff=3e-3) - - @require_torch_gpu - def test_stable_diffusion_xl_offloads(self): - pipes = [] - components = self.get_dummy_components() - sd_pipe = StableDiffusionXLPipeline(**components).to(torch_device) - pipes.append(sd_pipe) - - components = self.get_dummy_components() - sd_pipe = 
StableDiffusionXLPipeline(**components) - sd_pipe.enable_model_cpu_offload() - pipes.append(sd_pipe) - - components = self.get_dummy_components() - sd_pipe = StableDiffusionXLPipeline(**components) - sd_pipe.enable_sequential_cpu_offload() - pipes.append(sd_pipe) - - image_slices = [] - for pipe in pipes: - pipe.unet.set_default_attn_processor() - - inputs = self.get_dummy_inputs(torch_device) - image = pipe(**inputs).images - - image_slices.append(image[0, -3:, -3:, -1].flatten()) - - assert np.abs(image_slices[0] - image_slices[1]).max() < 1e-3 - assert np.abs(image_slices[0] - image_slices[2]).max() < 1e-3 - - def test_stable_diffusion_two_xl_mixture_of_denoiser(self): - components = self.get_dummy_components() - pipe_1 = StableDiffusionXLPipeline(**components).to(torch_device) - pipe_1.unet.set_default_attn_processor() - pipe_2 = StableDiffusionXLImg2ImgPipeline(**components).to(torch_device) - pipe_2.unet.set_default_attn_processor() - - def assert_run_mixture( - num_steps, - split, - scheduler_cls_orig, - expected_tss, - num_train_timesteps=pipe_1.scheduler.config.num_train_timesteps, - ): - inputs = self.get_dummy_inputs(torch_device) - inputs["num_inference_steps"] = num_steps - - class scheduler_cls(scheduler_cls_orig): - pass - - pipe_1.scheduler = scheduler_cls.from_config(pipe_1.scheduler.config) - pipe_2.scheduler = scheduler_cls.from_config(pipe_2.scheduler.config) - - # Let's retrieve the number of timesteps we want to use - pipe_1.scheduler.set_timesteps(num_steps) - expected_steps = pipe_1.scheduler.timesteps.tolist() - - expected_steps_1 = list(filter(lambda ts: ts >= split, expected_tss)) - expected_steps_2 = list(filter(lambda ts: ts < split, expected_tss)) - - # now we monkey patch step `done_steps` - # list into the step function for testing - done_steps = [] - old_step = copy.copy(scheduler_cls.step) - - def new_step(self, *args, **kwargs): - done_steps.append(args[1].cpu().item()) # args[1] is always the passed `t` - return old_step(self, *args, **kwargs) - - scheduler_cls.step = new_step - - inputs_1 = { - **inputs, - **{ - "denoising_end": 1.0 - (split / num_train_timesteps), - "output_type": "latent", - }, - } - latents = pipe_1(**inputs_1).images[0] - - assert expected_steps_1 == done_steps, f"Failure with {scheduler_cls.__name__} and {num_steps} and {split}" - - inputs_2 = { - **inputs, - **{ - "denoising_start": 1.0 - (split / num_train_timesteps), - "image": latents, - }, - } - pipe_2(**inputs_2).images[0] - - assert expected_steps_2 == done_steps[len(expected_steps_1) :] - assert expected_steps == done_steps, f"Failure with {scheduler_cls.__name__} and {num_steps} and {split}" - - steps = 10 - for split in [300, 500, 700]: - for scheduler_cls_timesteps in [ - (DDIMScheduler, [901, 801, 701, 601, 501, 401, 301, 201, 101, 1]), - (EulerDiscreteScheduler, [901, 801, 701, 601, 501, 401, 301, 201, 101, 1]), - (DPMSolverMultistepScheduler, [901, 811, 721, 631, 541, 451, 361, 271, 181, 91]), - (UniPCMultistepScheduler, [901, 811, 721, 631, 541, 451, 361, 271, 181, 91]), - ( - HeunDiscreteScheduler, - [ - 901.0, - 801.0, - 801.0, - 701.0, - 701.0, - 601.0, - 601.0, - 501.0, - 501.0, - 401.0, - 401.0, - 301.0, - 301.0, - 201.0, - 201.0, - 101.0, - 101.0, - 1.0, - 1.0, - ], - ), - ]: - assert_run_mixture(steps, split, scheduler_cls_timesteps[0], scheduler_cls_timesteps[1]) - - steps = 25 - for split in [300, 500, 700]: - for scheduler_cls_timesteps in [ - ( - DDIMScheduler, - [ - 961, - 921, - 881, - 841, - 801, - 761, - 721, - 681, - 641, - 601, - 561, - 521, - 481, - 
441, - 401, - 361, - 321, - 281, - 241, - 201, - 161, - 121, - 81, - 41, - 1, - ], - ), - ( - EulerDiscreteScheduler, - [ - 961.0, - 921.0, - 881.0, - 841.0, - 801.0, - 761.0, - 721.0, - 681.0, - 641.0, - 601.0, - 561.0, - 521.0, - 481.0, - 441.0, - 401.0, - 361.0, - 321.0, - 281.0, - 241.0, - 201.0, - 161.0, - 121.0, - 81.0, - 41.0, - 1.0, - ], - ), - ( - DPMSolverMultistepScheduler, - [ - 951, - 913, - 875, - 837, - 799, - 761, - 723, - 685, - 647, - 609, - 571, - 533, - 495, - 457, - 419, - 381, - 343, - 305, - 267, - 229, - 191, - 153, - 115, - 77, - 39, - ], - ), - ( - UniPCMultistepScheduler, - [ - 951, - 913, - 875, - 837, - 799, - 761, - 723, - 685, - 647, - 609, - 571, - 533, - 495, - 457, - 419, - 381, - 343, - 305, - 267, - 229, - 191, - 153, - 115, - 77, - 39, - ], - ), - ( - HeunDiscreteScheduler, - [ - 961.0, - 921.0, - 921.0, - 881.0, - 881.0, - 841.0, - 841.0, - 801.0, - 801.0, - 761.0, - 761.0, - 721.0, - 721.0, - 681.0, - 681.0, - 641.0, - 641.0, - 601.0, - 601.0, - 561.0, - 561.0, - 521.0, - 521.0, - 481.0, - 481.0, - 441.0, - 441.0, - 401.0, - 401.0, - 361.0, - 361.0, - 321.0, - 321.0, - 281.0, - 281.0, - 241.0, - 241.0, - 201.0, - 201.0, - 161.0, - 161.0, - 121.0, - 121.0, - 81.0, - 81.0, - 41.0, - 41.0, - 1.0, - 1.0, - ], - ), - ]: - assert_run_mixture(steps, split, scheduler_cls_timesteps[0], scheduler_cls_timesteps[1]) - - def test_stable_diffusion_three_xl_mixture_of_denoiser(self): - components = self.get_dummy_components() - pipe_1 = StableDiffusionXLPipeline(**components).to(torch_device) - pipe_1.unet.set_default_attn_processor() - pipe_2 = StableDiffusionXLImg2ImgPipeline(**components).to(torch_device) - pipe_2.unet.set_default_attn_processor() - pipe_3 = StableDiffusionXLImg2ImgPipeline(**components).to(torch_device) - pipe_3.unet.set_default_attn_processor() - - def assert_run_mixture( - num_steps, - split_1, - split_2, - scheduler_cls_orig, - num_train_timesteps=pipe_1.scheduler.config.num_train_timesteps, - ): - inputs = self.get_dummy_inputs(torch_device) - inputs["num_inference_steps"] = num_steps - - class scheduler_cls(scheduler_cls_orig): - pass - - pipe_1.scheduler = scheduler_cls.from_config(pipe_1.scheduler.config) - pipe_2.scheduler = scheduler_cls.from_config(pipe_2.scheduler.config) - pipe_3.scheduler = scheduler_cls.from_config(pipe_3.scheduler.config) - - # Let's retrieve the number of timesteps we want to use - pipe_1.scheduler.set_timesteps(num_steps) - expected_steps = pipe_1.scheduler.timesteps.tolist() - - split_1_ts = num_train_timesteps - int(round(num_train_timesteps * split_1)) - split_2_ts = num_train_timesteps - int(round(num_train_timesteps * split_2)) - expected_steps_1 = expected_steps[:split_1_ts] - expected_steps_2 = expected_steps[split_1_ts:split_2_ts] - expected_steps_3 = expected_steps[split_2_ts:] - - expected_steps_1 = list(filter(lambda ts: ts >= split_1_ts, expected_steps)) - expected_steps_2 = list(filter(lambda ts: ts >= split_2_ts and ts < split_1_ts, expected_steps)) - expected_steps_3 = list(filter(lambda ts: ts < split_2_ts, expected_steps)) - - # now we monkey patch step `done_steps` - # list into the step function for testing - done_steps = [] - old_step = copy.copy(scheduler_cls.step) - - def new_step(self, *args, **kwargs): - done_steps.append(args[1].cpu().item()) # args[1] is always the passed `t` - return old_step(self, *args, **kwargs) - - scheduler_cls.step = new_step - - inputs_1 = {**inputs, **{"denoising_end": split_1, "output_type": "latent"}} - latents = pipe_1(**inputs_1).images[0] - - assert ( - 
expected_steps_1 == done_steps - ), f"Failure with {scheduler_cls.__name__} and {num_steps} and {split_1} and {split_2}" - - with self.assertRaises(ValueError) as cm: - inputs_2 = { - **inputs, - **{ - "denoising_start": split_2, - "denoising_end": split_1, - "image": latents, - "output_type": "latent", - }, - } - pipe_2(**inputs_2).images[0] - assert "cannot be larger than or equal to `denoising_end`" in str(cm.exception) - - inputs_2 = { - **inputs, - **{"denoising_start": split_1, "denoising_end": split_2, "image": latents, "output_type": "latent"}, - } - pipe_2(**inputs_2).images[0] - - assert expected_steps_2 == done_steps[len(expected_steps_1) :] - - inputs_3 = {**inputs, **{"denoising_start": split_2, "image": latents}} - pipe_3(**inputs_3).images[0] - - assert expected_steps_3 == done_steps[len(expected_steps_1) + len(expected_steps_2) :] - assert ( - expected_steps == done_steps - ), f"Failure with {scheduler_cls.__name__} and {num_steps} and {split_1} and {split_2}" - - for steps in [7, 11, 20]: - for split_1, split_2 in zip([0.19, 0.32], [0.81, 0.68]): - for scheduler_cls in [ - DDIMScheduler, - EulerDiscreteScheduler, - DPMSolverMultistepScheduler, - UniPCMultistepScheduler, - HeunDiscreteScheduler, - ]: - assert_run_mixture(steps, split_1, split_2, scheduler_cls) - - def test_stable_diffusion_xl_multi_prompts(self): - components = self.get_dummy_components() - sd_pipe = self.pipeline_class(**components).to(torch_device) - - # forward with single prompt - inputs = self.get_dummy_inputs(torch_device) - output = sd_pipe(**inputs) - image_slice_1 = output.images[0, -3:, -3:, -1] - - # forward with same prompt duplicated - inputs = self.get_dummy_inputs(torch_device) - inputs["prompt_2"] = inputs["prompt"] - output = sd_pipe(**inputs) - image_slice_2 = output.images[0, -3:, -3:, -1] - - # ensure the results are equal - assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4 - - # forward with different prompt - inputs = self.get_dummy_inputs(torch_device) - inputs["prompt_2"] = "different prompt" - output = sd_pipe(**inputs) - image_slice_3 = output.images[0, -3:, -3:, -1] - - # ensure the results are not equal - assert np.abs(image_slice_1.flatten() - image_slice_3.flatten()).max() > 1e-4 - - # manually set a negative_prompt - inputs = self.get_dummy_inputs(torch_device) - inputs["negative_prompt"] = "negative prompt" - output = sd_pipe(**inputs) - image_slice_1 = output.images[0, -3:, -3:, -1] - - # forward with same negative_prompt duplicated - inputs = self.get_dummy_inputs(torch_device) - inputs["negative_prompt"] = "negative prompt" - inputs["negative_prompt_2"] = inputs["negative_prompt"] - output = sd_pipe(**inputs) - image_slice_2 = output.images[0, -3:, -3:, -1] - - # ensure the results are equal - assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4 - - # forward with different negative_prompt - inputs = self.get_dummy_inputs(torch_device) - inputs["negative_prompt"] = "negative prompt" - inputs["negative_prompt_2"] = "different negative prompt" - output = sd_pipe(**inputs) - image_slice_3 = output.images[0, -3:, -3:, -1] - - # ensure the results are not equal - assert np.abs(image_slice_1.flatten() - image_slice_3.flatten()).max() > 1e-4 diff --git a/spaces/Andy1621/uniformer_image_detection/configs/detectors/cascade_rcnn_r50_rfp_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/detectors/cascade_rcnn_r50_rfp_1x_coco.py deleted file mode 100644 index 
4430d8a677e48f84552eb23403bc874c56bda506..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/detectors/cascade_rcnn_r50_rfp_1x_coco.py +++ /dev/null @@ -1,28 +0,0 @@ -_base_ = [ - '../_base_/models/cascade_rcnn_r50_fpn.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] - -model = dict( - backbone=dict( - type='DetectoRS_ResNet', - conv_cfg=dict(type='ConvAWS'), - output_img=True), - neck=dict( - type='RFP', - rfp_steps=2, - aspp_out_channels=64, - aspp_dilations=(1, 3, 6, 1), - rfp_backbone=dict( - rfp_inplanes=256, - type='DetectoRS_ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - conv_cfg=dict(type='ConvAWS'), - pretrained='torchvision://resnet50', - style='pytorch'))) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_x50_32x4d_fpn_gn_ws-all_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_x50_32x4d_fpn_gn_ws-all_1x_coco.py deleted file mode 100644 index 1268980615b69009a33b785eeb59322372633d10..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_x50_32x4d_fpn_gn_ws-all_1x_coco.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = './faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py' -conv_cfg = dict(type='ConvWS') -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained='open-mmlab://jhu/resnext50_32x4d_gn_ws', - backbone=dict( - type='ResNeXt', - depth=50, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - style='pytorch', - conv_cfg=conv_cfg, - norm_cfg=norm_cfg)) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/coder/bucketing_bbox_coder.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/coder/bucketing_bbox_coder.py deleted file mode 100644 index 92d24b4519edece7a4af8f5cfa9af025b25f2dad..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/coder/bucketing_bbox_coder.py +++ /dev/null @@ -1,350 +0,0 @@ -import mmcv -import numpy as np -import torch -import torch.nn.functional as F - -from ..builder import BBOX_CODERS -from ..transforms import bbox_rescale -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class BucketingBBoxCoder(BaseBBoxCoder): - """Bucketing BBox Coder for Side-Aware Boundary Localization (SABL). - - Boundary Localization with Bucketing and Bucketing Guided Rescoring - are implemented here. - - Please refer to https://arxiv.org/abs/1912.04260 for more details. - - Args: - num_buckets (int): Number of buckets. - scale_factor (int): Scale factor of proposals to generate buckets. - offset_topk (int): Topk buckets are used to generate - bucket fine regression targets. Defaults to 2. - offset_upperbound (float): Offset upperbound to generate - bucket fine regression targets. - To avoid too large offset displacements. Defaults to 1.0. - cls_ignore_neighbor (bool): Ignore second nearest bucket or Not. - Defaults to True. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. 
- """ - - def __init__(self, - num_buckets, - scale_factor, - offset_topk=2, - offset_upperbound=1.0, - cls_ignore_neighbor=True, - clip_border=True): - super(BucketingBBoxCoder, self).__init__() - self.num_buckets = num_buckets - self.scale_factor = scale_factor - self.offset_topk = offset_topk - self.offset_upperbound = offset_upperbound - self.cls_ignore_neighbor = cls_ignore_neighbor - self.clip_border = clip_border - - def encode(self, bboxes, gt_bboxes): - """Get bucketing estimation and fine regression targets during - training. - - Args: - bboxes (torch.Tensor): source boxes, e.g., object proposals. - gt_bboxes (torch.Tensor): target of the transformation, e.g., - ground truth boxes. - - Returns: - encoded_bboxes(tuple[Tensor]): bucketing estimation - and fine regression targets and weights - """ - - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = bbox2bucket(bboxes, gt_bboxes, self.num_buckets, - self.scale_factor, self.offset_topk, - self.offset_upperbound, - self.cls_ignore_neighbor) - return encoded_bboxes - - def decode(self, bboxes, pred_bboxes, max_shape=None): - """Apply transformation `pred_bboxes` to `boxes`. - Args: - boxes (torch.Tensor): Basic boxes. - pred_bboxes (torch.Tensor): Predictions for bucketing estimation - and fine regression - max_shape (tuple[int], optional): Maximum shape of boxes. - Defaults to None. - - Returns: - torch.Tensor: Decoded boxes. - """ - assert len(pred_bboxes) == 2 - cls_preds, offset_preds = pred_bboxes - assert cls_preds.size(0) == bboxes.size(0) and offset_preds.size( - 0) == bboxes.size(0) - decoded_bboxes = bucket2bbox(bboxes, cls_preds, offset_preds, - self.num_buckets, self.scale_factor, - max_shape, self.clip_border) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def generat_buckets(proposals, num_buckets, scale_factor=1.0): - """Generate buckets w.r.t bucket number and scale factor of proposals. - - Args: - proposals (Tensor): Shape (n, 4) - num_buckets (int): Number of buckets. - scale_factor (float): Scale factor to rescale proposals. - - Returns: - tuple[Tensor]: (bucket_w, bucket_h, l_buckets, r_buckets, - t_buckets, d_buckets) - - - bucket_w: Width of buckets on x-axis. Shape (n, ). - - bucket_h: Height of buckets on y-axis. Shape (n, ). - - l_buckets: Left buckets. Shape (n, ceil(side_num/2)). - - r_buckets: Right buckets. Shape (n, ceil(side_num/2)). - - t_buckets: Top buckets. Shape (n, ceil(side_num/2)). - - d_buckets: Down buckets. Shape (n, ceil(side_num/2)). 
- """ - proposals = bbox_rescale(proposals, scale_factor) - - # number of buckets in each side - side_num = int(np.ceil(num_buckets / 2.0)) - pw = proposals[..., 2] - proposals[..., 0] - ph = proposals[..., 3] - proposals[..., 1] - px1 = proposals[..., 0] - py1 = proposals[..., 1] - px2 = proposals[..., 2] - py2 = proposals[..., 3] - - bucket_w = pw / num_buckets - bucket_h = ph / num_buckets - - # left buckets - l_buckets = px1[:, None] + (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_w[:, None] - # right buckets - r_buckets = px2[:, None] - (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_w[:, None] - # top buckets - t_buckets = py1[:, None] + (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_h[:, None] - # down buckets - d_buckets = py2[:, None] - (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_h[:, None] - return bucket_w, bucket_h, l_buckets, r_buckets, t_buckets, d_buckets - - -@mmcv.jit(coderize=True) -def bbox2bucket(proposals, - gt, - num_buckets, - scale_factor, - offset_topk=2, - offset_upperbound=1.0, - cls_ignore_neighbor=True): - """Generate buckets estimation and fine regression targets. - - Args: - proposals (Tensor): Shape (n, 4) - gt (Tensor): Shape (n, 4) - num_buckets (int): Number of buckets. - scale_factor (float): Scale factor to rescale proposals. - offset_topk (int): Topk buckets are used to generate - bucket fine regression targets. Defaults to 2. - offset_upperbound (float): Offset allowance to generate - bucket fine regression targets. - To avoid too large offset displacements. Defaults to 1.0. - cls_ignore_neighbor (bool): Ignore second nearest bucket or Not. - Defaults to True. - - Returns: - tuple[Tensor]: (offsets, offsets_weights, bucket_labels, cls_weights). - - - offsets: Fine regression targets. \ - Shape (n, num_buckets*2). - - offsets_weights: Fine regression weights. \ - Shape (n, num_buckets*2). - - bucket_labels: Bucketing estimation labels. \ - Shape (n, num_buckets*2). - - cls_weights: Bucketing estimation weights. \ - Shape (n, num_buckets*2). 
- """ - assert proposals.size() == gt.size() - - # generate buckets - proposals = proposals.float() - gt = gt.float() - (bucket_w, bucket_h, l_buckets, r_buckets, t_buckets, - d_buckets) = generat_buckets(proposals, num_buckets, scale_factor) - - gx1 = gt[..., 0] - gy1 = gt[..., 1] - gx2 = gt[..., 2] - gy2 = gt[..., 3] - - # generate offset targets and weights - # offsets from buckets to gts - l_offsets = (l_buckets - gx1[:, None]) / bucket_w[:, None] - r_offsets = (r_buckets - gx2[:, None]) / bucket_w[:, None] - t_offsets = (t_buckets - gy1[:, None]) / bucket_h[:, None] - d_offsets = (d_buckets - gy2[:, None]) / bucket_h[:, None] - - # select top-k nearset buckets - l_topk, l_label = l_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - r_topk, r_label = r_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - t_topk, t_label = t_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - d_topk, d_label = d_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - - offset_l_weights = l_offsets.new_zeros(l_offsets.size()) - offset_r_weights = r_offsets.new_zeros(r_offsets.size()) - offset_t_weights = t_offsets.new_zeros(t_offsets.size()) - offset_d_weights = d_offsets.new_zeros(d_offsets.size()) - inds = torch.arange(0, proposals.size(0)).to(proposals).long() - - # generate offset weights of top-k nearset buckets - for k in range(offset_topk): - if k >= 1: - offset_l_weights[inds, l_label[:, - k]] = (l_topk[:, k] < - offset_upperbound).float() - offset_r_weights[inds, r_label[:, - k]] = (r_topk[:, k] < - offset_upperbound).float() - offset_t_weights[inds, t_label[:, - k]] = (t_topk[:, k] < - offset_upperbound).float() - offset_d_weights[inds, d_label[:, - k]] = (d_topk[:, k] < - offset_upperbound).float() - else: - offset_l_weights[inds, l_label[:, k]] = 1.0 - offset_r_weights[inds, r_label[:, k]] = 1.0 - offset_t_weights[inds, t_label[:, k]] = 1.0 - offset_d_weights[inds, d_label[:, k]] = 1.0 - - offsets = torch.cat([l_offsets, r_offsets, t_offsets, d_offsets], dim=-1) - offsets_weights = torch.cat([ - offset_l_weights, offset_r_weights, offset_t_weights, offset_d_weights - ], - dim=-1) - - # generate bucket labels and weight - side_num = int(np.ceil(num_buckets / 2.0)) - labels = torch.stack( - [l_label[:, 0], r_label[:, 0], t_label[:, 0], d_label[:, 0]], dim=-1) - - batch_size = labels.size(0) - bucket_labels = F.one_hot(labels.view(-1), side_num).view(batch_size, - -1).float() - bucket_cls_l_weights = (l_offsets.abs() < 1).float() - bucket_cls_r_weights = (r_offsets.abs() < 1).float() - bucket_cls_t_weights = (t_offsets.abs() < 1).float() - bucket_cls_d_weights = (d_offsets.abs() < 1).float() - bucket_cls_weights = torch.cat([ - bucket_cls_l_weights, bucket_cls_r_weights, bucket_cls_t_weights, - bucket_cls_d_weights - ], - dim=-1) - # ignore second nearest buckets for cls if necessary - if cls_ignore_neighbor: - bucket_cls_weights = (~((bucket_cls_weights == 1) & - (bucket_labels == 0))).float() - else: - bucket_cls_weights[:] = 1.0 - return offsets, offsets_weights, bucket_labels, bucket_cls_weights - - -@mmcv.jit(coderize=True) -def bucket2bbox(proposals, - cls_preds, - offset_preds, - num_buckets, - scale_factor=1.0, - max_shape=None, - clip_border=True): - """Apply bucketing estimation (cls preds) and fine regression (offset - preds) to generate det bboxes. - - Args: - proposals (Tensor): Boxes to be transformed. Shape (n, 4) - cls_preds (Tensor): bucketing estimation. Shape (n, num_buckets*2). 
- offset_preds (Tensor): fine regression. Shape (n, num_buckets*2). - num_buckets (int): Number of buckets. - scale_factor (float): Scale factor to rescale proposals. - max_shape (tuple[int, int]): Maximum bounds for boxes. specifies (H, W) - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - - Returns: - tuple[Tensor]: (bboxes, loc_confidence). - - - bboxes: predicted bboxes. Shape (n, 4) - - loc_confidence: localization confidence of predicted bboxes. - Shape (n,). - """ - - side_num = int(np.ceil(num_buckets / 2.0)) - cls_preds = cls_preds.view(-1, side_num) - offset_preds = offset_preds.view(-1, side_num) - - scores = F.softmax(cls_preds, dim=1) - score_topk, score_label = scores.topk(2, dim=1, largest=True, sorted=True) - - rescaled_proposals = bbox_rescale(proposals, scale_factor) - - pw = rescaled_proposals[..., 2] - rescaled_proposals[..., 0] - ph = rescaled_proposals[..., 3] - rescaled_proposals[..., 1] - px1 = rescaled_proposals[..., 0] - py1 = rescaled_proposals[..., 1] - px2 = rescaled_proposals[..., 2] - py2 = rescaled_proposals[..., 3] - - bucket_w = pw / num_buckets - bucket_h = ph / num_buckets - - score_inds_l = score_label[0::4, 0] - score_inds_r = score_label[1::4, 0] - score_inds_t = score_label[2::4, 0] - score_inds_d = score_label[3::4, 0] - l_buckets = px1 + (0.5 + score_inds_l.float()) * bucket_w - r_buckets = px2 - (0.5 + score_inds_r.float()) * bucket_w - t_buckets = py1 + (0.5 + score_inds_t.float()) * bucket_h - d_buckets = py2 - (0.5 + score_inds_d.float()) * bucket_h - - offsets = offset_preds.view(-1, 4, side_num) - inds = torch.arange(proposals.size(0)).to(proposals).long() - l_offsets = offsets[:, 0, :][inds, score_inds_l] - r_offsets = offsets[:, 1, :][inds, score_inds_r] - t_offsets = offsets[:, 2, :][inds, score_inds_t] - d_offsets = offsets[:, 3, :][inds, score_inds_d] - - x1 = l_buckets - l_offsets * bucket_w - x2 = r_buckets - r_offsets * bucket_w - y1 = t_buckets - t_offsets * bucket_h - y2 = d_buckets - d_offsets * bucket_h - - if clip_border and max_shape is not None: - x1 = x1.clamp(min=0, max=max_shape[1] - 1) - y1 = y1.clamp(min=0, max=max_shape[0] - 1) - x2 = x2.clamp(min=0, max=max_shape[1] - 1) - y2 = y2.clamp(min=0, max=max_shape[0] - 1) - bboxes = torch.cat([x1[:, None], y1[:, None], x2[:, None], y2[:, None]], - dim=-1) - - # bucketing guided rescoring - loc_confidence = score_topk[:, 0] - top2_neighbor_inds = (score_label[:, 0] - score_label[:, 1]).abs() == 1 - loc_confidence += score_topk[:, 1] * top2_neighbor_inds.float() - loc_confidence = loc_confidence.view(-1, 4).mean(dim=1) - - return bboxes, loc_confidence diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/ocrnet_r50-d8.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/ocrnet_r50-d8.py deleted file mode 100644 index 615aa3ff703942b6c22b2d6e9642504dd3e41ebd..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/ocrnet_r50-d8.py +++ /dev/null @@ -1,47 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='CascadeEncoderDecoder', - num_stages=2, - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=[ - dict( - type='FCNHead', - in_channels=1024, 
- in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - dict( - type='OCRHead', - in_channels=2048, - in_index=3, - channels=512, - ocr_channels=256, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)) - ], - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-nav/quarto-nav.js b/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-nav/quarto-nav.js deleted file mode 100644 index 3b21201f9537ba017353711b4a310936d9569945..0000000000000000000000000000000000000000 --- a/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-nav/quarto-nav.js +++ /dev/null @@ -1,277 +0,0 @@ -const headroomChanged = new CustomEvent("quarto-hrChanged", { - detail: {}, - bubbles: true, - cancelable: false, - composed: false, -}); - -window.document.addEventListener("DOMContentLoaded", function () { - let init = false; - - // Manage the back to top button, if one is present. - let lastScrollTop = window.pageYOffset || document.documentElement.scrollTop; - const scrollDownBuffer = 5; - const scrollUpBuffer = 35; - const btn = document.getElementById("quarto-back-to-top"); - const hideBackToTop = () => { - btn.style.display = "none"; - }; - const showBackToTop = () => { - btn.style.display = "inline-block"; - }; - if (btn) { - window.document.addEventListener( - "scroll", - function () { - const currentScrollTop = - window.pageYOffset || document.documentElement.scrollTop; - - // Shows and hides the button 'intelligently' as the user scrolls - if (currentScrollTop - scrollDownBuffer > lastScrollTop) { - hideBackToTop(); - lastScrollTop = currentScrollTop <= 0 ? 0 : currentScrollTop; - } else if (currentScrollTop < lastScrollTop - scrollUpBuffer) { - showBackToTop(); - lastScrollTop = currentScrollTop <= 0 ? 
0 : currentScrollTop; - } - - // Show the button at the bottom, hides it at the top - if (currentScrollTop <= 0) { - hideBackToTop(); - } else if ( - window.innerHeight + currentScrollTop >= - document.body.offsetHeight - ) { - showBackToTop(); - } - }, - false - ); - } - - function throttle(func, wait) { - var timeout; - return function () { - const context = this; - const args = arguments; - const later = function () { - clearTimeout(timeout); - timeout = null; - func.apply(context, args); - }; - - if (!timeout) { - timeout = setTimeout(later, wait); - } - }; - } - - function headerOffset() { - // Set an offset if there is are fixed top navbar - const headerEl = window.document.querySelector("header.fixed-top"); - if (headerEl) { - return headerEl.clientHeight; - } else { - return 0; - } - } - - function footerOffset() { - const footerEl = window.document.querySelector("footer.footer"); - if (footerEl) { - return footerEl.clientHeight; - } else { - return 0; - } - } - - function updateDocumentOffsetWithoutAnimation() { - updateDocumentOffset(false); - } - - function updateDocumentOffset(animated) { - // set body offset - const topOffset = headerOffset(); - const bodyOffset = topOffset + footerOffset(); - const bodyEl = window.document.body; - bodyEl.setAttribute("data-bs-offset", topOffset); - bodyEl.style.paddingTop = topOffset + "px"; - - // deal with sidebar offsets - const sidebars = window.document.querySelectorAll( - ".sidebar, .headroom-target" - ); - sidebars.forEach((sidebar) => { - if (!animated) { - sidebar.classList.add("notransition"); - // Remove the no transition class after the animation has time to complete - setTimeout(function () { - sidebar.classList.remove("notransition"); - }, 201); - } - - if (window.Headroom && sidebar.classList.contains("sidebar-unpinned")) { - sidebar.style.top = "0"; - sidebar.style.maxHeight = "100vh"; - } else { - sidebar.style.top = topOffset + "px"; - sidebar.style.maxHeight = "calc(100vh - " + topOffset + "px)"; - } - }); - - // allow space for footer - const mainContainer = window.document.querySelector(".quarto-container"); - if (mainContainer) { - mainContainer.style.minHeight = "calc(100vh - " + bodyOffset + "px)"; - } - - // link offset - let linkStyle = window.document.querySelector("#quarto-target-style"); - if (!linkStyle) { - linkStyle = window.document.createElement("style"); - linkStyle.setAttribute("id", "quarto-target-style"); - window.document.head.appendChild(linkStyle); - } - while (linkStyle.firstChild) { - linkStyle.removeChild(linkStyle.firstChild); - } - if (topOffset > 0) { - linkStyle.appendChild( - window.document.createTextNode(` - section:target::before { - content: ""; - display: block; - height: ${topOffset}px; - margin: -${topOffset}px 0 0; - }`) - ); - } - if (init) { - window.dispatchEvent(headroomChanged); - } - init = true; - } - - // initialize headroom - var header = window.document.querySelector("#quarto-header"); - if (header && window.Headroom) { - const headroom = new window.Headroom(header, { - tolerance: 5, - onPin: function () { - const sidebars = window.document.querySelectorAll( - ".sidebar, .headroom-target" - ); - sidebars.forEach((sidebar) => { - sidebar.classList.remove("sidebar-unpinned"); - }); - updateDocumentOffset(); - }, - onUnpin: function () { - const sidebars = window.document.querySelectorAll( - ".sidebar, .headroom-target" - ); - sidebars.forEach((sidebar) => { - sidebar.classList.add("sidebar-unpinned"); - }); - updateDocumentOffset(); - }, - }); - headroom.init(); - - let frozen = 
false; - window.quartoToggleHeadroom = function () { - if (frozen) { - headroom.unfreeze(); - frozen = false; - } else { - headroom.freeze(); - frozen = true; - } - }; - } - - window.addEventListener( - "hashchange", - function (e) { - if ( - getComputedStyle(document.documentElement).scrollBehavior !== "smooth" - ) { - window.scrollTo(0, window.pageYOffset - headerOffset()); - } - }, - false - ); - - // Observe size changed for the header - const headerEl = window.document.querySelector("header.fixed-top"); - if (headerEl && window.ResizeObserver) { - const observer = new window.ResizeObserver( - updateDocumentOffsetWithoutAnimation - ); - observer.observe(headerEl, { - attributes: true, - childList: true, - characterData: true, - }); - } else { - window.addEventListener( - "resize", - throttle(updateDocumentOffsetWithoutAnimation, 50) - ); - } - setTimeout(updateDocumentOffsetWithoutAnimation, 250); - - // fixup index.html links if we aren't on the filesystem - if (window.location.protocol !== "file:") { - const links = window.document.querySelectorAll("a"); - for (let i = 0; i < links.length; i++) { - if (links[i].href) { - links[i].href = links[i].href.replace(/\/index\.html/, "/"); - } - } - - // Fixup any sharing links that require urls - // Append url to any sharing urls - const sharingLinks = window.document.querySelectorAll( - "a.sidebar-tools-main-item" - ); - for (let i = 0; i < sharingLinks.length; i++) { - const sharingLink = sharingLinks[i]; - const href = sharingLink.getAttribute("href"); - if (href) { - sharingLink.setAttribute( - "href", - href.replace("|url|", window.location.href) - ); - } - } - - // Scroll the active navigation item into view, if necessary - const navSidebar = window.document.querySelector("nav#quarto-sidebar"); - if (navSidebar) { - // Find the active item - const activeItem = navSidebar.querySelector("li.sidebar-item a.active"); - if (activeItem) { - // Wait for the scroll height and height to resolve by observing size changes on the - // nav element that is scrollable - const resizeObserver = new ResizeObserver((_entries) => { - // The bottom of the element - const elBottom = activeItem.offsetTop; - const viewBottom = navSidebar.scrollTop + navSidebar.clientHeight; - - // The element height and scroll height are the same, then we are still loading - if (viewBottom !== navSidebar.scrollHeight) { - // Determine if the item isn't visible and scroll to it - if (elBottom >= viewBottom) { - navSidebar.scrollTop = elBottom; - } - - // stop observing now since we've completed the scroll - resizeObserver.unobserve(navSidebar); - } - }); - resizeObserver.observe(navSidebar); - } - } - } -}); diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/chat_style-cai-chat-square.css b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/chat_style-cai-chat-square.css deleted file mode 100644 index 0098da35ee7eb7bd164abd48ecd74554337f5a53..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/chat_style-cai-chat-square.css +++ /dev/null @@ -1,21 +0,0 @@ -@import url("file/css/chat_style-cai-chat.css"); - -.circle-bot, .circle-you { - height: 90px; - width: 60px; - border-radius: 10px; - background-color: #656565; -} - -.circle-bot img, .circle-you img { - border-radius: 8.333px; -} - -.circle-you { - background-color: #656565; -} - -.message { - padding-bottom: 30px; - grid-template-columns: 70px minmax(0, 1fr); -} diff --git 
a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/priority.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/priority.py deleted file mode 100644 index 64cc4e3a05f8d5b89ab6eb32461e6e80f1d62e67..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/priority.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from enum import Enum - - -class Priority(Enum): - """Hook priority levels. - - +--------------+------------+ - | Level | Value | - +==============+============+ - | HIGHEST | 0 | - +--------------+------------+ - | VERY_HIGH | 10 | - +--------------+------------+ - | HIGH | 30 | - +--------------+------------+ - | ABOVE_NORMAL | 40 | - +--------------+------------+ - | NORMAL | 50 | - +--------------+------------+ - | BELOW_NORMAL | 60 | - +--------------+------------+ - | LOW | 70 | - +--------------+------------+ - | VERY_LOW | 90 | - +--------------+------------+ - | LOWEST | 100 | - +--------------+------------+ - """ - - HIGHEST = 0 - VERY_HIGH = 10 - HIGH = 30 - ABOVE_NORMAL = 40 - NORMAL = 50 - BELOW_NORMAL = 60 - LOW = 70 - VERY_LOW = 90 - LOWEST = 100 - - -def get_priority(priority): - """Get priority value. - - Args: - priority (int or str or :obj:`Priority`): Priority. - - Returns: - int: The priority value. - """ - if isinstance(priority, int): - if priority < 0 or priority > 100: - raise ValueError('priority must be between 0 and 100') - return priority - elif isinstance(priority, Priority): - return priority.value - elif isinstance(priority, str): - return Priority[priority.upper()].value - else: - raise TypeError('priority must be an integer or Priority enum value') diff --git a/spaces/Anuj-Panthri/imdb_review_sentiment/README.md b/spaces/Anuj-Panthri/imdb_review_sentiment/README.md deleted file mode 100644 index d71b6ee17224289bc1891b1fc9f71bf4e66ca0bb..0000000000000000000000000000000000000000 --- a/spaces/Anuj-Panthri/imdb_review_sentiment/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Imdb Review Sentiment -emoji: 😻 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.1.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Apex-X/ROOPOK/roop/processors/frame/face_swapper.py b/spaces/Apex-X/ROOPOK/roop/processors/frame/face_swapper.py deleted file mode 100644 index e65d67f1efee1693448b6597ddeb9fc4f1460794..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/ROOPOK/roop/processors/frame/face_swapper.py +++ /dev/null @@ -1,100 +0,0 @@ -from typing import Any, List, Callable -import cv2 -import insightface -import threading - -import roop.globals -import roop.processors.frame.core -from roop.core import update_status -from roop.face_analyser import get_one_face, get_many_faces, find_similar_face -from roop.face_reference import get_face_reference, set_face_reference, clear_face_reference -from roop.typing import Face, Frame -from roop.utilities import conditional_download, resolve_relative_path, is_image, is_video - -FACE_SWAPPER = None -THREAD_LOCK = threading.Lock() -NAME = 'ROOP.FACE-SWAPPER' - - -def get_face_swapper() -> Any: - global FACE_SWAPPER - - with THREAD_LOCK: - if FACE_SWAPPER is None: - model_path = resolve_relative_path('../models/inswapper_128.onnx') - FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=roop.globals.execution_providers) 
- return FACE_SWAPPER - - -def clear_face_swapper() -> None: - global FACE_SWAPPER - - FACE_SWAPPER = None - - -def pre_check() -> bool: - download_directory_path = resolve_relative_path('../models') - conditional_download(download_directory_path, ['https://huggingface.co/deepinsight/inswapper/resolve/main/inswapper_128.onnx']) - return True - - -def pre_start() -> bool: - if not is_image(roop.globals.source_path): - update_status('Select an image for source path.', NAME) - return False - elif not get_one_face(cv2.imread(roop.globals.source_path)): - update_status('No face in source path detected.', NAME) - return False - if not is_image(roop.globals.target_path) and not is_video(roop.globals.target_path): - update_status('Select an image or video for target path.', NAME) - return False - return True - - -def post_process() -> None: - clear_face_swapper() - clear_face_reference() - - -def swap_face(source_face: Face, target_face: Face, temp_frame: Frame) -> Frame: - return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True) - - -def process_frame(source_face: Face, reference_face: Face, temp_frame: Frame) -> Frame: - if roop.globals.many_faces: - many_faces = get_many_faces(temp_frame) - if many_faces: - for target_face in many_faces: - temp_frame = swap_face(source_face, target_face, temp_frame) - else: - target_face = find_similar_face(temp_frame, reference_face) - if target_face: - temp_frame = swap_face(source_face, target_face, temp_frame) - return temp_frame - - -def process_frames(source_path: str, temp_frame_paths: List[str], update: Callable[[], None]) -> None: - source_face = get_one_face(cv2.imread(source_path)) - reference_face = None if roop.globals.many_faces else get_face_reference() - for temp_frame_path in temp_frame_paths: - temp_frame = cv2.imread(temp_frame_path) - result = process_frame(source_face, reference_face, temp_frame) - cv2.imwrite(temp_frame_path, result) - if update: - update() - - -def process_image(source_path: str, target_path: str, output_path: str) -> None: - source_face = get_one_face(cv2.imread(source_path)) - target_frame = cv2.imread(target_path) - reference_face = None if roop.globals.many_faces else get_one_face(target_frame, roop.globals.reference_face_position) - result = process_frame(source_face, reference_face, target_frame) - cv2.imwrite(output_path, result) - - -def process_video(source_path: str, temp_frame_paths: List[str]) -> None: - if not roop.globals.many_faces and not get_face_reference(): - reference_frame = cv2.imread(temp_frame_paths[roop.globals.reference_frame_number]) - reference_face = get_one_face(reference_frame, roop.globals.reference_face_position) - set_face_reference(reference_face) - roop.processors.frame.core.process_video(source_path, temp_frame_paths, process_frames) diff --git a/spaces/Arnx/MusicGenXvAKN/audiocraft/utils/autocast.py b/spaces/Arnx/MusicGenXvAKN/audiocraft/utils/autocast.py deleted file mode 100644 index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000 --- a/spaces/Arnx/MusicGenXvAKN/audiocraft/utils/autocast.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -class TorchAutocast: - """TorchAutocast utility class. - Allows you to enable and disable autocast. 
This is specially useful - when dealing with different architectures and clusters with different - levels of support. - - Args: - enabled (bool): Whether to enable torch.autocast or not. - args: Additional args for torch.autocast. - kwargs: Additional kwargs for torch.autocast - """ - def __init__(self, enabled: bool, *args, **kwargs): - self.autocast = torch.autocast(*args, **kwargs) if enabled else None - - def __enter__(self): - if self.autocast is None: - return - try: - self.autocast.__enter__() - except RuntimeError: - device = self.autocast.device - dtype = self.autocast.fast_dtype - raise RuntimeError( - f"There was an error autocasting with dtype={dtype} device={device}\n" - "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16" - ) - - def __exit__(self, *args, **kwargs): - if self.autocast is None: - return - self.autocast.__exit__(*args, **kwargs) diff --git a/spaces/Artrajz/vits-simple-api/bert_vits2/__init__.py b/spaces/Artrajz/vits-simple-api/bert_vits2/__init__.py deleted file mode 100644 index d3d019aa31cbe7a5333aa00a110ddf9ae58e2d7a..0000000000000000000000000000000000000000 --- a/spaces/Artrajz/vits-simple-api/bert_vits2/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from bert_vits2.bert_vits2 import Bert_VITS2 -from bert_vits2 import text diff --git a/spaces/Ash123/stable-diffusion-nano/app.py b/spaces/Ash123/stable-diffusion-nano/app.py deleted file mode 100644 index def4998f008ac3a4c868cb48cf3cfe8aba017996..0000000000000000000000000000000000000000 --- a/spaces/Ash123/stable-diffusion-nano/app.py +++ /dev/null @@ -1,330 +0,0 @@ -import gradio as gr -import jax -import jax.numpy as jnp -from diffusers import FlaxPNDMScheduler, FlaxStableDiffusionPipeline -from flax.jax_utils import replicate -from flax.training.common_utils import shard -from share_btn import community_icon_html, loading_icon_html, share_js - -DTYPE = jnp.float16 - -pipeline, pipeline_params = FlaxStableDiffusionPipeline.from_pretrained( - "bguisard/stable-diffusion-nano-2-1", - dtype=DTYPE, -) -if DTYPE != jnp.float32: - # There is a known issue with schedulers when loading from a pre trained - # pipeline. We need the schedulers to always use float32. 
- # See: https://github.com/huggingface/diffusers/issues/2155 - scheduler, scheduler_params = FlaxPNDMScheduler.from_pretrained( - pretrained_model_name_or_path="bguisard/stable-diffusion-nano-2-1", - subfolder="scheduler", - dtype=jnp.float32, - ) - pipeline_params["scheduler"] = scheduler_params - pipeline.scheduler = scheduler - - -def generate_image(prompt: str, negative_prompt: str = "", inference_steps: int = 25, prng_seed: int = 0, guidance_scale: float = 9): - rng = jax.random.PRNGKey(int(prng_seed)) - rng = jax.random.split(rng, jax.device_count()) - p_params = replicate(pipeline_params) - - num_samples = 1 - prompt_ids = pipeline.prepare_inputs([prompt] * num_samples) - prompt_ids = shard(prompt_ids) - - if negative_prompt == "": - images = pipeline( - prompt_ids=prompt_ids, - params=p_params, - prng_seed=rng, - height=128, - width=128, - num_inference_steps=int(inference_steps), - guidance_scale=float(guidance_scale), - jit=True, - ).images - else: - neg_prompt_ids = pipeline.prepare_inputs( - [negative_prompt] * num_samples) - neg_prompt_ids = shard(neg_prompt_ids) - images = pipeline( - prompt_ids=prompt_ids, - params=p_params, - prng_seed=rng, - height=128, - width=128, - num_inference_steps=int(inference_steps), - neg_prompt_ids=neg_prompt_ids, - guidance_scale=float(guidance_scale), - jit=True, - ).images - images = images.reshape((num_samples,) + images.shape[-3:]) - images = pipeline.numpy_to_pil(images) - return images[0] - -examples = [ - ["A watercolor painting of a bird"], - ["A watercolor painting of an otter"] -] -css = """ - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - max-width: 730px!important; - margin: auto; - padding-top: 1.5rem; - } - .gr-button { - color: white; - border-color: black; - background: black; - } - input[type='range'] { - accent-color: black; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - #advanced-btn { - font-size: .7rem !important; - line-height: 19px; - cache_examples=True, - postprocess=False) - margin-top: 12px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - display: none; - margin-bottom: 20px; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - 
font-size: 115%; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - margin-top: 10px; - margin-left: auto; - - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; - } - #share-btn * { - all: unset; - } - #share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; - } - #share-btn-container .wrap { - display: none !important; - } - .share_button { - color:#6366f1!important; - } - .gr-form{ - flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0; - } - #prompt-text-input, #negative-prompt-text-input{padding: .45rem 0.625rem} - .image_duplication{position: absolute; width: 100px; left: 50px} - -""" - -block = gr.Blocks(theme="gradio/soft",css=css) - -with block as demo: - gr.HTML( - """ -
-
- - - - - - - - - - - - - - - - - - - - - - - - - - - -

- Stable Diffusion Nano Demo -

-
-

- Stable Diffusion Nano was built during the JAX/Diffusers community sprint 🧨 based on Stable Diffusion 2.1 and finetuned on 128x128 images for fast prototyping.
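A minimal standalone sketch of driving the same 128x128 checkpoint from plain Python, assuming the public `bguisard/stable-diffusion-nano-2-1` checkpoint and the Flax/diffusers calls already used earlier in this file; the prompt, seed, and output path are arbitrary placeholders.

```python
import jax
import jax.numpy as jnp
from diffusers import FlaxStableDiffusionPipeline
from flax.jax_utils import replicate
from flax.training.common_utils import shard

# Load the pipeline in the default float32 precision (the app above also
# shows a float16 variant, which additionally reloads the scheduler in float32).
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "bguisard/stable-diffusion-nano-2-1"
)

prompt = "A watercolor painting of a bird"
rng = jax.random.split(jax.random.PRNGKey(0), jax.device_count())

# Tokenize the prompt and shard inputs/params across local devices,
# mirroring generate_image() above.
prompt_ids = shard(pipeline.prepare_inputs([prompt]))

images = pipeline(
    prompt_ids=prompt_ids,
    params=replicate(params),
    prng_seed=rng,
    height=128,   # the nano checkpoint was finetuned at 128x128
    width=128,
    num_inference_steps=25,
    guidance_scale=9.0,
    jit=True,
).images

# Collapse the device dimension and convert to PIL, as in the app.
images = images.reshape((1,) + images.shape[-3:])
pipeline.numpy_to_pil(images)[0].save("bird.png")
```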
-

-
- """ - ) - with gr.Group(): - with gr.Box(): - with gr.Row(elem_id="prompt-container").style(equal_height=True): - with gr.Column(scale=2): - prompt_input = gr.Textbox( - label="Enter your prompt", - max_lines=1, - placeholder="Enter your prompt", - elem_id="prompt-text-input", - show_label=False, - ) - negative = gr.Textbox( - label="Enter your negative prompt", - max_lines=1, - placeholder="Enter a negative prompt", - elem_id="negative-prompt-text-input", - show_label=False, - ) - btn = gr.Button("Generate image", label="Primary Button", variant="primary") - - gallery = gr.Image( - label="Generated images", show_label=False, elem_id="gallery" - ) - - - with gr.Row(): - with gr.Column(scale=2): - with gr.Accordion("Advanced settings"): - seed_input = gr.inputs.Number(default=0, label="Seed") - inf_steps_input = gr.inputs.Slider( - minimum=1, maximum=100, default=25, step=1, label="Inference Steps" - ) - guidance_scale = gr.inputs.Slider( - label="Guidance Scale", minimum=0, maximum=50, default=9, step=0.1 - ) - with gr.Column(scale=1): - # advanced_button = gr.Button("Advanced options", elem_id="advanced-btn") - ex = gr.Examples(examples=examples, - fn=generate_image, - inputs=[prompt_input, negative,inf_steps_input, seed_input, guidance_scale], - outputs=[gallery], - cache_examples=False) - ex.dataset.headers = [""] - - share_button = gr.Button("Share to community",elem_classes="share_button") - - - negative.submit(generate_image, inputs=[ - prompt_input, negative, inf_steps_input, seed_input, guidance_scale], outputs=[gallery], postprocess=False) - prompt_input.submit(generate_image, inputs=[ - prompt_input, negative, inf_steps_input, seed_input, guidance_scale], outputs=[gallery], postprocess=False) - btn.click(generate_image, inputs=[prompt_input, negative, inf_steps_input, - seed_input, guidance_scale], outputs=[gallery], postprocess=False) - - share_button.click( - None, - [], - [], - _js=share_js, - ) - gr.Markdown("Model by Stable Diffusion Nano Team",elem_classes="footer") - with gr.Accordion(label="License", open=False): - gr.HTML( - """ -
-

LICENSE

-                The model is licensed under the CreativeML OpenRAIL++ license. The authors claim no rights over the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produces harm to a person, disseminates personal information meant to cause harm, spreads misinformation, or targets vulnerable groups. For the full list of restrictions please read the license

-

Biases and content acknowledgment

-                Despite how impressive being able to turn text into an image is, be aware that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography, and violence. The model was trained on the LAION-2B Aesthetic dataset, which scraped non-curated image-text pairs from the internet (with the exception of illegal content, which was removed) and is meant for research purposes. You can read more in the model card

-
- """ - ) -demo.queue(concurrency_count=10) -demo.launch() - diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/certifi/core.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/certifi/core.py deleted file mode 100644 index c3e546604c85678dd72db35893c46ffe2d79c052..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/certifi/core.py +++ /dev/null @@ -1,108 +0,0 @@ -""" -certifi.py -~~~~~~~~~~ - -This module returns the installation location of cacert.pem or its contents. -""" -import sys - - -if sys.version_info >= (3, 11): - - from importlib.resources import as_file, files - - _CACERT_CTX = None - _CACERT_PATH = None - - def where() -> str: - # This is slightly terrible, but we want to delay extracting the file - # in cases where we're inside of a zipimport situation until someone - # actually calls where(), but we don't want to re-extract the file - # on every call of where(), so we'll do it once then store it in a - # global variable. - global _CACERT_CTX - global _CACERT_PATH - if _CACERT_PATH is None: - # This is slightly janky, the importlib.resources API wants you to - # manage the cleanup of this file, so it doesn't actually return a - # path, it returns a context manager that will give you the path - # when you enter it and will do any cleanup when you leave it. In - # the common case of not needing a temporary file, it will just - # return the file system location and the __exit__() is a no-op. - # - # We also have to hold onto the actual context manager, because - # it will do the cleanup whenever it gets garbage collected, so - # we will also store that at the global level as well. - _CACERT_CTX = as_file(files("pip._vendor.certifi").joinpath("cacert.pem")) - _CACERT_PATH = str(_CACERT_CTX.__enter__()) - - return _CACERT_PATH - - def contents() -> str: - return files("pip._vendor.certifi").joinpath("cacert.pem").read_text(encoding="ascii") - -elif sys.version_info >= (3, 7): - - from importlib.resources import path as get_path, read_text - - _CACERT_CTX = None - _CACERT_PATH = None - - def where() -> str: - # This is slightly terrible, but we want to delay extracting the - # file in cases where we're inside of a zipimport situation until - # someone actually calls where(), but we don't want to re-extract - # the file on every call of where(), so we'll do it once then store - # it in a global variable. - global _CACERT_CTX - global _CACERT_PATH - if _CACERT_PATH is None: - # This is slightly janky, the importlib.resources API wants you - # to manage the cleanup of this file, so it doesn't actually - # return a path, it returns a context manager that will give - # you the path when you enter it and will do any cleanup when - # you leave it. In the common case of not needing a temporary - # file, it will just return the file system location and the - # __exit__() is a no-op. - # - # We also have to hold onto the actual context manager, because - # it will do the cleanup whenever it gets garbage collected, so - # we will also store that at the global level as well. 
- _CACERT_CTX = get_path("pip._vendor.certifi", "cacert.pem") - _CACERT_PATH = str(_CACERT_CTX.__enter__()) - - return _CACERT_PATH - - def contents() -> str: - return read_text("pip._vendor.certifi", "cacert.pem", encoding="ascii") - -else: - import os - import types - from typing import Union - - Package = Union[types.ModuleType, str] - Resource = Union[str, "os.PathLike"] - - # This fallback will work for Python versions prior to 3.7 that lack the - # importlib.resources module but relies on the existing `where` function - # so won't address issues with environments like PyOxidizer that don't set - # __file__ on modules. - def read_text( - package: Package, - resource: Resource, - encoding: str = 'utf-8', - errors: str = 'strict' - ) -> str: - with open(where(), encoding=encoding) as data: - return data.read() - - # If we don't have importlib.resources, then we will just do the old logic - # of assuming we're on the filesystem and munge the path directly. - def where() -> str: - f = os.path.dirname(__file__) - - return os.path.join(f, "cacert.pem") - - def contents() -> str: - return read_text("pip._vendor.certifi", "cacert.pem", encoding="ascii") diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/_mapping.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/_mapping.py deleted file mode 100644 index 6e34f9607847cb74f8469823c01776baf8216b59..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/_mapping.py +++ /dev/null @@ -1,23 +0,0 @@ -# Automatically generated by scripts/gen_mapfiles.py. -# DO NOT EDIT BY HAND; run `make mapfiles` instead. - -FORMATTERS = { - 'BBCodeFormatter': ('pygments.formatters.bbcode', 'BBCode', ('bbcode', 'bb'), (), 'Format tokens with BBcodes. These formatting codes are used by many bulletin boards, so you can highlight your sourcecode with pygments before posting it there.'), - 'BmpImageFormatter': ('pygments.formatters.img', 'img_bmp', ('bmp', 'bitmap'), ('*.bmp',), 'Create a bitmap image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'), - 'GifImageFormatter': ('pygments.formatters.img', 'img_gif', ('gif',), ('*.gif',), 'Create a GIF image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'), - 'GroffFormatter': ('pygments.formatters.groff', 'groff', ('groff', 'troff', 'roff'), (), 'Format tokens with groff escapes to change their color and font style.'), - 'HtmlFormatter': ('pygments.formatters.html', 'HTML', ('html',), ('*.html', '*.htm'), "Format tokens as HTML 4 ```` tags within a ``
<pre>`` tag, wrapped in a ``<div>`` tag. The ``<div>
``'s CSS class can be set by the `cssclass` option."), - 'IRCFormatter': ('pygments.formatters.irc', 'IRC', ('irc', 'IRC'), (), 'Format tokens with IRC color sequences'), - 'ImageFormatter': ('pygments.formatters.img', 'img', ('img', 'IMG', 'png'), ('*.png',), 'Create a PNG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'), - 'JpgImageFormatter': ('pygments.formatters.img', 'img_jpg', ('jpg', 'jpeg'), ('*.jpg',), 'Create a JPEG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'), - 'LatexFormatter': ('pygments.formatters.latex', 'LaTeX', ('latex', 'tex'), ('*.tex',), 'Format tokens as LaTeX code. This needs the `fancyvrb` and `color` standard packages.'), - 'NullFormatter': ('pygments.formatters.other', 'Text only', ('text', 'null'), ('*.txt',), 'Output the text unchanged without any formatting.'), - 'PangoMarkupFormatter': ('pygments.formatters.pangomarkup', 'Pango Markup', ('pango', 'pangomarkup'), (), 'Format tokens as Pango Markup code. It can then be rendered to an SVG.'), - 'RawTokenFormatter': ('pygments.formatters.other', 'Raw tokens', ('raw', 'tokens'), ('*.raw',), 'Format tokens as a raw representation for storing token streams.'), - 'RtfFormatter': ('pygments.formatters.rtf', 'RTF', ('rtf',), ('*.rtf',), 'Format tokens as RTF markup. This formatter automatically outputs full RTF documents with color information and other useful stuff. Perfect for Copy and Paste into Microsoft(R) Word(R) documents.'), - 'SvgFormatter': ('pygments.formatters.svg', 'SVG', ('svg',), ('*.svg',), 'Format tokens as an SVG graphics file. This formatter is still experimental. Each line of code is a ```` element with explicit ``x`` and ``y`` coordinates containing ```` elements with the individual token styles.'), - 'Terminal256Formatter': ('pygments.formatters.terminal256', 'Terminal256', ('terminal256', 'console256', '256'), (), 'Format tokens with ANSI color sequences, for output in a 256-color terminal or console. Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'), - 'TerminalFormatter': ('pygments.formatters.terminal', 'Terminal', ('terminal', 'console'), (), 'Format tokens with ANSI color sequences, for output in a text console. Color sequences are terminated at newlines, so that paging the output works correctly.'), - 'TerminalTrueColorFormatter': ('pygments.formatters.terminal256', 'TerminalTrueColor', ('terminal16m', 'console16m', '16m'), (), 'Format tokens with ANSI color sequences, for output in a true-color terminal or console. Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'), - 'TestcaseFormatter': ('pygments.formatters.other', 'Testcase', ('testcase',), (), 'Format tokens as appropriate for a new testcase.'), -} diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/webencodings/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/webencodings/__init__.py deleted file mode 100644 index d21d697c887bed1f8ab7f36d10185e986d9f1e54..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/webencodings/__init__.py +++ /dev/null @@ -1,342 +0,0 @@ -# coding: utf-8 -""" - - webencodings - ~~~~~~~~~~~~ - - This is a Python implementation of the `WHATWG Encoding standard - `. See README for details. 
- - :copyright: Copyright 2012 by Simon Sapin - :license: BSD, see LICENSE for details. - -""" - -from __future__ import unicode_literals - -import codecs - -from .labels import LABELS - - -VERSION = '0.5.1' - - -# Some names in Encoding are not valid Python aliases. Remap these. -PYTHON_NAMES = { - 'iso-8859-8-i': 'iso-8859-8', - 'x-mac-cyrillic': 'mac-cyrillic', - 'macintosh': 'mac-roman', - 'windows-874': 'cp874'} - -CACHE = {} - - -def ascii_lower(string): - r"""Transform (only) ASCII letters to lower case: A-Z is mapped to a-z. - - :param string: An Unicode string. - :returns: A new Unicode string. - - This is used for `ASCII case-insensitive - `_ - matching of encoding labels. - The same matching is also used, among other things, - for `CSS keywords `_. - - This is different from the :meth:`~py:str.lower` method of Unicode strings - which also affect non-ASCII characters, - sometimes mapping them into the ASCII range: - - >>> keyword = u'Bac\N{KELVIN SIGN}ground' - >>> assert keyword.lower() == u'background' - >>> assert ascii_lower(keyword) != keyword.lower() - >>> assert ascii_lower(keyword) == u'bac\N{KELVIN SIGN}ground' - - """ - # This turns out to be faster than unicode.translate() - return string.encode('utf8').lower().decode('utf8') - - -def lookup(label): - """ - Look for an encoding by its label. - This is the spec’s `get an encoding - `_ algorithm. - Supported labels are listed there. - - :param label: A string. - :returns: - An :class:`Encoding` object, or :obj:`None` for an unknown label. - - """ - # Only strip ASCII whitespace: U+0009, U+000A, U+000C, U+000D, and U+0020. - label = ascii_lower(label.strip('\t\n\f\r ')) - name = LABELS.get(label) - if name is None: - return None - encoding = CACHE.get(name) - if encoding is None: - if name == 'x-user-defined': - from .x_user_defined import codec_info - else: - python_name = PYTHON_NAMES.get(name, name) - # Any python_name value that gets to here should be valid. - codec_info = codecs.lookup(python_name) - encoding = Encoding(name, codec_info) - CACHE[name] = encoding - return encoding - - -def _get_encoding(encoding_or_label): - """ - Accept either an encoding object or label. - - :param encoding: An :class:`Encoding` object or a label string. - :returns: An :class:`Encoding` object. - :raises: :exc:`~exceptions.LookupError` for an unknown label. - - """ - if hasattr(encoding_or_label, 'codec_info'): - return encoding_or_label - - encoding = lookup(encoding_or_label) - if encoding is None: - raise LookupError('Unknown encoding label: %r' % encoding_or_label) - return encoding - - -class Encoding(object): - """Reresents a character encoding such as UTF-8, - that can be used for decoding or encoding. - - .. attribute:: name - - Canonical name of the encoding - - .. attribute:: codec_info - - The actual implementation of the encoding, - a stdlib :class:`~codecs.CodecInfo` object. - See :func:`codecs.register`. - - """ - def __init__(self, name, codec_info): - self.name = name - self.codec_info = codec_info - - def __repr__(self): - return '' % self.name - - -#: The UTF-8 encoding. Should be used for new content and formats. -UTF8 = lookup('utf-8') - -_UTF16LE = lookup('utf-16le') -_UTF16BE = lookup('utf-16be') - - -def decode(input, fallback_encoding, errors='replace'): - """ - Decode a single string. - - :param input: A byte string - :param fallback_encoding: - An :class:`Encoding` object or a label string. - The encoding to use if :obj:`input` does note have a BOM. - :param errors: Type of error handling. 
See :func:`codecs.register`. - :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. - :return: - A ``(output, encoding)`` tuple of an Unicode string - and an :obj:`Encoding`. - - """ - # Fail early if `encoding` is an invalid label. - fallback_encoding = _get_encoding(fallback_encoding) - bom_encoding, input = _detect_bom(input) - encoding = bom_encoding or fallback_encoding - return encoding.codec_info.decode(input, errors)[0], encoding - - -def _detect_bom(input): - """Return (bom_encoding, input), with any BOM removed from the input.""" - if input.startswith(b'\xFF\xFE'): - return _UTF16LE, input[2:] - if input.startswith(b'\xFE\xFF'): - return _UTF16BE, input[2:] - if input.startswith(b'\xEF\xBB\xBF'): - return UTF8, input[3:] - return None, input - - -def encode(input, encoding=UTF8, errors='strict'): - """ - Encode a single string. - - :param input: An Unicode string. - :param encoding: An :class:`Encoding` object or a label string. - :param errors: Type of error handling. See :func:`codecs.register`. - :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. - :return: A byte string. - - """ - return _get_encoding(encoding).codec_info.encode(input, errors)[0] - - -def iter_decode(input, fallback_encoding, errors='replace'): - """ - "Pull"-based decoder. - - :param input: - An iterable of byte strings. - - The input is first consumed just enough to determine the encoding - based on the precense of a BOM, - then consumed on demand when the return value is. - :param fallback_encoding: - An :class:`Encoding` object or a label string. - The encoding to use if :obj:`input` does note have a BOM. - :param errors: Type of error handling. See :func:`codecs.register`. - :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. - :returns: - An ``(output, encoding)`` tuple. - :obj:`output` is an iterable of Unicode strings, - :obj:`encoding` is the :obj:`Encoding` that is being used. - - """ - - decoder = IncrementalDecoder(fallback_encoding, errors) - generator = _iter_decode_generator(input, decoder) - encoding = next(generator) - return generator, encoding - - -def _iter_decode_generator(input, decoder): - """Return a generator that first yields the :obj:`Encoding`, - then yields output chukns as Unicode strings. - - """ - decode = decoder.decode - input = iter(input) - for chunck in input: - output = decode(chunck) - if output: - assert decoder.encoding is not None - yield decoder.encoding - yield output - break - else: - # Input exhausted without determining the encoding - output = decode(b'', final=True) - assert decoder.encoding is not None - yield decoder.encoding - if output: - yield output - return - - for chunck in input: - output = decode(chunck) - if output: - yield output - output = decode(b'', final=True) - if output: - yield output - - -def iter_encode(input, encoding=UTF8, errors='strict'): - """ - “Pull”-based encoder. - - :param input: An iterable of Unicode strings. - :param encoding: An :class:`Encoding` object or a label string. - :param errors: Type of error handling. See :func:`codecs.register`. - :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. - :returns: An iterable of byte strings. - - """ - # Fail early if `encoding` is an invalid label. 
- encode = IncrementalEncoder(encoding, errors).encode - return _iter_encode_generator(input, encode) - - -def _iter_encode_generator(input, encode): - for chunck in input: - output = encode(chunck) - if output: - yield output - output = encode('', final=True) - if output: - yield output - - -class IncrementalDecoder(object): - """ - “Push”-based decoder. - - :param fallback_encoding: - An :class:`Encoding` object or a label string. - The encoding to use if :obj:`input` does note have a BOM. - :param errors: Type of error handling. See :func:`codecs.register`. - :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. - - """ - def __init__(self, fallback_encoding, errors='replace'): - # Fail early if `encoding` is an invalid label. - self._fallback_encoding = _get_encoding(fallback_encoding) - self._errors = errors - self._buffer = b'' - self._decoder = None - #: The actual :class:`Encoding` that is being used, - #: or :obj:`None` if that is not determined yet. - #: (Ie. if there is not enough input yet to determine - #: if there is a BOM.) - self.encoding = None # Not known yet. - - def decode(self, input, final=False): - """Decode one chunk of the input. - - :param input: A byte string. - :param final: - Indicate that no more input is available. - Must be :obj:`True` if this is the last call. - :returns: An Unicode string. - - """ - decoder = self._decoder - if decoder is not None: - return decoder(input, final) - - input = self._buffer + input - encoding, input = _detect_bom(input) - if encoding is None: - if len(input) < 3 and not final: # Not enough data yet. - self._buffer = input - return '' - else: # No BOM - encoding = self._fallback_encoding - decoder = encoding.codec_info.incrementaldecoder(self._errors).decode - self._decoder = decoder - self.encoding = encoding - return decoder(input, final) - - -class IncrementalEncoder(object): - """ - “Push”-based encoder. - - :param encoding: An :class:`Encoding` object or a label string. - :param errors: Type of error handling. See :func:`codecs.register`. - :raises: :exc:`~exceptions.LookupError` for an unknown encoding label. - - .. method:: encode(input, final=False) - - :param input: An Unicode string. - :param final: - Indicate that no more input is available. - Must be :obj:`True` if this is the last call. - :returns: A byte string. - - """ - def __init__(self, encoding=UTF8, errors='strict'): - encoding = _get_encoding(encoding) - self.encode = encoding.codec_info.incrementalencoder(errors).encode diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/glob.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/glob.py deleted file mode 100644 index 87062b8187fa4f74a8c4edbaa60bd9a8b2d506a4..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/glob.py +++ /dev/null @@ -1,167 +0,0 @@ -""" -Filename globbing utility. Mostly a copy of `glob` from Python 3.5. - -Changes include: - * `yield from` and PEP3102 `*` removed. - * Hidden files are not ignored. -""" - -import os -import re -import fnmatch - -__all__ = ["glob", "iglob", "escape"] - - -def glob(pathname, recursive=False): - """Return a list of paths matching a pathname pattern. - - The pattern may contain simple shell-style wildcards a la - fnmatch. However, unlike fnmatch, filenames starting with a - dot are special cases that are not matched by '*' and '?' - patterns. 
- - If recursive is true, the pattern '**' will match any files and - zero or more directories and subdirectories. - """ - return list(iglob(pathname, recursive=recursive)) - - -def iglob(pathname, recursive=False): - """Return an iterator which yields the paths matching a pathname pattern. - - The pattern may contain simple shell-style wildcards a la - fnmatch. However, unlike fnmatch, filenames starting with a - dot are special cases that are not matched by '*' and '?' - patterns. - - If recursive is true, the pattern '**' will match any files and - zero or more directories and subdirectories. - """ - it = _iglob(pathname, recursive) - if recursive and _isrecursive(pathname): - s = next(it) # skip empty string - assert not s - return it - - -def _iglob(pathname, recursive): - dirname, basename = os.path.split(pathname) - glob_in_dir = glob2 if recursive and _isrecursive(basename) else glob1 - - if not has_magic(pathname): - if basename: - if os.path.lexists(pathname): - yield pathname - else: - # Patterns ending with a slash should match only directories - if os.path.isdir(dirname): - yield pathname - return - - if not dirname: - yield from glob_in_dir(dirname, basename) - return - # `os.path.split()` returns the argument itself as a dirname if it is a - # drive or UNC path. Prevent an infinite recursion if a drive or UNC path - # contains magic characters (i.e. r'\\?\C:'). - if dirname != pathname and has_magic(dirname): - dirs = _iglob(dirname, recursive) - else: - dirs = [dirname] - if not has_magic(basename): - glob_in_dir = glob0 - for dirname in dirs: - for name in glob_in_dir(dirname, basename): - yield os.path.join(dirname, name) - - -# These 2 helper functions non-recursively glob inside a literal directory. -# They return a list of basenames. `glob1` accepts a pattern while `glob0` -# takes a literal basename (so it only has to check for its existence). - - -def glob1(dirname, pattern): - if not dirname: - if isinstance(pattern, bytes): - dirname = os.curdir.encode('ASCII') - else: - dirname = os.curdir - try: - names = os.listdir(dirname) - except OSError: - return [] - return fnmatch.filter(names, pattern) - - -def glob0(dirname, basename): - if not basename: - # `os.path.split()` returns an empty basename for paths ending with a - # directory separator. 'q*x/' should match only directories. - if os.path.isdir(dirname): - return [basename] - else: - if os.path.lexists(os.path.join(dirname, basename)): - return [basename] - return [] - - -# This helper function recursively yields relative pathnames inside a literal -# directory. - - -def glob2(dirname, pattern): - assert _isrecursive(pattern) - yield pattern[:0] - for x in _rlistdir(dirname): - yield x - - -# Recursively yields relative pathnames inside a literal directory. 
-def _rlistdir(dirname): - if not dirname: - if isinstance(dirname, bytes): - dirname = os.curdir.encode('ASCII') - else: - dirname = os.curdir - try: - names = os.listdir(dirname) - except os.error: - return - for x in names: - yield x - path = os.path.join(dirname, x) if dirname else x - for y in _rlistdir(path): - yield os.path.join(x, y) - - -magic_check = re.compile('([*?[])') -magic_check_bytes = re.compile(b'([*?[])') - - -def has_magic(s): - if isinstance(s, bytes): - match = magic_check_bytes.search(s) - else: - match = magic_check.search(s) - return match is not None - - -def _isrecursive(pattern): - if isinstance(pattern, bytes): - return pattern == b'**' - else: - return pattern == '**' - - -def escape(pathname): - """Escape all special characters. - """ - # Escaping is done by wrapping any of "*?[" between square brackets. - # Metacharacters do not work in the drive part and shouldn't be escaped. - drive, pathname = os.path.splitdrive(pathname) - if isinstance(pathname, bytes): - pathname = magic_check_bytes.sub(br'[\1]', pathname) - else: - pathname = magic_check.sub(r'[\1]', pathname) - return drive + pathname diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/detection_utils.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/detection_utils.py deleted file mode 100644 index 2707eb430f4474c4a8a8968e5bf4caf2124d9f36..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/detection_utils.py +++ /dev/null @@ -1,623 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -""" -Common data processing utilities that are used in a -typical object detection data pipeline. -""" -import logging -import numpy as np -from typing import List, Union -import pycocotools.mask as mask_util -import torch -from PIL import Image - -from detectron2.structures import ( - BitMasks, - Boxes, - BoxMode, - Instances, - Keypoints, - PolygonMasks, - RotatedBoxes, - polygons_to_bitmask, -) -from detectron2.utils.file_io import PathManager - -from . import transforms as T -from .catalog import MetadataCatalog - -__all__ = [ - "SizeMismatchError", - "convert_image_to_rgb", - "check_image_size", - "transform_proposals", - "transform_instance_annotations", - "annotations_to_instances", - "annotations_to_instances_rotated", - "build_augmentation", - "build_transform_gen", - "create_keypoint_hflip_indices", - "filter_empty_instances", - "read_image", -] - - -class SizeMismatchError(ValueError): - """ - When loaded image has difference width/height compared with annotation. - """ - - -# https://en.wikipedia.org/wiki/YUV#SDTV_with_BT.601 -_M_RGB2YUV = [[0.299, 0.587, 0.114], [-0.14713, -0.28886, 0.436], [0.615, -0.51499, -0.10001]] -_M_YUV2RGB = [[1.0, 0.0, 1.13983], [1.0, -0.39465, -0.58060], [1.0, 2.03211, 0.0]] - -# https://www.exiv2.org/tags.html -_EXIF_ORIENT = 274 # exif 'Orientation' tag - - -def convert_PIL_to_numpy(image, format): - """ - Convert PIL image to numpy array of target format. 
- - Args: - image (PIL.Image): a PIL image - format (str): the format of output image - - Returns: - (np.ndarray): also see `read_image` - """ - if format is not None: - # PIL only supports RGB, so convert to RGB and flip channels over below - conversion_format = format - if format in ["BGR", "YUV-BT.601"]: - conversion_format = "RGB" - image = image.convert(conversion_format) - image = np.asarray(image) - # PIL squeezes out the channel dimension for "L", so make it HWC - if format == "L": - image = np.expand_dims(image, -1) - - # handle formats not supported by PIL - elif format == "BGR": - # flip channels if needed - image = image[:, :, ::-1] - elif format == "YUV-BT.601": - image = image / 255.0 - image = np.dot(image, np.array(_M_RGB2YUV).T) - - return image - - -def convert_image_to_rgb(image, format): - """ - Convert an image from given format to RGB. - - Args: - image (np.ndarray or Tensor): an HWC image - format (str): the format of input image, also see `read_image` - - Returns: - (np.ndarray): (H,W,3) RGB image in 0-255 range, can be either float or uint8 - """ - if isinstance(image, torch.Tensor): - image = image.cpu().numpy() - if format == "BGR": - image = image[:, :, [2, 1, 0]] - elif format == "YUV-BT.601": - image = np.dot(image, np.array(_M_YUV2RGB).T) - image = image * 255.0 - else: - if format == "L": - image = image[:, :, 0] - image = image.astype(np.uint8) - image = np.asarray(Image.fromarray(image, mode=format).convert("RGB")) - return image - - -def _apply_exif_orientation(image): - """ - Applies the exif orientation correctly. - - This code exists per the bug: - https://github.com/python-pillow/Pillow/issues/3973 - with the function `ImageOps.exif_transpose`. The Pillow source raises errors with - various methods, especially `tobytes` - - Function based on: - https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59 - https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527 - - Args: - image (PIL.Image): a PIL image - - Returns: - (PIL.Image): the PIL image with exif orientation applied, if applicable - """ - if not hasattr(image, "getexif"): - return image - - try: - exif = image.getexif() - except Exception: # https://github.com/facebookresearch/detectron2/issues/1885 - exif = None - - if exif is None: - return image - - orientation = exif.get(_EXIF_ORIENT) - - method = { - 2: Image.FLIP_LEFT_RIGHT, - 3: Image.ROTATE_180, - 4: Image.FLIP_TOP_BOTTOM, - 5: Image.TRANSPOSE, - 6: Image.ROTATE_270, - 7: Image.TRANSVERSE, - 8: Image.ROTATE_90, - }.get(orientation) - - if method is not None: - return image.transpose(method) - return image - - -def read_image(file_name, format=None): - """ - Read an image into the given format. - Will apply rotation and flipping if the image has such exif information. - - Args: - file_name (str): image file path - format (str): one of the supported image modes in PIL, or "BGR" or "YUV-BT.601". - - Returns: - image (np.ndarray): - an HWC image in the given format, which is 0-255, uint8 for - supported image modes in PIL or "BGR"; float (0-1 for Y) for YUV-BT.601. - """ - with PathManager.open(file_name, "rb") as f: - image = Image.open(f) - - # work around this bug: https://github.com/python-pillow/Pillow/issues/3973 - image = _apply_exif_orientation(image) - return convert_PIL_to_numpy(image, format) - - -def check_image_size(dataset_dict, image): - """ - Raise an error if the image does not match the size specified in the dict. 
- """ - if "width" in dataset_dict or "height" in dataset_dict: - image_wh = (image.shape[1], image.shape[0]) - expected_wh = (dataset_dict["width"], dataset_dict["height"]) - if not image_wh == expected_wh: - raise SizeMismatchError( - "Mismatched image shape{}, got {}, expect {}.".format( - " for image " + dataset_dict["file_name"] - if "file_name" in dataset_dict - else "", - image_wh, - expected_wh, - ) - + " Please check the width/height in your annotation." - ) - - # To ensure bbox always remap to original image size - if "width" not in dataset_dict: - dataset_dict["width"] = image.shape[1] - if "height" not in dataset_dict: - dataset_dict["height"] = image.shape[0] - - -def transform_proposals(dataset_dict, image_shape, transforms, *, proposal_topk, min_box_size=0): - """ - Apply transformations to the proposals in dataset_dict, if any. - - Args: - dataset_dict (dict): a dict read from the dataset, possibly - contains fields "proposal_boxes", "proposal_objectness_logits", "proposal_bbox_mode" - image_shape (tuple): height, width - transforms (TransformList): - proposal_topk (int): only keep top-K scoring proposals - min_box_size (int): proposals with either side smaller than this - threshold are removed - - The input dict is modified in-place, with abovementioned keys removed. A new - key "proposals" will be added. Its value is an `Instances` - object which contains the transformed proposals in its field - "proposal_boxes" and "objectness_logits". - """ - if "proposal_boxes" in dataset_dict: - # Transform proposal boxes - boxes = transforms.apply_box( - BoxMode.convert( - dataset_dict.pop("proposal_boxes"), - dataset_dict.pop("proposal_bbox_mode"), - BoxMode.XYXY_ABS, - ) - ) - boxes = Boxes(boxes) - objectness_logits = torch.as_tensor( - dataset_dict.pop("proposal_objectness_logits").astype("float32") - ) - - boxes.clip(image_shape) - keep = boxes.nonempty(threshold=min_box_size) - boxes = boxes[keep] - objectness_logits = objectness_logits[keep] - - proposals = Instances(image_shape) - proposals.proposal_boxes = boxes[:proposal_topk] - proposals.objectness_logits = objectness_logits[:proposal_topk] - dataset_dict["proposals"] = proposals - - -def transform_instance_annotations( - annotation, transforms, image_size, *, keypoint_hflip_indices=None -): - """ - Apply transforms to box, segmentation and keypoints annotations of a single instance. - - It will use `transforms.apply_box` for the box, and - `transforms.apply_coords` for segmentation polygons & keypoints. - If you need anything more specially designed for each data structure, - you'll need to implement your own version of this function or the transforms. - - Args: - annotation (dict): dict of instance annotations for a single instance. - It will be modified in-place. - transforms (TransformList or list[Transform]): - image_size (tuple): the height, width of the transformed image - keypoint_hflip_indices (ndarray[int]): see `create_keypoint_hflip_indices`. - - Returns: - dict: - the same input dict with fields "bbox", "segmentation", "keypoints" - transformed according to `transforms`. - The "bbox_mode" field will be set to XYXY_ABS. 
- """ - if isinstance(transforms, (tuple, list)): - transforms = T.TransformList(transforms) - # bbox is 1d (per-instance bounding box) - bbox = BoxMode.convert(annotation["bbox"], annotation["bbox_mode"], BoxMode.XYXY_ABS) - # clip transformed bbox to image size - bbox = transforms.apply_box(np.array([bbox]))[0].clip(min=0) - annotation["bbox"] = np.minimum(bbox, list(image_size + image_size)[::-1]) - annotation["bbox_mode"] = BoxMode.XYXY_ABS - - if "segmentation" in annotation: - # each instance contains 1 or more polygons - segm = annotation["segmentation"] - if isinstance(segm, list): - # polygons - polygons = [np.asarray(p).reshape(-1, 2) for p in segm] - annotation["segmentation"] = [ - p.reshape(-1) for p in transforms.apply_polygons(polygons) - ] - elif isinstance(segm, dict): - # RLE - mask = mask_util.decode(segm) - mask = transforms.apply_segmentation(mask) - assert tuple(mask.shape[:2]) == image_size - annotation["segmentation"] = mask - else: - raise ValueError( - "Cannot transform segmentation of type '{}'!" - "Supported types are: polygons as list[list[float] or ndarray]," - " COCO-style RLE as a dict.".format(type(segm)) - ) - - if "keypoints" in annotation: - keypoints = transform_keypoint_annotations( - annotation["keypoints"], transforms, image_size, keypoint_hflip_indices - ) - annotation["keypoints"] = keypoints - - return annotation - - -def transform_keypoint_annotations(keypoints, transforms, image_size, keypoint_hflip_indices=None): - """ - Transform keypoint annotations of an image. - If a keypoint is transformed out of image boundary, it will be marked "unlabeled" (visibility=0) - - Args: - keypoints (list[float]): Nx3 float in Detectron2's Dataset format. - Each point is represented by (x, y, visibility). - transforms (TransformList): - image_size (tuple): the height, width of the transformed image - keypoint_hflip_indices (ndarray[int]): see `create_keypoint_hflip_indices`. - When `transforms` includes horizontal flip, will use the index - mapping to flip keypoints. - """ - # (N*3,) -> (N, 3) - keypoints = np.asarray(keypoints, dtype="float64").reshape(-1, 3) - keypoints_xy = transforms.apply_coords(keypoints[:, :2]) - - # Set all out-of-boundary points to "unlabeled" - inside = (keypoints_xy >= np.array([0, 0])) & (keypoints_xy <= np.array(image_size[::-1])) - inside = inside.all(axis=1) - keypoints[:, :2] = keypoints_xy - keypoints[:, 2][~inside] = 0 - - # This assumes that HorizFlipTransform is the only one that does flip - do_hflip = sum(isinstance(t, T.HFlipTransform) for t in transforms.transforms) % 2 == 1 - - # Alternative way: check if probe points was horizontally flipped. 
- # probe = np.asarray([[0.0, 0.0], [image_width, 0.0]]) - # probe_aug = transforms.apply_coords(probe.copy()) - # do_hflip = np.sign(probe[1][0] - probe[0][0]) != np.sign(probe_aug[1][0] - probe_aug[0][0]) # noqa - - # If flipped, swap each keypoint with its opposite-handed equivalent - if do_hflip: - if keypoint_hflip_indices is None: - raise ValueError("Cannot flip keypoints without providing flip indices!") - if len(keypoints) != len(keypoint_hflip_indices): - raise ValueError( - "Keypoint data has {} points, but metadata " - "contains {} points!".format(len(keypoints), len(keypoint_hflip_indices)) - ) - keypoints = keypoints[np.asarray(keypoint_hflip_indices, dtype=np.int32), :] - - # Maintain COCO convention that if visibility == 0 (unlabeled), then x, y = 0 - keypoints[keypoints[:, 2] == 0] = 0 - return keypoints - - -def annotations_to_instances(annos, image_size, mask_format="polygon"): - """ - Create an :class:`Instances` object used by the models, - from instance annotations in the dataset dict. - - Args: - annos (list[dict]): a list of instance annotations in one image, each - element for one instance. - image_size (tuple): height, width - - Returns: - Instances: - It will contain fields "gt_boxes", "gt_classes", - "gt_masks", "gt_keypoints", if they can be obtained from `annos`. - This is the format that builtin models expect. - """ - boxes = ( - np.stack( - [BoxMode.convert(obj["bbox"], obj["bbox_mode"], BoxMode.XYXY_ABS) for obj in annos] - ) - if len(annos) - else np.zeros((0, 4)) - ) - target = Instances(image_size) - target.gt_boxes = Boxes(boxes) - - classes = [int(obj["category_id"]) for obj in annos] - classes = torch.tensor(classes, dtype=torch.int64) - target.gt_classes = classes - - if len(annos) and "segmentation" in annos[0]: - segms = [obj["segmentation"] for obj in annos] - if mask_format == "polygon": - try: - masks = PolygonMasks(segms) - except ValueError as e: - raise ValueError( - "Failed to use mask_format=='polygon' from the given annotations!" - ) from e - else: - assert mask_format == "bitmask", mask_format - masks = [] - for segm in segms: - if isinstance(segm, list): - # polygon - masks.append(polygons_to_bitmask(segm, *image_size)) - elif isinstance(segm, dict): - # COCO RLE - masks.append(mask_util.decode(segm)) - elif isinstance(segm, np.ndarray): - assert segm.ndim == 2, "Expect segmentation of 2 dimensions, got {}.".format( - segm.ndim - ) - # mask array - masks.append(segm) - else: - raise ValueError( - "Cannot convert segmentation of type '{}' to BitMasks!" - "Supported types are: polygons as list[list[float] or ndarray]," - " COCO-style RLE as a dict, or a binary segmentation mask " - " in a 2D numpy array of shape HxW.".format(type(segm)) - ) - # torch.from_numpy does not support array with negative stride. - masks = BitMasks( - torch.stack([torch.from_numpy(np.ascontiguousarray(x)) for x in masks]) - ) - target.gt_masks = masks - - if len(annos) and "keypoints" in annos[0]: - kpts = [obj.get("keypoints", []) for obj in annos] - target.gt_keypoints = Keypoints(kpts) - - return target - - -def annotations_to_instances_rotated(annos, image_size): - """ - Create an :class:`Instances` object used by the models, - from instance annotations in the dataset dict. - Compared to `annotations_to_instances`, this function is for rotated boxes only - - Args: - annos (list[dict]): a list of instance annotations in one image, each - element for one instance. 
- image_size (tuple): height, width - - Returns: - Instances: - Containing fields "gt_boxes", "gt_classes", - if they can be obtained from `annos`. - This is the format that builtin models expect. - """ - boxes = [obj["bbox"] for obj in annos] - target = Instances(image_size) - boxes = target.gt_boxes = RotatedBoxes(boxes) - boxes.clip(image_size) - - classes = [obj["category_id"] for obj in annos] - classes = torch.tensor(classes, dtype=torch.int64) - target.gt_classes = classes - - return target - - -def filter_empty_instances( - instances, by_box=True, by_mask=True, box_threshold=1e-5, return_mask=False -): - """ - Filter out empty instances in an `Instances` object. - - Args: - instances (Instances): - by_box (bool): whether to filter out instances with empty boxes - by_mask (bool): whether to filter out instances with empty masks - box_threshold (float): minimum width and height to be considered non-empty - return_mask (bool): whether to return boolean mask of filtered instances - - Returns: - Instances: the filtered instances. - tensor[bool], optional: boolean mask of filtered instances - """ - assert by_box or by_mask - r = [] - if by_box: - r.append(instances.gt_boxes.nonempty(threshold=box_threshold)) - if instances.has("gt_masks") and by_mask: - r.append(instances.gt_masks.nonempty()) - - # TODO: can also filter visible keypoints - - if not r: - return instances - m = r[0] - for x in r[1:]: - m = m & x - if return_mask: - return instances[m], m - return instances[m] - - -def create_keypoint_hflip_indices(dataset_names: Union[str, List[str]]) -> List[int]: - """ - Args: - dataset_names: list of dataset names - - Returns: - list[int]: a list of size=#keypoints, storing the - horizontally-flipped keypoint indices. - """ - if isinstance(dataset_names, str): - dataset_names = [dataset_names] - - check_metadata_consistency("keypoint_names", dataset_names) - check_metadata_consistency("keypoint_flip_map", dataset_names) - - meta = MetadataCatalog.get(dataset_names[0]) - names = meta.keypoint_names - # TODO flip -> hflip - flip_map = dict(meta.keypoint_flip_map) - flip_map.update({v: k for k, v in flip_map.items()}) - flipped_names = [i if i not in flip_map else flip_map[i] for i in names] - flip_indices = [names.index(i) for i in flipped_names] - return flip_indices - - -def gen_crop_transform_with_instance(crop_size, image_size, instance): - """ - Generate a CropTransform so that the cropping region contains - the center of the given instance. - - Args: - crop_size (tuple): h, w in pixels - image_size (tuple): h, w - instance (dict): an annotation dict of one instance, in Detectron2's - dataset format. - """ - crop_size = np.asarray(crop_size, dtype=np.int32) - bbox = BoxMode.convert(instance["bbox"], instance["bbox_mode"], BoxMode.XYXY_ABS) - center_yx = (bbox[1] + bbox[3]) * 0.5, (bbox[0] + bbox[2]) * 0.5 - assert ( - image_size[0] >= center_yx[0] and image_size[1] >= center_yx[1] - ), "The annotation bounding box is outside of the image!" - assert ( - image_size[0] >= crop_size[0] and image_size[1] >= crop_size[1] - ), "Crop size is larger than image size!" 
- - min_yx = np.maximum(np.floor(center_yx).astype(np.int32) - crop_size, 0) - max_yx = np.maximum(np.asarray(image_size, dtype=np.int32) - crop_size, 0) - max_yx = np.minimum(max_yx, np.ceil(center_yx).astype(np.int32)) - - y0 = np.random.randint(min_yx[0], max_yx[0] + 1) - x0 = np.random.randint(min_yx[1], max_yx[1] + 1) - return T.CropTransform(x0, y0, crop_size[1], crop_size[0]) - - -def check_metadata_consistency(key, dataset_names): - """ - Check that the datasets have consistent metadata. - - Args: - key (str): a metadata key - dataset_names (list[str]): a list of dataset names - - Raises: - AttributeError: if the key does not exist in the metadata - ValueError: if the given datasets do not have the same metadata values defined by key - """ - if len(dataset_names) == 0: - return - logger = logging.getLogger(__name__) - entries_per_dataset = [getattr(MetadataCatalog.get(d), key) for d in dataset_names] - for idx, entry in enumerate(entries_per_dataset): - if entry != entries_per_dataset[0]: - logger.error( - "Metadata '{}' for dataset '{}' is '{}'".format(key, dataset_names[idx], str(entry)) - ) - logger.error( - "Metadata '{}' for dataset '{}' is '{}'".format( - key, dataset_names[0], str(entries_per_dataset[0]) - ) - ) - raise ValueError("Datasets have different metadata '{}'!".format(key)) - - -def build_augmentation(cfg, is_train): - """ - Create a list of default :class:`Augmentation` from config. - Now it includes resizing and flipping. - - Returns: - list[Augmentation] - """ - if is_train: - min_size = cfg.INPUT.MIN_SIZE_TRAIN - max_size = cfg.INPUT.MAX_SIZE_TRAIN - sample_style = cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING - else: - min_size = cfg.INPUT.MIN_SIZE_TEST - max_size = cfg.INPUT.MAX_SIZE_TEST - sample_style = "choice" - augmentation = [T.ResizeShortestEdge(min_size, max_size, sample_style)] - if is_train and cfg.INPUT.RANDOM_FLIP != "none": - augmentation.append( - T.RandomFlip( - horizontal=cfg.INPUT.RANDOM_FLIP == "horizontal", - vertical=cfg.INPUT.RANDOM_FLIP == "vertical", - ) - ) - return augmentation - - -build_transform_gen = build_augmentation -""" -Alias for backward-compatibility. 
-""" diff --git a/spaces/Banbri/zcvzcv/src/app/queries/predictWithOpenAI.ts b/spaces/Banbri/zcvzcv/src/app/queries/predictWithOpenAI.ts deleted file mode 100644 index a006de9667ce47ed4bf35e08b2ef46fc78c988c1..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/app/queries/predictWithOpenAI.ts +++ /dev/null @@ -1,33 +0,0 @@ -"use server" - -import type { ChatCompletionMessage } from "openai/resources/chat" -import OpenAI from "openai" - -export async function predict(inputs: string): Promise { - const openaiApiKey = `${process.env.AUTH_OPENAI_API_KEY || ""}` - const openaiApiBaseUrl = `${process.env.LLM_OPENAI_API_BASE_URL || "https://api.openai.com/v1"}` - const openaiApiModel = `${process.env.LLM_OPENAI_API_MODEL || "gpt-3.5-turbo"}` - - const openai = new OpenAI({ - apiKey: openaiApiKey, - baseURL: openaiApiBaseUrl, - }) - - const messages: ChatCompletionMessage[] = [ - { role: "system", content: inputs }, - ] - - try { - const res = await openai.chat.completions.create({ - messages: messages, - stream: false, - model: openaiApiModel, - temperature: 0.8 - }) - - return res.choices[0].message.content || "" - } catch (err) { - console.error(`error during generation: ${err}`) - return "" - } -} \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/lib/infer_pack/attentions.py b/spaces/Bart92/RVC_HF/lib/infer_pack/attentions.py deleted file mode 100644 index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from lib.infer_pack import commons -from lib.infer_pack import modules -from lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - 
self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # 
reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). 
- x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Benson/text-generation/Examples/Asus Rt-n56u Firmware Download.md b/spaces/Benson/text-generation/Examples/Asus Rt-n56u Firmware Download.md deleted file mode 100644 index 9a5040d9069b80f71788377021294b97540735f8..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Asus Rt-n56u Firmware Download.md +++ /dev/null @@ -1,93 +0,0 @@ -
-

How to Download and Update the ASUS RT-N56U Router Firmware

-

Firmware is the software program that controls your router's hardware functions. It provides the router's features and settings, such as the wireless network, security, parental controls, and the guest network. Firmware also affects the router's performance and stability, so it is important to keep it updated regularly.

-

asus rt-n56u firmware download


Download Zip –––––>>> https://bltlly.com/2v6MUQ



-

Updating the firmware can bring many benefits, such as improving the speed and reliability of your wireless connection, fixing bugs and security issues, adding new features and functions, and improving compatibility with other devices. However, updating the firmware also carries some risks, such as losing your current settings, causing errors or malfunctions, or even bricking the router if something goes wrong.

-

Therefore, before updating the firmware of your ASUS RT-N56U router, you need to know some basic information and precautions. In this article, we walk you through the steps of downloading and updating your router's firmware, as well as resetting and troubleshooting it. Follow these steps carefully and you will enjoy a better and more secure wireless experience with your router.

-

What You Need to Know Before Updating the Firmware

-

Before you start updating your router's firmware, you should prepare a few things and take some precautions. Here are the main tips to follow:

• Make sure your router is connected to a stable power source, and do not turn it off or unplug it during the update process.
• Make sure your computer is connected to the router over a wired or wireless connection. Do not use a VPN or proxy service that could interfere with the update.
• Make sure you have downloaded the correct firmware file for your router model from the official ASUS website. Do not use third-party or unofficial firmware files, which can damage your router.
• Make sure you have enough free space on your computer or USB flash drive to store the firmware file; the file size varies with the firmware version (a quick way to check your free space is sketched just after this list).
• Make sure you have read and understood the instructions and warnings on the ASUS website before updating the firmware. Follow them carefully and do not skip any steps.
  • -
-
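As a quick way to act on the free-space tip above, here is a minimal Python sketch that reports how much space is left on the drive where you plan to save the firmware file. The path "." (the current directory) is only an example; point it at the drive or folder you actually intend to use.

# Report free space on the drive where the firmware file will be saved.
# "." (current directory) is only an example target; point it elsewhere if needed.
import shutil

usage = shutil.disk_usage(".")
free_mb = usage.free / (1024 * 1024)
print(f"Free space available: {free_mb:.0f} MB")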

How to Check Your Router's Current Firmware Version

-

To check your router's current firmware version, you need to access its web interface. The web interface is a graphical user interface (GUI) that lets you manage and configure your router's settings. To access it, follow these steps:

1. Open a web browser on your computer and enter the router's IP address or URL in the address bar. The default LAN IP address is 192.168.1.1 and the default URL is http://www.asusrouter.com.
2. Enter your login username and password on the login page and click [Sign In]. The default username and password are both admin; if you have changed them, use the credentials you set.
3. In the web interface, click [Administration] in the left-hand menu and then click [Firmware Upgrade] in the top menu.
4. On the firmware upgrade page you will see your router's current firmware version and the latest firmware version available on the ASUS website. You can also check the firmware's release notes and update history.
  8. -
-

If your router's firmware is already up to date, there is nothing more to do. If a newer version is available, however, you can download it and install it either through the WebGUI or manually.
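If you want to confirm from a script that the router's web interface is actually reachable before you log in, a minimal Python sketch like the one below can help. It assumes the default LAN IP address 192.168.1.1 and uses only the standard library; adjust the address if you have changed it.

# Minimal reachability check for the router's web interface.
# ROUTER_URL assumes the default LAN IP 192.168.1.1; change it if yours differs.
import urllib.error
import urllib.request

ROUTER_URL = "http://192.168.1.1"

try:
    # The login page of a local router should answer within a few seconds.
    with urllib.request.urlopen(ROUTER_URL, timeout=5) as response:
        print(f"Web interface reachable (HTTP {response.status})")
except urllib.error.HTTPError as exc:
    # An HTTP error status (e.g. 401) still means the interface is answering.
    print(f"Web interface reachable (HTTP {exc.code})")
except urllib.error.URLError as exc:
    print(f"Could not reach {ROUTER_URL}: {exc.reason}")

If the script reports that the address cannot be reached, fix the connection to the router before attempting any firmware upload.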

-

How to Download the Latest Firmware Version from the ASUS Website

- -
    -
1. Go to the official ASUS website at https://www.asus.com.
2. Click [Support] in the top menu and then click [Drivers & Tools] in the drop-down menu.
3. Enter your router's model name (RT-N56U) in the search box and click [Search].
4. Select your router model from the search results and then click [Driver & Utility] in the left-hand menu.
5. Select your operating system from the drop-down menu and then click [Show All].
6. Find the latest firmware version for your router and click [Download].
7. Save the firmware file (.zip) to your computer or USB flash drive, and note its location and name.
  14. -
-

You have now downloaded the latest firmware version for your router. You can install it either through the WebGUI or manually.

-

How to Update the Firmware Through the WebGUI

-

To update your router's firmware through the WebGUI, follow these steps:

-
    -
1. Access your router's web interface as described in the previous section.
2. In the web interface, click [Administration] in the left-hand menu and then click [Firmware Upgrade] in the top menu.
3. On the firmware upgrade page, click [Choose File] and select the firmware file (.zip) you downloaded from the ASUS website.
4. Click [Upload] and wait for the upload to finish. Do not turn off or unplug the router during this process.
5. When the upload has finished, click [OK] to start the update. Do not turn off or unplug the router during this process.
6. Wait about 5 minutes for the update to complete. The router restarts automatically afterwards.
  12. -
-

Congratulations! You have successfully updated your router's firmware through the WebGUI. You can verify the new firmware version in the web interface.

How to Update the Firmware Manually

- -
    -
1. Unzip the firmware file (.zip) you downloaded from the ASUS website. You will get a firmware image (.trx) and a readme file (.txt); a short scripted version of this step is sketched just after this list.
2. Connect your computer to the router with a LAN cable. Do not use a wireless connection.
3. Assign a static IP address to your computer. The address must be on the same subnet as the router's LAN IP address; for example, if the router's IP address is 192.168.1.1, you can assign 192.168.1.10 to your computer.
4. Disable any firewall or antivirus software on your computer that could block the update process.
5. Open a web browser on your computer and enter 192.168.1.1 in the address bar. You will see the router's recovery page.
6. Click [Browse] and select the firmware image (.trx) you extracted from the download.
7. Click [Upload] and wait for the upload to finish. Do not turn off or unplug the router during this process.
8. When the upload has finished, wait about 5 minutes for the update to complete. The router restarts automatically afterwards.
  16. -
-
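Step 1 above can also be scripted if you prefer. The following minimal Python sketch unpacks the downloaded archive and locates the .trx image; the archive name FW_RT_N56U.zip is only a placeholder, so substitute the name of the file you actually saved.

# Unpack the downloaded firmware archive and locate the .trx image.
# "FW_RT_N56U.zip" is a placeholder name; use the file you actually downloaded.
import zipfile
from pathlib import Path

archive = Path("FW_RT_N56U.zip")
extract_dir = Path("firmware_unpacked")

with zipfile.ZipFile(archive) as zf:
    zf.extractall(extract_dir)

trx_images = sorted(extract_dir.rglob("*.trx"))
if trx_images:
    print(f"Firmware image ready to upload: {trx_images[0]}")
else:
    print("No .trx file found - inspect the archive contents manually.")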

Congratulations! You have updated your router's firmware manually. You can verify the new firmware version in the web interface.

-

How to Reset the Router After Updating the Firmware

-

After updating your router's firmware, you may need to reset it to its factory default settings and configure it again. This helps you avoid potential problems or conflicts caused by the firmware update. To reset the router after the update, follow these steps:

-
    -
1. Locate the reset button on the back of the router. It is a small recessed hole that you can press with a pin or a paper clip.
2. Press and hold the reset button for about 10 seconds, until the power LED starts to flash.
3. Release the reset button and wait about 2 minutes for the router to restart.
  6. - -
-

Note: resetting the router erases all of your current settings and configuration, so make sure you back them up before resetting.

How to Troubleshoot Common Firmware Update Problems

-

Sometimes you may run into problems or errors during or after the firmware update. Here are some common issues and the solutions you can try:

-
    -
• If you cannot access the router's web interface after the update, try clearing your browser's cache and cookies, or use a different browser or device.
• If you cannot connect to the internet after the update, check your WAN settings and make sure they are correct. You can also try restarting your router and modem, or contact your ISP for help.
• If your wireless network is not working properly after the update, check your wireless settings and make sure they are correct. You can also try changing the wireless channel, mode, or security settings, or scan for nearby wireless networks and avoid interference.
• If your router is unresponsive or stuck in a reboot loop after the update, reset it to factory default settings and configure it again. You can also try flashing the firmware file manually.
• If your router is bricked or damaged after the update, contact ASUS support for help. You can also try using rescue mode or the recovery tool to restore it.
  • -
-

If none of these solutions works for you, you can look for more information and help on the ASUS website or forum, or contact ASUS support directly.

-

Conclusion

- -

If you have any questions or comments about this article, feel free to leave a comment below. We would love to hear from you and help with any problems. Thanks for reading, and happy browsing!

-

Frequently Asked Questions

-

Here are some frequently asked questions and answers about updating the firmware:

-
    -
1. What is the latest firmware version for the ASUS RT-N56U router?
   The latest firmware version for the ASUS RT-N56U as of June 2023 is 3.0.0.4.382_52288. You can check it on the ASUS website or in your router's web interface.
2. How often should I update my router's firmware?
   There is no fixed rule for how often to update; it depends on your needs and preferences. It is a good idea, however, to check for new firmware versions regularly and install them when they become available. This helps keep your router up to date and secure.
3. Can I downgrade my router's firmware?
   Yes, you can downgrade the firmware if you are not happy with the new version or run into problems. It is not recommended, because it can cause conflicts with the router's settings and functions. If you do want to downgrade, follow the same steps as for a manual update, but use an older firmware file instead.
4. Can I use custom firmware on my router?
   Yes, you can use custom firmware if you want features and options that the official firmware does not offer. This is also not recommended, because it can void your warranty, damage the router, or introduce security risks. If you do want to use custom firmware, proceed carefully and follow the custom firmware developer's instructions.
  8. - -

64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/roi_heads.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/roi_heads.py deleted file mode 100644 index f2be28b91afad643c624e3b09f60ef9c7b7062e2..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/roi_heads.py +++ /dev/null @@ -1,728 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import logging -import numpy as np -from typing import Dict, List, Optional, Tuple, Union -import torch -from torch import nn - -from detectron2.layers import ShapeSpec -from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou -from detectron2.utils.events import get_event_storage -from detectron2.utils.registry import Registry - -from ..backbone.resnet import BottleneckBlock, make_stage -from ..matcher import Matcher -from ..poolers import ROIPooler -from ..proposal_generator.proposal_utils import add_ground_truth_to_proposals -from ..sampling import subsample_labels -from .box_head import build_box_head -from .fast_rcnn import FastRCNNOutputLayers -from .keypoint_head import build_keypoint_head -from .mask_head import build_mask_head - -ROI_HEADS_REGISTRY = Registry("ROI_HEADS") -ROI_HEADS_REGISTRY.__doc__ = """ -Registry for ROI heads in a generalized R-CNN model. -ROIHeads take feature maps and region proposals, and -perform per-region computation. - -The registered object will be called with `obj(cfg, input_shape)`. -The call is expected to return an :class:`ROIHeads`. -""" - -logger = logging.getLogger(__name__) - - -def build_roi_heads(cfg, input_shape): - """ - Build ROIHeads defined by `cfg.MODEL.ROI_HEADS.NAME`. - """ - name = cfg.MODEL.ROI_HEADS.NAME - return ROI_HEADS_REGISTRY.get(name)(cfg, input_shape) - - -def select_foreground_proposals( - proposals: List[Instances], bg_label: int -) -> Tuple[List[Instances], List[torch.Tensor]]: - """ - Given a list of N Instances (for N images), each containing a `gt_classes` field, - return a list of Instances that contain only instances with `gt_classes != -1 && - gt_classes != bg_label`. - - Args: - proposals (list[Instances]): A list of N Instances, where N is the number of - images in the batch. - bg_label: label index of background class. - - Returns: - list[Instances]: N Instances, each contains only the selected foreground instances. - list[Tensor]: N boolean vector, correspond to the selection mask of - each Instances object. True for selected instances. - """ - assert isinstance(proposals, (list, tuple)) - assert isinstance(proposals[0], Instances) - assert proposals[0].has("gt_classes") - fg_proposals = [] - fg_selection_masks = [] - for proposals_per_image in proposals: - gt_classes = proposals_per_image.gt_classes - fg_selection_mask = (gt_classes != -1) & (gt_classes != bg_label) - fg_idxs = fg_selection_mask.nonzero().squeeze(1) - fg_proposals.append(proposals_per_image[fg_idxs]) - fg_selection_masks.append(fg_selection_mask) - return fg_proposals, fg_selection_masks - - -def select_proposals_with_visible_keypoints( - proposals: List[Instances], -) -> List[Instances]: - """ - Args: - proposals (list[Instances]): a list of N Instances, where N is the - number of images. - - Returns: - proposals: only contains proposals with at least one visible keypoint. - - Note that this is still slightly different from Detectron. 
- In Detectron, proposals for training keypoint head are re-sampled from - all the proposals with IOU>threshold & >=1 visible keypoint. - - Here, the proposals are first sampled from all proposals with - IOU>threshold, then proposals with no visible keypoint are filtered out. - This strategy seems to make no difference on Detectron and is easier to implement. - """ - ret = [] - all_num_fg = [] - for proposals_per_image in proposals: - # If empty/unannotated image (hard negatives), skip filtering for train - if len(proposals_per_image) == 0: - ret.append(proposals_per_image) - continue - gt_keypoints = proposals_per_image.gt_keypoints.tensor - # #fg x K x 3 - vis_mask = gt_keypoints[:, :, 2] >= 1 - xs, ys = gt_keypoints[:, :, 0], gt_keypoints[:, :, 1] - proposal_boxes = proposals_per_image.proposal_boxes.tensor.unsqueeze( - dim=1 - ) # #fg x 1 x 4 - kp_in_box = ( - (xs >= proposal_boxes[:, :, 0]) - & (xs <= proposal_boxes[:, :, 2]) - & (ys >= proposal_boxes[:, :, 1]) - & (ys <= proposal_boxes[:, :, 3]) - ) - selection = (kp_in_box & vis_mask).any(dim=1) - selection_idxs = torch.nonzero(selection).squeeze(1) - all_num_fg.append(selection_idxs.numel()) - ret.append(proposals_per_image[selection_idxs]) - - storage = get_event_storage() - storage.put_scalar("keypoint_head/num_fg_samples", np.mean(all_num_fg)) - return ret - - -class ROIHeads(torch.nn.Module): - """ - ROIHeads perform all per-region computation in an R-CNN. - - It contains logic of cropping the regions, extract per-region features, - and make per-region predictions. - - It can have many variants, implemented as subclasses of this class. - """ - - def __init__(self, cfg, input_shape: Dict[str, ShapeSpec]): - super(ROIHeads, self).__init__() - # fmt: off - self.batch_size_per_image = cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE - self.positive_sample_fraction = cfg.MODEL.ROI_HEADS.POSITIVE_FRACTION - self.in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES - self.num_classes = cfg.MODEL.ROI_HEADS.NUM_CLASSES - self.proposal_append_gt = cfg.MODEL.ROI_HEADS.PROPOSAL_APPEND_GT - # fmt: on - - # Matcher to assign box proposals to gt boxes - self.proposal_matcher = Matcher( - cfg.MODEL.ROI_HEADS.IOU_THRESHOLDS, - cfg.MODEL.ROI_HEADS.IOU_LABELS, - allow_low_quality_matches=False, - ) - - def _sample_proposals( - self, - matched_idxs: torch.Tensor, - matched_labels: torch.Tensor, - gt_classes: torch.Tensor, - ) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Based on the matching between N proposals and M groundtruth, - sample the proposals and set their classification labels. - - Args: - matched_idxs (Tensor): a vector of length N, each is the best-matched - gt index in [0, M) for each proposal. - matched_labels (Tensor): a vector of length N, the matcher's label - (one of cfg.MODEL.ROI_HEADS.IOU_LABELS) for each proposal. - gt_classes (Tensor): a vector of length M. - - Returns: - Tensor: a vector of indices of sampled proposals. Each is in [0, N). - Tensor: a vector of the same length, the classification label for - each sampled proposal. Each sample is labeled as either a category in - [0, num_classes) or the background (num_classes). 
- """ - has_gt = gt_classes.numel() > 0 - # Get the corresponding GT for each proposal - if has_gt: - gt_classes = gt_classes[matched_idxs] - # Label unmatched proposals (0 label from matcher) as background (label=num_classes) - gt_classes[matched_labels == 0] = self.num_classes - # Label ignore proposals (-1 label) - gt_classes[matched_labels == -1] = -1 - else: - gt_classes = torch.zeros_like(matched_idxs) + self.num_classes - - sampled_fg_idxs, sampled_bg_idxs = subsample_labels( - gt_classes, - self.batch_size_per_image, - self.positive_sample_fraction, - self.num_classes, - ) - - sampled_idxs = torch.cat([sampled_fg_idxs, sampled_bg_idxs], dim=0) - return sampled_idxs, gt_classes[sampled_idxs] - - @torch.no_grad() - def label_and_sample_proposals( - self, proposals: List[Instances], targets: List[Instances] - ) -> List[Instances]: - """ - Prepare some proposals to be used to train the ROI heads. - It performs box matching between `proposals` and `targets`, and assigns - training labels to the proposals. - It returns ``self.batch_size_per_image`` random samples from proposals and groundtruth - boxes, with a fraction of positives that is no larger than - ``self.positive_sample_fraction``. - - Args: - See :meth:`ROIHeads.forward` - - Returns: - list[Instances]: - length `N` list of `Instances`s containing the proposals - sampled for training. Each `Instances` has the following fields: - - - proposal_boxes: the proposal boxes - - gt_boxes: the ground-truth box that the proposal is assigned to - (this is only meaningful if the proposal has a label > 0; if label = 0 - then the ground-truth box is random) - - Other fields such as "gt_classes", "gt_masks", that's included in `targets`. - """ - gt_boxes = [x.gt_boxes for x in targets] - # Augment proposals with ground-truth boxes. - # In the case of learned proposals (e.g., RPN), when training starts - # the proposals will be low quality due to random initialization. - # It's possible that none of these initial - # proposals have high enough overlap with the gt objects to be used - # as positive examples for the second stage components (box head, - # cls head, mask head). Adding the gt boxes to the set of proposals - # ensures that the second stage components will have some positive - # examples from the start of training. For RPN, this augmentation improves - # convergence and empirically improves box AP on COCO by about 0.5 - # points (under one tested configuration). - if self.proposal_append_gt: - proposals = add_ground_truth_to_proposals(gt_boxes, proposals) - - proposals_with_gt = [] - - num_fg_samples = [] - num_bg_samples = [] - for proposals_per_image, targets_per_image in zip(proposals, targets): - has_gt = len(targets_per_image) > 0 - match_quality_matrix = pairwise_iou( - targets_per_image.gt_boxes, proposals_per_image.proposal_boxes - ) - matched_idxs, matched_labels = self.proposal_matcher(match_quality_matrix) - sampled_idxs, gt_classes = self._sample_proposals( - matched_idxs, matched_labels, targets_per_image.gt_classes - ) - - # Set target attributes of the sampled proposals: - proposals_per_image = proposals_per_image[sampled_idxs] - proposals_per_image.gt_classes = gt_classes - - # We index all the attributes of targets that start with "gt_" - # and have not been added to proposals yet (="gt_classes"). 
- if has_gt: - sampled_targets = matched_idxs[sampled_idxs] - # NOTE: here the indexing waste some compute, because heads - # like masks, keypoints, etc, will filter the proposals again, - # (by foreground/background, or number of keypoints in the image, etc) - # so we essentially index the data twice. - for (trg_name, trg_value) in targets_per_image.get_fields().items(): - if trg_name.startswith("gt_") and not proposals_per_image.has( - trg_name - ): - proposals_per_image.set(trg_name, trg_value[sampled_targets]) - else: - gt_boxes = Boxes( - targets_per_image.gt_boxes.tensor.new_zeros((len(sampled_idxs), 4)) - ) - proposals_per_image.gt_boxes = gt_boxes - - num_bg_samples.append((gt_classes == self.num_classes).sum().item()) - num_fg_samples.append(gt_classes.numel() - num_bg_samples[-1]) - proposals_with_gt.append(proposals_per_image) - - # Log the number of fg/bg samples that are selected for training ROI heads - storage = get_event_storage() - storage.put_scalar("roi_head/num_fg_samples", np.mean(num_fg_samples)) - storage.put_scalar("roi_head/num_bg_samples", np.mean(num_bg_samples)) - - return proposals_with_gt - - def forward( - self, - images: ImageList, - features: Dict[str, torch.Tensor], - proposals: List[Instances], - targets: Optional[List[Instances]] = None, - ) -> Tuple[List[Instances], Dict[str, torch.Tensor]]: - """ - Args: - images (ImageList): - features (dict[str,Tensor]): input data as a mapping from feature - map name to tensor. Axis 0 represents the number of images `N` in - the input data; axes 1-3 are channels, height, and width, which may - vary between feature maps (e.g., if a feature pyramid is used). - proposals (list[Instances]): length `N` list of `Instances`. The i-th - `Instances` contains object proposals for the i-th input image, - with fields "proposal_boxes" and "objectness_logits". - targets (list[Instances], optional): length `N` list of `Instances`. The i-th - `Instances` contains the ground-truth per-instance annotations - for the i-th input image. Specify `targets` during training only. - It may have the following fields: - - - gt_boxes: the bounding box of each instance. - - gt_classes: the label for each instance with a category ranging in [0, #class]. - - gt_masks: PolygonMasks or BitMasks, the ground-truth masks of each instance. - - gt_keypoints: NxKx3, the groud-truth keypoints for each instance. - - Returns: - list[Instances]: length `N` list of `Instances` containing the - detected instances. Returned during inference only; may be [] during training. - - dict[str->Tensor]: - mapping from a named loss to a tensor storing the loss. Used during training only. - """ - raise NotImplementedError() - - -@ROI_HEADS_REGISTRY.register() -class Res5ROIHeads(ROIHeads): - """ - The ROIHeads in a typical "C4" R-CNN model, where - the box and mask head share the cropping and - the per-region feature computation by a Res5 block. 
- """ - - def __init__(self, cfg, input_shape): - super().__init__(cfg, input_shape) - - assert len(self.in_features) == 1 - - # fmt: off - pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION - pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE - pooler_scales = (1.0 / input_shape[self.in_features[0]].stride, ) - sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO - self.mask_on = cfg.MODEL.MASK_ON - # fmt: on - assert not cfg.MODEL.KEYPOINT_ON - - self.pooler = ROIPooler( - output_size=pooler_resolution, - scales=pooler_scales, - sampling_ratio=sampling_ratio, - pooler_type=pooler_type, - ) - - self.res5, out_channels = self._build_res5_block(cfg) - self.box_predictor = FastRCNNOutputLayers( - cfg, ShapeSpec(channels=out_channels, height=1, width=1) - ) - - if self.mask_on: - self.mask_head = build_mask_head( - cfg, - ShapeSpec( - channels=out_channels, - width=pooler_resolution, - height=pooler_resolution, - ), - ) - - def _build_res5_block(self, cfg): - # fmt: off - stage_channel_factor = 2 ** 3 # res5 is 8x res2 - num_groups = cfg.MODEL.RESNETS.NUM_GROUPS - width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP - bottleneck_channels = num_groups * width_per_group * stage_channel_factor - out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS * stage_channel_factor - stride_in_1x1 = cfg.MODEL.RESNETS.STRIDE_IN_1X1 - norm = cfg.MODEL.RESNETS.NORM - assert not cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE[-1], \ - "Deformable conv is not yet supported in res5 head." - # fmt: on - - blocks = make_stage( - BottleneckBlock, - 3, - first_stride=2, - in_channels=out_channels // 2, - bottleneck_channels=bottleneck_channels, - out_channels=out_channels, - num_groups=num_groups, - norm=norm, - stride_in_1x1=stride_in_1x1, - ) - return nn.Sequential(*blocks), out_channels - - def _shared_roi_transform(self, features, boxes): - x = self.pooler(features, boxes) - return self.res5(x) - - def forward(self, images, features, proposals, targets=None): - """ - See :meth:`ROIHeads.forward`. - """ - del images - - if self.training: - assert targets - proposals = self.label_and_sample_proposals(proposals, targets) - del targets - - proposal_boxes = [x.proposal_boxes for x in proposals] - box_features = self._shared_roi_transform( - [features[f] for f in self.in_features], proposal_boxes - ) - predictions = self.box_predictor(box_features.mean(dim=[2, 3])) - - if self.training: - del features - losses = self.box_predictor.losses(predictions, proposals) - if self.mask_on: - proposals, fg_selection_masks = select_foreground_proposals( - proposals, self.num_classes - ) - # Since the ROI feature transform is shared between boxes and masks, - # we don't need to recompute features. The mask loss is only defined - # on foreground proposals, so we need to select out the foreground - # features. - mask_features = box_features[torch.cat(fg_selection_masks, dim=0)] - del box_features - losses.update(self.mask_head(mask_features, proposals)) - return [], losses - else: - pred_instances, _ = self.box_predictor.inference(predictions, proposals) - pred_instances = self.forward_with_given_boxes(features, pred_instances) - return pred_instances, {} - - def forward_with_given_boxes(self, features, instances): - """ - Use the given boxes in `instances` to produce other (non-box) per-ROI outputs. - - Args: - features: same as in `forward()` - instances (list[Instances]): instances to predict other outputs. Expect the keys - "pred_boxes" and "pred_classes" to exist. 
- - Returns: - instances (Instances): - the same `Instances` object, with extra - fields such as `pred_masks` or `pred_keypoints`. - """ - assert not self.training - assert instances[0].has("pred_boxes") and instances[0].has("pred_classes") - - if self.mask_on: - features = [features[f] for f in self.in_features] - x = self._shared_roi_transform(features, [x.pred_boxes for x in instances]) - return self.mask_head(x, instances) - else: - return instances - - -@ROI_HEADS_REGISTRY.register() -class StandardROIHeads(ROIHeads): - """ - It's "standard" in a sense that there is no ROI transform sharing - or feature sharing between tasks. - The cropped rois go to separate branches (boxes and masks) directly. - This way, it is easier to make separate abstractions for different branches. - - This class is used by most models, such as FPN and C5. - To implement more models, you can subclass it and implement a different - :meth:`forward()` or a head. - """ - - def __init__(self, cfg, input_shape): - super(StandardROIHeads, self).__init__(cfg, input_shape) - self._init_box_head(cfg, input_shape) - self._init_mask_head(cfg, input_shape) - self._init_keypoint_head(cfg, input_shape) - - def _init_box_head(self, cfg, input_shape): - # fmt: off - pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION - pooler_scales = tuple(1.0 / input_shape[k].stride for k in self.in_features) - sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO - pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE - self.train_on_pred_boxes = cfg.MODEL.ROI_BOX_HEAD.TRAIN_ON_PRED_BOXES - # fmt: on - - # If StandardROIHeads is applied on multiple feature maps (as in FPN), - # then we share the same predictors and therefore the channel counts must be the same - in_channels = [input_shape[f].channels for f in self.in_features] - # Check all channel counts are equal - assert len(set(in_channels)) == 1, in_channels - in_channels = in_channels[0] - - self.box_pooler = ROIPooler( - output_size=pooler_resolution, - scales=pooler_scales, - sampling_ratio=sampling_ratio, - pooler_type=pooler_type, - ) - # Here we split "box head" and "box predictor", which is mainly due to historical reasons. - # They are used together so the "box predictor" layers should be part of the "box head". - # New subclasses of ROIHeads do not need "box predictor"s. 
- self.box_head = build_box_head( - cfg, - ShapeSpec( - channels=in_channels, height=pooler_resolution, width=pooler_resolution - ), - ) - self.box_predictor = FastRCNNOutputLayers(cfg, self.box_head.output_shape) - - def _init_mask_head(self, cfg, input_shape): - # fmt: off - self.mask_on = cfg.MODEL.MASK_ON - if not self.mask_on: - return - pooler_resolution = cfg.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION - pooler_scales = tuple(1.0 / input_shape[k].stride for k in self.in_features) - sampling_ratio = cfg.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO - pooler_type = cfg.MODEL.ROI_MASK_HEAD.POOLER_TYPE - # fmt: on - - in_channels = [input_shape[f].channels for f in self.in_features][0] - - self.mask_pooler = ROIPooler( - output_size=pooler_resolution, - scales=pooler_scales, - sampling_ratio=sampling_ratio, - pooler_type=pooler_type, - ) - self.mask_head = build_mask_head( - cfg, - ShapeSpec( - channels=in_channels, width=pooler_resolution, height=pooler_resolution - ), - ) - - def _init_keypoint_head(self, cfg, input_shape): - # fmt: off - self.keypoint_on = cfg.MODEL.KEYPOINT_ON - if not self.keypoint_on: - return - pooler_resolution = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_RESOLUTION - pooler_scales = tuple(1.0 / input_shape[k].stride for k in self.in_features) # noqa - sampling_ratio = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_SAMPLING_RATIO - pooler_type = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_TYPE - # fmt: on - - in_channels = [input_shape[f].channels for f in self.in_features][0] - - self.keypoint_pooler = ROIPooler( - output_size=pooler_resolution, - scales=pooler_scales, - sampling_ratio=sampling_ratio, - pooler_type=pooler_type, - ) - self.keypoint_head = build_keypoint_head( - cfg, - ShapeSpec( - channels=in_channels, width=pooler_resolution, height=pooler_resolution - ), - ) - - def forward( - self, - images: ImageList, - features: Dict[str, torch.Tensor], - proposals: List[Instances], - targets: Optional[List[Instances]] = None, - ) -> Tuple[List[Instances], Dict[str, torch.Tensor]]: - """ - See :class:`ROIHeads.forward`. - """ - del images - if self.training: - assert targets - proposals = self.label_and_sample_proposals(proposals, targets) - del targets - - if self.training: - losses = self._forward_box(features, proposals) - # Usually the original proposals used by the box head are used by the mask, keypoint - # heads. But when `self.train_on_pred_boxes is True`, proposals will contain boxes - # predicted by the box head. - losses.update(self._forward_mask(features, proposals)) - losses.update(self._forward_keypoint(features, proposals)) - return proposals, losses - else: - pred_instances, box_features = self._forward_box(features, proposals) - # During inference cascaded prediction is used: the mask and keypoints heads are only - # applied to the top scoring box detections. - pred_instances = self.forward_with_given_boxes(features, pred_instances) - return pred_instances, box_features - - def forward_with_given_boxes( - self, features: Dict[str, torch.Tensor], instances: List[Instances] - ) -> List[Instances]: - """ - Use the given boxes in `instances` to produce other (non-box) per-ROI outputs. - - This is useful for downstream tasks where a box is known, but need to obtain - other attributes (outputs of other heads). - Test-time augmentation also uses this. - - Args: - features: same as in `forward()` - instances (list[Instances]): instances to predict other outputs. Expect the keys - "pred_boxes" and "pred_classes" to exist. 
- - Returns: - instances (list[Instances]): - the same `Instances` objects, with extra - fields such as `pred_masks` or `pred_keypoints`. - """ - assert not self.training - assert instances[0].has("pred_boxes") and instances[0].has("pred_classes") - - instances = self._forward_mask(features, instances) - instances = self._forward_keypoint(features, instances) - return instances - - def _forward_box( - self, features: Dict[str, torch.Tensor], proposals: List[Instances] - ) -> Union[Dict[str, torch.Tensor], List[Instances]]: - """ - Forward logic of the box prediction branch. If `self.train_on_pred_boxes is True`, - the function puts predicted boxes in the `proposal_boxes` field of `proposals` argument. - - Args: - features (dict[str, Tensor]): mapping from feature map names to tensor. - Same as in :meth:`ROIHeads.forward`. - proposals (list[Instances]): the per-image object proposals with - their matching ground truth. - Each has fields "proposal_boxes", and "objectness_logits", - "gt_classes", "gt_boxes". - - Returns: - In training, a dict of losses. - In inference, a list of `Instances`, the predicted instances. - """ - features = [features[f] for f in self.in_features] - box_features = self.box_pooler(features, [x.proposal_boxes for x in proposals]) - box_features = self.box_head(box_features) - predictions = self.box_predictor(box_features) - # del box_features - - if self.training: - if self.train_on_pred_boxes: - with torch.no_grad(): - pred_boxes = self.box_predictor.predict_boxes_for_gt_classes( - predictions, proposals - ) - for proposals_per_image, pred_boxes_per_image in zip( - proposals, pred_boxes - ): - proposals_per_image.proposal_boxes = Boxes(pred_boxes_per_image) - return self.box_predictor.losses(predictions, proposals) - else: - pred_instances, keep = self.box_predictor.inference(predictions, proposals) - box_features = box_features[keep] - return pred_instances, box_features - - def _forward_mask( - self, features: Dict[str, torch.Tensor], instances: List[Instances] - ) -> Union[Dict[str, torch.Tensor], List[Instances]]: - """ - Forward logic of the mask prediction branch. - - Args: - features (dict[str, Tensor]): mapping from feature map names to tensor. - Same as in :meth:`ROIHeads.forward`. - instances (list[Instances]): the per-image instances to train/predict masks. - In training, they can be the proposals. - In inference, they can be the predicted boxes. - - Returns: - In training, a dict of losses. - In inference, update `instances` with new fields "pred_masks" and return it. - """ - if not self.mask_on: - return {} if self.training else instances - - features = [features[f] for f in self.in_features] - - if self.training: - # The loss is only defined on positive proposals. - proposals, _ = select_foreground_proposals(instances, self.num_classes) - proposal_boxes = [x.proposal_boxes for x in proposals] - mask_features = self.mask_pooler(features, proposal_boxes) - return self.mask_head(mask_features, proposals) - else: - pred_boxes = [x.pred_boxes for x in instances] - mask_features = self.mask_pooler(features, pred_boxes) - return self.mask_head(mask_features, instances) - - def _forward_keypoint( - self, features: Dict[str, torch.Tensor], instances: List[Instances] - ) -> Union[Dict[str, torch.Tensor], List[Instances]]: - """ - Forward logic of the keypoint prediction branch. - - Args: - features (dict[str, Tensor]): mapping from feature map names to tensor. - Same as in :meth:`ROIHeads.forward`. 
- instances (list[Instances]): the per-image instances to train/predict keypoints. - In training, they can be the proposals. - In inference, they can be the predicted boxes. - - Returns: - In training, a dict of losses. - In inference, update `instances` with new fields "pred_keypoints" and return it. - """ - if not self.keypoint_on: - return {} if self.training else instances - - features = [features[f] for f in self.in_features] - - if self.training: - # The loss is defined on positive proposals with at >=1 visible keypoints. - proposals, _ = select_foreground_proposals(instances, self.num_classes) - proposals = select_proposals_with_visible_keypoints(proposals) - proposal_boxes = [x.proposal_boxes for x in proposals] - - keypoint_features = self.keypoint_pooler(features, proposal_boxes) - return self.keypoint_head(keypoint_features, proposals) - else: - pred_boxes = [x.pred_boxes for x in instances] - keypoint_features = self.keypoint_pooler(features, pred_boxes) - return self.keypoint_head(keypoint_features, instances) diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/config/exec_check_disable.h b/spaces/CVPR/LIVE/thrust/thrust/detail/config/exec_check_disable.h deleted file mode 100644 index 114ca3853a9e148a5b52161c4409d52873dc5b3d..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/config/exec_check_disable.h +++ /dev/null @@ -1,43 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file exec_check_disable.h - * \brief Defines __thrust_exec_check_disable__ - */ - -#pragma once - -#include - -// #pragma nv_exec_check_disable is only recognized by NVCC. Having a macro -// expand to a #pragma (rather than _Pragma) only works with NVCC's compilation -// model, not with other compilers. -#if defined(__CUDACC__) && !defined(__NVCOMPILER_CUDA__) && \ - !(defined(__CUDA__) && defined(__clang__)) - -#if THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_MSVC -#define __thrust_exec_check_disable__ __pragma("nv_exec_check_disable") -#else // MSVC -#define __thrust_exec_check_disable__ _Pragma("nv_exec_check_disable") -#endif // MSVC - -#else - -#define __thrust_exec_check_disable__ - -#endif - - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/guarded_cuda_runtime_api.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/guarded_cuda_runtime_api.h deleted file mode 100644 index 5b0f345a74a4aa3b69027774be52fd9e0a5d09cd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/guarded_cuda_runtime_api.h +++ /dev/null @@ -1,39 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// the purpose of this header is to check for the existence of macros -// such as __host__ and __device__, which may already be defined by thrust -// and to undefine them before entering cuda_runtime_api.h (which will redefine them) - -// we only try to do this stuff if cuda/include/host_defines.h has been included -#if !defined(__HOST_DEFINES_H__) - -#ifdef __host__ -#undef __host__ -#endif // __host__ - -#ifdef __device__ -#undef __device__ -#endif // __device__ - -#endif // __HOST_DEFINES_H__ - -#include - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/binary_search.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/binary_search.h deleted file mode 100644 index 8cd85c63f30b2484d7d9c2111d6e51f957a8a282..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/binary_search.h +++ /dev/null @@ -1,174 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file binary_search.h - * \brief Generic implementations of binary search functions. 
- */ - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace generic -{ - - -template -__host__ __device__ -ForwardIterator lower_bound(thrust::execution_policy &exec, - ForwardIterator begin, - ForwardIterator end, - const T& value); - -template -__host__ __device__ -ForwardIterator lower_bound(thrust::execution_policy &exec, - ForwardIterator begin, - ForwardIterator end, - const T& value, - StrictWeakOrdering comp); - - -template -__host__ __device__ -ForwardIterator upper_bound(thrust::execution_policy &exec, - ForwardIterator begin, - ForwardIterator end, - const T& value); - -template -__host__ __device__ -ForwardIterator upper_bound(thrust::execution_policy &exec, - ForwardIterator begin, - ForwardIterator end, - const T& value, - StrictWeakOrdering comp); - - -template -__host__ __device__ -bool binary_search(thrust::execution_policy &exec, - ForwardIterator begin, - ForwardIterator end, - const T& value); - -template -__host__ __device__ -bool binary_search(thrust::execution_policy &exec, - ForwardIterator begin, - ForwardIterator end, - const T& value, - StrictWeakOrdering comp); - - -template -__host__ __device__ -OutputIterator lower_bound(thrust::execution_policy &exec, - ForwardIterator begin, - ForwardIterator end, - InputIterator values_begin, - InputIterator values_end, - OutputIterator output); - - -template -__host__ __device__ -OutputIterator lower_bound(thrust::execution_policy &exec, - ForwardIterator begin, - ForwardIterator end, - InputIterator values_begin, - InputIterator values_end, - OutputIterator output, - StrictWeakOrdering comp); - - -template -__host__ __device__ -OutputIterator upper_bound(thrust::execution_policy &exec, - ForwardIterator begin, - ForwardIterator end, - InputIterator values_begin, - InputIterator values_end, - OutputIterator output); - - -template -__host__ __device__ -OutputIterator upper_bound(thrust::execution_policy &exec, - ForwardIterator begin, - ForwardIterator end, - InputIterator values_begin, - InputIterator values_end, - OutputIterator output, - StrictWeakOrdering comp); - - -template -__host__ __device__ -OutputIterator binary_search(thrust::execution_policy &exec, - ForwardIterator begin, - ForwardIterator end, - InputIterator values_begin, - InputIterator values_end, - OutputIterator output); - - -template -__host__ __device__ -OutputIterator binary_search(thrust::execution_policy &exec, - ForwardIterator begin, - ForwardIterator end, - InputIterator values_begin, - InputIterator values_end, - OutputIterator output, - StrictWeakOrdering comp); - - -template -__host__ __device__ -thrust::pair -equal_range(thrust::execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - const LessThanComparable &value); - - -template -__host__ __device__ -thrust::pair -equal_range(thrust::execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - const LessThanComparable &value, - StrictWeakOrdering comp); - - - -} // end namespace generic -} // end namespace detail -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/v-doc_abstractive_mac/preprocess.py b/spaces/CVPR/v-doc_abstractive_mac/preprocess.py deleted file mode 100644 index fd8bc317bb235137901b1db88a0d9c728dc58f27..0000000000000000000000000000000000000000 --- a/spaces/CVPR/v-doc_abstractive_mac/preprocess.py +++ /dev/null @@ -1,551 +0,0 @@ -import time -import os -import random -import json -import pickle -import numpy as np -from tqdm import tqdm -from 
termcolor import colored -from program_translator import ProgramTranslator # -from config import config - - -# Print bold tex -def bold(txt): - return colored(str(txt), attrs=["bold"]) - - -# Print bold and colored text -def bcolored(txt, color): - return colored(str(txt), color, attrs=["bold"]) - - -# Write a line to file -def writeline(f, line): - f.write(str(line) + "\n") - - -# Write a list to file -def writelist(f, l): - writeline(f, ",".join(map(str, l))) - - -# 2d list to numpy -def vectorize2DList(items, minX=0, minY=0, dtype=np.int): - maxX = max(len(items), minX) - maxY = max([len(item) for item in items] + [minY]) - t = np.zeros((maxX, maxY), dtype=dtype) - tLengths = np.zeros((maxX,), dtype=np.int) - for i, item in enumerate(items): - t[i, 0:len(item)] = np.array(item, dtype=dtype) - tLengths[i] = len(item) - return t, tLengths - - -# 3d list to numpy -def vectorize3DList(items, minX=0, minY=0, minZ=0, dtype=np.int): - maxX = max(len(items), minX) - maxY = max([len(item) for item in items] + [minY]) - maxZ = max([len(subitem) for item in items for subitem in item] + [minZ]) - t = np.zeros((maxX, maxY, maxZ), dtype=dtype) - tLengths = np.zeros((maxX, maxY), dtype=np.int) - for i, item in enumerate(items): - for j, subitem in enumerate(item): - t[i, j, 0:len(subitem)] = np.array(subitem, dtype=dtype) - tLengths[i, j] = len(subitem) - return t, tLengths - - -''' -Encodes text into integers. Keeps dictionary between string words (symbols) -and their matching integers. Supports encoding and decoding. -''' - - -class SymbolDict(object): - def __init__(self, empty=False): - self.padding = "" - self.unknown = "" - self.start = "" - self.end = "" - - self.invalidSymbols = [self.padding, self.unknown, self.start, self.end] - - if empty: - self.sym2id = {} - self.id2sym = [] - else: - self.sym2id = {self.padding: 0, self.unknown: 1, self.start: 2, self.end: 3} - self.id2sym = [self.padding, self.unknown, self.start, self.end] - self.allSeqs = [] - - def getNumSymbols(self): - return len(self.sym2id) - - def isPadding(self, enc): - return enc == 0 - - def isUnknown(self, enc): - return enc == 1 - - def isStart(self, enc): - return enc == 2 - - def isEnd(self, enc): - return enc == 3 - - def isValid(self, enc): - return enc < self.getNumSymbols() and enc >= len(self.invalidSymbols) - - def resetSeqs(self): - self.allSeqs = [] - - def addSeq(self, seq): - self.allSeqs += seq - - # Call to create the words-to-integers vocabulary after (reading word sequences with addSeq). - def createVocab(self, minCount=0): - counter = {} - for symbol in self.allSeqs: - counter[symbol] = counter.get(symbol, 0) + 1 - for symbol in counter: - if counter[symbol] > minCount and (symbol not in self.sym2id): - self.sym2id[symbol] = self.getNumSymbols() - self.id2sym.append(symbol) - - # Encodes a symbol. Returns the matching integer. - def encodeSym(self, symbol): - if symbol not in self.sym2id: - symbol = self.unknown - return self.sym2id[symbol] - - ''' - Encodes a sequence of symbols. - Optionally add start, or end symbols. 
- Optionally reverse sequence - ''' - - def encodeSequence(self, decoded, addStart=False, addEnd=False, reverse=False): - if reverse: - decoded.reverse() - if addStart: - decoded = [self.start] + decoded - if addEnd: - decoded = decoded + [self.end] - encoded = [self.encodeSym(symbol) for symbol in decoded] - return encoded - - # Decodes an integer into its symbol - def decodeId(self, enc): - return self.id2sym[enc] if enc < self.getNumSymbols() else self.unknown - - ''' - Decodes a sequence of integers into their symbols. - If delim is given, joins the symbols using delim, - Optionally reverse the resulted sequence - ''' - - def decodeSequence(self, encoded, delim=None, reverse=False, stopAtInvalid=True): - length = 0 - for i in range(len(encoded)): - if not self.isValid(encoded[i]) and stopAtInvalid: - break - length += 1 - encoded = encoded[:length] - - decoded = [self.decodeId(enc) for enc in encoded] - if reverse: - decoded.reverse() - - if delim is not None: - return delim.join(decoded) - - return decoded - - -''' -Preprocesses a given dataset into numpy arrays. -By calling preprocess, the class: -1. Reads the input data files into dictionary. -2. Saves the results jsons in files and loads them instead of parsing input if files exist/ -3. Initializes word embeddings to random / GloVe. -4. Optionally filters data according to given filters. -5. Encodes and vectorize the data into numpy arrays. -6. Buckets the data according to the instances length. -''' - - -class Preprocesser(object): - def __init__(self): - self.questionDict = SymbolDict() - self.answerDict = SymbolDict(empty=True) - self.qaDict = SymbolDict() - - self.specificDatasetDicts = None - - self.programDict = SymbolDict() - self.programTranslator = ProgramTranslator(self.programDict, 2) - - ''' - Tokenizes string into list of symbols. - - Args: - text: raw string to tokenize. - ignorePuncts: punctuation to ignore - keptPunct: punctuation to keep (as symbol) - endPunct: punctuation to remove if appears at the end - delim: delimiter between symbols - clean: True to replace text in string - replacelistPre: dictionary of replacement to perform on the text before tokanization - replacelistPost: dictionary of replacement to perform on the text after tokanization - ''' - # sentence tokenizer - allPunct = ["?", "!", "\\", "/", ")", "(", ".", ",", ";", ":"] - - def tokenize(self, text, ignoredPuncts=["?", "!", "\\", "/", ")", "("], - keptPuncts=[".", ",", ";", ":"], endPunct=[">", "<", ":"], delim=" ", - clean=False, replacelistPre=dict(), replacelistPost=dict()): - - if clean: - for word in replacelistPre: - origText = text - text = text.replace(word, replacelistPre[word]) - if (origText != text): - print(origText) - print(text) - print("") - - for punct in endPunct: - if text[-1] == punct: - print(text) - text = text[:-1] - print(text) - print("") - - for punct in keptPuncts: - text = text.replace(punct, delim + punct + delim) - - for punct in ignoredPuncts: - text = text.replace(punct, "") - - ret = text.lower().split(delim) - - if clean: - origRet = ret - ret = [replacelistPost.get(word, word) for word in ret] - if origRet != ret: - print(origRet) - print(ret) - - ret = [t for t in ret if t != ""] - return ret - - # Read class' generated files. 
- # files interface - def readFiles(self, instancesFilename): - with open(instancesFilename, "r") as inFile: - instances = json.load(inFile) - - with open(config.questionDictFile(), "rb") as inFile: - self.questionDict = pickle.load(inFile) - - with open(config.answerDictFile(), "rb") as inFile: - self.answerDict = pickle.load(inFile) - - with open(config.qaDictFile(), "rb") as inFile: - self.qaDict = pickle.load(inFile) - - return instances - - ''' - Generate class' files. Save json representation of instances and - symbols-to-integers dictionaries. - ''' - - def writeFiles(self, instances, instancesFilename): - with open(instancesFilename, "w") as outFile: - json.dump(instances, outFile) - - with open(config.questionDictFile(), "wb") as outFile: - pickle.dump(self.questionDict, outFile) - - with open(config.answerDictFile(), "wb") as outFile: - pickle.dump(self.answerDict, outFile) - - with open(config.qaDictFile(), "wb") as outFile: - pickle.dump(self.qaDict, outFile) - - # Write prediction json to file and optionally a one-answer-per-line output file - def writePreds(self, res, tier, suffix=""): - if res is None: - return - preds = res["preds"] - sortedPreds = sorted(preds, key=lambda instance: instance["index"]) - with open(config.predsFile(tier + suffix), "w") as outFile: - outFile.write(json.dumps(sortedPreds)) - with open(config.answersFile(tier + suffix), "w") as outFile: - for instance in sortedPreds: - writeline(outFile, instance["prediction"]) - - def readPDF(self, instancesFilename): - instances = [] - - if os.path.exists(instancesFilename): - instances = self.readFiles(instancesFilename) - - return instances - - def readData(self, datasetFilename, instancesFilename, train): - # data extraction - datasetReader = { - "PDF": self.readPDF - } - - return datasetReader[config.dataset](datasetFilename, instancesFilename, train) - - def vectorizeData(self, data): - # if "SHARED" tie symbol representations in questions and answers - if config.ansEmbMod == "SHARED": - qDict = self.qaDict - else: - qDict = self.questionDict - - encodedQuestion = [qDict.encodeSequence(d["questionSeq"]) for d in data] - question, questionL = vectorize2DList(encodedQuestion) - - # pass the whole instances? 
if heavy then not good - imageId = [d["imageId"] for d in data] - instance = data - - return {"question": question, - "questionLength": questionL, - "imageId": imageId - } - - # Separates data based on a field length - def lseparator(self, key, lims): - maxI = len(lims) - - def separatorFn(x): - v = x[key] - for i, lim in enumerate(lims): - if len(v) < lim: - return i - return maxI - - return {"separate": separatorFn, "groupsNum": maxI + 1} - - # Buckets data to groups using a separator - def bucket(self, instances, separator): - buckets = [[] for i in range(separator["groupsNum"])] - for instance in instances: - bucketI = separator["separate"](instance) - buckets[bucketI].append(instance) - return [bucket for bucket in buckets if len(bucket) > 0] - - # Re-buckets bucket list given a seperator - def rebucket(self, buckets, separator): - res = [] - for bucket in buckets: - res += self.bucket(bucket, separator) - return res - - # Buckets data based on question / program length - def bucketData(self, data, noBucket=False): - if noBucket: - buckets = [data] - else: - if config.noBucket: - buckets = [data] - elif config.noRebucket: - questionSep = self.lseparator("questionSeq", config.questionLims) - buckets = self.bucket(data, questionSep) - else: - programSep = self.lseparator("programSeq", config.programLims) - questionSep = self.lseparator("questionSeq", config.questionLims) - buckets = self.bucket(data, programSep) - buckets = self.rebucket(buckets, questionSep) - return buckets - - ''' - Prepares data: - 1. Filters data according to above arguments. - 2. Takes only a subset of the data based on config.trainedNum / config.testedNum - 3. Buckets data according to question / program length - 4. Vectorizes data into numpy arrays - ''' - - def prepareData(self, data, train, filterKey=None, noBucket=False): - filterDefault = {"maxQLength": 0, "maxPLength": 0, "onlyChain": False, "filterOp": 0} - - filterTrain = {"maxQLength": config.tMaxQ, "maxPLength": config.tMaxP, - "onlyChain": config.tOnlyChain, "filterOp": config.tFilterOp} - - filterVal = {"maxQLength": config.vMaxQ, "maxPLength": config.vMaxP, - "onlyChain": config.vOnlyChain, "filterOp": config.vFilterOp} - - filters = {"train": filterTrain, "evalTrain": filterTrain, - "val": filterVal, "test": filterDefault} - - if filterKey is None: - fltr = filterDefault - else: - fltr = filters[filterKey] - - # split data when finetuning on validation set - if config.trainExtra and config.extraVal and (config.finetuneNum > 0): - if train: - data = data[:config.finetuneNum] - else: - data = data[config.finetuneNum:] - - typeFilter = config.typeFilters[fltr["filterOp"]] - # filter specific settings - if fltr["onlyChain"]: - data = [d for d in data if all((len(inputNum) < 2) for inputNum in d["programInputs"])] - if fltr["maxQLength"] > 0: - data = [d for d in data if len(d["questionSeq"]) <= fltr["maxQLength"]] - if fltr["maxPLength"] > 0: - data = [d for d in data if len(d["programSeq"]) <= fltr["maxPLength"]] - if len(typeFilter) > 0: - data = [d for d in data if d["programSeq"][-1] not in typeFilter] - - # run on subset of the data. 
If 0 then use all data - num = config.trainedNum if train else config.testedNum - # retainVal = True to retain same clevr_sample of validation across runs - if (not train) and (not config.retainVal): - random.shuffle(data) - if num > 0: - data = data[:num] - # set number to match dataset size - if train: - config.trainedNum = len(data) - else: - config.testedNum = len(data) - - # bucket - buckets = self.bucketData(data, noBucket=noBucket) - - # vectorize - return [self.vectorizeData(bucket) for bucket in buckets] - - # Prepares all the tiers of a dataset. See prepareData method for further details. - def prepareDataset(self, dataset, noBucket=False): - if dataset is None: - return None - - for tier in dataset: - if dataset[tier] is not None: - dataset[tier]["data"] = self.prepareData(dataset[tier]["instances"], - train=dataset[tier]["train"], filterKey=tier, - noBucket=noBucket) - - for tier in dataset: - if dataset[tier] is not None: - del dataset[tier]["instances"] - - return dataset - - # Initializes word embeddings to random uniform / random normal / GloVe. - def initializeWordEmbeddings(self, wordsDict=None, noPadding=False): - # default dictionary to use for embeddings - if wordsDict is None: - wordsDict = self.questionDict - - # uniform initialization - if config.wrdEmbUniform: - lowInit = -1.0 * config.wrdEmbScale - highInit = 1.0 * config.wrdEmbScale - embeddings = np.random.uniform(low=lowInit, high=highInit, - size=(wordsDict.getNumSymbols(), config.wrdEmbDim)) - # normal initialization - else: - embeddings = config.wrdEmbScale * np.random.randn(wordsDict.getNumSymbols(), - config.wrdEmbDim) - - # if wrdEmbRandom = False, use GloVE - counter = 0 - if (not config.wrdEmbRandom): - with open(config.wordVectorsFile, 'r') as inFile: - for line in inFile: - line = line.strip().split() - word = line[0].lower() - vector = [float(x) for x in line[1:]] - index = wordsDict.sym2id.get(word) - if index is not None: - embeddings[index] = vector - counter += 1 - - print(counter) - print(self.questionDict.sym2id) - print(len(self.questionDict.sym2id)) - print(self.answerDict.sym2id) - print(len(self.answerDict.sym2id)) - print(self.qaDict.sym2id) - print(len(self.qaDict.sym2id)) - - if noPadding: - return embeddings # no embedding for padding symbol - else: - return embeddings[1:] - - ''' - Initializes words embeddings for question words and optionally for answer words - (when config.ansEmbMod == "BOTH"). If config.ansEmbMod == "SHARED", tie embeddings for - question and answer same symbols. - ''' - - def initializeQAEmbeddings(self): - # use same embeddings for questions and answers - if config.ansEmbMod == "SHARED": - qaEmbeddings = self.initializeWordEmbeddings(self.qaDict) - ansMap = np.array([self.qaDict.sym2id[sym] for sym in self.answerDict.id2sym]) - embeddings = {"qa": qaEmbeddings, "ansMap": ansMap} - # use different embeddings for questions and answers - else: - qEmbeddings = self.initializeWordEmbeddings(self.questionDict) - aEmbeddings = None - if config.ansEmbMod == "BOTH": - aEmbeddings = self.initializeWordEmbeddings(self.answerDict, noPadding=True) - embeddings = {"q": qEmbeddings, "a": aEmbeddings} - return embeddings - - ''' - Preprocesses a given dataset into numpy arrays: - 1. Reads the input data files into dictionary. - 2. Saves the results jsons in files and loads them instead of parsing input if files exist/ - 3. Initializes word embeddings to random / GloVe. - 4. Optionally filters data according to given filters. - 5. Encodes and vectorize the data into numpy arrays. 
- 5. Buckets the data according to the instances length. - ''' - - def preprocessData(self, question, debug=False): - # Read data into json and symbols' dictionaries - print(bold("Loading data...")) - start = time.time() - with open(config.questionDictFile(), "rb") as inFile: - self.questionDict = pickle.load(inFile) - with open(config.qaDictFile(), "rb") as inFile: - self.qaDict = pickle.load(inFile) - with open(config.answerDictFile(), "rb") as inFile: - self.answerDict = pickle.load(inFile) - question = question.replace('?', '').replace(', ', '').lower().split() - encodedQuestion = self.questionDict.encodeSequence(question) - data = {'question': np.array([encodedQuestion]), 'questionLength': np.array([len(encodedQuestion)])} - print("took {:.2f} seconds".format(time.time() - start)) - - # Initialize word embeddings (random / glove) - print(bold("Loading word vectors...")) - start = time.time() - embeddings = self.initializeQAEmbeddings() - print("took {:.2f} seconds".format(time.time() - start)) - - answer = 'yes' # DUMMY_ANSWER - self.answerDict.addSeq([answer]) - self.qaDict.addSeq([answer]) - - config.questionWordsNum = self.questionDict.getNumSymbols() - config.answerWordsNum = self.answerDict.getNumSymbols() - - return data, embeddings, self.answerDict diff --git a/spaces/CactiStaccingCrane/OpenAssistant-oasst-sft-1-pythia-12b/app.py b/spaces/CactiStaccingCrane/OpenAssistant-oasst-sft-1-pythia-12b/app.py deleted file mode 100644 index a0d8e22130498331805cdcf3e2d08e9eb90549b4..0000000000000000000000000000000000000000 --- a/spaces/CactiStaccingCrane/OpenAssistant-oasst-sft-1-pythia-12b/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/OpenAssistant/oasst-sft-1-pythia-12b").launch() \ No newline at end of file diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/util/__init__.py b/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/util/__init__.py deleted file mode 100644 index 168f9979a4623806934b0ff1102ac166704e7dec..0000000000000000000000000000000000000000 --- a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/util/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved diff --git a/spaces/CarlDennis/HYTTS/commons.py b/spaces/CarlDennis/HYTTS/commons.py deleted file mode 100644 index 2153153f527d94e2abb641ea00c80b518ff6c5bd..0000000000000000000000000000000000000000 --- a/spaces/CarlDennis/HYTTS/commons.py +++ /dev/null @@ -1,97 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path diff --git a/spaces/Chitranshu/Dashboard-Zomato/Dockerfile b/spaces/Chitranshu/Dashboard-Zomato/Dockerfile deleted file mode 100644 index c48c4ece862fcc2970b330f60f14ba6c578f67fc..0000000000000000000000000000000000000000 --- a/spaces/Chitranshu/Dashboard-Zomato/Dockerfile +++ /dev/null @@ -1,16 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt -RUN python3 -m pip install --no-cache-dir --upgrade pip -RUN python3 -m pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY . . 
- -CMD ["panel", "serve", "/code/app.py", "--address", "0.0.0.0", "--port", "7860", "--allow-websocket-origin", "*"] - -RUN mkdir /.cache -RUN chmod 777 /.cache -RUN mkdir .chroma -RUN chmod 777 .chroma diff --git a/spaces/CoWork/dreambooth-training-public/convertosd.py b/spaces/CoWork/dreambooth-training-public/convertosd.py deleted file mode 100644 index 1211d34edf018b7c402a765c5a7ecdb684cc28e3..0000000000000000000000000000000000000000 --- a/spaces/CoWork/dreambooth-training-public/convertosd.py +++ /dev/null @@ -1,302 +0,0 @@ -# Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint. -# *Only* converts the UNet, VAE, and Text Encoder. -# Does not convert optimizer state or any other thing. - -import argparse -import os.path as osp -import re - -import torch -import gc - -# =================# -# UNet Conversion # -# =================# - -unet_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("time_embed.0.weight", "time_embedding.linear_1.weight"), - ("time_embed.0.bias", "time_embedding.linear_1.bias"), - ("time_embed.2.weight", "time_embedding.linear_2.weight"), - ("time_embed.2.bias", "time_embedding.linear_2.bias"), - ("input_blocks.0.0.weight", "conv_in.weight"), - ("input_blocks.0.0.bias", "conv_in.bias"), - ("out.0.weight", "conv_norm_out.weight"), - ("out.0.bias", "conv_norm_out.bias"), - ("out.2.weight", "conv_out.weight"), - ("out.2.bias", "conv_out.bias"), -] - -unet_conversion_map_resnet = [ - # (stable-diffusion, HF Diffusers) - ("in_layers.0", "norm1"), - ("in_layers.2", "conv1"), - ("out_layers.0", "norm2"), - ("out_layers.3", "conv2"), - ("emb_layers.1", "time_emb_proj"), - ("skip_connection", "conv_shortcut"), -] - -unet_conversion_map_layer = [] -# hardcoded number of downblocks and resnets/attentions... -# would need smarter logic for other networks. -for i in range(4): - # loop over downblocks/upblocks - - for j in range(2): - # loop over resnets/attentions for downblocks - hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}." - sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0." - unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix)) - - if i < 3: - # no attention layers in down_blocks.3 - hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}." - sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1." - unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix)) - - for j in range(3): - # loop over resnets/attentions for upblocks - hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}." - sd_up_res_prefix = f"output_blocks.{3*i + j}.0." - unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix)) - - if i > 0: - # no attention layers in up_blocks.0 - hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}." - sd_up_atn_prefix = f"output_blocks.{3*i + j}.1." - unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix)) - - if i < 3: - # no downsample in down_blocks.3 - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv." - sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op." - unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix)) - - # no upsample in up_blocks.3 - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"output_blocks.{3*i + 2}.{1 if i == 0 else 2}." - unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix)) - -hf_mid_atn_prefix = "mid_block.attentions.0." -sd_mid_atn_prefix = "middle_block.1." 
-unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix)) - -for j in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{j}." - sd_mid_res_prefix = f"middle_block.{2*j}." - unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -def convert_unet_state_dict(unet_state_dict): - # buyer beware: this is a *brittle* function, - # and correct output requires that all of these pieces interact in - # the exact order in which I have arranged them. - mapping = {k: k for k in unet_state_dict.keys()} - for sd_name, hf_name in unet_conversion_map: - mapping[hf_name] = sd_name - for k, v in mapping.items(): - if "resnets" in k: - for sd_part, hf_part in unet_conversion_map_resnet: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - for sd_part, hf_part in unet_conversion_map_layer: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: unet_state_dict[k] for k, v in mapping.items()} - return new_state_dict - - -# ================# -# VAE Conversion # -# ================# - -vae_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("nin_shortcut", "conv_shortcut"), - ("norm_out", "conv_norm_out"), - ("mid.attn_1.", "mid_block.attentions.0."), -] - -for i in range(4): - # down_blocks have two resnets - for j in range(2): - hf_down_prefix = f"encoder.down_blocks.{i}.resnets.{j}." - sd_down_prefix = f"encoder.down.{i}.block.{j}." - vae_conversion_map.append((sd_down_prefix, hf_down_prefix)) - - if i < 3: - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0." - sd_downsample_prefix = f"down.{i}.downsample." - vae_conversion_map.append((sd_downsample_prefix, hf_downsample_prefix)) - - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"up.{3-i}.upsample." - vae_conversion_map.append((sd_upsample_prefix, hf_upsample_prefix)) - - # up_blocks have three resnets - # also, up blocks in hf are numbered in reverse from sd - for j in range(3): - hf_up_prefix = f"decoder.up_blocks.{i}.resnets.{j}." - sd_up_prefix = f"decoder.up.{3-i}.block.{j}." - vae_conversion_map.append((sd_up_prefix, hf_up_prefix)) - -# this part accounts for mid blocks in both the encoder and the decoder -for i in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{i}." - sd_mid_res_prefix = f"mid.block_{i+1}." 
- vae_conversion_map.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -vae_conversion_map_attn = [ - # (stable-diffusion, HF Diffusers) - ("norm.", "group_norm."), - ("q.", "query."), - ("k.", "key."), - ("v.", "value."), - ("proj_out.", "proj_attn."), -] - - -def reshape_weight_for_sd(w): - # convert HF linear weights to SD conv2d weights - return w.reshape(*w.shape, 1, 1) - - -def convert_vae_state_dict(vae_state_dict): - mapping = {k: k for k in vae_state_dict.keys()} - for k, v in mapping.items(): - for sd_part, hf_part in vae_conversion_map: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - if "attentions" in k: - for sd_part, hf_part in vae_conversion_map_attn: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: vae_state_dict[k] for k, v in mapping.items()} - weights_to_convert = ["q", "k", "v", "proj_out"] - print("Converting to CKPT ...") - for k, v in new_state_dict.items(): - for weight_name in weights_to_convert: - if f"mid.attn_1.{weight_name}.weight" in k: - print(f"Reshaping {k} for SD format") - new_state_dict[k] = reshape_weight_for_sd(v) - return new_state_dict - - -# =========================# -# Text Encoder Conversion # -# =========================# - - -textenc_conversion_lst = [ - # (stable-diffusion, HF Diffusers) - ("resblocks.", "text_model.encoder.layers."), - ("ln_1", "layer_norm1"), - ("ln_2", "layer_norm2"), - (".c_fc.", ".fc1."), - (".c_proj.", ".fc2."), - (".attn", ".self_attn"), - ("ln_final.", "transformer.text_model.final_layer_norm."), - ("token_embedding.weight", "transformer.text_model.embeddings.token_embedding.weight"), - ("positional_embedding", "transformer.text_model.embeddings.position_embedding.weight"), -] -protected = {re.escape(x[1]): x[0] for x in textenc_conversion_lst} -textenc_pattern = re.compile("|".join(protected.keys())) - -# Ordering is from https://github.com/pytorch/pytorch/blob/master/test/cpp/api/modules.cpp -code2idx = {"q": 0, "k": 1, "v": 2} - - -def convert_text_enc_state_dict_v20(text_enc_dict): - new_state_dict = {} - capture_qkv_weight = {} - capture_qkv_bias = {} - for k, v in text_enc_dict.items(): - if ( - k.endswith(".self_attn.q_proj.weight") - or k.endswith(".self_attn.k_proj.weight") - or k.endswith(".self_attn.v_proj.weight") - ): - k_pre = k[: -len(".q_proj.weight")] - k_code = k[-len("q_proj.weight")] - if k_pre not in capture_qkv_weight: - capture_qkv_weight[k_pre] = [None, None, None] - capture_qkv_weight[k_pre][code2idx[k_code]] = v - continue - - if ( - k.endswith(".self_attn.q_proj.bias") - or k.endswith(".self_attn.k_proj.bias") - or k.endswith(".self_attn.v_proj.bias") - ): - k_pre = k[: -len(".q_proj.bias")] - k_code = k[-len("q_proj.bias")] - if k_pre not in capture_qkv_bias: - capture_qkv_bias[k_pre] = [None, None, None] - capture_qkv_bias[k_pre][code2idx[k_code]] = v - continue - - relabelled_key = textenc_pattern.sub(lambda m: protected[re.escape(m.group(0))], k) - new_state_dict[relabelled_key] = v - - for k_pre, tensors in capture_qkv_weight.items(): - if None in tensors: - raise Exception("CORRUPTED MODEL: one of the q-k-v values for the text encoder was missing") - relabelled_key = textenc_pattern.sub(lambda m: protected[re.escape(m.group(0))], k_pre) - new_state_dict[relabelled_key + ".in_proj_weight"] = torch.cat(tensors) - - for k_pre, tensors in capture_qkv_bias.items(): - if None in tensors: - raise Exception("CORRUPTED MODEL: one of the q-k-v values for the text encoder was missing") - relabelled_key = textenc_pattern.sub(lambda m: 
protected[re.escape(m.group(0))], k_pre) - new_state_dict[relabelled_key + ".in_proj_bias"] = torch.cat(tensors) - - return new_state_dict - - -def convert_text_enc_state_dict(text_enc_dict): - return text_enc_dict - - -def convert(model_path, checkpoint_path): - unet_path = osp.join(model_path, "unet", "diffusion_pytorch_model.bin") - vae_path = osp.join(model_path, "vae", "diffusion_pytorch_model.bin") - text_enc_path = osp.join(model_path, "text_encoder", "pytorch_model.bin") - - # Convert the UNet model - unet_state_dict = torch.load(unet_path, map_location="cpu") - unet_state_dict = convert_unet_state_dict(unet_state_dict) - unet_state_dict = {"model.diffusion_model." + k: v for k, v in unet_state_dict.items()} - - # Convert the VAE model - vae_state_dict = torch.load(vae_path, map_location="cpu") - vae_state_dict = convert_vae_state_dict(vae_state_dict) - vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()} - - # Convert the text encoder model - text_enc_dict = torch.load(text_enc_path, map_location="cpu") - - # Easiest way to identify v2.0 model seems to be that the text encoder (OpenCLIP) is deeper - is_v20_model = "text_model.encoder.layers.22.layer_norm2.bias" in text_enc_dict - - if is_v20_model: - # Need to add the tag 'transformer' in advance so we can knock it out from the final layer-norm - text_enc_dict = {"transformer." + k: v for k, v in text_enc_dict.items()} - text_enc_dict = convert_text_enc_state_dict_v20(text_enc_dict) - text_enc_dict = {"cond_stage_model.model." + k: v for k, v in text_enc_dict.items()} - else: - text_enc_dict = convert_text_enc_state_dict(text_enc_dict) - text_enc_dict = {"cond_stage_model.transformer." + k: v for k, v in text_enc_dict.items()} - - # Put together new checkpoint - state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict} - state_dict = {k: v.half() for k, v in state_dict.items()} - state_dict = {"state_dict": state_dict} - torch.save(state_dict, checkpoint_path) - del state_dict, text_enc_dict, vae_state_dict, unet_state_dict - torch.cuda.empty_cache() - gc.collect() - \ No newline at end of file diff --git a/spaces/CofAI/chat.b4/server/backend.py b/spaces/CofAI/chat.b4/server/backend.py deleted file mode 100644 index 9d2d56fac8bf5dc5ed7ca9b8dd147cfceb039f85..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.b4/server/backend.py +++ /dev/null @@ -1,177 +0,0 @@ -import re -from datetime import datetime -from g4f import ChatCompletion -from flask import request, Response, stream_with_context -from requests import get -from server.config import special_instructions - - -class Backend_Api: - def __init__(self, bp, config: dict) -> None: - """ - Initialize the Backend_Api class. - :param app: Flask application instance - :param config: Configuration dictionary - """ - self.bp = bp - self.routes = { - '/backend-api/v2/conversation': { - 'function': self._conversation, - 'methods': ['POST'] - } - } - - def _conversation(self): - """ - Handles the conversation route. 
- - :return: Response object containing the generated conversation stream - """ - conversation_id = request.json['conversation_id'] - - try: - jailbreak = request.json['jailbreak'] - model = request.json['model'] - messages = build_messages(jailbreak) - - # Generate response - response = ChatCompletion.create( - model=model, - stream=True, - chatId=conversation_id, - messages=messages - ) - - return Response(stream_with_context(generate_stream(response, jailbreak)), mimetype='text/event-stream') - - except Exception as e: - print(e) - print(e.__traceback__.tb_next) - - return { - '_action': '_ask', - 'success': False, - "error": f"an error occurred {str(e)}" - }, 400 - - -def build_messages(jailbreak): - """ - Build the messages for the conversation. - - :param jailbreak: Jailbreak instruction string - :return: List of messages for the conversation - """ - _conversation = request.json['meta']['content']['conversation'] - internet_access = request.json['meta']['content']['internet_access'] - prompt = request.json['meta']['content']['parts'][0] - - # Add the existing conversation - conversation = _conversation - - # Add web results if enabled - if internet_access: - current_date = datetime.now().strftime("%Y-%m-%d") - query = f'Current date: {current_date}. ' + prompt["content"] - search_results = fetch_search_results(query) - conversation.extend(search_results) - - # Add jailbreak instructions if enabled - if jailbreak_instructions := getJailbreak(jailbreak): - conversation.extend(jailbreak_instructions) - - # Add the prompt - conversation.append(prompt) - - # Reduce conversation size to avoid API Token quantity error - if len(conversation) > 3: - conversation = conversation[-4:] - - return conversation - - -def fetch_search_results(query): - """ - Fetch search results for a given query. - - :param query: Search query string - :return: List of search results - """ - search = get('https://ddg-api.herokuapp.com/search', - params={ - 'query': query, - 'limit': 3, - }) - - snippets = "" - for index, result in enumerate(search.json()): - snippet = f'[{index + 1}] "{result["snippet"]}" URL:{result["link"]}.' - snippets += snippet - - response = "Here are some updated web searches. Use this to improve user response:" - response += snippets - - return [{'role': 'system', 'content': response}] - - -def generate_stream(response, jailbreak): - """ - Generate the conversation stream. - - :param response: Response object from ChatCompletion.create - :param jailbreak: Jailbreak instruction string - :return: Generator object yielding messages in the conversation - """ - if getJailbreak(jailbreak): - response_jailbreak = '' - jailbroken_checked = False - for message in response: - response_jailbreak += message - if jailbroken_checked: - yield message - else: - if response_jailbroken_success(response_jailbreak): - jailbroken_checked = True - if response_jailbroken_failed(response_jailbreak): - yield response_jailbreak - jailbroken_checked = True - else: - yield from response - - -def response_jailbroken_success(response: str) -> bool: - """Check if the response has been jailbroken. - - :param response: Response string - :return: Boolean indicating if the response has been jailbroken - """ - act_match = re.search(r'ACT:', response, flags=re.DOTALL) - return bool(act_match) - - -def response_jailbroken_failed(response): - """ - Check if the response has not been jailbroken. 
- - :param response: Response string - :return: Boolean indicating if the response has not been jailbroken - """ - return False if len(response) < 4 else not (response.startswith("GPT:") or response.startswith("ACT:")) - - -def getJailbreak(jailbreak): - """ - Check if jailbreak instructions are provided. - - :param jailbreak: Jailbreak instruction string - :return: Jailbreak instructions if provided, otherwise None - """ - if jailbreak != "default": - special_instructions[jailbreak][0]['content'] += special_instructions['two_responses_instruction'] - if jailbreak in special_instructions: - special_instructions[jailbreak] - return special_instructions[jailbreak] - else: - return None - else: - return None diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/_version.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/_version.py deleted file mode 100644 index 1fc7f7334aa447852807572682a757a342003312..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/_version.py +++ /dev/null @@ -1,2 +0,0 @@ -# Master version for Pillow -__version__ = "10.0.0" diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/datastructures.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/datastructures.py deleted file mode 100644 index 3c96c56c70e1170cb5eaf0296d63daac8deaea7e..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/datastructures.py +++ /dev/null @@ -1,83 +0,0 @@ -from typing import Any, Callable, Dict, Iterable, Type, TypeVar, cast - -from fastapi._compat import ( - PYDANTIC_V2, - CoreSchema, - GetJsonSchemaHandler, - JsonSchemaValue, - general_plain_validator_function, -) -from starlette.datastructures import URL as URL # noqa: F401 -from starlette.datastructures import Address as Address # noqa: F401 -from starlette.datastructures import FormData as FormData # noqa: F401 -from starlette.datastructures import Headers as Headers # noqa: F401 -from starlette.datastructures import QueryParams as QueryParams # noqa: F401 -from starlette.datastructures import State as State # noqa: F401 -from starlette.datastructures import UploadFile as StarletteUploadFile - - -class UploadFile(StarletteUploadFile): - @classmethod - def __get_validators__(cls: Type["UploadFile"]) -> Iterable[Callable[..., Any]]: - yield cls.validate - - @classmethod - def validate(cls: Type["UploadFile"], v: Any) -> Any: - if not isinstance(v, StarletteUploadFile): - raise ValueError(f"Expected UploadFile, received: {type(v)}") - return v - - @classmethod - def _validate(cls, __input_value: Any, _: Any) -> "UploadFile": - if not isinstance(__input_value, StarletteUploadFile): - raise ValueError(f"Expected UploadFile, received: {type(__input_value)}") - return cast(UploadFile, __input_value) - - if not PYDANTIC_V2: - - @classmethod - def __modify_schema__(cls, field_schema: Dict[str, Any]) -> None: - field_schema.update({"type": "string", "format": "binary"}) - - @classmethod - def __get_pydantic_json_schema__( - cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler - ) -> JsonSchemaValue: - return {"type": "string", "format": "binary"} - - @classmethod - def __get_pydantic_core_schema__( - cls, source: Type[Any], handler: Callable[[Any], CoreSchema] - ) -> CoreSchema: - return general_plain_validator_function(cls._validate) - - -class DefaultPlaceholder: - """ - You shouldn't use this class directly. 
- - It's used internally to recognize when a default value has been overwritten, even - if the overridden default value was truthy. - """ - - def __init__(self, value: Any): - self.value = value - - def __bool__(self) -> bool: - return bool(self.value) - - def __eq__(self, o: object) -> bool: - return isinstance(o, DefaultPlaceholder) and o.value == self.value - - -DefaultType = TypeVar("DefaultType") - - -def Default(value: DefaultType) -> DefaultType: - """ - You shouldn't use this function directly. - - It's used internally to recognize when a default value has been overwritten, even - if the overridden default value was truthy. - """ - return DefaultPlaceholder(value) # type: ignore diff --git a/spaces/Dagfinn1962/diffusers-gallery/Dockerfile b/spaces/Dagfinn1962/diffusers-gallery/Dockerfile deleted file mode 100644 index 0ba18d346de09532882673442ee72107556a887d..0000000000000000000000000000000000000000 --- a/spaces/Dagfinn1962/diffusers-gallery/Dockerfile +++ /dev/null @@ -1,2 +0,0 @@ -FROM nginxinc/nginx-unprivileged:alpine -COPY . /usr/share/nginx/html \ No newline at end of file diff --git a/spaces/DaleChen/AutoGPT/CONTRIBUTING.md b/spaces/DaleChen/AutoGPT/CONTRIBUTING.md deleted file mode 100644 index 79169a0c1951853303f73ffa1fddb3518685606a..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/CONTRIBUTING.md +++ /dev/null @@ -1,105 +0,0 @@ -# Contributing to ProjectName - -First of all, thank you for considering contributing to our project! We appreciate your time and effort, and we value any contribution, whether it's reporting a bug, suggesting a new feature, or submitting a pull request. - -This document provides guidelines and best practices to help you contribute effectively. - -## Table of Contents - -- [Code of Conduct](#code-of-conduct) -- [Getting Started](#getting-started) -- [How to Contribute](#how-to-contribute) - - [Reporting Bugs](#reporting-bugs) - - [Suggesting Enhancements](#suggesting-enhancements) - - [Submitting Pull Requests](#submitting-pull-requests) -- [Style Guidelines](#style-guidelines) - - [Code Formatting](#code-formatting) - - [Pre-Commit Hooks](#pre-commit-hooks) - -## Code of Conduct - -By participating in this project, you agree to abide by our [Code of Conduct](CODE_OF_CONDUCT.md). Please read it to understand the expectations we have for everyone who contributes to this project. - -## 📢 A Quick Word -Right now we will not be accepting any Contributions that add non-essential commands to Auto-GPT. - -However, you absolutely can still add these commands to Auto-GPT in the form of plugins. Please check out this [template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template). -> ⚠️ Plugin support is expected to ship within the week. You can follow PR #757 for more updates! - -## Getting Started - -To start contributing, follow these steps: - -1. Fork the repository and clone your fork. -2. Create a new branch for your changes (use a descriptive name, such as `fix-bug-123` or `add-new-feature`). -3. Make your changes in the new branch. -4. Test your changes thoroughly. -5. Commit and push your changes to your fork. -6. Create a pull request following the guidelines in the [Submitting Pull Requests](#submitting-pull-requests) section. - -## How to Contribute - -### Reporting Bugs - -If you find a bug in the project, please create an issue on GitHub with the following information: - -- A clear, descriptive title for the issue. -- A description of the problem, including steps to reproduce the issue. 
-- Any relevant logs, screenshots, or other supporting information. - -### Suggesting Enhancements - -If you have an idea for a new feature or improvement, please create an issue on GitHub with the following information: - -- A clear, descriptive title for the issue. -- A detailed description of the proposed enhancement, including any benefits and potential drawbacks. -- Any relevant examples, mockups, or supporting information. - -### Submitting Pull Requests - -When submitting a pull request, please ensure that your changes meet the following criteria: - -- Your pull request should be atomic and focus on a single change. -- Your pull request should include tests for your change. -- You should have thoroughly tested your changes with multiple different prompts. -- You should have considered potential risks and mitigations for your changes. -- You should have documented your changes clearly and comprehensively. -- You should not include any unrelated or "extra" small tweaks or changes. - -## Style Guidelines - -### Code Formatting - -We use the `black` code formatter to maintain a consistent coding style across the project. Please ensure that your code is formatted using `black` before submitting a pull request. You can install `black` using `pip`: - -```bash -pip install black -``` - -To format your code, run the following command in the project's root directory: - -```bash -black . -``` -### Pre-Commit Hooks -We use pre-commit hooks to ensure that code formatting and other checks are performed automatically before each commit. To set up pre-commit hooks for this project, follow these steps: - -Install the pre-commit package using pip: -```bash -pip install pre-commit -``` - -Run the following command in the project's root directory to install the pre-commit hooks: -```bash -pre-commit install -``` - -Now, the pre-commit hooks will run automatically before each commit, checking your code formatting and other requirements. - -If you encounter any issues or have questions, feel free to reach out to the maintainers or open a new issue on GitHub. We're here to help and appreciate your efforts to contribute to the project. - -Happy coding, and once again, thank you for your contributions! - -Maintainers will look at PR that have no merge conflicts when deciding what to add to the project. 
Make sure your PR shows up here: - -https://github.com/Torantulino/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-is%3Aconflict+ \ No newline at end of file diff --git a/spaces/Dao3/SuperChatGPT/app.py b/spaces/Dao3/SuperChatGPT/app.py deleted file mode 100644 index e827159820fe2ca23c1f5f09ab64cb406d29a9d5..0000000000000000000000000000000000000000 --- a/spaces/Dao3/SuperChatGPT/app.py +++ /dev/null @@ -1,177 +0,0 @@ -# -*- coding:utf-8 -*- -import gradio as gr -import os -import logging -import sys -import argparse -from utils import * -from presets import * - -logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s") - -my_api_key = "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" # 在这里输入你的 API 密钥 - -#if we are running in Docker -if os.environ.get('dockerrun') == 'yes': - dockerflag = True -else: - dockerflag = False - -authflag = False - -if dockerflag: - my_api_key = os.environ.get('my_api_key') - if my_api_key == "empty": - logging.error("Please give a api key!") - sys.exit(1) - #auth - username = os.environ.get('USERNAME') - password = os.environ.get('PASSWORD') - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - authflag = True -else: - if not my_api_key and os.path.exists("api_key.txt") and os.path.getsize("api_key.txt"): - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip() - if os.path.exists("auth.json"): - with open("auth.json", "r") as f: - auth = json.load(f) - username = auth["username"] - password = auth["password"] - if username != "" and password != "": - authflag = True - -gr.Chatbot.postprocess = postprocess - -with gr.Blocks(css=customCSS,) as demo: - history = gr.State([]) - token_count = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - TRUECOMSTANT = gr.State(True) - FALSECONSTANT = gr.State(False) - topic = gr.State("未命名对话历史记录") - - # gr.HTML(""" - #
- # """) - gr.HTML(title) - - with gr.Row(scale=1).style(equal_height=True): - - with gr.Column(scale=5): - with gr.Row(scale=1): - chatbot = gr.Chatbot().style(height=1200) # .style(color_map=("#1D51EE", "#585A5B")) - with gr.Row(scale=1): - with gr.Column(scale=12): - user_input = gr.Textbox(show_label=False, placeholder="在这里输入").style( - container=False) - with gr.Column(min_width=50, scale=1): - submitBtn = gr.Button("🚀", variant="primary") - with gr.Row(scale=1): - emptyBtn = gr.Button("🧹 新的对话",) - retryBtn = gr.Button("🔄 重新生成") - delLastBtn = gr.Button("🗑️ 删除一条对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - - - - with gr.Column(): - with gr.Column(min_width=50,scale=1): - status_display = gr.Markdown("status: ready") - with gr.Tab(label="ChatGPT"): - keyTxt = gr.Textbox(show_label=True, placeholder=f"OpenAI API-key...",value=my_api_key, type="password", visible=not HIDE_MY_KEY, label="API-Key") - model_select_dropdown = gr.Dropdown(label="选择模型", choices=MODELS, multiselect=False, value=MODELS[0]) - with gr.Accordion("参数", open=False): - top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.05, - interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, - step=0.1, interactive=True, label="Temperature",) - use_streaming_checkbox = gr.Checkbox(label="实时传输回答", value=True, visible=enable_streaming_option) - use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox(show_label=True, placeholder=f"在这里输入System Prompt...", label="System prompt", value=initial_prompt).style(container=True) - with gr.Accordion(label="加载Prompt模板", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown(label="选择Prompt模板集合文件", choices=get_template_names(plain=True), multiselect=False, value=get_template_names(plain=True)[0]) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown(label="从Prompt模板中加载", choices=load_template(get_template_names(plain=True)[0], mode=1), multiselect=False, value=load_template(get_template_names(plain=True)[0], mode=1)[0]) - - with gr.Tab(label="保存/加载"): - with gr.Accordion(label="保存/加载对话历史记录", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, placeholder=f"在这里输入保存的文件名...", label="设置保存文件名", value="对话历史记录").style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown(label="从列表中加载对话", choices=get_history_names(plain=True), multiselect=False, value=get_history_names(plain=True)[0]) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - - - - gr.HTML(""" -
- """) - gr.Markdown(description) - - - user_input.submit(predict, [keyTxt, systemPromptTxt, history, user_input, chatbot, token_count, top_p, temperature, use_streaming_checkbox, model_select_dropdown, use_websearch_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - user_input.submit(reset_textbox, [], [user_input]) - - submitBtn.click(predict, [keyTxt, systemPromptTxt, history, user_input, chatbot, token_count, top_p, temperature, use_streaming_checkbox, model_select_dropdown, use_websearch_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - submitBtn.click(reset_textbox, [], [user_input]) - - emptyBtn.click(reset_state, outputs=[chatbot, history, token_count, status_display], show_progress=True) - - retryBtn.click(retry, [keyTxt, systemPromptTxt, history, chatbot, token_count, top_p, temperature, use_streaming_checkbox, model_select_dropdown], [chatbot, history, status_display, token_count], show_progress=True) - - delLastBtn.click(delete_last_conversation, [chatbot, history, token_count], [ - chatbot, history, token_count, status_display], show_progress=True) - - reduceTokenBtn.click(reduce_token_size, [keyTxt, systemPromptTxt, history, chatbot, token_count, top_p, temperature, use_streaming_checkbox, model_select_dropdown], [chatbot, history, status_display, token_count], show_progress=True) - - saveHistoryBtn.click(save_chat_history, [ - saveFileName, systemPromptTxt, history, chatbot], None, show_progress=True) - - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - - historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - - historyFileSelectDropdown.change(load_chat_history, [historyFileSelectDropdown, systemPromptTxt, history, chatbot], [saveFileName, systemPromptTxt, history, chatbot], show_progress=True) - - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - - templateFileSelectDropdown.change(load_template, [templateFileSelectDropdown], [promptTemplates, templateSelectDropdown], show_progress=True) - - templateSelectDropdown.change(get_template_content, [promptTemplates, templateSelectDropdown, systemPromptTxt], [systemPromptTxt], show_progress=True) - -logging.info(colorama.Back.GREEN + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" + colorama.Style.RESET_ALL) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "可以搜索网页的ChatGPT 🚀" - -if __name__ == "__main__": - #if running in Docker - if dockerflag: - if authflag: - demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=(username, password)) - else: - demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) - #if not running in Docker - else: - if authflag: - demo.queue().launch(share=False, auth=(username, password)) - else: - demo.queue().launch(share=False) # 改为 share=True 可以创建公开分享链接 - #demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - #demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - #demo.queue().launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/Dao3/image-to-video/README.md b/spaces/Dao3/image-to-video/README.md deleted file mode 100644 index 0ce7463830dfb23e3dbac22c17408a04461becef..0000000000000000000000000000000000000000 --- a/spaces/Dao3/image-to-video/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Images to Video -emoji: 👁 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: unknown 
-duplicated_from: Mishyface/image-to-video-film-3-kazuk-hugorowan-mishyface ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DataDoggo/Visionary/app.py b/spaces/DataDoggo/Visionary/app.py deleted file mode 100644 index 68036c07954523a4c68f4775d017d18619702423..0000000000000000000000000000000000000000 --- a/spaces/DataDoggo/Visionary/app.py +++ /dev/null @@ -1,53 +0,0 @@ -import requests -import tensorflow as tf -from tensorflow import keras -from keras.models import Sequential, load_model -from tensorflow.keras.models import Sequential -from tensorflow.keras.layers import Activation, Dense, BatchNormalization, Conv2D, MaxPool2D, Dropout, Flatten -from tensorflow.keras.optimizers import Adam -from tensorflow.keras.metrics import categorical_crossentropy -from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, array_to_img, img_to_array -from tensorflow.keras import datasets, layers, models - -import pandas as pd -import numpy as np - -import gradio as gr - -load_file = 'densenet256x256_weighted_tuned.hdf5' -model=load_model(load_file) - - -img_resize = keras.Sequential( - [ - layers.experimental.preprocessing.Resizing(256, 256, interpolation='bilinear') - ] -) - -def classify_image(inp): - inp = img_resize(inp) - img_array = keras.preprocessing.image.img_to_array(inp) - img_array = tf.expand_dims(img_array, 0) - - prediction = model.predict(img_array).flatten() - return {'Probability of Diabetic Retinopathy:': float(np.exp(prediction)/(1+np.exp(prediction)))} #{labels[i]: float(prediction[i]) for i in range(1)} - -content_image_input = gr.inputs.Image(label="Content Image") -style_image_input = gr.inputs.Image(shape=(256, 256), label="Style Image") - -image = gr.inputs.Image(label = 'Image') -label = gr.outputs.Label(num_top_classes=1) - -explanation = 'Page 1 examples both have DR and the model confidently predicts it correctly. Page 2 images are examples without DR and the model confidently predicts correctly. Page 3 are the type of images the model predicts poorly on. The first image on Page 3 has DR, but the model guesses incorrectly. The 2nd image on Page 3 does not have DR, but the model guesses incorrectly.' 
- -gr.Interface( - fn=classify_image, - inputs= image, - title = 'Prediction of Diabetic Retinopathy (DR)', - examples_per_page = 2, - examples = ['DR100.jpeg', 'DR95.jpeg', 'Norm5.jpeg', 'Norm16.jpeg', 'DR8.jpeg', 'Norm95.jpeg' ], - description = 'Demo for predicting the probability of having Diabetic Retinopathy with DenseNet Model.', - article = explanation, - outputs=label, - theme = "peach" -).launch() \ No newline at end of file diff --git a/spaces/Detomo/ai-comic-generation/src/app/interface/display/index.tsx b/spaces/Detomo/ai-comic-generation/src/app/interface/display/index.tsx deleted file mode 100644 index 26ba8d02a6afd446981aeca6c1c24b267ab467f1..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/app/interface/display/index.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import { RenderedScene } from "@/types" - -export function Display ({ rendered }: { rendered: RenderedScene }) { - return ( - <> - - - ) -} \ No newline at end of file diff --git a/spaces/Djacon/emotion_detection/youtube.py b/spaces/Djacon/emotion_detection/youtube.py deleted file mode 100644 index 39f37fe2d221c80be57fb1a159828050684cb248..0000000000000000000000000000000000000000 --- a/spaces/Djacon/emotion_detection/youtube.py +++ /dev/null @@ -1,41 +0,0 @@ -import re -from youtube_transcript_api import YouTubeTranscriptApi -from youtube_transcript_api._errors import TranscriptsDisabled - -MAX_SIZE = 20_000 -YT_REGEX = r'^((http)s?:\/\/)?((www\.)|(m\.))?youtube.com\/watch\?([^\?]*&)?v=.+$' # noqa -YT_REGEX_SHORT = r'^((http)s?:\/\/)?youtu.be\/([^\?=]+)(\?[^?]+)?$' - - -def _extract_video_id(url: str) -> str: - if not re.match(YT_REGEX, url): - if not re.match(YT_REGEX_SHORT, url): - return '' - - ind = url.find('?') - ind = len(url) if ind == -1 else ind - return url[url.find('e/')+2:ind] - - res = url.split('v=') - ind = res[1].find('&') - ind = len(res[1]) if ind == -1 else ind - return res[1][:ind] - - -def get_youtube_caption(url: str) -> str: - try: - video_id = _extract_video_id(url) - if not video_id: - return '' - - res, size = [], 0 - for transcript in YouTubeTranscriptApi.get_transcript(video_id): - res.append(transcript['text']) - size += len(transcript['text']) - if size >= MAX_SIZE: - return '\n'.join(res)[:MAX_SIZE] - return '\n'.join(res)[:MAX_SIZE] - except TranscriptsDisabled: - return 'no-cap' - except Exception: - return 'err' diff --git a/spaces/DmitriiKhizbullin/camel-data-explorer/apps/data_explorer/data_explorer.py b/spaces/DmitriiKhizbullin/camel-data-explorer/apps/data_explorer/data_explorer.py deleted file mode 100644 index bcc6f7ef52fe3714ede280be1ea4b211c99d0dd6..0000000000000000000000000000000000000000 --- a/spaces/DmitriiKhizbullin/camel-data-explorer/apps/data_explorer/data_explorer.py +++ /dev/null @@ -1,335 +0,0 @@ -""" -Gradio-based web UI to explore the Camel dataset. -""" - -import argparse -import random -from typing import Dict, List, Optional, Tuple - -import gradio as gr - -from apps.data_explorer.loader import Datasets, load_datasets - - -def parse_arguments(): - """ Get command line arguments. 
""" - - parser = argparse.ArgumentParser("Camel data explorer") - parser.add_argument( - '--data-path', type=str, default=None, - help='Path to the folder with ZIP datasets containing JSONs') - parser.add_argument('--default-dataset', type=str, default=None, - help='Default dataset name selected from ZIPs') - parser.add_argument('--share', type=bool, default=False, - help='Expose the web UI to Gradio') - parser.add_argument( - '--server-name', type=str, default="0.0.0.0", - help='localhost for local, 0.0.0.0 (default) for public') - parser.add_argument('--server-port', type=int, default=8080, - help='Port ot run the web page on') - parser.add_argument('--inbrowser', type=bool, default=False, - help='Open the web UI in the default browser on lunch') - parser.add_argument( - '--concurrency-count', type=int, default=10, - help='Number if concurrent threads at Gradio websocket queue. ' + - 'Increase to serve more requests but keep an eye on RAM usage.') - args, unknown = parser.parse_known_args() - if len(unknown) > 0: - print("Unknown args: ", unknown) - return args - - -def construct_ui(blocks, datasets: Datasets, default_dataset: str = None): - """ Build Gradio UI and populate with chat data from JSONs. - - Args: - blocks: Gradio blocks - datasets (Datasets): Several parsed - multi-JSON dataset with chats. - default_dataset (str): Default selection of the dataset. - - Returns: - None - """ - - if default_dataset is None: - default_dataset = "ai_society_chat" - - misalignment_set_names = {"misalignment"} - ordinary_datasets = [ - v for v in datasets.keys() if v not in misalignment_set_names - ] - misalignment_datasets = [ - v for v in datasets.keys() if v in misalignment_set_names - ] - default_dataset_name = default_dataset \ - if default_dataset in datasets.keys() \ - else ordinary_datasets[0] if len(ordinary_datasets) > 0 \ - else misalignment_datasets[0] if len(misalignment_datasets) > 0 \ - else "" - dataset_names = list(datasets.keys()) - - with gr.Row().style(): - with gr.Column(scale=2): - with gr.Row(): - dataset_dd = gr.Dropdown(dataset_names, label="Select dataset", - value="NODEFAULT", interactive=True) - with gr.Row(): - disclaimer_ta = gr.Markdown( - "## By clicking AGREE I consent to use the dataset " - "for purely educational and academic purposes and " - "not use it for any fraudulent activity; and I take " - "all the responsibility if the data is used in a " - "malicious application.", visible=False) - with gr.Row(): - with gr.Column(scale=1): - accept_disclaimer_bn = gr.Button("AGREE", visible=False) - with gr.Column(scale=1): - decline_disclaimer_bn = gr.Button("DECLINE", visible=False) - with gr.Row(): - with gr.Column(scale=3): - assistant_dd = gr.Dropdown([], label="ASSISTANT", value="", - interactive=True) - with gr.Column(scale=3): - user_dd = gr.Dropdown([], label="USER", value="", - interactive=True) - with gr.Column(scale=1): - gr.Markdown( - "## CAMEL: Communicative Agents for \"Mind\" Exploration" - " of Large Scale Language Model Society\n" - "Github repo: [https://github.com/lightaime/camel]" - "(https://github.com/lightaime/camel)\n" - '
' - 'Logo' - '
') - - task_dd = gr.Dropdown([], label="Original task", value="", - interactive=True) - specified_task_ta = gr.TextArea(label="Specified task", lines=2) - chatbot = gr.Chatbot() - accepted_st = gr.State(False) - - def set_default_dataset() -> Dict: - """ Trigger for app load. - - Returns: - Dict: Update dict for dataset_dd. - """ - return gr.update(value=default_dataset_name) - - def check_if_misalignment(dataset_name: str, accepted: bool) \ - -> Tuple[Dict, Dict, Dict]: - """ Display AGREE/DECLINE if needed. - - Returns: - Tuple: Visibility updates for the buttons. - """ - - if dataset_name == "misalignment" and not accepted: - return gr.update(visible=True), \ - gr.update(visible=True), gr.update(visible=True) - else: - return gr.update(visible=False), \ - gr.update(visible=False), gr.update(visible=False) - - def enable_misalignment() -> Tuple[bool, Dict, Dict, Dict]: - """ Update the state of the accepted disclaimer. - - Returns: - Tuple: New state and visibility updates for the buttons. - """ - - return True, gr.update(visible=False), \ - gr.update(visible=False), gr.update(visible=False) - - def disable_misalignment() -> Tuple[bool, Dict, Dict, Dict]: - """ Update the state of the accepted disclaimer. - - Returns: - Tuple: New state and visibility updates for the buttons. - """ - - return False, gr.update(visible=False), \ - gr.update(visible=False), gr.update(visible=False) - - def update_dataset_selection(dataset_name: str, - accepted: bool) -> Tuple[Dict, Dict]: - """ Update roles based on the selected dataset. - - Args: - dataset_name (str): Name of the loaded .zip dataset. - accepted (bool): If the disclaimer thas been accepted. - - Returns: - Tuple[Dict, Dict]: New Assistant and User roles. - """ - - if dataset_name == "misalignment" and not accepted: - # If used did not accept the misalignment policy, - # keep the old selection. - return (gr.update(value="N/A", - choices=[]), gr.update(value="N/A", choices=[])) - - dataset = datasets[dataset_name] - assistant_roles = dataset['assistant_roles'] - user_roles = dataset['user_roles'] - assistant_role = random.choice(assistant_roles) \ - if len(assistant_roles) > 0 else "" - user_role = random.choice(user_roles) if len(user_roles) > 0 else "" - return (gr.update(value=assistant_role, choices=assistant_roles), - gr.update(value=user_role, choices=user_roles)) - - def roles_dd_change(dataset_name: str, assistant_role: str, - user_role: str) -> Dict: - """ Update the displayed chat upon inputs change. - - Args: - assistant_role (str): Assistant dropdown value. - user_role (str): User dropdown value. - - Returns: - Dict: New original roles state dictionary. - """ - matrix = datasets[dataset_name]['matrix'] - if (assistant_role, user_role) in matrix: - record: Dict[str, Dict] = matrix[(assistant_role, user_role)] - original_task_options = list(record.keys()) - original_task = original_task_options[0] - else: - original_task = "N/A" - original_task_options = [] - - choices = gr.Dropdown.update(choices=original_task_options, - value=original_task, interactive=True) - return choices - - def build_chat_history(messages: Dict[int, Dict]) -> List[Tuple]: - """ Structures chatbot contents from the loaded data. - - Args: - messages (Dict[int, Dict]): Messages loaded from JSON. - - Returns: - List[Tuple]: Chat history in chatbot UI element format. 
- """ - history = [] - curr_qa = (None, None) - for k in sorted(messages.keys()): - msg = messages[k] - content = msg['content'] - if msg['role_type'] == "USER": - if curr_qa[0] is not None: - history.append(curr_qa) - curr_qa = (content, None) - else: - curr_qa = (content, None) - elif msg['role_type'] == "ASSISTANT": - curr_qa = (curr_qa[0], content) - history.append(curr_qa) - curr_qa = (None, None) - else: - pass - return history - - def task_dd_change(dataset_name: str, assistant_role: str, user_role: str, - original_task: str) -> Tuple[str, List]: - """ Load task details and chatbot history into UI elements. - - Args: - assistant_role (str): An assistan role. - user_role (str): An user role. - original_task (str): The original task. - - Returns: - Tuple[str, List]: New contents of the specified task - and chatbot history UI elements. - """ - - matrix = datasets[dataset_name]['matrix'] - if (assistant_role, user_role) in matrix: - task_dict: Dict[str, Dict] = matrix[(assistant_role, user_role)] - if original_task in task_dict: - chat = task_dict[original_task] - specified_task = chat['specified_task'] - history = build_chat_history(chat['messages']) - else: - specified_task = "N/A" - history = [] - else: - specified_task = "N/A" - history = [] - return specified_task, history - - dataset_dd.change(check_if_misalignment, [dataset_dd, accepted_st], - [disclaimer_ta, accept_disclaimer_bn, - decline_disclaimer_bn]) \ - .then(update_dataset_selection, - [dataset_dd, accepted_st], - [assistant_dd, user_dd]) - - accept_disclaimer_bn.click(enable_misalignment, None, [ - accepted_st, disclaimer_ta, accept_disclaimer_bn, decline_disclaimer_bn - ]) \ - .then(update_dataset_selection, - [dataset_dd, accepted_st], - [assistant_dd, user_dd]) - - decline_disclaimer_bn.click(disable_misalignment, None, [ - accepted_st, disclaimer_ta, accept_disclaimer_bn, decline_disclaimer_bn - ]) \ - .then(update_dataset_selection, - [dataset_dd, accepted_st], - [assistant_dd, user_dd]) - - func_args = (roles_dd_change, [dataset_dd, assistant_dd, user_dd], task_dd) - assistant_dd.change(*func_args) - user_dd.change(*func_args) - - task_dd.change(task_dd_change, - [dataset_dd, assistant_dd, user_dd, task_dd], - [specified_task_ta, chatbot]) - - blocks.load(set_default_dataset, None, dataset_dd) - - -def construct_blocks(data_path: str, default_dataset: Optional[str]): - """ Construct Blocs app but do not launch it. - - Args: - data_path (str): Path to the set of ZIP datasets. - default_dataset (Optional[str]): Name of the default dataset, - without extension. - - Returns: - gr.Blocks: Blocks instance. - """ - - print("Loading the dataset...") - datasets = load_datasets(data_path) - print("Dataset is loaded") - - print("Getting Data Explorer web server online...") - - with gr.Blocks() as blocks: - construct_ui(blocks, datasets, default_dataset) - - return blocks - - -def main(): - """ Entry point. 
""" - - args = parse_arguments() - - blocks = construct_blocks(args.data_path, args.default_dataset) - - blocks.queue(args.concurrency_count) \ - .launch(share=args.share, inbrowser=args.inbrowser, - server_name=args.server_name, server_port=args.server_port) - - print("Exiting.") - - -if __name__ == "__main__": - main() diff --git a/spaces/DuckyPolice/StormDrainMega/app.py b/spaces/DuckyPolice/StormDrainMega/app.py deleted file mode 100644 index 0b046f896326d502bc436801544421a2bef8f383..0000000000000000000000000000000000000000 --- a/spaces/DuckyPolice/StormDrainMega/app.py +++ /dev/null @@ -1,73 +0,0 @@ -import os -from subprocess import getoutput - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.16/xformers-0.0.16+814314d.d20230118-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.16/xformers-0.0.16+814314d.d20230118-cp38-cp38-linux_x86_64.whl") - -os.system(f"git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") -print("Fetching Prerequisites...") -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? 
document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -os.system(f"sed -i -e 's/inputs=\[component\],/&\\n queue=False,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/outputs=\[token_counter\]/outputs=[token_counter], queue=False/g' /home/user/app/stable-diffusion-webui/modules/ui.py") - -# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header---------------------------- -#os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py") -#os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -# --------------------------------------------------------------------------------------------------------------------------------------------------- - -os.system(f"wget -q https://huggingface.co/Alsebay/PeachMixs/resolve/main/PeachTachyonMixs/PeachTachyon2.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PeachTachyon2.safetensors") -os.system(f"wget -q https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.ckpt") -os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") -if "IS_SHARED_UI" in os.environ: - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - os.system(f"python launch.py --use-cpu all --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --skip-torch-cuda-test") -else: - # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py") - #os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py") - print("Fetching Extensions...") - # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME") - os.system(f"wget -q https://github.com/vladmandic/sd-extension-steps-animation/tree/90663eb7450c3487b693cf20e76ec4d7edd78cd5 -O /home/user/app/stable-diffusion-webui/extentions/video-gen") - os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study") - 
os.system(f"git clone https://github.com/butaixianran/Stable-Diffusion-Webui-Civitai-Helper /home/user/app/stable-diffusion-webui/extensions/Stable-Diffusion-Webui-Civitai-Helper") - os.system(f"git clone https://github.com/kohya-ss/sd-webui-additional-networks /home/user/app/stable-diffusion-webui/extensions/sd-webui-additional-networks") - os.system(f"wget -q https://huggingface.co/qewadszcx132/hyperbreasts/resolve/main/hyperbreasts_v4.ckpt -O /home/user/app/stable-diffusion-webui/extensions/sd-webui-additional-networks/models/lora/hyperbreasts_v4.ckpt") - os.system(f"wget -q https://huggingface.co/Osmond141319/Hyperbreasts/resolve/main/hyperbreasts_v5Lora.ckpt -O /home/user/app/stable-diffusion-webui/extensions/sd-webui-additional-networks/models/lora/hyperbreasts_v5.ckpt") - os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui") - os.system(f"git clone https://github.com/bbc-mc/sdweb-merge-block-weighted-gui /home/user/app/stable-diffusion-webui/extentions/model-merger") - - # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt") - print("Fetching Models...") - os.system(f"wget -q https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.5-pruned.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-v4.5-pruned.safetensors") - #os.system(f"wget -q https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/AbyssOrangeMix3.safetensors") - #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt") - #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt") - os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt") - os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/stable-diffusion-2-1.ckpt") - os.system(f"wget -q https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/blob/main/dreamlike-diffusion-1.0.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/dreamlike-diffusion-1.0.ckpt") - #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt") - #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt") - #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O 
/home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt") - #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt") - os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt") - #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt") - os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml") - os.system(f"EXPOSE 7860") - os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - #os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - # os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - os.system(f"python launch.py --precision full --no-half --use-cpu all --listen --administrator --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --skip-torch-cuda-test") diff --git a/spaces/ECCV2022/PSG/OpenPSG/configs/psgtr/psgtr_r50.py b/spaces/ECCV2022/PSG/OpenPSG/configs/psgtr/psgtr_r50.py deleted file mode 100644 index c8827bbb9461a34a9d894c2aee9fb6286503898d..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/PSG/OpenPSG/configs/psgtr/psgtr_r50.py +++ /dev/null @@ -1,82 +0,0 @@ -model = dict( - type='PSGTr', - backbone=dict(type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='pytorch', - init_cfg=dict(type='Pretrained', - checkpoint='torchvision://resnet50')), - bbox_head=dict(type='PSGTrHead', - num_classes=80, - num_relations=117, - in_channels=2048, - transformer=dict( - type='Transformer', - encoder=dict(type='DetrTransformerEncoder', - num_layers=6, - transformerlayers=dict( - type='BaseTransformerLayer', - attn_cfgs=[ - dict(type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1) - ], - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', - 'ffn', 'norm'))), - decoder=dict( - type='DetrTransformerDecoder', - return_intermediate=True, - num_layers=6, - transformerlayers=dict( - type='DetrTransformerDecoderLayer', - attn_cfgs=dict(type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1), - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', - 'cross_attn', 'norm', 'ffn', - 'norm')), - )), - positional_encoding=dict(type='SinePositionalEncoding', - num_feats=128, - normalize=True), - sub_loss_cls=dict(type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0, - class_weight=1.0), - 
sub_loss_bbox=dict(type='L1Loss', loss_weight=5.0), - sub_loss_iou=dict(type='GIoULoss', loss_weight=2.0), - sub_focal_loss=dict(type='BCEFocalLoss', loss_weight=2.0), - sub_dice_loss=dict(type='psgtrDiceLoss', loss_weight=2.0), - obj_loss_cls=dict(type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0, - class_weight=1.0), - obj_loss_bbox=dict(type='L1Loss', loss_weight=5.0), - obj_loss_iou=dict(type='GIoULoss', loss_weight=2.0), - obj_focal_loss=dict(type='BCEFocalLoss', loss_weight=2.0), - obj_dice_loss=dict(type='psgtrDiceLoss', loss_weight=2.0), - rel_loss_cls=dict(type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=2.0, - class_weight=1.0)), - # training and testing settings - train_cfg=dict(assigner=dict( - type='HTriMatcher', - s_cls_cost=dict(type='ClassificationCost', weight=1.), - s_reg_cost=dict(type='BBoxL1Cost', weight=5.0), - s_iou_cost=dict(type='IoUCost', iou_mode='giou', weight=2.0), - o_cls_cost=dict(type='ClassificationCost', weight=1.), - o_reg_cost=dict(type='BBoxL1Cost', weight=5.0), - o_iou_cost=dict(type='IoUCost', iou_mode='giou', weight=2.0), - r_cls_cost=dict(type='ClassificationCost', weight=2.))), - test_cfg=dict(max_per_img=100)) diff --git a/spaces/ElisR/spherical_harmonics_visualisation/app.py b/spaces/ElisR/spherical_harmonics_visualisation/app.py deleted file mode 100644 index 2ffe3cbdf452f914e20fbb7d396286d7524d8743..0000000000000000000000000000000000000000 --- a/spaces/ElisR/spherical_harmonics_visualisation/app.py +++ /dev/null @@ -1,78 +0,0 @@ -import gradio as gr -import plotly.graph_objects as go - -import numpy as np -from scipy.special import sph_harm - - -def simple_sph_plot(l, m): - # Fix the input if invalid - if abs(m) > l: - m = 0 - - resolution = 200 - - # Pick uniform angles - theta = np.linspace(0, np.pi, resolution) - phi = np.linspace(0, 2 * np.pi, resolution) - theta, phi = np.meshgrid(theta, phi) - - # The spherical harmonic function to plot - Y_lm = sph_harm(m, l, phi, theta).real # Original - #if m == 0: - # Y_lm = sph_harm(m, l, phi, theta).real - #elif m < 0: - # Y_lm = np.sqrt(2) * (-1)**m * sph_harm(-m, l, phi, theta).imag - #else: - # Y_lm = np.sqrt(2) * (-1)**m * sph_harm(m, l, phi, theta).real - - # Convert to Cartesian coordinates - x = np.sin(theta) * np.cos(phi) * np.abs(Y_lm) - y = np.sin(theta) * np.sin(phi) * np.abs(Y_lm) - z = np.cos(theta) * np.abs(Y_lm) - - surf = go.Surface( - x=x, y=y, z=z, surfacecolor=Y_lm, colorscale="RdBu", showscale=False - ) - - fig = go.Figure(data=[surf]) - - # Make axis invisible - invisible_axis = dict( - showbackground=False, - showline=False, - zeroline=False, - showgrid=False, - showticklabels=False, - title="", - ) - - camera = { - "up": {"x": 0, "y": 0, "z": 1}, - "center": {"x": 0, "y": 0, "z": 0}, - "eye": {"x": 1.25, "y": 1.25, "z": 1.25}, - } - - fig.update_layout( - **{ - f"scene": { - "xaxis": invisible_axis, - "yaxis": invisible_axis, - "zaxis": invisible_axis, - }, - "scene_camera": camera, - } - ) - - return fig - - -outputs = gr.Plot() -inputs = [ - gr.Number(value=1, minimum=0, info="Angular Momentum"), - gr.Number(value=1, info="z-Component"), -] - -iface = gr.Interface(fn=simple_sph_plot, inputs=inputs, outputs=outputs) - -iface.launch() diff --git a/spaces/Eriberto/chatGPT/app.py b/spaces/Eriberto/chatGPT/app.py deleted file mode 100644 index b2afc7218a4571f87e5e9a86328d445fcec013b0..0000000000000000000000000000000000000000 --- a/spaces/Eriberto/chatGPT/app.py +++ /dev/null @@ -1,306 +0,0 @@ -import openai -import gradio as gr -import os, sys, json -from 
loguru import logger -import random - -openai.api_key = os.environ.get('SessionToken') -logger.info(f"session_token_: {openai.api_key}") - -conversation = "" -user_name = "MH" -bot_name = "bbDemo" - -def get_response_from_chatgpt(text): - #try: - response_api = openai.Completion.create(engine='text-davinci-003', prompt=str(text), max_tokens=50) - response_str = response_api["choices"][0]["text"].replace("\n", "") - response_str = response_str.split(user_name + ": ", 1)[0].split(bot_name + ": ", 1)[0] - print("RESPONSE response_api", response_api) - print("RESPONSE response_str ", response_str) - response = response_str - logger.info(f"Response: [{response_api}]") - logger.info(f"conversation_id_: [{response}]") - - return response - -start_work = """async() => { - function isMobile() { - try { - document.createEvent("TouchEvent"); return true; - } catch(e) { - return false; - } - } - function getClientHeight() - { - var clientHeight=0; - if(document.body.clientHeight&&document.documentElement.clientHeight) { - var clientHeight = (document.body.clientHeightdocument.documentElement.clientHeight)?document.body.clientHeight:document.documentElement.clientHeight; - } - return clientHeight; - } - - function setNativeValue(element, value) { - const valueSetter = Object.getOwnPropertyDescriptor(element.__proto__, 'value').set; - const prototype = Object.getPrototypeOf(element); - const prototypeValueSetter = Object.getOwnPropertyDescriptor(prototype, 'value').set; - - if (valueSetter && valueSetter !== prototypeValueSetter) { - prototypeValueSetter.call(element, value); - } else { - valueSetter.call(element, value); - } - } - function save_conversation(chatbot) { - var conversations = new Array(); - for (var i = 0; i < chatbot.children.length; i++) { - conversations[i] = chatbot.children[i].innerHTML; - } - var json_str = JSON.stringify(conversations); - localStorage.setItem('chatgpt_conversations', json_str); - } - function load_conversation(chatbot) { - var json_str = localStorage.getItem('chatgpt_conversations'); - if (json_str) { - conversations = JSON.parse(json_str); - for (var i = 0; i < conversations.length; i++) { - var new_div = document.createElement("div"); - if((i%2)===0){ - new_div.className = "px-3 py-2 rounded-[22px] rounded-br-none text-white text-sm chat-message svelte-rct66g"; - new_div.style.backgroundColor = "#16a34a"; - } else { - new_div.className = "px-3 py-2 rounded-[22px] rounded-bl-none place-self-start text-white text-sm chat-message svelte-rct66g"; - new_div.style.backgroundColor = "#2563eb"; - if (conversations[i].indexOf(" gradio-app').shadowRoot; - if (!gradioEl) { - gradioEl = document.querySelector('body > gradio-app'); - } - - if (typeof window['gradioEl'] === 'undefined') { - window['gradioEl'] = gradioEl; - - const page1 = window['gradioEl'].querySelectorAll('#page_1')[0]; - const page2 = window['gradioEl'].querySelectorAll('#page_2')[0]; - - page1.style.display = "none"; - page2.style.display = "block"; - window['div_count'] = 0; - window['chat_bot'] = window['gradioEl'].querySelectorAll('#chat_bot')[0]; - window['chat_bot1'] = window['gradioEl'].querySelectorAll('#chat_bot1')[0]; - chat_row = window['gradioEl'].querySelectorAll('#chat_row')[0]; - prompt_row = window['gradioEl'].querySelectorAll('#prompt_row')[0]; - window['chat_bot1'].children[1].textContent = ''; - - clientHeight = getClientHeight(); - if (isMobile()) { - output_htmls = window['gradioEl'].querySelectorAll('.output-html'); - for (var i = 0; i < output_htmls.length; i++) { - 
output_htmls[i].style.display = "none"; - } - new_height = (clientHeight - 250) + 'px'; - } else { - new_height = (clientHeight - 350) + 'px'; - } - chat_row.style.height = new_height; - window['chat_bot'].style.height = new_height; - window['chat_bot'].children[2].style.height = new_height; - window['chat_bot1'].style.height = new_height; - window['chat_bot1'].children[2].style.height = new_height; - prompt_row.children[0].style.flex = 'auto'; - prompt_row.children[0].style.width = '100%'; - window['gradioEl'].querySelectorAll('#chat_radio')[0].style.flex = 'auto'; - window['gradioEl'].querySelectorAll('#chat_radio')[0].style.width = '100%'; - prompt_row.children[0].setAttribute('style','flex-direction: inherit; flex: 1 1 auto; width: 100%;border-color: green;border-width: 1px !important;') - window['chat_bot1'].children[1].setAttribute('style', 'border-bottom-right-radius:0;top:unset;bottom:0;padding-left:0.1rem'); - window['gradioEl'].querySelectorAll('#btns_row')[0].children[0].setAttribute('style', 'min-width: min(10px, 100%); flex-grow: 1'); - window['gradioEl'].querySelectorAll('#btns_row')[0].children[1].setAttribute('style', 'min-width: min(10px, 100%); flex-grow: 1'); - - load_conversation(window['chat_bot1'].children[2].children[0]); - window['chat_bot1'].children[2].scrollTop = window['chat_bot1'].children[2].scrollHeight; - - window['gradioEl'].querySelectorAll('#clear-btn')[0].onclick = function(e){ - if (confirm('Clear all outputs?')==true) { - window['chat_bot1'].children[2].children[0].innerHTML = ''; - save_conversation(window['chat_bot1'].children[2].children[0]); - } - } - - window['prevPrompt'] = ''; - window['doCheckPrompt'] = 0; - window['prevImgSrc'] = ''; - window['checkChange'] = function checkChange() { - try { - if (window['gradioEl'].querySelectorAll('.gr-radio')[0].checked) { - if (window['chat_bot'].children[2].children[0].children.length > window['div_count']) { - new_len = window['chat_bot'].children[2].children[0].children.length - window['div_count']; - for (var i = 0; i < new_len; i++) { - new_div = window['chat_bot'].children[2].children[0].children[window['div_count'] + i].cloneNode(true); - window['chat_bot1'].children[2].children[0].appendChild(new_div); - } - window['div_count'] = chat_bot.children[2].children[0].children.length; - window['chat_bot1'].children[2].scrollTop = window['chat_bot1'].children[2].scrollHeight; - save_conversation(window['chat_bot1'].children[2].children[0]); - } - if (window['chat_bot'].children[0].children.length > 1) { - window['chat_bot1'].children[1].textContent = window['chat_bot'].children[0].children[1].textContent; - } else { - window['chat_bot1'].children[1].textContent = ''; - } - } else { - texts = window['gradioEl'].querySelectorAll('textarea'); - text0 = texts[0]; - text1 = texts[1]; - img_index = 0; - text_value = text1.value; - if (window['doCheckPrompt'] === 0 && window['prevPrompt'] !== text_value) { - console.log('_____new prompt___[' + text_value + ']_'); - window['doCheckPrompt'] = 1; - window['prevPrompt'] = text_value; - - tabitems = window['gradioEl'].querySelectorAll('.tabitem'); - for (var i = 0; i < tabitems.length; i++) { - inputText = tabitems[i].children[0].children[1].children[0].querySelectorAll('.gr-text-input')[0]; - setNativeValue(inputText, text_value); - inputText.dispatchEvent(new Event('input', { bubbles: true })); - } - setTimeout(function() { - btns = window['gradioEl'].querySelectorAll('button'); - for (var i = 0; i < btns.length; i++) { - if (['Generate 
image','Run'].includes(btns[i].innerText)) { - btns[i].click(); - } - } - window['doCheckPrompt'] = 0; - }, 10); - } - tabitems = window['gradioEl'].querySelectorAll('.tabitem'); - imgs = tabitems[img_index].children[0].children[1].children[1].querySelectorAll("img"); - if (imgs.length > 0) { - if (window['prevImgSrc'] !== imgs[0].src) { - var user_div = document.createElement("div"); - user_div.className = "px-3 py-2 rounded-[22px] rounded-br-none text-white text-sm chat-message svelte-rct66g"; - user_div.style.backgroundColor = "#16a34a"; - user_div.innerHTML = "

" + text0.value + "

"; - window['chat_bot1'].children[2].children[0].appendChild(user_div); - var bot_div = document.createElement("div"); - bot_div.className = "px-3 py-2 rounded-[22px] rounded-bl-none place-self-start text-white text-sm chat-message svelte-rct66g"; - bot_div.style.backgroundColor = "#2563eb"; - bot_div.style.width = "80%"; - bot_div.style.padding = "0.2rem"; - bot_div.appendChild(imgs[0].cloneNode(true)); - window['chat_bot1'].children[2].children[0].appendChild(bot_div); - - window['chat_bot1'].children[2].scrollTop = window['chat_bot1'].children[2].scrollHeight; - window['prevImgSrc'] = imgs[0].src; - save_conversation(window['chat_bot1'].children[2].children[0]); - } - } - if (tabitems[img_index].children[0].children[1].children[1].children[0].children.length > 1) { - window['chat_bot1'].children[1].textContent = tabitems[img_index].children[0].children[1].children[1].children[0].textContent; - } else { - window['chat_bot1'].children[1].textContent = ''; - } - } - - } catch(e) { - } - } - window['checkChange_interval'] = window.setInterval("window.checkChange()", 500); - } - - return false; -}""" - -space_ids = { - "spaces/stabilityai/stable-diffusion":"Stable Diffusion 2.1", - } - -tab_actions = [] -tab_titles = [] - -for space_id in space_ids.keys(): - print(space_id, space_ids[space_id]) - try: - tab = gr.Interface.load(space_id) - tab_actions.append(tab) - tab_titles.append(space_ids[space_id]) - except Exception as e: - logger.info(f"load_fail__{space_id}_{e}") - -def chat(input0, input1, chat_radio, chat_history): - out_chat = [] - if chat_history != '': - out_chat = json.loads(chat_history) - #if chat_radio == "Talk to chatGPT": - response = get_response_from_chatgpt(input0) - out_chat.append((input0, response)) - chat_history = json.dumps(out_chat) - - logger.info(f"out_chat_input0 and input1 {input0} -- {input1}") - logger.info(f"chat history {chat_history}") - return out_chat, input1, chat_history - -article = """ - -""" - -with gr.Blocks(title='Talk to chatGPT') as demo: - gr.HTML("

This is a demo of the GPT-3 model applied in the chatbot context.

") - gr.HTML("

Ask questions and be surprised by the answers

") - with gr.Group(elem_id="page_1", visible=True) as page_1: - with gr.Box(): - with gr.Row(): - start_button = gr.Button("Click here to chat!", elem_id="start-btn", visible=True) - start_button.click(fn=None, inputs=[], outputs=[], _js=start_work) - - with gr.Group(elem_id="page_2", visible=False) as page_2: - with gr.Row(elem_id="chat_row"): - chatbot = gr.Chatbot(elem_id="chat_bot", visible=False).style(color_map=("green", "blue")) - chatbot1 = gr.Chatbot(elem_id="chat_bot1").style(color_map=("green", "blue")) - with gr.Row(elem_id="prompt_row"): - prompt_input0 = gr.Textbox(lines=2, label="prompt",show_label=False) - prompt_input1 = gr.Textbox(lines=4, label="prompt", visible=False) - chat_history = gr.Textbox(lines=4, label="prompt", visible=False) - chat_radio = gr.Radio(["Talk to chatGPT", "Text to Image"], elem_id="chat_radio",value="Talk to chatGPT", show_label=False) - with gr.Row(elem_id="btns_row"): - with gr.Column(id="submit_col"): - submit_btn = gr.Button(value = "submit",elem_id="submit-btn").style( - margin=True, - rounded=(True, True, True, True), - width=100 - ) - with gr.Column(id="clear_col"): - clear_btn = gr.Button(value = "clear outputs", elem_id="clear-btn").style( - margin=True, - rounded=(True, True, True, True), - width=100 - ) - - submit_btn.click(fn=chat, - inputs=[prompt_input0, prompt_input1, chat_radio, chat_history], - outputs=[chatbot, prompt_input1, chat_history], - ) - with gr.Row(elem_id='tab_img', visible=False).style(height=5): - tab_img = gr.TabbedInterface(tab_actions, tab_titles) - - gr.HTML(article) -demo.launch(debug = True) - diff --git a/spaces/EuroPython2022/README/README.md b/spaces/EuroPython2022/README/README.md deleted file mode 100644 index 6b69575b4e5e17e1082f1a74b34fc55a00de90c3..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/README/README.md +++ /dev/null @@ -1,136 +0,0 @@ ---- -title: README -emoji: ⚡ -colorFrom: pink -colorTo: green -sdk: static -pinned: false ---- - -
-

Announcement: deadline to submit demos was extended to July 24th!

-

EuroPython 2022

-

EuroPython Dublin, You're invited!

-

-Welcome to the 21st EuroPython. We're the oldest and longest running volunteer-led Python programming conference on the planet! Join us in July in the beautiful and vibrant city of Dublin. We'll be together, face to face and online, to celebrate our shared passion for Python and its community!

- -

Hugging Face Gradio Hackathon 🤗

-

-Come Join us from July 13th to 24th for a Hackathon in person and online using Gradio and Hugging Face to build and host Machine Learning demos. Find tutorial on getting started with Gradio on Hugging Face here and to get started with the new Gradio Blocks API here. Once the gradio demo is setup, see how to add it to Hugging Face Spaces here. Come see the talk on How to craft awesome Machine Learning demos with Python in Liffey Hall 2 on 13 July 2022 at 14:00 by Omar Sanseviero

- -

Sprint July 16 and 17

-
    -
  • Number of people: 3 maintainers + anyone willing to join -
  • -
  • Build Machine Learning demos. You can also join if you don't know much about ML!
  • -
  • Liffey Hall 2
  • -
  • Python Level: any
  • -
- -

Join organization by clicking here

- -Europython Banner - -

Potential ideas for creating spaces:

- - -

Hugging Face Prizes

- - -

LeaderBoard for Most Popular EuroPython Spaces

-

See the EuroPython Leaderboard

-

Hugging Face Spaces & Gradio for Showcasing your EuroPython ‘22 Demo -

-

- In this tutorial, we will demonstrate how to showcase your demo with an easy-to-use web interface using the Gradio Python library, and how to host it on Hugging Face Spaces so that conference attendees can easily find and try out your demos. Also, see https://gradio.app/introduction_to_blocks/ for a more flexible way to build Gradio demos -

-

🚀 Create a Gradio Demo from your Model -

-

-The first step is to create a web demo from your model. As an example, we will be creating a demo from an image classification model (called model) which we will be uploading to Spaces. The full code for steps 1-4 can be found in this colab notebook. -


- -

1. Install the gradio library -

-

-All you need to do is to run this in the terminal: pip install gradio -

-
-

2. Define a function in your Python code that performs inference with your model on a data point and returns the prediction -

-

-Here’s we define our image classification model prediction function in PyTorch (any framework, like TensorFlow, scikit-learn, JAX, or a plain Python will work as well): -

-
-def predict(inp):
-
-        inp = Image.fromarray(inp.astype('uint8'), 'RGB')
-      
-        inp = transforms.ToTensor()(inp).unsqueeze(0)
-      
-        with torch.no_grad():
-
-          prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
-
-        return {labels[i]: float(prediction[i]) for i in range(1000)}
-
-
-
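-
-Note that the snippet above relies on a few names that are defined elsewhere in the notebook. A minimal, hypothetical setup could look like the following (the ResNet-18 model and the placeholder label list are illustrative assumptions, not part of the original guide):
-
-import torch
-from PIL import Image
-from torchvision import transforms, models
-
-model = models.resnet18(pretrained=True).eval()   # any image classifier works here
-labels = ["class_%d" % i for i in range(1000)]    # replace with your real class names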

- -

3. Then create a Gradio Interface using the function and the appropriate input and output types -

-

-For the image classification model from Step 2, it would like like this: -

-
-
-inputs = gr.inputs.Image()
-
-outputs = gr.outputs.Label(num_top_classes=3)
-
-io = gr.Interface(fn=predict, inputs=inputs, outputs=outputs)
-
-
-

-If you need help creating a Gradio Interface for your model, check out the Gradio Getting Started guide. -

- -

4. Then launch() you Interface to confirm that it runs correctly locally (or wherever you are running Python) -

-
-
-io.launch() 
-
-
-

-You should see a web interface like the following where you can drag and drop your data points and see the predictions: -

-Gradio Interface -
- - - - - - - diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/ContentVec256L9_Onnx.py b/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/ContentVec256L9_Onnx.py deleted file mode 100644 index fae2b928252801795b038f51451b234e007f6f03..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/ContentVec256L9_Onnx.py +++ /dev/null @@ -1,28 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import onnxruntime -import torch - -class ContentVec256L9_Onnx(SpeechEncoder): - def __init__(self,vec_path = "pretrain/vec-256-layer-9.onnx",device=None): - print("load model(s) from {}".format(vec_path)) - self.hidden_dim = 256 - if device is None: - self.dev = torch.device("cpu") - else: - self.dev = torch.device(device) - if device == 'cpu' or device == torch.device("cpu") or device is None: - providers = ['CPUExecutionProvider'] - elif device == 'cuda' or device == torch.device("cuda"): - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - feats = feats.unsqueeze(0).cpu().detach().numpy() - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input) - return torch.tensor(logits[0]).transpose(1, 2).to(self.dev) \ No newline at end of file diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/ContentVec768L9_Onnx.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/ContentVec768L9_Onnx.py deleted file mode 100644 index 7cdac4cd93478d3ddddb4b76dd9d9ccc5d1af2d4..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/ContentVec768L9_Onnx.py +++ /dev/null @@ -1,28 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import onnxruntime -import torch - -class ContentVec768L9_Onnx(SpeechEncoder): - def __init__(self,vec_path = "pretrain/vec-768-layer-9.onnx",device=None): - print("load model(s) from {}".format(vec_path)) - self.hidden_dim = 768 - if device is None: - self.dev = torch.device("cpu") - else: - self.dev = torch.device(device) - if device == 'cpu' or device == torch.device("cpu") or device is None: - providers = ['CPUExecutionProvider'] - elif device == 'cuda' or device == torch.device("cuda"): - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - feats = feats.unsqueeze(0).cpu().detach().numpy() - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input) - return torch.tensor(logits[0]).transpose(1, 2).to(self.dev) \ No newline at end of file diff --git a/spaces/Frorozcol/music_recommedation/README.md b/spaces/Frorozcol/music_recommedation/README.md deleted file mode 100644 index f14c72d8ed9c66ffd67a50b60f32ab9873ed493b..0000000000000000000000000000000000000000 --- a/spaces/Frorozcol/music_recommedation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Music Recommedation -emoji: 👀 -colorFrom: red -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/metas/MetasStore.py b/spaces/GaenKoki/voicevox/voicevox_engine/metas/MetasStore.py deleted file mode 100644 index 88a7bc37daad4ab70f1e7af07d7beab7eaa06e46..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/voicevox_engine/metas/MetasStore.py +++ /dev/null @@ -1,72 +0,0 @@ -import json -from pathlib import Path -from typing import TYPE_CHECKING, Dict, List, Tuple - -from voicevox_engine.metas.Metas import CoreSpeaker, EngineSpeaker, Speaker, StyleInfo - -if TYPE_CHECKING: - from voicevox_engine.synthesis_engine.synthesis_engine_base import ( - SynthesisEngineBase, - ) - - -class MetasStore: - """ - 話者やスタイルのメタ情報を管理する - """ - - def __init__(self, engine_speakers_path: Path) -> None: - self._engine_speakers_path = engine_speakers_path - self._loaded_metas: Dict[str, EngineSpeaker] = { - folder.name: EngineSpeaker( - **json.loads((folder / "metas.json").read_text(encoding="utf-8")) - ) - for folder in engine_speakers_path.iterdir() - } - - def speaker_engine_metas(self, speaker_uuid: str) -> EngineSpeaker: - return self.loaded_metas[speaker_uuid] - - def combine_metas(self, core_metas: List[CoreSpeaker]) -> List[Speaker]: - """ - 与えられたmetaにエンジンのコア情報を付加して返す - core_metas: コアのmetas()が返すJSONのModel - """ - - return [ - Speaker( - **self.speaker_engine_metas(speaker_meta.speaker_uuid).dict(), - **speaker_meta.dict(), - ) - for speaker_meta in core_metas - ] - - # FIXME: engineではなくList[CoreSpeaker]を渡す形にすることで - # SynthesisEngineBaseによる循環importを修正する - def load_combined_metas(self, engine: "SynthesisEngineBase") -> List[Speaker]: - """ - 与えられたエンジンから、コア・エンジン両方の情報を含んだMetasを返す - """ - - core_metas = [CoreSpeaker(**speaker) for speaker in json.loads(engine.speakers)] - return self.combine_metas(core_metas) - - @property - def engine_speakers_path(self) -> Path: - return self._engine_speakers_path - - @property - def loaded_metas(self) -> Dict[str, EngineSpeaker]: - return self._loaded_metas - - -def construct_lookup(speakers: List[Speaker]) -> Dict[int, Tuple[Speaker, StyleInfo]]: - """ - `{style.id: StyleInfo}`の変換テーブル - """ - - lookup_table = dict() - for speaker in speakers: - for style in speaker.styles: - lookup_table[style.id] = (speaker, style) - return lookup_table diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/detectors/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/detectors/README.md deleted file mode 100644 index 46dee5eadad2f92144c1ddc3d0d2dc06925adf86..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/detectors/README.md +++ /dev/null @@ -1,58 +0,0 @@ -# DetectoRS - -## Introduction - -[ALGORITHM] - -We provide the config files for [DetectoRS: Detecting Objects with Recursive Feature Pyramid and Switchable Atrous Convolution](https://arxiv.org/pdf/2006.02334.pdf). - -```BibTeX -@article{qiao2020detectors, - title={DetectoRS: Detecting Objects with Recursive Feature Pyramid and Switchable Atrous Convolution}, - author={Qiao, Siyuan and Chen, Liang-Chieh and Yuille, Alan}, - journal={arXiv preprint arXiv:2006.02334}, - year={2020} -} -``` - -## Dataset - -DetectoRS requires COCO and [COCO-stuff](http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip) dataset for training. You need to download and extract it in the COCO dataset path. -The directory should be like this. 
- -```none -mmdetection -├── mmdet -├── tools -├── configs -├── data -│ ├── coco -│ │ ├── annotations -│ │ ├── train2017 -│ │ ├── val2017 -│ │ ├── test2017 -| | ├── stuffthingmaps -``` - -## Results and Models - -DetectoRS includes two major components: - -- Recursive Feature Pyramid (RFP). -- Switchable Atrous Convolution (SAC). - -They can be used independently. -Combining them together results in DetectoRS. -The results on COCO 2017 val are shown in the below table. - -| Method | Detector | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download | -|:------:|:--------:|:-------:|:--------:|:--------------:|:------:|:-------:|:------:|:--------:| -| RFP | Cascade + ResNet-50 | 1x | 7.5 | - | 44.8 | | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/detectors/cascade_rcnn_r50_rfp_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/detectors/cascade_rcnn_r50_rfp_1x_coco/cascade_rcnn_r50_rfp_1x_coco-8cf51bfd.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/detectors/cascade_rcnn_r50_rfp_1x_coco/cascade_rcnn_r50_rfp_1x_coco_20200624_104126.log.json) | -| SAC | Cascade + ResNet-50 | 1x | 5.6 | - | 45.0| | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/detectors/cascade_rcnn_r50_sac_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/detectors/cascade_rcnn_r50_sac_1x_coco/cascade_rcnn_r50_sac_1x_coco-24bfda62.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/detectors/cascade_rcnn_r50_sac_1x_coco/cascade_rcnn_r50_sac_1x_coco_20200624_104402.log.json) | -| DetectoRS | Cascade + ResNet-50 | 1x | 9.9 | - | 47.4 | | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/detectors/detectors_cascade_rcnn_r50_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/detectors/detectors_cascade_rcnn_r50_1x_coco/detectors_cascade_rcnn_r50_1x_coco-32a10ba0.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/detectors/detectors_cascade_rcnn_r50_1x_coco/detectors_cascade_rcnn_r50_1x_coco_20200706_001203.log.json) | -| RFP | HTC + ResNet-50 | 1x | 11.2 | - | 46.6 | 40.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/detectors/htc_r50_rfp_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/detectors/htc_r50_rfp_1x_coco/htc_r50_rfp_1x_coco-8ff87c51.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/detectors/htc_r50_rfp_1x_coco/htc_r50_rfp_1x_coco_20200624_103053.log.json) | -| SAC | HTC + ResNet-50 | 1x | 9.3 | - | 46.4 | 40.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/detectors/htc_r50_sac_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/detectors/htc_r50_sac_1x_coco/htc_r50_sac_1x_coco-bfa60c54.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/detectors/htc_r50_sac_1x_coco/htc_r50_sac_1x_coco_20200624_103111.log.json) | -| DetectoRS | HTC + ResNet-50 | 1x | 13.6 | - | 49.1 | 42.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/detectors/detectors_htc_r50_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/detectors/detectors_htc_r50_1x_coco/detectors_htc_r50_1x_coco-329b1453.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/detectors/detectors_htc_r50_1x_coco/detectors_htc_r50_1x_coco_20200624_103659.log.json) | - -*Note*: This is a re-implementation based on MMDetection-V2. -The original implementation is based on MMDetection-V1. 
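-
-## Training and Evaluation
-
-The configs in this folder follow the standard MMDetection workflow. As a rough sketch (the chosen config and output paths below are just examples, assuming the dataset layout above is in place):
-
-```shell
-# single-GPU training
-python tools/train.py configs/detectors/detectors_htc_r50_1x_coco.py
-
-# evaluate a trained checkpoint
-python tools/test.py configs/detectors/detectors_htc_r50_1x_coco.py \
-    work_dirs/detectors_htc_r50_1x_coco/latest.pth --eval bbox segm
-```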
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/README.md deleted file mode 100644 index d43fc6da65ee84c7025ae61fe2bb1e264e6b06ec..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/README.md +++ /dev/null @@ -1,61 +0,0 @@ -# Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks - -## Introduction - -[ALGORITHM] - -```latex -@article{Ren_2017, - title={Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks}, - journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, - publisher={Institute of Electrical and Electronics Engineers (IEEE)}, - author={Ren, Shaoqing and He, Kaiming and Girshick, Ross and Sun, Jian}, - year={2017}, - month={Jun}, -} -``` - -## Results and models - -| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :------: | :--------: | -| R-50-DC5 | caffe | 1x | - | - | 37.2 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/faster_rcnn/faster_rcnn_r50_caffe_dc5_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_dc5_1x_coco/faster_rcnn_r50_caffe_dc5_1x_coco_20201030_151909-531f0f43.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_dc5_1x_coco/faster_rcnn_r50_caffe_dc5_1x_coco_20201030_151909.log.json) | -| R-50-FPN | caffe | 1x | 3.8 | | 37.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_1x_coco/faster_rcnn_r50_caffe_fpn_1x_coco_bbox_mAP-0.378_20200504_180032-c5925ee5.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_1x_coco/faster_rcnn_r50_caffe_fpn_1x_coco_20200504_180032.log.json) | -| R-50-FPN | pytorch | 1x | 4.0 | 21.4 | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130_204655.log.json) | -| R-50-FPN | pytorch | 2x | - | - | 38.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_2x_coco/faster_rcnn_r50_fpn_2x_coco_bbox_mAP-0.384_20200504_210434-a5d8aa15.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_2x_coco/faster_rcnn_r50_fpn_2x_coco_20200504_210434.log.json) | -| R-101-FPN | caffe | 1x | 5.7 | | 39.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r101_caffe_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r101_caffe_fpn_1x_coco/faster_rcnn_r101_caffe_fpn_1x_coco_bbox_mAP-0.398_20200504_180057-b269e9dd.pth) | 
[log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r101_caffe_fpn_1x_coco/faster_rcnn_r101_caffe_fpn_1x_coco_20200504_180057.log.json) | -| R-101-FPN | pytorch | 1x | 6.0 | 15.6 | 39.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r101_fpn_1x_coco/faster_rcnn_r101_fpn_1x_coco_20200130-f513f705.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r101_fpn_1x_coco/faster_rcnn_r101_fpn_1x_coco_20200130_204655.log.json) | -| R-101-FPN | pytorch | 2x | - | - | 39.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r101_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r101_fpn_2x_coco/faster_rcnn_r101_fpn_2x_coco_bbox_mAP-0.398_20200504_210455-1d2dac9c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r101_fpn_2x_coco/faster_rcnn_r101_fpn_2x_coco_20200504_210455.log.json) | -| X-101-32x4d-FPN | pytorch | 1x | 7.2 | 13.8 | 41.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_x101_32x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_32x4d_fpn_1x_coco/faster_rcnn_x101_32x4d_fpn_1x_coco_20200203-cff10310.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_32x4d_fpn_1x_coco/faster_rcnn_x101_32x4d_fpn_1x_coco_20200203_000520.log.json) | -| X-101-32x4d-FPN | pytorch | 2x | - | - | 41.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_x101_32x4d_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_32x4d_fpn_2x_coco/faster_rcnn_x101_32x4d_fpn_2x_coco_bbox_mAP-0.412_20200506_041400-64a12c0b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_32x4d_fpn_2x_coco/faster_rcnn_x101_32x4d_fpn_2x_coco_20200506_041400.log.json) | -| X-101-64x4d-FPN | pytorch | 1x | 10.3 | 9.4 | 42.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_x101_64x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_64x4d_fpn_1x_coco/faster_rcnn_x101_64x4d_fpn_1x_coco_20200204-833ee192.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_64x4d_fpn_1x_coco/faster_rcnn_x101_64x4d_fpn_1x_coco_20200204_134340.log.json) | -| X-101-64x4d-FPN | pytorch | 2x | - | - | 41.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_x101_64x4d_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_64x4d_fpn_2x_coco/faster_rcnn_x101_64x4d_fpn_2x_coco_20200512_161033-5961fa95.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_64x4d_fpn_2x_coco/faster_rcnn_x101_64x4d_fpn_2x_coco_20200512_161033.log.json) | - -## Different regression loss - -We trained with R-50-FPN pytorch style backbone for 1x schedule. 
- -| Backbone | Loss type | Mem (GB) | Inf time (fps) | box AP | Config | Download | -| :-------------: | :-------: | :------: | :------------: | :----: | :------: | :--------: | -| R-50-FPN | L1Loss | 4.0 | 21.4 | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130_204655.log.json) | -| R-50-FPN | IoULoss | | | 37.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_iou_1x_coco-fdd207f3.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_iou_1x_coco_20200506_095954.log.json) | -| R-50-FPN | GIoULoss | | | 37.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_giou_1x_coco-0eada910.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_giou_1x_coco_20200505_161120.log.json) | -| R-50-FPN | BoundedIoULoss | | | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_bounded_iou_1x_coco-98ad993b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_bounded_iou_1x_coco_20200505_160738.log.json) | - -## Pre-trained Models - -We also train some models with longer schedules and multi-scale training. The users could finetune them for downstream tasks. 
- -| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :------: | :--------: | -| [R-50-DC5](./faster_rcnn_r50_caffe_dc5_mstrain_1x_coco.py) | caffe | 1x | - | | 37.4 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco_20201028_233851-b33d21b9.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco_20201028_233851.log.json) -| [R-50-DC5](./faster_rcnn_r50_caffe_dc5_mstrain_3x_coco.py) | caffe | 3x | - | | 38.7 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_3x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_3x_coco/faster_rcnn_r50_caffe_dc5_mstrain_3x_coco_20201028_002107-34a53b2c.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_3x_coco/faster_rcnn_r50_caffe_dc5_mstrain_3x_coco_20201028_002107.log.json) -| [R-50-FPN](./faster_rcnn_r50_caffe_fpn_mstrain_2x_coco.py) | caffe | 2x | 4.3 | | 39.7 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco_bbox_mAP-0.397_20200504_231813-10b2de58.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco_20200504_231813.log.json) -| [R-50-FPN](./faster_rcnn_r50_caffe_fpn_mstrain_3x_coco.py) | caffe | 3x | 4.3 | | 40.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco_bbox_mAP-0.398_20200504_163323-30042637.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco_20200504_163323.log.json) - -We further finetune some pre-trained models on the COCO subsets, which only contain only a few of the 80 categories. 
- -| Backbone | Style | Class name | Pre-traind model | Mem (GB) | box AP | Config | Download | -| ------------------------------------------------------------ | ----- | ------------------ | ------------------------------------------------------------ | -------- | ------ | ------------------------------------------------------------ | ------------------------------------------------------------ | -| [R-50-FPN](./faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person.py) | caffe | person | [R-50-FPN-Caffe-3x](./faster_rcnn_r50_caffe_fpn_mstrain_3x_coco.py) | 3.7 | 55.8 | [config](./faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco-person/faster_rcnn_r50_fpn_1x_coco-person_20201216_175929-d022e227.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco-person/faster_rcnn_r50_fpn_1x_coco-person_20201216_175929.log.json) | -| [R-50-FPN](./faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person-bicycle-car.py) | caffe | person-bicycle-car | [R-50-FPN-Caffe-3x](./faster_rcnn_r50_caffe_fpn_mstrain_3x_coco.py) | 3.7 | 44.1 | [config](./faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person-bicycle-car.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco-person-bicycle-car/faster_rcnn_r50_fpn_1x_coco-person-bicycle-car_20201216_173117-6eda6d92.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco-person-bicycle-car/faster_rcnn_r50_fpn_1x_coco-person-bicycle-car_20201216_173117.log.json) | diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/pascal_voc/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/pascal_voc/README.md deleted file mode 100644 index f730242b7768390c28ea984718cae9aa56811bbc..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/pascal_voc/README.md +++ /dev/null @@ -1,23 +0,0 @@ -# PASCAL VOC Dataset - -[DATASET] - -``` -@Article{Everingham10, - author = "Everingham, M. and Van~Gool, L. and Williams, C. K. I. and Winn, J. 
and Zisserman, A.", - title = "The Pascal Visual Object Classes (VOC) Challenge", - journal = "International Journal of Computer Vision", - volume = "88", - year = "2010", - number = "2", - month = jun, - pages = "303--338", -} -``` - -## Results and Models - -| Architecture | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -|:------------:|:---------:|:-------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:| -| Faster R-CNN | R-50 | pytorch | 1x | 2.6 | - | 79.5 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/pascal_voc/faster_rcnn_r50_fpn_1x_voc0712.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/pascal_voc/faster_rcnn_r50_fpn_1x_voc0712/faster_rcnn_r50_fpn_1x_voc0712_20200624-c9895d40.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/pascal_voc/faster_rcnn_r50_fpn_1x_voc0712/20200623_015208.log.json) | -| Retinanet | R-50 | pytorch | 1x | 2.1 | - | 77.3 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/pascal_voc/retinanet_r50_fpn_1x_voc0712.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/pascal_voc/retinanet_r50_fpn_1x_voc0712/retinanet_r50_fpn_1x_voc0712_20200617-47cbdd0e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/pascal_voc/retinanet_r50_fpn_1x_voc0712/retinanet_r50_fpn_1x_voc0712_20200616_014642.log.json) | diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/assigners/max_iou_assigner.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/assigners/max_iou_assigner.py deleted file mode 100644 index 5cf4c4b4b450f87dfb99c3d33d8ed83d3e5cfcb3..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/assigners/max_iou_assigner.py +++ /dev/null @@ -1,212 +0,0 @@ -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class MaxIoUAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, or a semi-positive integer - indicating the ground truth index. - - - -1: negative sample, no assigned gt - - semi-positive integer: positive sample, index (0-based) of assigned gt - - Args: - pos_iou_thr (float): IoU threshold for positive bboxes. - neg_iou_thr (float or tuple): IoU threshold for negative bboxes. - min_pos_iou (float): Minimum iou for a bbox to be considered as a - positive bbox. Positive samples can have smaller IoU than - pos_iou_thr due to the 4th step (assign max IoU sample to each gt). - gt_max_assign_all (bool): Whether to assign all bboxes with the same - highest overlap with some gt to that gt. - ignore_iof_thr (float): IoF threshold for ignoring bboxes (if - `gt_bboxes_ignore` is specified). Negative values mean not - ignoring any bboxes. - ignore_wrt_candidates (bool): Whether to compute the iof between - `bboxes` and `gt_bboxes_ignore`, or the contrary. - match_low_quality (bool): Whether to allow low quality matches. This is - usually allowed for RPN and single stage detectors, but not allowed - in the second stage. Details are demonstrated in Step 4. - gpu_assign_thr (int): The upper bound of the number of GT for GPU - assign. When the number of gt is above this threshold, will assign - on CPU device. Negative values mean not assign on CPU. 
- """ - - def __init__(self, - pos_iou_thr, - neg_iou_thr, - min_pos_iou=.0, - gt_max_assign_all=True, - ignore_iof_thr=-1, - ignore_wrt_candidates=True, - match_low_quality=True, - gpu_assign_thr=-1, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_iou_thr = pos_iou_thr - self.neg_iou_thr = neg_iou_thr - self.min_pos_iou = min_pos_iou - self.gt_max_assign_all = gt_max_assign_all - self.ignore_iof_thr = ignore_iof_thr - self.ignore_wrt_candidates = ignore_wrt_candidates - self.gpu_assign_thr = gpu_assign_thr - self.match_low_quality = match_low_quality - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign gt to bboxes. - - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, or a semi-positive number. -1 means negative - sample, semi-positive number is the index (0-based) of assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every bbox to the background - 2. assign proposals whose iou with all gts < neg_iou_thr to 0 - 3. for each bbox, if the iou with its nearest gt >= pos_iou_thr, - assign it to that bbox - 4. for each gt bbox, assign its nearest proposals (may be more than - one) to itself - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - - Example: - >>> self = MaxIoUAssigner(0.5, 0.5) - >>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]]) - >>> gt_bboxes = torch.Tensor([[0, 0, 10, 9]]) - >>> assign_result = self.assign(bboxes, gt_bboxes) - >>> expected_gt_inds = torch.LongTensor([1, 0]) - >>> assert torch.all(assign_result.gt_inds == expected_gt_inds) - """ - assign_on_cpu = True if (self.gpu_assign_thr > 0) and ( - gt_bboxes.shape[0] > self.gpu_assign_thr) else False - # compute overlap and assign gt on CPU when number of GT is large - if assign_on_cpu: - device = bboxes.device - bboxes = bboxes.cpu() - gt_bboxes = gt_bboxes.cpu() - if gt_bboxes_ignore is not None: - gt_bboxes_ignore = gt_bboxes_ignore.cpu() - if gt_labels is not None: - gt_labels = gt_labels.cpu() - - overlaps = self.iou_calculator(gt_bboxes, bboxes) - - if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None - and gt_bboxes_ignore.numel() > 0 and bboxes.numel() > 0): - if self.ignore_wrt_candidates: - ignore_overlaps = self.iou_calculator( - bboxes, gt_bboxes_ignore, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=1) - else: - ignore_overlaps = self.iou_calculator( - gt_bboxes_ignore, bboxes, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=0) - overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1 - - assign_result = self.assign_wrt_overlaps(overlaps, gt_labels) - if assign_on_cpu: - assign_result.gt_inds = assign_result.gt_inds.to(device) - assign_result.max_overlaps = assign_result.max_overlaps.to(device) - if assign_result.labels is not None: - assign_result.labels = assign_result.labels.to(device) - return assign_result - - def assign_wrt_overlaps(self, overlaps, gt_labels=None): - """Assign w.r.t. the overlaps of bboxes with gts. - - Args: - overlaps (Tensor): Overlaps between k gt_bboxes and n bboxes, - shape(k, n). 
- gt_labels (Tensor, optional): Labels of k gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - """ - num_gts, num_bboxes = overlaps.size(0), overlaps.size(1) - - # 1. assign -1 by default - assigned_gt_inds = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = overlaps.new_zeros((num_bboxes, )) - if num_gts == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - # for each anchor, which gt best overlaps with it - # for each anchor, the max iou of all gts - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - # for each gt, which anchor best overlaps with it - # for each gt, the max iou of all proposals - gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1) - - # 2. assign negative: below - # the negative inds are set to be 0 - if isinstance(self.neg_iou_thr, float): - assigned_gt_inds[(max_overlaps >= 0) - & (max_overlaps < self.neg_iou_thr)] = 0 - elif isinstance(self.neg_iou_thr, tuple): - assert len(self.neg_iou_thr) == 2 - assigned_gt_inds[(max_overlaps >= self.neg_iou_thr[0]) - & (max_overlaps < self.neg_iou_thr[1])] = 0 - - # 3. assign positive: above positive IoU threshold - pos_inds = max_overlaps >= self.pos_iou_thr - assigned_gt_inds[pos_inds] = argmax_overlaps[pos_inds] + 1 - - if self.match_low_quality: - # Low-quality matching will overwrite the assigned_gt_inds assigned - # in Step 3. Thus, the assigned gt might not be the best one for - # prediction. - # For example, if bbox A has 0.9 and 0.8 iou with GT bbox 1 & 2, - # bbox 1 will be assigned as the best target for bbox A in step 3. - # However, if GT bbox 2's gt_argmax_overlaps = A, bbox A's - # assigned_gt_inds will be overwritten to be bbox B. - # This might be the reason that it is not used in ROI Heads. 
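            # Continuing the example above: GT bbox 2 has 0-based index i = 1,
            # so if its best overlap (0.8, with bbox A) is >= min_pos_iou, the
            # loop below re-assigns bbox A to GT bbox 2 (assigned_gt_inds
            # becomes i + 1 = 2), even though GT bbox 1 has the higher IoU
            # with bbox A.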
- for i in range(num_gts): - if gt_max_overlaps[i] >= self.min_pos_iou: - if self.gt_max_assign_all: - max_iou_inds = overlaps[i, :] == gt_max_overlaps[i] - assigned_gt_inds[max_iou_inds] = i + 1 - else: - assigned_gt_inds[gt_argmax_overlaps[i]] = i + 1 - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - - return AssignResult( - num_gts, assigned_gt_inds, max_overlaps, labels=assigned_labels) diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/configs/base_config.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/configs/base_config.py deleted file mode 100644 index c636e847ded6aafc08c3f2e8e43af053c2c8f9e6..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/configs/base_config.py +++ /dev/null @@ -1,565 +0,0 @@ -from functools import partial -import os -from typing import Dict, Optional, Tuple, List -import torch -from torch import nn - -from ..experiment_manager import ExperimentManager -from ..run_type import RUN_TYPE, run_type -from ..utils import get_production_ckpt_path - -from .data_augmentation_config import DataAugmentationConfig - -class base_cfg: - def __init__( - self, - epoch: int, - experiment_name: str, - datasets_set: int, - run_type: str, - ): - self.experiment_name = experiment_name - self.datasets_set = datasets_set - self.run_type = run_type - if run_type == RUN_TYPE.KAGGLE: - self.base_working_dir = '/kaggle/working' - self.base_datasets_working_dir_path = '/kaggle/input' - - self.datasets_working_dir_path = os.path.join( - self.base_datasets_working_dir_path, f'rgbdsod-set{datasets_set}' - ) - - '''Source code''' - self.source_code_dir = os.path.join('/', 'kaggle', 'working', 'sources') - - '''Benchmark''' - self.sotas_root_working_dir = '/home/sotas' - elif run_type == RUN_TYPE.UBUNTU: - self.base_working_dir = '.' 
- self.base_datasets_working_dir_path = './datasets' - - self.datasets_working_dir_path = os.path.join( - self.base_datasets_working_dir_path, f'v{datasets_set}' - ) - - '''Source code''' - self.source_code_dir = './sources' - - '''Benchmark''' - self.sotas_root_working_dir = './sotas' - elif run_type in [RUN_TYPE.COLAB, RUN_TYPE.HUGGINGFACE]: - self.base_working_dir = '/content' - self.mount_path = '/content/drive' # GoogleDrive - self.datasets_dir_path = os.path.join( - self.mount_path, 'MyDrive', 'RGBD_SOD', 'datasets' - ) # GoogleDrive - self.base_datasets_working_dir_path = '/content/datasets' # Colab - self.datasets_working_dir_path = os.path.join( - self.base_datasets_working_dir_path, f'v{datasets_set}' - ) # Colab - - '''Source code''' - self.source_code_dir = os.path.join( - self.mount_path, 'MyDrive', 'RGBD_SOD', 'sources' - ) - - '''Benchmark''' - self.sotas_root_working_dir = '/content/sotas' - else: - raise Exception(f'Unsupported run type {run_type}') - - self.sotas_working_dir = os.path.join(self.sotas_root_working_dir, f'v{datasets_set}') - - if self.datasets_set == 1: - '''Set 1: COME15K - Train dataset contains 8,025 image pairs of RGB-D - We split our testing dataset to a moderate-level testing set ("Normal") and a - hard testing set ("Difficult") with 4,600 image pairs and - 3,000 pairs respectively''' - self.test_dataset_names = [ - 'COME-E', 'COME-H', - 'DES', 'DUT-RGBD', 'LFSD', 'NJU2K', 'NLPR', - 'ReDWeb-S', 'SIP', 'STERE' - ] - self.test_dataset_quantities = [ - 4600, 3000, - 135, 400, 100, 500, 300, - 1000, 929, 1000 - ] - self.sota_model_names = [ - 'A2Dele', - 'ATSA', - 'BBS-Net', - # 'BTSNet', - 'CDNet', - - 'CMINet', - 'CoNet', - 'DANet', - 'DCF', - 'DSA2F', - - # 'EFNet', - 'FRDT', - 'HAINet', - 'JLDCF', - 'PGAR', - - 'RD3D', - 'S2MA', - 'SSF', - 'UCNet' - ] - self.sotas_datasets = [self.test_dataset_names for _ in range(len(self.sota_model_names))] - self.mapping_test_dataset_names: List[Tuple[str, str]] = [ - ('COME-E', 'cascaded_rgbd_sod'), - ('COME-H', 'cascaded_rgbd_sod'), - ('DES', 'cheng2014depth'), - ('DUT-RGBD', 'piao2019depth'), - ('LFSD', 'li2014saliency'), - ('NJU2K', 'ju2014depth'), - ('NLPR', 'peng2014rgbd'), - ('ReDWeb-S', 'liu2020learning'), - ('SIP', 'fan2020rethinking'), - ('STERE', 'niu2012leveraging'), - ] - self.mapping_sota_model_names: List[Tuple[str, str]] = [ - ('A2Dele', 'piao2020a2dele'), - ('ATSA', 'zhang2020asymmetric'), - ('BBS-Net', 'fan2020bbs'), - # ('BTSNet', ''), - ('CDNet', 'jin2021cdnet'), - - ('CMINet', 'cascaded_rgbd_sod'), - ('CoNet', 'ji2020accurate'), - ('DANet', 'zhao2020single'), - ('DCF', 'Ji_2021_DCF'), - ('DSA2F', 'Sun2021DeepRS'), - - # ('EFNet', ''), - ('FRDT', 'zhang2020feature'), - ('HAINet', 'li2021hierarchical'), - ('JLDCF', 'fu2020jl'), - ('PGAR', 'chen2020progressively'), - - ('RD3D', 'chen2021rgb'), - ('S2MA', 'liu2020learning'), - ('SSF', 'zhang2020select'), - ('UCNet', 'zhang2020uc'), - ] - - # GoogleDrive - if run_type == RUN_TYPE.COLAB: - self.datasets_dir = os.path.join(self.datasets_dir_path, f'DatasetsV{self.datasets_set}') - self.train_dataset_zip_path = os.path.join(self.datasets_dir, 'train.zip') - self.test_datasets_dir_path = os.path.join(self.datasets_dir, 'test') - self.dev_dataset_zip_path: str = None - self.benchmark_dir_path = os.path.join(self.datasets_dir, 'benchmark') - - # Colab + Kaggle - self.train_dataset_working_dir_path = os.path.join(self.datasets_working_dir_path, 'train') - self.test_datasets_working_dir_path = os.path.join(self.datasets_working_dir_path, 'test') - 
self.dev_dataset_working_dir_path = os.path.join(self.datasets_working_dir_path, 'test', 'COME-E') - elif self.datasets_set == 2: - '''Set 2: NJU2K and NLPR''' - self.test_dataset_names = [ - 'DES', 'LFSD', 'NJU2K', 'NLPR', 'SIP', 'SSD', 'STERE' - ] - self.test_dataset_quantities = [ - 135, 100, 500, 300, 929, 80, 1000 - ] - self.sotas_datasets = [ - ["STERE", "DES", "NLPR", "SIP", "SSD", "NJU2K"], - ["DES", "NJU2K", "NLPR", "SIP", "STERE"], - ["STERE", "DES", "NLPR", "SIP", "SSD", "NJU2K", "LFSD"], - ["STERE", "NLPR", "SIP", "SSD", "NJU2K", "LFSD"], - ["STERE", "NLPR", "NJU2K", "LFSD"], - ["STERE", "DES", "NLPR", "SIP", "SSD", "NJU2K", "LFSD"], - ["STERE", "DES", "NLPR", "SIP", "SSD", "LFSD"], - ["STERE", "DES", "NLPR", "SIP", "SSD", "NJU2K", "LFSD"], - ["STERE", "DES", "NLPR", "SIP", "NJU2K", "LFSD"], - ] - self.sota_model_names = [ - 'SPNet', - 'SPSN', - 'C2DFNet', - 'DCF', - 'MVSalNet', - 'BBS-Net', - 'MobileSal_singletrain', - 'TriTransNet', - 'UCNet', - ] - self.mapping_test_dataset_names: List[Tuple[str, str]] = [ - ('DES', 'cheng2014depth'), - ('LFSD', 'li2014saliency'), - ('NJU2K', 'ju2014depth'), - ('NLPR', 'peng2014rgbd'), - ('SIP', 'fan2020rethinking'), - ('SSD', 'zhu2017three'), - ('STERE', 'niu2012leveraging'), - ] - self.mapping_sota_model_names: List[Tuple[str, str]] = [ - ('SPNet', 'zhou2021specificity'), - ('SPSN', 'lee2022spsn'), - ('C2DF-Net', 'zhang2022c'), - ('DCF', 'Ji_2021_DCF'), - ('MVSal-Net', 'zhou2022mvsalnet'), - ('BBS-Net', 'fan2020bbs'), - ('Mobile-Sal', 'wu2021mobilesal'), - ('TriTrans-Net', 'liu2021tritransnet'), - ('UCNet', 'zhang2020uc'), - ] - - # GoogleDrive - if run_type == RUN_TYPE.COLAB: - self.datasets_dir = os.path.join(self.datasets_dir_path, f'DatasetsV{self.datasets_set}') - self.train_dataset_zip_path = os.path.join(self.datasets_dir, 'train.zip') - self.test_datasets_dir_path = os.path.join(self.datasets_dir, 'test') - self.dev_dataset_zip_path: str = os.path.join(self.datasets_dir, 'dev.zip') - self.benchmark_dir_path = os.path.join(self.datasets_dir, 'benchmark') - - # Colab + Kaggle - self.train_dataset_working_dir_path = os.path.join(self.datasets_working_dir_path, 'train') - self.test_datasets_working_dir_path = os.path.join(self.datasets_working_dir_path, 'test') - self.dev_dataset_working_dir_path = os.path.join(self.datasets_working_dir_path, 'dev') - elif self.datasets_set == 3: - '''Set 3: GO BIG! 
All posible RGBD-SOD datasets combined''' - self.test_dataset_names = ['dev_test'] - self.sota_model_names = [] - self.sotas_datasets = [self.test_dataset_names for _ in range(len(self.sota_model_names))] - self.test_dataset_quantities = [203] - - # GoogleDrive - if run_type == RUN_TYPE.COLAB: - self.datasets_dir = os.path.join(self.datasets_dir_path, f'DatasetsV{self.datasets_set}') - self.train_dataset_zip_path = os.path.join(self.datasets_dir, 'train.zip') - self.test_datasets_dir_path = os.path.join(self.datasets_dir, 'test') - self.dev_dataset_zip_path: str = None - self.benchmark_dir_path = os.path.join(self.datasets_dir, 'benchmark') - - # Colab + Kaggle - self.train_dataset_working_dir_path = os.path.join(self.datasets_working_dir_path, 'train') - self.test_datasets_working_dir_path = os.path.join(self.datasets_working_dir_path, 'test') - self.dev_dataset_working_dir_path = os.path.join(self.datasets_working_dir_path, 'test', 'dev_test') - elif self.datasets_set == 4: - '''Set 2: NJU2K, NLPR and DUT-RGBD''' - self.test_dataset_names = [ - 'DES', 'LFSD', 'NJU2K', 'NLPR', 'SIP', 'SSD', 'STERE', 'DUT-RGBD' - ] - self.test_dataset_quantities = [ - 135, 100, 500, 300, 929, 80, 1000, 400 - ] - self.sotas_datasets = [ - ['LFSD', 'NJU2K', 'NLPR', 'SSD', 'STERE', 'DUT-RGBD'], - ['DES', 'LFSD', 'NLPR', 'SSD', 'DUT-RGBD'], - ['DES', 'LFSD', 'NJU2K', 'NLPR', 'SIP', 'STERE', 'DUT-RGBD'], - ['DES', 'NJU2K', 'NLPR', 'SIP', 'STERE', 'DUT-RGBD'], - ] - self.sota_model_names = [ - 'DCMF', - 'DSA2F', - 'JL-DCF', - 'SSLSOD', - ] - self.mapping_test_dataset_names = [ - ('DES', 'cheng2014depth'), - ('LFSD', 'li2014saliency'), - ('NJU2K', 'ju2014depth'), - ('NLPR', 'peng2014rgbd'), - ('SIP', 'fan2020rethinking'), - ('SSD', 'zhu2017three'), - ('STERE', 'niu2012leveraging'), - ('DUT-RGBD', 'piao2019depth'), - ] - self.mapping_sota_model_names = [ - ('DCMF', 'wang2022learning'), - ('DSA2F', 'Sun2021DeepRS'), - ('JL-DCF', 'fu2020jl'), - ('SSLSOD', 'zhao2022self'), - ] - - # GoogleDrive - if run_type == RUN_TYPE.COLAB: - self.datasets_dir = os.path.join(self.datasets_dir_path, f'DatasetsV{self.datasets_set}') - self.train_dataset_zip_path = os.path.join(self.datasets_dir, 'train.zip') - self.test_datasets_dir_path = os.path.join(self.datasets_dir, 'test') - self.dev_dataset_zip_path: str = os.path.join(self.datasets_dir, 'dev.zip') - self.benchmark_dir_path = os.path.join(self.datasets_dir, 'benchmark') - - # Colab + Kaggle - self.train_dataset_working_dir_path = os.path.join(self.datasets_working_dir_path, 'train') - self.test_datasets_working_dir_path = os.path.join(self.datasets_working_dir_path, 'test') - self.dev_dataset_working_dir_path = os.path.join(self.datasets_working_dir_path, 'dev') - elif self.datasets_set == 5: - '''Set 1: COME15K - Use ranking_show instead of gt_right - Train dataset contains 8,025 image pairs of RGB-D - ''' - self.test_dataset_names = [ - 'COME-E', 'COME-H', - 'DES', 'DUT-RGBD', 'LFSD', 'NJU2K', 'NLPR', - 'ReDWeb-S', 'SIP', 'STERE' - ] - self.test_dataset_quantities = [ - 4600, 3000, - 135, 400, 100, 500, 300, - 1000, 929, 1000 - ] - self.sota_model_names = [] - self.sotas_datasets = [self.test_dataset_names for _ in range(len(self.sota_model_names))] - self.mapping_sota_model_names = [] - self.mapping_test_dataset_names: List[Tuple[str, str]] = [ - ('COME-E', 'cascaded_rgbd_sod'), - ('COME-H', 'cascaded_rgbd_sod'), - ('DES', 'cheng2014depth'), - ('DUT-RGBD', 'piao2019depth'), - ('LFSD', 'li2014saliency'), - ('NJU2K', 'ju2014depth'), - ('NLPR', 'peng2014rgbd'), - ('ReDWeb-S', 
'liu2020learning'), - ('SIP', 'fan2020rethinking'), - ('STERE', 'niu2012leveraging'), - ] - - # GoogleDrive - if run_type == RUN_TYPE.COLAB: - self.datasets_dir = os.path.join(self.datasets_dir_path, f'DatasetsV{self.datasets_set}') - self.train_dataset_zip_path = os.path.join(self.datasets_dir, 'train.zip') - self.test_datasets_dir_path = os.path.join(self.datasets_dir, 'test') - self.dev_dataset_zip_path: str = os.path.join(self.datasets_dir, 'dev.zip') - self.benchmark_dir_path = os.path.join(self.datasets_dir, 'benchmark') - - # Colab + Kaggle - self.train_dataset_working_dir_path = os.path.join(self.datasets_working_dir_path, 'train') - self.test_datasets_working_dir_path = os.path.join(self.datasets_working_dir_path, 'test') - self.dev_dataset_working_dir_path = os.path.join(self.datasets_working_dir_path, 'dev') - elif self.datasets_set == 6: - '''Set 1: COME8K + NLPR + NJU2K + DUT-RGBD + ReDWeb-S - Train dataset contains 13189 image pairs of RGB-D - ''' - self.test_dataset_names = [ - 'COME-E', 'COME-H', - 'DES', 'DUT-RGBD', 'LFSD', 'NJU2K', 'NLPR', - 'ReDWeb-S', 'SIP', 'STERE' - ] - self.test_dataset_quantities = [ - 4600, 3000, - 135, 400, 100, 500, 300, - 1000, 929, 1000 - ] - self.sota_model_names = [] - self.sotas_datasets = [self.test_dataset_names for _ in range(len(self.sota_model_names))] - self.mapping_sota_model_names = [] - self.mapping_test_dataset_names: List[Tuple[str, str]] = [ - ('COME-E', 'cascaded_rgbd_sod'), - ('COME-H', 'cascaded_rgbd_sod'), - ('DES', 'cheng2014depth'), - ('DUT-RGBD', 'piao2019depth'), - ('LFSD', 'li2014saliency'), - ('NJU2K', 'ju2014depth'), - ('NLPR', 'peng2014rgbd'), - ('ReDWeb-S', 'liu2020learning'), - ('SIP', 'fan2020rethinking'), - ('STERE', 'niu2012leveraging'), - ] - - # GoogleDrive - if run_type == RUN_TYPE.COLAB: - self.datasets_dir = os.path.join(self.datasets_dir_path, f'DatasetsV{self.datasets_set}') - self.train_dataset_zip_path = os.path.join(self.datasets_dir, 'train.zip') - self.test_datasets_dir_path = os.path.join(self.datasets_dir, 'test') - self.dev_dataset_zip_path: str = os.path.join(self.datasets_dir, 'dev.zip') - self.benchmark_dir_path = os.path.join(self.datasets_dir, 'benchmark') - - # Colab + Kaggle - self.train_dataset_working_dir_path = os.path.join(self.datasets_working_dir_path, 'train') - self.test_datasets_working_dir_path = os.path.join(self.datasets_working_dir_path, 'test') - self.dev_dataset_working_dir_path = os.path.join(self.datasets_working_dir_path, 'dev') - else: - raise NotImplementedError() - - assert len(self.test_dataset_names) == len(self.test_dataset_quantities) == len(self.mapping_test_dataset_names), \ - 'Number of test_dataset_names must equal to the number of test_dataset_quantities' - assert len(self.sota_model_names) == len(self.sotas_datasets) == len(self.mapping_sota_model_names), \ - 'Number of sota_model_names must equal to the number of sotas_datasets' - - '''Upload to Kaggle dataset to continue training''' - self.continue_training_dir_path = os.path.join( - self.base_working_dir, 'continue_training' - ) - - '''Evaluation benchmark''' - self.benchmark_csv_dir_path = os.path.join(self.source_code_dir, 'csv') - - '''Deployment''' - self.deployment_dir_path = os.path.join(self.source_code_dir, 'deployment') - - '''Results of salient maps from SOTAs and my experiments''' - self.results_dir_path = os.path.join(self.source_code_dir, 'results') - - self.latex_dir_path = os.path.join(self.source_code_dir, 'latex') - self.qualitative_evaluation_latex_dir_path = os.path.join( - 
self.latex_dir_path, f'qualitative_evaluation_set{datasets_set}' - ) - self.quantitative_evaluation_latex_dir_path = os.path.join( - self.latex_dir_path, f'quantitative_evaluation_set{datasets_set}' - ) - - # ------------------------------------------------------------------------- - - '''Loggers''' - self.logs_dir = os.path.join(self.source_code_dir, 'logs') - - '''Experiment''' - self.experiment_dir_path = os.path.join(self.source_code_dir, 'experiment') - - '''Pickle''' - self.pickle_dir_path = os.path.join(self.source_code_dir, 'pickle') # Deprecated - - '''JSON''' - self.json_dir_path = os.path.join(self.source_code_dir, 'json') - - self.num_train_imgs: int - self.niters_per_epoch: int - self.test_image_size: int = 224 - self.image_size: int = 224 - - '''Train function version''' - self.train_function_version: int = 1 # choices=[1, 2] - - '''Gradient Accumulation''' - self.accum_iter = 1 - - '''Label smoothing''' - self.label_smoothing = 0. - - '''Wandb tracking''' - self.wandb_api_key = 'c3fc6b778d58b02a1519dec88b08f0dae1fd5b3b' # thinh.huynh.re@gmail.com - - '''Whether using fp16 instead of fp32 (default)''' - self.is_fp16: bool = True - - self.is_padding: bool = False # deprecated due to randomly switch between padding and non-padding - - '''Whether using padding for test''' - self.is_padding_for_test: bool = False - - '''Seed''' - self.seed: int = 2022 - - ''' MultiMAE ''' - self.decoder_depth: int = 4 - self.encoder_depth: int = 12 - self.is_inference_with_no_depth: bool = False - self.inputs = ['rgb', 'depth'] - self.outputs = ['semseg'] - self.decoder_main_tasks: List[List[str]] = [['rgb']] - self.learnable_pos_emb: bool = False - self.decoder_interpolate_mode: str = 'bilinear' # ['bilinear', 'nearest'] - self.dim_tokens: int = 768 - self.act_fn = partial(nn.ReLU, inplace=True) - self.num_heads: int = 12 - - '''Data Augmentation''' - self.data_augmentation_version: int = 2 - self.data_augmentation_config = DataAugmentationConfig() - - self.ckpt_path: Optional[str] = None - self.description: str = '' # Override this - self.embed_dim: int = 6144 - - '''Pretrained Backbone''' - self.pretrained_backbone: Optional[str] = 'multi-vit' # choices=['multi-vit', 'mae', 'vit', 'large-mae', 'huge-mae' None] - - '''Learning rate''' - self.lr: float - self.end_lr: float = 1e-11 - self.lr_scale: int - self.lr_power: float = 0.9 - - self.save_checkpoints_after_each_n_epochs: int = 10 - - self.weight_decay = 0.05 - self.num_workers = 2 - self.warmup_epoch = 100 - - self.betas: Tuple[float, float] = (0.9, 0.999) - - self.input_patch_size: int = 16 - self.output_patch_size: int = 16 - - '''Warmup batchsize''' - self.warmup_min_batch_size: Optional[int] = None - self.warmup_epoch_batch_size: Optional[int] = None - - self.batch_size: int - self.val_batch_size: int - self.test_batch_size: int = 100 - self.nepochs: int - - if run_type in [RUN_TYPE.COLAB, RUN_TYPE.KAGGLE, RUN_TYPE.UBUNTU]: - self.em = ExperimentManager( - self.experiment_name, - self.json_dir_path, - self.experiment_dir_path, - ) - # self.em.clean() - if self.em.latest_epoch is not None: - self.ckpt_path = os.path.join( - self.experiment_dir_path, - self.experiment_name, - f'checkpoint_{self.em.latest_epoch}.pt' - ) - - self.cpu_device = torch.device('cpu') - elif run_type == RUN_TYPE.HUGGINGFACE: - # when using this in production, please specify "epoch" - self.ckpt_path = get_production_ckpt_path(self.experiment_name, epoch) - -def get_train_cfg( - cfg: base_cfg, - no_params: Optional[int] = None, - gflops: Optional[float] = None, -) 
-> Dict: - '''Wandb run's configuration''' - return dict( - image_size = cfg.image_size, - test_image_size = cfg.test_image_size, - - accum_iter = cfg.accum_iter, - is_fp16 = cfg.is_fp16, - - lr = cfg.lr, - end_lr = cfg.end_lr, - lr_scale = cfg.lr_scale, - lr_power = cfg.lr_power, - - weight_decay = cfg.weight_decay, - batch_size = cfg.batch_size, - val_batch_size = cfg.val_batch_size, - nepochs = cfg.nepochs, - num_workers = cfg.num_workers, - warmup_epoch = cfg.warmup_epoch, - num_train_imgs = cfg.num_train_imgs if 'num_train_imgs' in cfg.__dict__ else None, - niters_per_epoch = cfg.niters_per_epoch if 'niters_per_epoch' in cfg.__dict__ else None, - betas = cfg.betas, - decoder_depth = cfg.decoder_depth, - description = cfg.description if 'description' in cfg.__dict__ is not None else None, - embed_dim = cfg.embed_dim, - inputs = cfg.inputs, - outputs = cfg.outputs, - decoder_main_tasks = cfg.decoder_main_tasks, - data_augmentation_version = cfg.data_augmentation_version, - - save_checkpoints_after_each_n_epochs = cfg.save_checkpoints_after_each_n_epochs, - datasets_set = cfg.datasets_set, - no_params = no_params, - gflops = gflops, - - train_function_version = cfg.train_function_version, - label_smoothing = cfg.label_smoothing, - - warmup_min_batch_size = cfg.warmup_min_batch_size, - warmup_epoch_batch_size = cfg.warmup_epoch_batch_size, - ) - diff --git a/spaces/HansSongBin/Hans/Dockerfile b/spaces/HansSongBin/Hans/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/HansSongBin/Hans/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/m2m_100/tokenizers/tokenize_zh.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/m2m_100/tokenizers/tokenize_zh.py deleted file mode 100644 index 674b5849cba829cf4f07a69369e9cc6eed376d4c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/m2m_100/tokenizers/tokenize_zh.py +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import fileinput - -import sacrebleu - - -for line in fileinput.input(): - print(sacrebleu.tokenize_zh(line)) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/resampling_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/resampling_dataset.py deleted file mode 100644 index 3d3b993164dc3962df48bacff26714328e843e80..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/resampling_dataset.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import numpy as np -from fairseq.data import BaseWrapperDataset, plasma_utils - - -logger = logging.getLogger(__name__) - - -class ResamplingDataset(BaseWrapperDataset): - """Randomly samples from a given dataset at each epoch. - - Sampling is done with or without replacement, depending on the "replace" - parameter. - - Optionally, the epoch size can be rescaled. 
This is potentially desirable - to increase per-epoch coverage of the base dataset (since sampling with - replacement means that many items in the dataset will be left out). In the - case of sampling without replacement, size_ratio should be strictly less - than 1. - - Args: - dataset (~torch.utils.data.Dataset): dataset on which to sample. - weights (List[float]): list of probability weights - (default: None, which corresponds to uniform sampling). - replace (bool): sampling mode; True for "with replacement", or False - for "without replacement" (default: True) - size_ratio (float): the ratio to subsample to; must be positive - (default: 1.0). - batch_by_size (bool): whether or not to batch by sequence length - (default: True). - seed (int): RNG seed to use (default: 0). - epoch (int): starting epoch number (default: 1). - """ - - def __init__( - self, - dataset, - weights=None, - replace=True, - size_ratio=1.0, - batch_by_size=True, - seed=0, - epoch=1, - ): - super().__init__(dataset) - - if weights is None: - self.weights = None - - else: - assert len(weights) == len(dataset) - weights_arr = np.array(weights, dtype=np.float64) - weights_arr /= weights_arr.sum() - self.weights = plasma_utils.PlasmaArray(weights_arr) - - self.replace = replace - - assert size_ratio > 0.0 - if not self.replace: - assert size_ratio < 1.0 - self.size_ratio = float(size_ratio) - self.actual_size = np.ceil(len(dataset) * self.size_ratio).astype(int) - - self.batch_by_size = batch_by_size - self.seed = seed - - self._cur_epoch = None - self._cur_indices = None - - self.set_epoch(epoch) - - def __getitem__(self, index): - return self.dataset[self._cur_indices.array[index]] - - def __len__(self): - return self.actual_size - - @property - def sizes(self): - if isinstance(self.dataset.sizes, list): - return [s[self._cur_indices.array] for s in self.dataset.sizes] - return self.dataset.sizes[self._cur_indices.array] - - def num_tokens(self, index): - return self.dataset.num_tokens(self._cur_indices.array[index]) - - def size(self, index): - return self.dataset.size(self._cur_indices.array[index]) - - def ordered_indices(self): - if self.batch_by_size: - order = [ - np.arange(len(self)), - self.sizes, - ] # No need to handle `self.shuffle == True` - return np.lexsort(order) - else: - return np.arange(len(self)) - - def prefetch(self, indices): - self.dataset.prefetch(self._cur_indices.array[indices]) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return False - - def set_epoch(self, epoch): - logger.debug("ResamplingDataset.set_epoch: {}".format(epoch)) - super().set_epoch(epoch) - - if epoch == self._cur_epoch: - return - - self._cur_epoch = epoch - - # Generate a weighted sample of indices as a function of the - # random seed and the current epoch. 
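        # Seeding with (a magic constant, the global seed, the current epoch)
        # keeps resampling deterministic for a fixed seed while still drawing
        # a different sample each epoch.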
- - rng = np.random.RandomState( - [ - 42, # magic number - self.seed % (2 ** 32), # global seed - self._cur_epoch, # epoch index - ] - ) - self._cur_indices = plasma_utils.PlasmaArray( - rng.choice( - len(self.dataset), - self.actual_size, - replace=self.replace, - p=(None if self.weights is None else self.weights.array), - ) - ) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/speech_recognition/test_collaters.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/speech_recognition/test_collaters.py deleted file mode 100644 index 6a5029a48faea2426d7a0277655a2c7c08c1d16c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/speech_recognition/test_collaters.py +++ /dev/null @@ -1,58 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import unittest - -import numpy as np -import torch -from examples.speech_recognition.data.collaters import Seq2SeqCollater - - -class TestSeq2SeqCollator(unittest.TestCase): - def test_collate(self): - - eos_idx = 1 - pad_idx = 0 - collater = Seq2SeqCollater( - feature_index=0, label_index=1, pad_index=pad_idx, eos_index=eos_idx - ) - - # 2 frames in the first sample and 3 frames in the second one - frames1 = np.array([[7, 8], [9, 10]]) - frames2 = np.array([[1, 2], [3, 4], [5, 6]]) - target1 = np.array([4, 2, 3, eos_idx]) - target2 = np.array([3, 2, eos_idx]) - sample1 = {"id": 0, "data": [frames1, target1]} - sample2 = {"id": 1, "data": [frames2, target2]} - batch = collater.collate([sample1, sample2]) - - # collate sort inputs by frame's length before creating the batch - self.assertTensorEqual(batch["id"], torch.tensor([1, 0])) - self.assertEqual(batch["ntokens"], 7) - self.assertTensorEqual( - batch["net_input"]["src_tokens"], - torch.tensor( - [[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10], [pad_idx, pad_idx]]] - ), - ) - self.assertTensorEqual( - batch["net_input"]["prev_output_tokens"], - torch.tensor([[eos_idx, 3, 2, pad_idx], [eos_idx, 4, 2, 3]]), - ) - self.assertTensorEqual(batch["net_input"]["src_lengths"], torch.tensor([3, 2])) - self.assertTensorEqual( - batch["target"], - torch.tensor([[3, 2, eos_idx, pad_idx], [4, 2, 3, eos_idx]]), - ) - self.assertEqual(batch["nsentences"], 2) - - def assertTensorEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertEqual(t1.ne(t2).long().sum(), 0) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/setup.py b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/setup.py deleted file mode 100644 index 9d2c73345b8406195aaa6327cb3148bb92b65190..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/setup.py +++ /dev/null @@ -1,55 +0,0 @@ -from setuptools import setup, find_packages - -with open("README.md", "r") as f: - long_description = f.read() - -setup( - name="vakyansh-tts", - version="0.0.5", - description="Text to speech for Indic languages", - long_description=long_description, - long_description_content_type="text/markdown", - url="https://github.com/Open-Speech-EkStep/vakyansh-tts.git", - keywords="nlp, tts, Indic languages, deep learning, text to speech", - # package_dir={'': 'src'}, - # packages=find_packages(where='src'), - packages=["tts_infer"], - python_requires=">=3.7, <4", - install_requires=[ - "Cython==0.29.24", - "layers==0.1.5", - 
"librosa==0.8.1", - "matplotlib==3.3.4", - "numpy==1.20.2", - "scipy==1.5.4", - "tensorboardX==2.4", - "tensorboard==2.7.0", - "tqdm==4.62.3", - "fastapi==0.70.0", - "uvicorn==0.15.0", - "gradio==2.5.2", - "wavio==0.0.4", - "pydload==1.0.9", - "mosestokenizer==1.2.1", - "indic-nlp-library==0.81" - ], - classifiers=[ - # How mature is this project? Common values are - # 3 - Alpha - # 4 - Beta - # 5 - Production/Stable - "Development Status :: 3 - Alpha", - # Indicate who your project is intended for - "Intended Audience :: Developers", - "Intended Audience :: Education", - "Intended Audience :: Science/Research", - "Topic :: Scientific/Engineering :: Artificial Intelligence", - "Topic :: Text Processing :: Linguistic", - # Pick your license as you wish (should match "license" above) - "License :: OSI Approved :: MIT License", - # Specify the Python versions you support here. In particular, ensure - # that you indicate whether you support Python 2, Python 3 or both. - "Programming Language :: Python :: 3.7", - ], - include_package_data=True, -) diff --git a/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/tests/test_glossaries.py b/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/tests/test_glossaries.py deleted file mode 100644 index 2ff7da19fb00a8b8c9e7d33a67d6db4f0c72ef6c..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/tests/test_glossaries.py +++ /dev/null @@ -1,137 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -import unittest -import mock - -import os,sys,inspect -currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) -parentdir = os.path.dirname(currentdir) -sys.path.insert(0,parentdir) - -from apply_bpe import isolate_glossary, BPE - -class TestIsolateGlossaryFunction(unittest.TestCase): - - def setUp(self): - self.glossary = 'like' - - def _run_test_case(self, test_case): - orig, expected = test_case - out = isolate_glossary(orig, self.glossary) - self.assertEqual(out, expected) - - def test_empty_string(self): - orig = '' - exp = [''] - test_case = (orig, exp) - self._run_test_case(test_case) - - def test_no_glossary(self): - orig = 'word' - exp = ['word'] - test_case = (orig, exp) - self._run_test_case(test_case) - - def test_isolated_glossary(self): - orig = 'like' - exp = ['like'] - test_case = (orig, exp) - self._run_test_case(test_case) - - def test_word_one_side(self): - orig = 'likeword' - exp = ['like', 'word'] - test_case = (orig, exp) - self._run_test_case(test_case) - - def test_words_both_sides(self): - orig = 'wordlikeword' - exp = ['word', 'like', 'word'] - test_case = (orig, exp) - self._run_test_case(test_case) - - def test_back_to_back_glossary(self): - orig = 'likelike' - exp = ['like', 'like'] - test_case = (orig, exp) - self._run_test_case(test_case) - - def test_multiple_glossaries(self): - orig = 'wordlikewordlike' - exp = ['word', 'like', 'word', 'like'] - test_case = (orig, exp) - self._run_test_case(test_case) - -class TestBPEIsolateGlossariesMethod(unittest.TestCase): - - def setUp(self): - - amock = mock.MagicMock() - amock.readline.return_value = 'something' - glossaries = ['like', 'Manuel', 'USA'] - self.bpe = BPE(amock, glossaries=glossaries) - - def _run_test_case(self, test_case): - orig, expected = test_case - out = self.bpe._isolate_glossaries(orig) - self.assertEqual(out, expected) - - def test_multiple_glossaries(self): - orig = 'wordlikeUSAwordManuelManuelwordUSA' - exp = ['word', 'like', 'USA', 
'word', 'Manuel', 'Manuel', 'word', 'USA'] - test_case = (orig, exp) - self._run_test_case(test_case) - -class TestRegexIsolateGlossaries(unittest.TestCase): - - def setUp(self): - - amock = mock.MagicMock() - amock.readline.return_value = 'something' - glossaries = ["\w*", "\w*", "\d+"] - self.bpe = BPE(amock, glossaries=glossaries) - - def _run_test_case(self, test_case): - orig, expected = test_case - out = self.bpe._isolate_glossaries(orig) - self.assertEqual(out, expected) - - def test_regex_glossaries(self): - orig = 'wordlikeUSAword10001wordManuelwordUSA' - exp = ['wordlike', 'USA', 'word', '10001', 'word', 'Manuel', 'word', 'USA'] - test_case = (orig, exp) - self._run_test_case(test_case) - -def encode_mock(segment, x2, x3, x4, x5, x6, x7, glosses, dropout): - if glosses.match(segment): - return (segment,) - else: - l = len(segment) - return (segment[:l//2], segment[l//2:]) - -class TestBPESegmentMethod(unittest.TestCase): - - def setUp(self): - - amock = mock.MagicMock() - amock.readline.return_value = 'something' - glossaries = ['like', 'Manuel', 'USA'] - self.bpe = BPE(amock, glossaries=glossaries) - - @mock.patch('apply_bpe.encode', side_effect=encode_mock) - def _run_test_case(self, test_case, encode_function): - - orig, expected = test_case - out = self.bpe.segment(orig) - - self.assertEqual(out, expected) - - def test_multiple_glossaries(self): - orig = 'wordlikeword likeManuelword' - exp = 'wo@@ rd@@ like@@ wo@@ rd like@@ Manuel@@ wo@@ rd' - test_case = (orig, exp) - self._run_test_case(test_case) - -if __name__ == '__main__': - unittest.main() diff --git a/spaces/Hina4867/bingo/Dockerfile b/spaces/Hina4867/bingo/Dockerfile deleted file mode 100644 index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/Dockerfile +++ /dev/null @@ -1,36 +0,0 @@ -FROM node:18 - - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set up a new user named "user" with user ID 1000 -RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME - -# Switch to the "user" user -USER user - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Install app dependencies -# A wildcard is used to ensure both package.json AND package-lock.json are copied -# where available (npm@5+) -COPY --chown=user package*.json $HOME/app/ - -RUN npm install - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app/ - -RUN npm run build - -ENV PORT 7860 -EXPOSE 7860 - -CMD npm start diff --git a/spaces/Hoodady/3DFuse/ldm/modules/midas/utils.py b/spaces/Hoodady/3DFuse/ldm/modules/midas/utils.py deleted file mode 100644 index 9a9d3b5b66370fa98da9e067ba53ead848ea9a59..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/ldm/modules/midas/utils.py +++ /dev/null @@ -1,189 +0,0 @@ -"""Utils for monoDepth.""" -import sys -import re -import numpy as np -import cv2 -import torch - - -def read_pfm(path): - """Read pfm file. 
- - Args: - path (str): path to file - - Returns: - tuple: (data, scale) - """ - with open(path, "rb") as file: - - color = None - width = None - height = None - scale = None - endian = None - - header = file.readline().rstrip() - if header.decode("ascii") == "PF": - color = True - elif header.decode("ascii") == "Pf": - color = False - else: - raise Exception("Not a PFM file: " + path) - - dim_match = re.match(r"^(\d+)\s(\d+)\s$", file.readline().decode("ascii")) - if dim_match: - width, height = list(map(int, dim_match.groups())) - else: - raise Exception("Malformed PFM header.") - - scale = float(file.readline().decode("ascii").rstrip()) - if scale < 0: - # little-endian - endian = "<" - scale = -scale - else: - # big-endian - endian = ">" - - data = np.fromfile(file, endian + "f") - shape = (height, width, 3) if color else (height, width) - - data = np.reshape(data, shape) - data = np.flipud(data) - - return data, scale - - -def write_pfm(path, image, scale=1): - """Write pfm file. - - Args: - path (str): pathto file - image (array): data - scale (int, optional): Scale. Defaults to 1. - """ - - with open(path, "wb") as file: - color = None - - if image.dtype.name != "float32": - raise Exception("Image dtype must be float32.") - - image = np.flipud(image) - - if len(image.shape) == 3 and image.shape[2] == 3: # color image - color = True - elif ( - len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1 - ): # greyscale - color = False - else: - raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.") - - file.write("PF\n" if color else "Pf\n".encode()) - file.write("%d %d\n".encode() % (image.shape[1], image.shape[0])) - - endian = image.dtype.byteorder - - if endian == "<" or endian == "=" and sys.byteorder == "little": - scale = -scale - - file.write("%f\n".encode() % scale) - - image.tofile(file) - - -def read_image(path): - """Read image and output RGB image (0-1). - - Args: - path (str): path to file - - Returns: - array: RGB image (0-1) - """ - img = cv2.imread(path) - - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0 - - return img - - -def resize_image(img): - """Resize image and make it fit for network. - - Args: - img (array): image - - Returns: - tensor: data ready for network - """ - height_orig = img.shape[0] - width_orig = img.shape[1] - - if width_orig > height_orig: - scale = width_orig / 384 - else: - scale = height_orig / 384 - - height = (np.ceil(height_orig / scale / 32) * 32).astype(int) - width = (np.ceil(width_orig / scale / 32) * 32).astype(int) - - img_resized = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA) - - img_resized = ( - torch.from_numpy(np.transpose(img_resized, (2, 0, 1))).contiguous().float() - ) - img_resized = img_resized.unsqueeze(0) - - return img_resized - - -def resize_depth(depth, width, height): - """Resize depth map and bring to CPU (numpy). - - Args: - depth (tensor): depth - width (int): image width - height (int): image height - - Returns: - array: processed depth - """ - depth = torch.squeeze(depth[0, :, :, :]).to("cpu") - - depth_resized = cv2.resize( - depth.numpy(), (width, height), interpolation=cv2.INTER_CUBIC - ) - - return depth_resized - -def write_depth(path, depth, bits=1): - """Write depth map to pfm and png file. 
- - Args: - path (str): filepath without extension - depth (array): depth - """ - write_pfm(path + ".pfm", depth.astype(np.float32)) - - depth_min = depth.min() - depth_max = depth.max() - - max_val = (2**(8*bits))-1 - - if depth_max - depth_min > np.finfo("float").eps: - out = max_val * (depth - depth_min) / (depth_max - depth_min) - else: - out = np.zeros(depth.shape, dtype=depth.type) - - if bits == 1: - cv2.imwrite(path + ".png", out.astype("uint8")) - elif bits == 2: - cv2.imwrite(path + ".png", out.astype("uint16")) - - return diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/data_measurements/__init__.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/data_measurements/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/prepare_timit.sh b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/prepare_timit.sh deleted file mode 100644 index d8f5d596b4b4ec55f11a82dbbf83bad4a22c0b6c..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/prepare_timit.sh +++ /dev/null @@ -1,79 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -timit_root=$1 # assume it is the upper-cased version -tgt_dir=$2 -model=$3 - -set -eu - -setups="matched unmatched" -splits="test valid train train_text" - -tgt_dir=$(realpath $tgt_dir) -sph2wav=$KALDI_ROOT/tools/sph2pipe_v2.5/sph2pipe -wav_dir=$tgt_dir/wav - - -mkdir -p $tgt_dir $wav_dir -find $timit_root/{TRAIN,TEST} -iname "*.WAV" > $tgt_dir/all_sph.flist -cat $tgt_dir/all_sph.flist | sed -e 's#//*#/#g' -e 's#.*/\([^/]*\)/\([^/]*\).WAV#\1_\2#g' > $tgt_dir/all.uid -paste -d' ' $tgt_dir/{all_sph.flist,all.uid} | \ - awk -v sph2wav=$sph2wav -v wav_dir=$wav_dir '{print sph2wav " -f wav " $1 " > " wav_dir "/" $2 ".wav"}' \ - > $tgt_dir/sph2wav.sh -bash $tgt_dir/sph2wav.sh -cat $tgt_dir/all.uid | awk -v wav_dir=$(pwd)/$wav_dir '{print $1" "wav_dir"/"$1".wav"}' | sort > $tgt_dir/all_wav.scp -cut -d' ' -f2 $tgt_dir/all_wav.scp | xargs -I{} soxi -s {} > $tgt_dir/all.dur -paste -d' ' $tgt_dir/{all_wav.scp,all.dur} > $tgt_dir/all_wav_dur.scp -rm $tgt_dir/{all.uid,all_sph.flist,sph2wav.sh} - -find $timit_root/{TRAIN,TEST} -iname "*.PHN" > $tgt_dir/all_phn60.flist -while read line; do - if [ ! 
-f $line ]; then - >&2 echo "Cannot find transcription file '$line'" && exit 1; - fi - cut -f3 -d' ' "$line" | tr '\n' ' ' | perl -ape 's: *$:\n:;' -done < $tgt_dir/all_phn60.flist > $tgt_dir/all.phn60 -cat $tgt_dir/all_phn60.flist | sed -e 's#//*#/#g' -e 's#.*/\([^/]*\)/\([^/]*\).PHN#\1_\2#g' | \ - paste -d' ' - $tgt_dir/all.phn60 | \ - $KALDI_ROOT/egs/timit/s5/local/timit_norm_trans.pl -i - -m $KALDI_ROOT/egs/timit/s5/conf/phones.60-48-39.map -to 39 | \ - sort > $tgt_dir/all.phn -echo "done preparing wav and 39-phone transcripts" - - -for s in $setups; do - mkdir -p $tgt_dir/$s - for x in $splits; do - uid_path=config/timit_${s}/${x}.uid - grep -w -f $uid_path $tgt_dir/all.phn | cut -d' ' -f2- > $tgt_dir/$s/$x.phn - ln -sf $(realpath $tgt_dir/$s/$x.phn) $tgt_dir/$s/$x.wrd - - echo "/" > $tgt_dir/$s/$x.tsv && grep -w -f $uid_path $tgt_dir/all_wav_dur.scp | cut -d' ' -f2- | sed 's# #\t#' >> $tgt_dir/$s/$x.tsv - done - - for x in $splits; do - cat $tgt_dir/$s/$x.phn - done | tr ' ' '\n' | sort -u | awk '{print $1" "1}' > $tgt_dir/$s/dict.phn.txt - ln -sf $(realpath $tgt_dir/$s/dict.phn.txt) $tgt_dir/$s/dict.wrd.txt -done -echo "done preparing unmatched and matched setups for TIMIT" - - -for s in $setups; do - zsh scripts/prepare_audio.sh $tgt_dir/$s $tgt_dir/$s/feat $model - - lm_dir=$tgt_dir/$s/phones - fst_dir=$tgt_dir/$s/fst/phn_to_phn - - python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $tgt_dir/$s/train_text.phn --workers 10 --only-source --destdir $lm_dir --srcdict $tgt_dir/$s/dict.phn.txt - $KENLM_ROOT/lmplz -o 3 < $tgt_dir/$s/train_text.phn --discount_fallback >$lm_dir/train_text_phn.03.arpa - $KENLM_ROOT/build_binary $lm_dir/train_text_phn.03.arpa $lm_dir/train_text_phn.03.bin - $KENLM_ROOT/lmplz -o 4 < $tgt_dir/$s/train_text.phn --discount_fallback >$lm_dir/train_text_phn.04.arpa - $KENLM_ROOT/build_binary $lm_dir/train_text_phn.04.arpa $lm_dir/train_text_phn.04.bin - - python $FAIRSEQ_ROOT/examples/speech_recognition/kaldi/kaldi_initializer.py kaldi_root=$KALDI_ROOT fst_dir=$fst_dir lm_arpa=$lm_dir/train_text_phn.03.arpa data_dir=$tgt_dir/$s in_labels=phn -done -echo "done preprocessing audio and text for wav2vec-U" diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/feature_transforms/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/feature_transforms/__init__.py deleted file mode 100644 index 359fa069716cba0dd615ce0959368b20828c31f7..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/feature_transforms/__init__.py +++ /dev/null @@ -1,82 +0,0 @@ -import importlib -import os -from abc import ABC, abstractmethod -from typing import Dict, Optional - - -class AudioFeatureTransform(ABC): - @classmethod - @abstractmethod - def from_config_dict(cls, config: Optional[Dict] = None): - pass - - -AUDIO_FEATURE_TRANSFORM_REGISTRY = {} -AUDIO_FEATURE_TRANSFORM_CLASS_NAMES = set() - - -def register_audio_feature_transform(name): - def register_audio_feature_transform_cls(cls): - if name in AUDIO_FEATURE_TRANSFORM_REGISTRY: - raise ValueError(f"Cannot register duplicate transform ({name})") - if not issubclass(cls, AudioFeatureTransform): - raise ValueError( - f"Transform ({name}: {cls.__name__}) must extend " - "AudioFeatureTransform" - ) - if cls.__name__ in AUDIO_FEATURE_TRANSFORM_CLASS_NAMES: - raise ValueError( - f"Cannot register audio feature transform with duplicate " - f"class name ({cls.__name__})" - ) - AUDIO_FEATURE_TRANSFORM_REGISTRY[name] = cls - 
AUDIO_FEATURE_TRANSFORM_CLASS_NAMES.add(cls.__name__)
-        return cls
-
-    return register_audio_feature_transform_cls
-
-
-def get_audio_feature_transform(name):
-    return AUDIO_FEATURE_TRANSFORM_REGISTRY[name]
-
-
-transforms_dir = os.path.dirname(__file__)
-for file in os.listdir(transforms_dir):
-    path = os.path.join(transforms_dir, file)
-    if (
-        not file.startswith("_")
-        and not file.startswith(".")
-        and (file.endswith(".py") or os.path.isdir(path))
-    ):
-        name = file[: file.find(".py")] if file.endswith(".py") else file
-        importlib.import_module("fairseq.data.audio.feature_transforms." + name)
-
-
-class CompositeAudioFeatureTransform(AudioFeatureTransform):
-    @classmethod
-    def from_config_dict(cls, config=None):
-        _config = {} if config is None else config
-        _transforms = _config.get("transforms")
-        if _transforms is None:
-            return None
-        transforms = [
-            get_audio_feature_transform(_t).from_config_dict(_config.get(_t))
-            for _t in _transforms
-        ]
-        return CompositeAudioFeatureTransform(transforms)
-
-    def __init__(self, transforms):
-        self.transforms = [t for t in transforms if t is not None]
-
-    def __call__(self, x):
-        for t in self.transforms:
-            x = t(x)
-        return x
-
-    def __repr__(self):
-        format_string = (
-            [self.__class__.__name__ + "("]
-            + [f" {t.__repr__()}" for t in self.transforms]
-            + [")"]
-        )
-        return "\n".join(format_string)
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/nat/cmlm_transformer.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/nat/cmlm_transformer.py
deleted file mode 100644
index c876e9453c101c00bd8e93e6e6f1fb48dc26f993..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/models/nat/cmlm_transformer.py
+++ /dev/null
@@ -1,162 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-This file implements:
-Ghazvininejad, Marjan, et al.
-"Constant-time machine translation with conditional masked language models."
-arXiv preprint arXiv:1904.09324 (2019).
-"""
-
-from fairseq.models import register_model, register_model_architecture
-from fairseq.models.nat import NATransformerModel
-from fairseq.utils import new_arange
-
-
-def _skeptical_unmasking(output_scores, output_masks, p):
-    sorted_index = output_scores.sort(-1)[1]
-    boundary_len = (
-        (output_masks.sum(1, keepdim=True).type_as(output_scores) - 2) * p
-    ).long()
-    skeptical_mask = new_arange(output_masks) < boundary_len
-    return skeptical_mask.scatter(1, sorted_index, skeptical_mask)
-
-
-@register_model("cmlm_transformer")
-class CMLMNATransformerModel(NATransformerModel):
-    @staticmethod
-    def add_args(parser):
-        NATransformerModel.add_args(parser)
-
-    def forward(
-        self, src_tokens, src_lengths, prev_output_tokens, tgt_tokens, **kwargs
-    ):
-        assert not self.decoder.src_embedding_copy, "do not support embedding copy."
- - # encoding - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - # length prediction - length_out = self.decoder.forward_length( - normalize=False, encoder_out=encoder_out - ) - length_tgt = self.decoder.forward_length_prediction( - length_out, encoder_out, tgt_tokens - ) - - # decoding - word_ins_out = self.decoder( - normalize=False, - prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - ) - word_ins_mask = prev_output_tokens.eq(self.unk) - - return { - "word_ins": { - "out": word_ins_out, - "tgt": tgt_tokens, - "mask": word_ins_mask, - "ls": self.args.label_smoothing, - "nll_loss": True, - }, - "length": { - "out": length_out, - "tgt": length_tgt, - "factor": self.decoder.length_loss_factor, - }, - } - - def forward_decoder(self, decoder_out, encoder_out, decoding_format=None, **kwargs): - - step = decoder_out.step - max_step = decoder_out.max_step - - output_tokens = decoder_out.output_tokens - output_scores = decoder_out.output_scores - history = decoder_out.history - - # execute the decoder - output_masks = output_tokens.eq(self.unk) - _scores, _tokens = self.decoder( - normalize=True, - prev_output_tokens=output_tokens, - encoder_out=encoder_out, - ).max(-1) - output_tokens.masked_scatter_(output_masks, _tokens[output_masks]) - output_scores.masked_scatter_(output_masks, _scores[output_masks]) - - if history is not None: - history.append(output_tokens.clone()) - - # skeptical decoding (depend on the maximum decoding steps.) - if (step + 1) < max_step: - skeptical_mask = _skeptical_unmasking( - output_scores, output_tokens.ne(self.pad), 1 - (step + 1) / max_step - ) - - output_tokens.masked_fill_(skeptical_mask, self.unk) - output_scores.masked_fill_(skeptical_mask, 0.0) - - if history is not None: - history.append(output_tokens.clone()) - - return decoder_out._replace( - output_tokens=output_tokens, - output_scores=output_scores, - attn=None, - history=history, - ) - - -@register_model_architecture("cmlm_transformer", "cmlm_transformer") -def cmlm_base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - 
args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", True) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.apply_bert_init = getattr(args, "apply_bert_init", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - # --- special arguments --- - args.sg_length_pred = getattr(args, "sg_length_pred", False) - args.pred_length_offset = getattr(args, "pred_length_offset", False) - args.length_loss_factor = getattr(args, "length_loss_factor", 0.1) - args.ngram_predictor = getattr(args, "ngram_predictor", 1) - args.src_embedding_copy = getattr(args, "src_embedding_copy", False) - - -@register_model_architecture("cmlm_transformer", "cmlm_transformer_wmt_en_de") -def cmlm_wmt_en_de(args): - cmlm_base_architecture(args) diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/pq/modules/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/pq/modules/__init__.py deleted file mode 100644 index b67c8e8ad691aa01e9e10e904d69d94595387668..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/pq/modules/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .qconv import PQConv2d # NOQA -from .qemb import PQEmbedding # NOQA -from .qlinear import PQLinear # NOQA diff --git a/spaces/Icar/AICompanion/README.md b/spaces/Icar/AICompanion/README.md deleted file mode 100644 index 07c1947c271250dd60d5b8f03b5ca91669266246..0000000000000000000000000000000000000000 --- a/spaces/Icar/AICompanion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AICompanion -emoji: 🐢 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/hifacegan_util.py b/spaces/Iceclear/StableSR/StableSR/basicsr/archs/hifacegan_util.py deleted file mode 100644 index 35cbef3f532fcc6aab0fa57ab316a546d3a17bd5..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/hifacegan_util.py +++ /dev/null @@ -1,255 +0,0 @@ -import re -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import init -# Warning: spectral norm could be buggy -# under eval mode and multi-GPU inference -# A workaround is sticking to single-GPU inference and train mode -from torch.nn.utils import spectral_norm - - -class SPADE(nn.Module): - - def __init__(self, config_text, norm_nc, label_nc): - super().__init__() - - assert config_text.startswith('spade') - parsed = re.search('spade(\\D+)(\\d)x\\d', config_text) - param_free_norm_type = str(parsed.group(1)) - ks = int(parsed.group(2)) - - if param_free_norm_type == 'instance': - self.param_free_norm = nn.InstanceNorm2d(norm_nc) - elif param_free_norm_type == 'syncbatch': - print('SyncBatchNorm is currently not supported under single-GPU mode, switch to "instance" instead') - self.param_free_norm = 
nn.InstanceNorm2d(norm_nc) - elif param_free_norm_type == 'batch': - self.param_free_norm = nn.BatchNorm2d(norm_nc, affine=False) - else: - raise ValueError(f'{param_free_norm_type} is not a recognized param-free norm type in SPADE') - - # The dimension of the intermediate embedding space. Yes, hardcoded. - nhidden = 128 if norm_nc > 128 else norm_nc - - pw = ks // 2 - self.mlp_shared = nn.Sequential(nn.Conv2d(label_nc, nhidden, kernel_size=ks, padding=pw), nn.ReLU()) - self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw, bias=False) - self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw, bias=False) - - def forward(self, x, segmap): - - # Part 1. generate parameter-free normalized activations - normalized = self.param_free_norm(x) - - # Part 2. produce scaling and bias conditioned on semantic map - segmap = F.interpolate(segmap, size=x.size()[2:], mode='nearest') - actv = self.mlp_shared(segmap) - gamma = self.mlp_gamma(actv) - beta = self.mlp_beta(actv) - - # apply scale and bias - out = normalized * gamma + beta - - return out - - -class SPADEResnetBlock(nn.Module): - """ - ResNet block that uses SPADE. It differs from the ResNet block of pix2pixHD in that - it takes in the segmentation map as input, learns the skip connection if necessary, - and applies normalization first and then convolution. - This architecture seemed like a standard architecture for unconditional or - class-conditional GAN architecture using residual block. - The code was inspired from https://github.com/LMescheder/GAN_stability. - """ - - def __init__(self, fin, fout, norm_g='spectralspadesyncbatch3x3', semantic_nc=3): - super().__init__() - # Attributes - self.learned_shortcut = (fin != fout) - fmiddle = min(fin, fout) - - # create conv layers - self.conv_0 = nn.Conv2d(fin, fmiddle, kernel_size=3, padding=1) - self.conv_1 = nn.Conv2d(fmiddle, fout, kernel_size=3, padding=1) - if self.learned_shortcut: - self.conv_s = nn.Conv2d(fin, fout, kernel_size=1, bias=False) - - # apply spectral norm if specified - if 'spectral' in norm_g: - self.conv_0 = spectral_norm(self.conv_0) - self.conv_1 = spectral_norm(self.conv_1) - if self.learned_shortcut: - self.conv_s = spectral_norm(self.conv_s) - - # define normalization layers - spade_config_str = norm_g.replace('spectral', '') - self.norm_0 = SPADE(spade_config_str, fin, semantic_nc) - self.norm_1 = SPADE(spade_config_str, fmiddle, semantic_nc) - if self.learned_shortcut: - self.norm_s = SPADE(spade_config_str, fin, semantic_nc) - - # note the resnet block with SPADE also takes in |seg|, - # the semantic segmentation map as input - def forward(self, x, seg): - x_s = self.shortcut(x, seg) - dx = self.conv_0(self.act(self.norm_0(x, seg))) - dx = self.conv_1(self.act(self.norm_1(dx, seg))) - out = x_s + dx - return out - - def shortcut(self, x, seg): - if self.learned_shortcut: - x_s = self.conv_s(self.norm_s(x, seg)) - else: - x_s = x - return x_s - - def act(self, x): - return F.leaky_relu(x, 2e-1) - - -class BaseNetwork(nn.Module): - """ A basis for hifacegan archs with custom initialization """ - - def init_weights(self, init_type='normal', gain=0.02): - - def init_func(m): - classname = m.__class__.__name__ - if classname.find('BatchNorm2d') != -1: - if hasattr(m, 'weight') and m.weight is not None: - init.normal_(m.weight.data, 1.0, gain) - if hasattr(m, 'bias') and m.bias is not None: - init.constant_(m.bias.data, 0.0) - elif hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1): - if init_type 
== 'normal': - init.normal_(m.weight.data, 0.0, gain) - elif init_type == 'xavier': - init.xavier_normal_(m.weight.data, gain=gain) - elif init_type == 'xavier_uniform': - init.xavier_uniform_(m.weight.data, gain=1.0) - elif init_type == 'kaiming': - init.kaiming_normal_(m.weight.data, a=0, mode='fan_in') - elif init_type == 'orthogonal': - init.orthogonal_(m.weight.data, gain=gain) - elif init_type == 'none': # uses pytorch's default init method - m.reset_parameters() - else: - raise NotImplementedError(f'initialization method [{init_type}] is not implemented') - if hasattr(m, 'bias') and m.bias is not None: - init.constant_(m.bias.data, 0.0) - - self.apply(init_func) - - # propagate to children - for m in self.children(): - if hasattr(m, 'init_weights'): - m.init_weights(init_type, gain) - - def forward(self, x): - pass - - -def lip2d(x, logit, kernel=3, stride=2, padding=1): - weight = logit.exp() - return F.avg_pool2d(x * weight, kernel, stride, padding) / F.avg_pool2d(weight, kernel, stride, padding) - - -class SoftGate(nn.Module): - COEFF = 12.0 - - def forward(self, x): - return torch.sigmoid(x).mul(self.COEFF) - - -class SimplifiedLIP(nn.Module): - - def __init__(self, channels): - super(SimplifiedLIP, self).__init__() - self.logit = nn.Sequential( - nn.Conv2d(channels, channels, 3, padding=1, bias=False), nn.InstanceNorm2d(channels, affine=True), - SoftGate()) - - def init_layer(self): - self.logit[0].weight.data.fill_(0.0) - - def forward(self, x): - frac = lip2d(x, self.logit(x)) - return frac - - -class LIPEncoder(BaseNetwork): - """Local Importance-based Pooling (Ziteng Gao et.al.,ICCV 2019)""" - - def __init__(self, input_nc, ngf, sw, sh, n_2xdown, norm_layer=nn.InstanceNorm2d): - super().__init__() - self.sw = sw - self.sh = sh - self.max_ratio = 16 - # 20200310: Several Convolution (stride 1) + LIP blocks, 4 fold - kw = 3 - pw = (kw - 1) // 2 - - model = [ - nn.Conv2d(input_nc, ngf, kw, stride=1, padding=pw, bias=False), - norm_layer(ngf), - nn.ReLU(), - ] - cur_ratio = 1 - for i in range(n_2xdown): - next_ratio = min(cur_ratio * 2, self.max_ratio) - model += [ - SimplifiedLIP(ngf * cur_ratio), - nn.Conv2d(ngf * cur_ratio, ngf * next_ratio, kw, stride=1, padding=pw), - norm_layer(ngf * next_ratio), - ] - cur_ratio = next_ratio - if i < n_2xdown - 1: - model += [nn.ReLU(inplace=True)] - - self.model = nn.Sequential(*model) - - def forward(self, x): - return self.model(x) - - -def get_nonspade_norm_layer(norm_type='instance'): - # helper function to get # output channels of the previous layer - def get_out_channel(layer): - if hasattr(layer, 'out_channels'): - return getattr(layer, 'out_channels') - return layer.weight.size(0) - - # this function will be returned - def add_norm_layer(layer): - nonlocal norm_type - if norm_type.startswith('spectral'): - layer = spectral_norm(layer) - subnorm_type = norm_type[len('spectral'):] - - if subnorm_type == 'none' or len(subnorm_type) == 0: - return layer - - # remove bias in the previous layer, which is meaningless - # since it has no effect after normalization - if getattr(layer, 'bias', None) is not None: - delattr(layer, 'bias') - layer.register_parameter('bias', None) - - if subnorm_type == 'batch': - norm_layer = nn.BatchNorm2d(get_out_channel(layer), affine=True) - elif subnorm_type == 'sync_batch': - print('SyncBatchNorm is currently not supported under single-GPU mode, switch to "instance" instead') - # norm_layer = SynchronizedBatchNorm2d( - # get_out_channel(layer), affine=True) - norm_layer = 
nn.InstanceNorm2d(get_out_channel(layer), affine=False) - elif subnorm_type == 'instance': - norm_layer = nn.InstanceNorm2d(get_out_channel(layer), affine=False) - else: - raise ValueError(f'normalization layer {subnorm_type} is not recognized') - - return nn.Sequential(layer, norm_layer) - - print('This is a legacy from nvlabs/SPADE, and will be removed in future versions.') - return add_norm_layer diff --git a/spaces/Illumotion/Koboldcpp/scripts/convert-gg.sh b/spaces/Illumotion/Koboldcpp/scripts/convert-gg.sh deleted file mode 100644 index 01fda16fd7efc18066e23ae31c748fbedcf2ab6b..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/scripts/convert-gg.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/bin/bash - -set -e - -# LLaMA v1 -python3 convert.py ../llama1/7B --outfile models/llama-7b/ggml-model-f16.gguf --outtype f16 -python3 convert.py ../llama1/13B --outfile models/llama-13b/ggml-model-f16.gguf --outtype f16 -python3 convert.py ../llama1/30B --outfile models/llama-30b/ggml-model-f16.gguf --outtype f16 -python3 convert.py ../llama1/65B --outfile models/llama-65b/ggml-model-f16.gguf --outtype f16 - -# LLaMA v2 -python3 convert.py ../llama2/llama-2-7b --outfile models/llama-7b-v2/ggml-model-f16.gguf --outtype f16 -python3 convert.py ../llama2/llama-2-13b --outfile models/llama-13b-v2/ggml-model-f16.gguf --outtype f16 -python3 convert.py ../llama2/llama-2-70b --outfile models/llama-70b-v2/ggml-model-f16.gguf --outtype f16 - -# Code Llama -python3 convert.py ../codellama/CodeLlama-7b/ --outfile models/codellama-7b/ggml-model-f16.gguf --outtype f16 -python3 convert.py ../codellama/CodeLlama-13b/ --outfile models/codellama-13b/ggml-model-f16.gguf --outtype f16 -python3 convert.py ../codellama/CodeLlama-34b/ --outfile models/codellama-34b/ggml-model-f16.gguf --outtype f16 - -# Falcon -python3 convert-falcon-hf-to-gguf.py ../falcon/falcon-7b 1 -mv -v ../falcon/falcon-7b/ggml-model-f16.gguf models/falcon-7b/ggml-model-f16.gguf - -python3 convert-falcon-hf-to-gguf.py ../falcon/falcon-40b 1 -mv -v ../falcon/falcon-40b/ggml-model-f16.gguf models/falcon-40b/ggml-model-f16.gguf diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/utils/import_utils.py b/spaces/Jackflack09/diffuse-custom/diffusers/utils/import_utils.py deleted file mode 100644 index 531f9eab2f7ae32f818c990ea905f8c5bb98b861..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/utils/import_utils.py +++ /dev/null @@ -1,396 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Import utilities: Utilities related to imports and our lazy inits. -""" -import importlib.util -import operator as op -import os -import sys -from collections import OrderedDict -from typing import Union - -from packaging import version -from packaging.version import Version, parse - -from . import logging - - -# The package importlib_metadata is in a different place, depending on the python version. 
-if sys.version_info < (3, 8): - import importlib_metadata -else: - import importlib.metadata as importlib_metadata - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -ENV_VARS_TRUE_VALUES = {"1", "ON", "YES", "TRUE"} -ENV_VARS_TRUE_AND_AUTO_VALUES = ENV_VARS_TRUE_VALUES.union({"AUTO"}) - -USE_TF = os.environ.get("USE_TF", "AUTO").upper() -USE_TORCH = os.environ.get("USE_TORCH", "AUTO").upper() -USE_JAX = os.environ.get("USE_FLAX", "AUTO").upper() -USE_SAFETENSORS = os.environ.get("USE_SAFETENSORS", "AUTO").upper() - -STR_OPERATION_TO_FUNC = {">": op.gt, ">=": op.ge, "==": op.eq, "!=": op.ne, "<=": op.le, "<": op.lt} - -_torch_version = "N/A" -if USE_TORCH in ENV_VARS_TRUE_AND_AUTO_VALUES and USE_TF not in ENV_VARS_TRUE_VALUES: - _torch_available = importlib.util.find_spec("torch") is not None - if _torch_available: - try: - _torch_version = importlib_metadata.version("torch") - logger.info(f"PyTorch version {_torch_version} available.") - except importlib_metadata.PackageNotFoundError: - _torch_available = False -else: - logger.info("Disabling PyTorch because USE_TORCH is set") - _torch_available = False - - -_tf_version = "N/A" -if USE_TF in ENV_VARS_TRUE_AND_AUTO_VALUES and USE_TORCH not in ENV_VARS_TRUE_VALUES: - _tf_available = importlib.util.find_spec("tensorflow") is not None - if _tf_available: - candidates = ( - "tensorflow", - "tensorflow-cpu", - "tensorflow-gpu", - "tf-nightly", - "tf-nightly-cpu", - "tf-nightly-gpu", - "intel-tensorflow", - "intel-tensorflow-avx512", - "tensorflow-rocm", - "tensorflow-macos", - "tensorflow-aarch64", - ) - _tf_version = None - # For the metadata, we have to look for both tensorflow and tensorflow-cpu - for pkg in candidates: - try: - _tf_version = importlib_metadata.version(pkg) - break - except importlib_metadata.PackageNotFoundError: - pass - _tf_available = _tf_version is not None - if _tf_available: - if version.parse(_tf_version) < version.parse("2"): - logger.info(f"TensorFlow found but with version {_tf_version}. 
Diffusers requires version 2 minimum.") - _tf_available = False - else: - logger.info(f"TensorFlow version {_tf_version} available.") -else: - logger.info("Disabling Tensorflow because USE_TORCH is set") - _tf_available = False - -_jax_version = "N/A" -_flax_version = "N/A" -if USE_JAX in ENV_VARS_TRUE_AND_AUTO_VALUES: - _flax_available = importlib.util.find_spec("jax") is not None and importlib.util.find_spec("flax") is not None - if _flax_available: - try: - _jax_version = importlib_metadata.version("jax") - _flax_version = importlib_metadata.version("flax") - logger.info(f"JAX version {_jax_version}, Flax version {_flax_version} available.") - except importlib_metadata.PackageNotFoundError: - _flax_available = False -else: - _flax_available = False - -if USE_SAFETENSORS in ENV_VARS_TRUE_AND_AUTO_VALUES: - _safetensors_available = importlib.util.find_spec("safetensors") is not None - if _safetensors_available: - try: - _safetensors_version = importlib_metadata.version("safetensors") - logger.info(f"Safetensors version {_safetensors_version} available.") - except importlib_metadata.PackageNotFoundError: - _safetensors_available = False -else: - logger.info("Disabling Safetensors because USE_TF is set") - _safetensors_available = False - -_transformers_available = importlib.util.find_spec("transformers") is not None -try: - _transformers_version = importlib_metadata.version("transformers") - logger.debug(f"Successfully imported transformers version {_transformers_version}") -except importlib_metadata.PackageNotFoundError: - _transformers_available = False - - -_inflect_available = importlib.util.find_spec("inflect") is not None -try: - _inflect_version = importlib_metadata.version("inflect") - logger.debug(f"Successfully imported inflect version {_inflect_version}") -except importlib_metadata.PackageNotFoundError: - _inflect_available = False - - -_unidecode_available = importlib.util.find_spec("unidecode") is not None -try: - _unidecode_version = importlib_metadata.version("unidecode") - logger.debug(f"Successfully imported unidecode version {_unidecode_version}") -except importlib_metadata.PackageNotFoundError: - _unidecode_available = False - - -_modelcards_available = importlib.util.find_spec("modelcards") is not None -try: - _modelcards_version = importlib_metadata.version("modelcards") - logger.debug(f"Successfully imported modelcards version {_modelcards_version}") -except importlib_metadata.PackageNotFoundError: - _modelcards_available = False - - -_onnxruntime_version = "N/A" -_onnx_available = importlib.util.find_spec("onnxruntime") is not None -if _onnx_available: - candidates = ( - "onnxruntime", - "onnxruntime-gpu", - "onnxruntime-directml", - "onnxruntime-openvino", - "ort_nightly_directml", - ) - _onnxruntime_version = None - # For the metadata, we have to look for both onnxruntime and onnxruntime-gpu - for pkg in candidates: - try: - _onnxruntime_version = importlib_metadata.version(pkg) - break - except importlib_metadata.PackageNotFoundError: - pass - _onnx_available = _onnxruntime_version is not None - if _onnx_available: - logger.debug(f"Successfully imported onnxruntime version {_onnxruntime_version}") - - -_scipy_available = importlib.util.find_spec("scipy") is not None -try: - _scipy_version = importlib_metadata.version("scipy") - logger.debug(f"Successfully imported transformers version {_scipy_version}") -except importlib_metadata.PackageNotFoundError: - _scipy_available = False - -_accelerate_available = importlib.util.find_spec("accelerate") is not None -try: - 
_accelerate_version = importlib_metadata.version("accelerate") - logger.debug(f"Successfully imported accelerate version {_accelerate_version}") -except importlib_metadata.PackageNotFoundError: - _accelerate_available = False - -_xformers_available = importlib.util.find_spec("xformers") is not None -try: - _xformers_version = importlib_metadata.version("xformers") - if _torch_available: - import torch - - if torch.__version__ < version.Version("1.12"): - raise ValueError("PyTorch should be >= 1.12") - logger.debug(f"Successfully imported xformers version {_xformers_version}") -except importlib_metadata.PackageNotFoundError: - _xformers_available = False - - -def is_torch_available(): - return _torch_available - - -def is_safetensors_available(): - return _safetensors_available - - -def is_tf_available(): - return _tf_available - - -def is_flax_available(): - return _flax_available - - -def is_transformers_available(): - return _transformers_available - - -def is_inflect_available(): - return _inflect_available - - -def is_unidecode_available(): - return _unidecode_available - - -def is_modelcards_available(): - return _modelcards_available - - -def is_onnx_available(): - return _onnx_available - - -def is_scipy_available(): - return _scipy_available - - -def is_xformers_available(): - return _xformers_available - - -def is_accelerate_available(): - return _accelerate_available - - -# docstyle-ignore -FLAX_IMPORT_ERROR = """ -{0} requires the FLAX library but it was not found in your environment. Checkout the instructions on the -installation page: https://github.com/google/flax and follow the ones that match your environment. -""" - -# docstyle-ignore -INFLECT_IMPORT_ERROR = """ -{0} requires the inflect library but it was not found in your environment. You can install it with pip: `pip install -inflect` -""" - -# docstyle-ignore -PYTORCH_IMPORT_ERROR = """ -{0} requires the PyTorch library but it was not found in your environment. Checkout the instructions on the -installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment. -""" - -# docstyle-ignore -ONNX_IMPORT_ERROR = """ -{0} requires the onnxruntime library but it was not found in your environment. You can install it with pip: `pip -install onnxruntime` -""" - -# docstyle-ignore -SCIPY_IMPORT_ERROR = """ -{0} requires the scipy library but it was not found in your environment. You can install it with pip: `pip install -scipy` -""" - -# docstyle-ignore -TENSORFLOW_IMPORT_ERROR = """ -{0} requires the TensorFlow library but it was not found in your environment. Checkout the instructions on the -installation page: https://www.tensorflow.org/install and follow the ones that match your environment. -""" - -# docstyle-ignore -TRANSFORMERS_IMPORT_ERROR = """ -{0} requires the transformers library but it was not found in your environment. You can install it with pip: `pip -install transformers` -""" - -# docstyle-ignore -UNIDECODE_IMPORT_ERROR = """ -{0} requires the unidecode library but it was not found in your environment. 
You can install it with pip: `pip install -Unidecode` -""" - - -BACKENDS_MAPPING = OrderedDict( - [ - ("flax", (is_flax_available, FLAX_IMPORT_ERROR)), - ("inflect", (is_inflect_available, INFLECT_IMPORT_ERROR)), - ("onnx", (is_onnx_available, ONNX_IMPORT_ERROR)), - ("scipy", (is_scipy_available, SCIPY_IMPORT_ERROR)), - ("tf", (is_tf_available, TENSORFLOW_IMPORT_ERROR)), - ("torch", (is_torch_available, PYTORCH_IMPORT_ERROR)), - ("transformers", (is_transformers_available, TRANSFORMERS_IMPORT_ERROR)), - ("unidecode", (is_unidecode_available, UNIDECODE_IMPORT_ERROR)), - ] -) - - -def requires_backends(obj, backends): - if not isinstance(backends, (list, tuple)): - backends = [backends] - - name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__ - checks = (BACKENDS_MAPPING[backend] for backend in backends) - failed = [msg.format(name) for available, msg in checks if not available()] - if failed: - raise ImportError("".join(failed)) - - if name in [ - "VersatileDiffusionTextToImagePipeline", - "VersatileDiffusionPipeline", - "VersatileDiffusionDualGuidedPipeline", - "StableDiffusionImageVariationPipeline", - ] and is_transformers_version("<", "4.25.0.dev0"): - raise ImportError( - f"You need to install `transformers` from 'main' in order to use {name}: \n```\n pip install" - " git+https://github.com/huggingface/transformers \n```" - ) - - -class DummyObject(type): - """ - Metaclass for the dummy objects. Any class inheriting from it will return the ImportError generated by - `requires_backend` each time a user tries to access any method of that class. - """ - - def __getattr__(cls, key): - if key.startswith("_"): - return super().__getattr__(cls, key) - requires_backends(cls, cls._backends) - - -# This function was copied from: https://github.com/huggingface/accelerate/blob/874c4967d94badd24f893064cc3bef45f57cadf7/src/accelerate/utils/versions.py#L319 -def compare_versions(library_or_version: Union[str, Version], operation: str, requirement_version: str): - """ - Args: - Compares a library version to some requirement using a given operation. - library_or_version (`str` or `packaging.version.Version`): - A library name or a version to check. - operation (`str`): - A string representation of an operator, such as `">"` or `"<="`. - requirement_version (`str`): - The version to compare the library version against - """ - if operation not in STR_OPERATION_TO_FUNC.keys(): - raise ValueError(f"`operation` must be one of {list(STR_OPERATION_TO_FUNC.keys())}, received {operation}") - operation = STR_OPERATION_TO_FUNC[operation] - if isinstance(library_or_version, str): - library_or_version = parse(importlib_metadata.version(library_or_version)) - return operation(library_or_version, parse(requirement_version)) - - -# This function was copied from: https://github.com/huggingface/accelerate/blob/874c4967d94badd24f893064cc3bef45f57cadf7/src/accelerate/utils/versions.py#L338 -def is_torch_version(operation: str, version: str): - """ - Args: - Compares the current PyTorch version to a given reference with an operation. - operation (`str`): - A string representation of an operator, such as `">"` or `"<="` - version (`str`): - A string version of PyTorch - """ - return compare_versions(parse(_torch_version), operation, version) - - -def is_transformers_version(operation: str, version: str): - """ - Args: - Compares the current Transformers version to a given reference with an operation. 
- operation (`str`): - A string representation of an operator, such as `">"` or `"<="` - version (`str`): - A string version of PyTorch - """ - if not _transformers_available: - return False - return compare_versions(parse(_transformers_version), operation, version) diff --git a/spaces/JeffJing/ZookChatBot/steamship/client/skill_to_provider.py b/spaces/JeffJing/ZookChatBot/steamship/client/skill_to_provider.py deleted file mode 100644 index 2d59ac93f06c3baede7370600e25bd8f22f799af..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/client/skill_to_provider.py +++ /dev/null @@ -1,51 +0,0 @@ -from typing import Any, Dict - -from pydantic import BaseModel - -from steamship.client.skills import Skill -from steamship.client.vendors import Vendor - - -class SkillVendorConfig(BaseModel): - plugin_handle: str - config: Dict[str, Any] - - -SKILL_TO_PROVIDER: Dict[Skill, Dict[Vendor, SkillVendorConfig]] = { - Skill.ENTITIES: { - Vendor.OneAI: SkillVendorConfig( - plugin_handle="oneai-tagger", - config={"skills": ["names", "numbers", "business-entities"]}, - ) - }, - Skill.SUMMARY: { - Vendor.OneAI: SkillVendorConfig( - plugin_handle="oneai-tagger", config={"skills": ["summarize"]} - ) - }, - Skill.SENTIMENTS: { - Vendor.OneAI: SkillVendorConfig( - plugin_handle="oneai-tagger", config={"skills": ["sentiments"]} - ) - }, - Skill.EMOTIONS: { - Vendor.OneAI: SkillVendorConfig( - plugin_handle="oneai-tagger", config={"skills": ["emotions"]} - ) - }, - Skill.TOPICS: { - Vendor.OneAI: SkillVendorConfig( - plugin_handle="oneai-tagger", config={"skills": ["article-topics"]} - ), - }, - Skill.HIGHLIGHTS: { - Vendor.OneAI: SkillVendorConfig( - plugin_handle="oneai-tagger", config={"skills": ["highlights"]} - ) - }, - Skill.KEYWORDS: { - Vendor.OneAI: SkillVendorConfig( - plugin_handle="oneai-tagger", config={"skills": ["keywords"]} - ) - }, -} diff --git a/spaces/JeffJing/ZookChatBot/steamship/data/plugin/plugin.py b/spaces/JeffJing/ZookChatBot/steamship/data/plugin/plugin.py deleted file mode 100644 index 8776f9f801952687f1e9078d227dbb818c8943a6..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/data/plugin/plugin.py +++ /dev/null @@ -1,146 +0,0 @@ -# Plugin -# -# This file contains the abstractions for managing Steamship plugins. -# To see how to implement a Steamship Plugin, see plugin_service.py in the same folder. 
-#
-#
-
-from __future__ import annotations
-
-import json
-from enum import Enum
-from typing import Any, Dict, List, Optional, Type, Union
-
-from pydantic import BaseModel, Field
-
-from steamship.base.client import Client
-from steamship.base.model import CamelModel
-from steamship.base.request import IdentifierRequest, Request, UpdateRequest
-from steamship.base.response import Response
-from steamship.data.manifest import Manifest
-
-from .hosting import HostingType
-
-
-class CreatePluginRequest(Request):
-    training_platform: Optional[HostingType] = None
-    id: str = None
-    type: str = None
-    transport: str = None
-    is_public: bool = None
-    handle: str = None
-    description: str = None
-    metadata: str = None
-    fetch_if_exists: bool = False
-
-
-class PluginUpdateRequest(UpdateRequest):
-    id: Optional[str] = None
-    handle: Optional[str] = None
-    description: Optional[str] = None
-    profile: Optional[Manifest] = None
-    readme: Optional[str] = None
-
-
-class ListPluginsRequest(Request):
-    type: Optional[str] = None
-
-
-class ListPluginsResponse(Response):
-    plugins: List[Plugin]
-
-
-class PluginType(str, Enum):
-    parser = "parser"
-    classifier = "classifier"
-    tagger = "tagger"
-    embedder = "embedder"
-    generator = "generator"
-
-
-class PluginAdapterType(str, Enum):
-    steamship_docker = "steamshipDocker"
-    steamship_sagemaker = "steamshipSagemaker"
-    huggingface = "huggingface"
-    openai = "openai"
-
-
-class PluginTargetType(str, Enum):
-    FILE = "file"
-    WORKSPACE = "workspace"
-
-
-class Plugin(CamelModel):
-    client: Client = Field(None, exclude=True)
-    id: str = None
-    type: str = None
-    transport: str = None
-    is_public: bool = None
-    training_platform: Optional[HostingType] = None
-    handle: str = None
-    description: str = None
-    metadata: str = None
-    profile: Optional[Manifest] = None
-    readme: Optional[str] = None
-    user_id: Optional[str] = None
-
-    @classmethod
-    def parse_obj(cls: Type[BaseModel], obj: Any) -> BaseModel:
-        # TODO (enias): This needs to be solved at the engine side
-        obj = obj["plugin"] if "plugin" in obj else obj
-        return super().parse_obj(obj)
-
-    @staticmethod
-    def create(
-        client: Client,
-        description: str,
-        type_: str,
-        transport: str,
-        is_public: bool,
-        handle: str = None,
-        training_platform: Optional[HostingType] = None,
-        metadata: Union[str, Dict, List] = None,
-        fetch_if_exists: bool = False,
-    ) -> Plugin:
-        if isinstance(metadata, dict) or isinstance(metadata, list):
-            metadata = json.dumps(metadata)
-
-        req = CreatePluginRequest(
-            training_platform=training_platform,
-            type=type_,
-            transport=transport,
-            is_public=is_public,
-            handle=handle,
-            description=description,
-            metadata=metadata,
-            fetch_if_exists=fetch_if_exists,
-        )
-        return client.post(
-            "plugin/create",
-            req,
-            expect=Plugin,
-        )
-
-    @staticmethod
-    def list(client: Client, t: str = None) -> ListPluginsResponse:
-        return client.post(
-            "plugin/list",
-            ListPluginsRequest(type=t),
-            expect=ListPluginsResponse,
-        )
-
-    @staticmethod
-    def get(client: Client, handle: str):
-        return client.post("plugin/get", IdentifierRequest(handle=handle), expect=Plugin)
-
-    def update(self, client: Client) -> Plugin:
-        return client.post(
-            "plugin/update",
-            PluginUpdateRequest(
-                id=self.id, description=self.description, profile=self.profile, readme=self.readme
-            ),
-            expect=Plugin,
-        )
-
-
-ListPluginsResponse.update_forward_refs()
diff --git a/spaces/Jorgvt/CycleGAN-GTA-REAL/README.md b/spaces/Jorgvt/CycleGAN-GTA-REAL/README.md
deleted file mode 100644
index
87ec7852a765a7e965f2fffea96c1535c02e6e61..0000000000000000000000000000000000000000 --- a/spaces/Jorgvt/CycleGAN-GTA-REAL/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: CycleGAN GTA REAL -emoji: 👁 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/KOFTRFU204/AICoverGen/src/infer_pack/commons.py b/spaces/KOFTRFU204/AICoverGen/src/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/KOFTRFU204/AICoverGen/src/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - 
b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/Kajise/Demucs_v4-FT_2s/README.md b/spaces/Kajise/Demucs_v4-FT_2s/README.md deleted file mode 100644 index 2c2ae1cc48e58df51dbe1b4f9fb58e43cb33e1c2..0000000000000000000000000000000000000000 --- a/spaces/Kajise/Demucs_v4-FT_2s/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Demucs (Finetuned-2S) -emoji: 🐨 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -license: agpl-3.0 -duplicated_from: Kajise/Demucs_v4-FT_4s ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Katie-portswigger/Portswigger/theme_dropdown.py b/spaces/Katie-portswigger/Portswigger/theme_dropdown.py deleted file mode 100644 index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000 --- a/spaces/Katie-portswigger/Portswigger/theme_dropdown.py +++ /dev/null @@ -1,57 +0,0 @@ -import os -import pathlib - -from gradio.themes.utils import ThemeAsset - - -def create_theme_dropdown(): - import gradio as gr - - asset_path = pathlib.Path(__file__).parent / "themes" - themes = [] - for theme_asset in os.listdir(str(asset_path)): - themes.append( - (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset))) - ) - - def make_else_if(theme_asset): - return f""" - else if (theme == '{str(theme_asset[0].version)}') {{ - var 
theme_css = `{theme_asset[1]._get_theme_css()}` - }}""" - - head, tail = themes[0], themes[1:] - if_statement = f""" - if (theme == "{str(head[0].version)}") {{ - var theme_css = `{head[1]._get_theme_css()}` - }} {" ".join(make_else_if(t) for t in tail)} - """ - - latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[ - ::-1 - ] - latest_to_oldest = [str(t.version) for t in latest_to_oldest] - - component = gr.Dropdown( - choices=latest_to_oldest, - value=latest_to_oldest[0], - render=False, - label="Select Version", - ).style(container=False) - - return ( - component, - f""" - (theme) => {{ - if (!document.querySelector('.theme-css')) {{ - var theme_elem = document.createElement('style'); - theme_elem.classList.add('theme-css'); - document.head.appendChild(theme_elem); - }} else {{ - var theme_elem = document.querySelector('.theme-css'); - }} - {if_statement} - theme_elem.innerHTML = theme_css; - }} - """, - ) diff --git a/spaces/KenjieDec/RemBG/rembg/sessions/dis_general_use.py b/spaces/KenjieDec/RemBG/rembg/sessions/dis_general_use.py deleted file mode 100644 index a71b34f68450fc6d711b9baf443611102bf848a2..0000000000000000000000000000000000000000 --- a/spaces/KenjieDec/RemBG/rembg/sessions/dis_general_use.py +++ /dev/null @@ -1,49 +0,0 @@ -import os -from typing import List - -import numpy as np -import pooch -from PIL import Image -from PIL.Image import Image as PILImage - -from .base import BaseSession - - -class DisSession(BaseSession): - def predict(self, img: PILImage, *args, **kwargs) -> List[PILImage]: - ort_outs = self.inner_session.run( - None, - self.normalize(img, (0.485, 0.456, 0.406), (1.0, 1.0, 1.0), (1024, 1024)), - ) - - pred = ort_outs[0][:, 0, :, :] - - ma = np.max(pred) - mi = np.min(pred) - - pred = (pred - mi) / (ma - mi) - pred = np.squeeze(pred) - - mask = Image.fromarray((pred * 255).astype("uint8"), mode="L") - mask = mask.resize(img.size, Image.LANCZOS) - - return [mask] - - @classmethod - def download_models(cls, *args, **kwargs): - fname = f"{cls.name()}.onnx" - pooch.retrieve( - "https://github.com/danielgatis/rembg/releases/download/v0.0.0/isnet-general-use.onnx", - None - if cls.checksum_disabled(*args, **kwargs) - else "md5:fc16ebd8b0c10d971d3513d564d01e29", - fname=fname, - path=cls.u2net_home(*args, **kwargs), - progressbar=True, - ) - - return os.path.join(cls.u2net_home(), fname) - - @classmethod - def name(cls, *args, **kwargs): - return "isnet-general-use" diff --git a/spaces/Kevin676/Clone-Your-Voice/vocoder/hparams.py b/spaces/Kevin676/Clone-Your-Voice/vocoder/hparams.py deleted file mode 100644 index c1de9f7dcc2926735b80a28ed1226ff1b5824753..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Clone-Your-Voice/vocoder/hparams.py +++ /dev/null @@ -1,44 +0,0 @@ -from synthesizer.hparams import hparams as _syn_hp - - -# Audio settings------------------------------------------------------------------------ -# Match the values of the synthesizer -sample_rate = _syn_hp.sample_rate -n_fft = _syn_hp.n_fft -num_mels = _syn_hp.num_mels -hop_length = _syn_hp.hop_size -win_length = _syn_hp.win_size -fmin = _syn_hp.fmin -min_level_db = _syn_hp.min_level_db -ref_level_db = _syn_hp.ref_level_db -mel_max_abs_value = _syn_hp.max_abs_value -preemphasis = _syn_hp.preemphasis -apply_preemphasis = _syn_hp.preemphasize - -bits = 9 # bit depth of signal -mu_law = True # Recommended to suppress noise if using raw bits in hp.voc_mode - # below - - -# WAVERNN / VOCODER 
-------------------------------------------------------------------------------- -voc_mode = 'RAW' # either 'RAW' (softmax on raw bits) or 'MOL' (sample from -# mixture of logistics) -voc_upsample_factors = (5, 5, 8) # NB - this needs to correctly factorise hop_length -voc_rnn_dims = 512 -voc_fc_dims = 512 -voc_compute_dims = 128 -voc_res_out_dims = 128 -voc_res_blocks = 10 - -# Training -voc_batch_size = 100 -voc_lr = 1e-4 -voc_gen_at_checkpoint = 5 # number of samples to generate at each checkpoint -voc_pad = 2 # this will pad the input so that the resnet can 'see' wider - # than input length -voc_seq_len = hop_length * 5 # must be a multiple of hop_length - -# Generating / Synthesizing -voc_gen_batched = True # very fast (realtime+) single utterance batched generation -voc_target = 8000 # target number of samples to be generated in each batch entry -voc_overlap = 400 # number of samples for crossfading between batches diff --git a/spaces/KyanChen/BuildingExtraction/Models/BackBone/ResNet.py b/spaces/KyanChen/BuildingExtraction/Models/BackBone/ResNet.py deleted file mode 100644 index 8f7e1e7e36550b7c504faf9af15147f9386d9a59..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/BuildingExtraction/Models/BackBone/ResNet.py +++ /dev/null @@ -1,325 +0,0 @@ -import torch -import torch.nn as nn -from torch.hub import load_state_dict_from_url - -__all__ = ['get_resnet', 'BasicBlock'] - -model_urls = { - 'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth', - 'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth', - 'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth', - 'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth', -} - - -def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=dilation, groups=groups, bias=False, dilation=dilation) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, - base_width=64, dilation=1, norm_layer=None): - super(BasicBlock, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - if groups != 1 or base_width != 64: - raise ValueError('BasicBlock only supports groups=1 and base_width=64') - # if dilation > 1: - # raise NotImplementedError("Dilation > 1 not supported in BasicBlock") - # Both self.conv1 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = norm_layer(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes, dilation=dilation) - self.bn2 = norm_layer(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, - base_width=64, dilation=1, norm_layer=None): - super(Bottleneck, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - width = int(planes * (base_width / 
64.)) * groups - # Both self.conv2 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv1x1(inplanes, width) - self.bn1 = norm_layer(width) - self.conv2 = conv3x3(width, width, stride, groups, dilation) - self.bn2 = norm_layer(width) - self.conv3 = conv1x1(width, planes * self.expansion) - self.bn3 = norm_layer(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - - def __init__(self, block, layers, num_classes=1000, zero_init_residual=False, - groups=1, width_per_group=64, replace_stride_with_dilation=None, - norm_layer=None, out_keys=None, in_channels=3, **kwargs): - super(ResNet, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - self._norm_layer = norm_layer - self.out_keys = out_keys - self.num_classes = num_classes - self.inplanes = 64 - self.dilation = 1 - if replace_stride_with_dilation is None: - # each element in the tuple indicates if we should replace - # the 2x2 stride with a dilated convolution instead - replace_stride_with_dilation = [False, False, False] - if len(replace_stride_with_dilation) != 3: - raise ValueError("replace_stride_with_dilation should be None " - "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) - self.groups = groups - self.base_width = width_per_group - self.conv1 = nn.Conv2d(in_channels, self.inplanes, kernel_size=7, stride=2, padding=3, - bias=False) - self.bn1 = norm_layer(self.inplanes) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2, - dilate=replace_stride_with_dilation[0]) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2, - dilate=replace_stride_with_dilation[1]) - if 'block5' in self.out_keys: - self.layer4 = self._make_layer(block, 512, layers[3], stride=2, - dilate=replace_stride_with_dilation[2]) - if self.num_classes is not None: - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.fc = nn.Linear(512 * block.expansion, self.num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - # Zero-initialize the last BN in each residual branch, - # so that the residual branch starts with zeros, and each residual block behaves like an identity. 
- # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 - if zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - nn.init.constant_(m.bn3.weight, 0) - elif isinstance(m, BasicBlock): - nn.init.constant_(m.bn2.weight, 0) - - def _make_layer(self, block, planes, blocks, stride=1, dilate=False): - norm_layer = self._norm_layer - downsample = None - previous_dilation = self.dilation - if dilate: - self.dilation *= stride - stride = 1 - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - conv1x1(self.inplanes, planes * block.expansion, stride), - norm_layer(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample, self.groups, - self.base_width, previous_dilation, norm_layer)) - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append(block(self.inplanes, planes, groups=self.groups, - base_width=self.base_width, dilation=self.dilation, - norm_layer=norm_layer)) - - return nn.Sequential(*layers) - - def forward(self, x): - endpoints = dict() - endpoints['block0'] = x - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - endpoints['block1'] = x - x = self.maxpool(x) - x = self.layer1(x) - endpoints['block2'] = x - x = self.layer2(x) - endpoints['block3'] = x - x = self.layer3(x) - endpoints['block4'] = x - if 'block5' in self.out_keys: - x = self.layer4(x) - endpoints['block5'] = x - - if self.num_classes is not None: - x = self.avgpool(x) - x = torch.flatten(x, 1) - x = self.fc(x) - if self.out_keys is not None: - endpoints = {key: endpoints[key] for key in self.out_keys} - return x, endpoints - - -def _resnet(arch, block, layers, pretrained, progress, num_classes=1000, in_channels=3, out_keys=None, **kwargs): - model = ResNet(block, layers, num_classes, out_keys=out_keys, in_channels=in_channels, **kwargs) - if pretrained: - state_dict = load_state_dict_from_url(model_urls[arch], - progress=progress) - if in_channels != 3: - keys = state_dict.keys() - keys = [x for x in keys if 'conv1.weight' in x] - for key in keys: - del state_dict[key] - if num_classes !=1000: - keys = state_dict.keys() - keys = [x for x in keys if 'fc' in x] - for key in keys: - del state_dict[key] - if 'block5' not in out_keys: - keys = state_dict.keys() - keys = [x for x in keys if 'layer4' in x] - for key in keys: - del state_dict[key] - model.load_state_dict(state_dict) - print('load resnet model...') - - return model - - -def _resnet18(name='resnet18', pretrained=True, progress=True, num_classes=1000, out_keys=None, **kwargs): - r"""ResNet-18 model from - `"Deep Residual Learning for Image Recognition" `_ - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet(name, BasicBlock, [2, 2, 2, 2], pretrained, progress, - num_classes=num_classes, out_keys=out_keys, **kwargs) - -def _resnet50(name='resnet50',pretrained=False, progress=True,num_classes=1000,out_keys=None, **kwargs): - r"""ResNet-50 model from - `"Deep Residual Learning for Image Recognition" `_ - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet(name, Bottleneck, [3, 4, 6, 3], pretrained, progress, - num_classes=num_classes,out_keys=out_keys, - **kwargs) - - -def _resnet101(name='resnet101',pretrained=False, progress=True, 
num_classes=1000,out_keys=None,**kwargs): - r"""ResNet-101 model from - `"Deep Residual Learning for Image Recognition" `_ - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet(name, Bottleneck, [3, 4, 23, 3], pretrained, progress, - num_classes=num_classes, out_keys=out_keys, - **kwargs) - - -def _resnet152(name='resnet152',pretrained=False, progress=True,num_classes=1000,out_keys=None,**kwargs): - r"""ResNet-152 model from - `"Deep Residual Learning for Image Recognition" `_ - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet(name, Bottleneck, [3, 8, 36, 3], pretrained, progress, - num_classes=num_classes, out_keys=out_keys, - **kwargs) - - -def get_resnet(model_name='resnet50', pretrained=True, progress=True, num_classes=1000, out_keys=None, in_channels=3, **kwargs): - ''' - Get resnet model with name. - :param name: resnet model name, optional values:[resnet18, reset50, resnet101, resnet152] - :param pretrained: If True, returns a model pre-trained on ImageNet - ''' - - if pretrained and num_classes != 1000: - print('warning: num_class is not equal to 1000, which will cause some parameters to fail to load!') - if pretrained and in_channels != 3: - print('warning: in_channels is not equal to 3, which will cause some parameters to fail to load!') - - if model_name == 'resnet18': - return _resnet18(name=model_name, pretrained=pretrained, progress=progress, - num_classes=num_classes, out_keys=out_keys, in_channels=in_channels, **kwargs) - elif model_name == 'resnet50': - return _resnet50(name=model_name, pretrained=pretrained, progress=progress, - num_classes=num_classes, out_keys=out_keys, in_channels=in_channels, **kwargs) - elif model_name == 'resnet101': - return _resnet101(name=model_name, pretrained=pretrained, progress=progress, - num_classes=num_classes, out_keys=out_keys, in_channels=in_channels, **kwargs) - elif model_name == 'resnet152': - return _resnet152(name=model_name, pretrained=pretrained, progress=progress, - num_classes=num_classes, out_keys=out_keys, in_channels=in_channels, **kwargs) - else: - raise NotImplementedError(r'''{0} is not an available values. 
\ - Please choose one of the available values in - [resnet18, reset50, resnet101, resnet152]'''.format(name)) - - -if __name__ == '__main__': - model = get_resnet('resnet18', pretrained=True, num_classes=None, in_channels=3, out_keys=['block4']) - x = torch.rand([2, 3, 256, 256]) - torch.save(model.state_dict(), 'res18nofc.pth') \ No newline at end of file diff --git a/spaces/Kyron2975/Linaqruf-anything-v3.0/README.md b/spaces/Kyron2975/Linaqruf-anything-v3.0/README.md deleted file mode 100644 index 9688d5ad4477870a9260373c41db45db9ea134d5..0000000000000000000000000000000000000000 --- a/spaces/Kyron2975/Linaqruf-anything-v3.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Linaqruf Anything V3.0 -emoji: 🏆 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Laihiujin/OneFormer/oneformer/modeling/meta_arch/__init__.py b/spaces/Laihiujin/OneFormer/oneformer/modeling/meta_arch/__init__.py deleted file mode 100644 index 8b137891791fe96927ad78e64b0aad7bded08bdc..0000000000000000000000000000000000000000 --- a/spaces/Laihiujin/OneFormer/oneformer/modeling/meta_arch/__init__.py +++ /dev/null @@ -1 +0,0 @@ - diff --git a/spaces/Lbin123/Lbingo/src/components/ui/alert-dialog.tsx b/spaces/Lbin123/Lbingo/src/components/ui/alert-dialog.tsx deleted file mode 100644 index 17fec4d16510328deacc1416569173c97761ef72..0000000000000000000000000000000000000000 --- a/spaces/Lbin123/Lbingo/src/components/ui/alert-dialog.tsx +++ /dev/null @@ -1,150 +0,0 @@ -'use client' - -import * as React from 'react' -import * as AlertDialogPrimitive from '@radix-ui/react-alert-dialog' - -import { cn } from '@/lib/utils' -import { buttonVariants } from '@/components/ui/button' - -const AlertDialog = AlertDialogPrimitive.Root - -const AlertDialogTrigger = AlertDialogPrimitive.Trigger - -const AlertDialogPortal = ({ - className, - children, - ...props -}: AlertDialogPrimitive.AlertDialogPortalProps) => ( - -
- {children} -
-
-) -AlertDialogPortal.displayName = AlertDialogPrimitive.Portal.displayName - -const AlertDialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -AlertDialogOverlay.displayName = AlertDialogPrimitive.Overlay.displayName - -const AlertDialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - - -)) -AlertDialogContent.displayName = AlertDialogPrimitive.Content.displayName - -const AlertDialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -AlertDialogHeader.displayName = 'AlertDialogHeader' - -const AlertDialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -AlertDialogFooter.displayName = 'AlertDialogFooter' - -const AlertDialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogTitle.displayName = AlertDialogPrimitive.Title.displayName - -const AlertDialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogDescription.displayName = - AlertDialogPrimitive.Description.displayName - -const AlertDialogAction = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogAction.displayName = AlertDialogPrimitive.Action.displayName - -const AlertDialogCancel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogCancel.displayName = AlertDialogPrimitive.Cancel.displayName - -export { - AlertDialog, - AlertDialogTrigger, - AlertDialogContent, - AlertDialogHeader, - AlertDialogFooter, - AlertDialogTitle, - AlertDialogDescription, - AlertDialogAction, - AlertDialogCancel -} diff --git a/spaces/Lianjd/stock_dashboard/backtrader/observers/__init__.py b/spaces/Lianjd/stock_dashboard/backtrader/observers/__init__.py deleted file mode 100644 index 99775a15317b6087c34017fc74370079cb3c5eb0..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/observers/__init__.py +++ /dev/null @@ -1,34 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . 
-# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -# The modules below should/must define __all__ with the Indicator objects -# of prepend an "_" (underscore) to private classes/variables - -from .broker import * -from .buysell import * -from .trades import * -from .drawdown import * -from .timereturn import * -from .benchmark import * - -from .logreturns import * diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_adam_step_6e.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_adam_step_6e.py deleted file mode 100644 index 5b33a2f924e502fc3a7f53f080a43fae983bb00c..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_adam_step_6e.py +++ /dev/null @@ -1,8 +0,0 @@ -# optimizer -optimizer = dict(type='Adam', lr=1e-3) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict(policy='step', step=[3, 4]) -# running settings -runner = dict(type='EpochBasedRunner', max_epochs=6) -checkpoint_config = dict(interval=1) diff --git a/spaces/MAGAer13/mPLUG-Owl2/mplug_owl2/serve/gradio_web_server.py b/spaces/MAGAer13/mPLUG-Owl2/mplug_owl2/serve/gradio_web_server.py deleted file mode 100644 index 9b6ae4feb4f52dee66a99970be79d92c3c94ff02..0000000000000000000000000000000000000000 --- a/spaces/MAGAer13/mPLUG-Owl2/mplug_owl2/serve/gradio_web_server.py +++ /dev/null @@ -1,460 +0,0 @@ -import argparse -import datetime -import json -import os -import time - -import gradio as gr -import requests - -from mplug_owl2.conversation import (default_conversation, conv_templates, - SeparatorStyle) -from mplug_owl2.constants import LOGDIR -from mplug_owl2.utils import (build_logger, server_error_msg, - violates_moderation, moderation_msg) -import hashlib - - -logger = build_logger("gradio_web_server", "gradio_web_server.log") - -headers = {"User-Agent": "mPLUG-Owl2 Client"} - -no_change_btn = gr.Button.update() -enable_btn = gr.Button.update(interactive=True) -disable_btn = gr.Button.update(interactive=False) - -priority = { - "vicuna-13b": "aaaaaaa", - "koala-13b": "aaaaaab", -} - - -def get_conv_log_filename(): - t = datetime.datetime.now() - name = os.path.join(LOGDIR, f"{t.year}-{t.month:02d}-{t.day:02d}-conv.json") - return name - - -def get_model_list(): - ret = requests.post(args.controller_url + "/refresh_all_workers") - assert ret.status_code == 200 - ret = requests.post(args.controller_url + "/list_models") - models = ret.json()["models"] - models.sort(key=lambda x: priority.get(x, x)) - logger.info(f"Models: {models}") - return models - - -get_window_url_params = """ -function() { - const params = new URLSearchParams(window.location.search); - url_params = Object.fromEntries(params); - console.log(url_params); - return url_params; - } -""" - - -def load_demo(url_params, request: gr.Request): - logger.info(f"load_demo. ip: {request.client.host}. params: {url_params}") - - dropdown_update = gr.Dropdown.update(visible=True) - if "model" in url_params: - model = url_params["model"] - if model in models: - dropdown_update = gr.Dropdown.update( - value=model, visible=True) - - state = default_conversation.copy() - return state, dropdown_update - - -def load_demo_refresh_model_list(request: gr.Request): - logger.info(f"load_demo. 
ip: {request.client.host}") - models = get_model_list() - state = default_conversation.copy() - dropdown_update = gr.Dropdown.update( - choices=models, - value=models[0] if len(models) > 0 else "" - ) - return state, dropdown_update - - -def vote_last_response(state, vote_type, model_selector, request: gr.Request): - with open(get_conv_log_filename(), "a") as fout: - data = { - "tstamp": round(time.time(), 4), - "type": vote_type, - "model": model_selector, - "state": state.dict(), - "ip": request.client.host, - } - fout.write(json.dumps(data) + "\n") - - -def upvote_last_response(state, model_selector, request: gr.Request): - logger.info(f"upvote. ip: {request.client.host}") - vote_last_response(state, "upvote", model_selector, request) - return ("",) + (disable_btn,) * 3 - - -def downvote_last_response(state, model_selector, request: gr.Request): - logger.info(f"downvote. ip: {request.client.host}") - vote_last_response(state, "downvote", model_selector, request) - return ("",) + (disable_btn,) * 3 - - -def flag_last_response(state, model_selector, request: gr.Request): - logger.info(f"flag. ip: {request.client.host}") - vote_last_response(state, "flag", model_selector, request) - return ("",) + (disable_btn,) * 3 - - -def regenerate(state, image_process_mode, request: gr.Request): - logger.info(f"regenerate. ip: {request.client.host}") - state.messages[-1][-1] = None - prev_human_msg = state.messages[-2] - if type(prev_human_msg[1]) in (tuple, list): - prev_human_msg[1] = (*prev_human_msg[1][:2], image_process_mode) - state.skip_next = False - return (state, state.to_gradio_chatbot(), "", None) + (disable_btn,) * 5 - - -def clear_history(request: gr.Request): - logger.info(f"clear_history. ip: {request.client.host}") - state = default_conversation.copy() - return (state, state.to_gradio_chatbot(), "", None) + (disable_btn,) * 5 - - -def add_text(state, text, image, image_process_mode, request: gr.Request): - logger.info(f"add_text. ip: {request.client.host}. len: {len(text)}") - if len(text) <= 0 and image is None: - state.skip_next = True - return (state, state.to_gradio_chatbot(), "", None) + (no_change_btn,) * 5 - if args.moderate: - flagged = violates_moderation(text) - if flagged: - state.skip_next = True - return (state, state.to_gradio_chatbot(), moderation_msg, None) + ( - no_change_btn,) * 5 - - text = text[:1536] # Hard cut-off - if image is not None: - text = text[:1200] # Hard cut-off for images - if '<|image|>' not in text: - # text = text + '<|image|>' - text = '<|image|>' + text - text = (text, image, image_process_mode) - if len(state.get_images(return_pil=True)) > 0: - state = default_conversation.copy() - state.append_message(state.roles[0], text) - state.append_message(state.roles[1], None) - state.skip_next = False - return (state, state.to_gradio_chatbot(), "", None) + (disable_btn,) * 5 - - -def http_bot(state, model_selector, temperature, top_p, max_new_tokens, request: gr.Request): - logger.info(f"http_bot. 
ip: {request.client.host}") - start_tstamp = time.time() - model_name = model_selector - - if state.skip_next: - # This generate call is skipped due to invalid inputs - yield (state, state.to_gradio_chatbot()) + (no_change_btn,) * 5 - return - - if len(state.messages) == state.offset + 2: - # First round of conversation - template_name = "mplug_owl2" - new_state = conv_templates[template_name].copy() - new_state.append_message(new_state.roles[0], state.messages[-2][1]) - new_state.append_message(new_state.roles[1], None) - state = new_state - - # Query worker address - controller_url = args.controller_url - ret = requests.post(controller_url + "/get_worker_address", - json={"model": model_name}) - worker_addr = ret.json()["address"] - logger.info(f"model_name: {model_name}, worker_addr: {worker_addr}") - - # No available worker - if worker_addr == "": - state.messages[-1][-1] = server_error_msg - yield (state, state.to_gradio_chatbot(), disable_btn, disable_btn, disable_btn, enable_btn, enable_btn) - return - - # Construct prompt - prompt = state.get_prompt() - - all_images = state.get_images(return_pil=True) - all_image_hash = [hashlib.md5(image.tobytes()).hexdigest() for image in all_images] - for image, hash in zip(all_images, all_image_hash): - t = datetime.datetime.now() - filename = os.path.join(LOGDIR, "serve_images", f"{t.year}-{t.month:02d}-{t.day:02d}", f"{hash}.jpg") - if not os.path.isfile(filename): - os.makedirs(os.path.dirname(filename), exist_ok=True) - image.save(filename) - - # Make requests - pload = { - "model": model_name, - "prompt": prompt, - "temperature": float(temperature), - "top_p": float(top_p), - "max_new_tokens": min(int(max_new_tokens), 1536), - "stop": state.sep if state.sep_style in [SeparatorStyle.SINGLE, SeparatorStyle.MPT] else state.sep2, - "images": f'List of {len(state.get_images())} images: {all_image_hash}', - } - logger.info(f"==== request ====\n{pload}") - - pload['images'] = state.get_images() - - state.messages[-1][-1] = "▌" - yield (state, state.to_gradio_chatbot()) + (disable_btn,) * 5 - - try: - # Stream output - response = requests.post(worker_addr + "/worker_generate_stream", - headers=headers, json=pload, stream=True, timeout=10) - for chunk in response.iter_lines(decode_unicode=False, delimiter=b"\0"): - if chunk: - data = json.loads(chunk.decode()) - if data["error_code"] == 0: - output = data["text"][len(prompt):].strip() - state.messages[-1][-1] = output + "▌" - yield (state, state.to_gradio_chatbot()) + (disable_btn,) * 5 - else: - output = data["text"] + f" (error_code: {data['error_code']})" - state.messages[-1][-1] = output - yield (state, state.to_gradio_chatbot()) + (disable_btn, disable_btn, disable_btn, enable_btn, enable_btn) - return - time.sleep(0.03) - except requests.exceptions.RequestException as e: - state.messages[-1][-1] = server_error_msg - yield (state, state.to_gradio_chatbot()) + (disable_btn, disable_btn, disable_btn, enable_btn, enable_btn) - return - - state.messages[-1][-1] = state.messages[-1][-1][:-1] - yield (state, state.to_gradio_chatbot()) + (enable_btn,) * 5 - - finish_tstamp = time.time() - logger.info(f"{output}") - - with open(get_conv_log_filename(), "a") as fout: - data = { - "tstamp": round(finish_tstamp, 4), - "type": "chat", - "model": model_name, - "start": round(start_tstamp, 4), - "finish": round(start_tstamp, 4), - "state": state.dict(), - "images": all_image_hash, - "ip": request.client.host, - } - fout.write(json.dumps(data) + "\n") - - -title_markdown = (""" -

mPLUG-Owl

- -

mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration

- -
 If you like our project, please give us a star ✨ on GitHub for the latest updates.
- -
-
- - - -
-
- -""") - - -tos_markdown = (""" -### Terms of use -By using this service, users are required to agree to the following terms: -The service is a research preview intended for non-commercial use only. It only provides limited safety measures and may generate offensive content. It must not be used for any illegal, harmful, violent, racist, or sexual purposes. The service may collect user dialogue data for future research. -Please click the "Flag" button if you get any inappropriate answer! We will collect those to keep improving our moderator. -For an optimal experience, please use desktop computers for this demo, as mobile devices may compromise its quality. -""") - - -learn_more_markdown = (""" -### License -The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation. -""") - -block_css = """ - -#buttons button { - min-width: min(120px,100%); -} - -""" - -def build_demo(embed_mode): - textbox = gr.Textbox(show_label=False, placeholder="Enter text and press ENTER", container=False) - with gr.Blocks(title="mPLUG-Owl2", theme=gr.themes.Default(), css=block_css) as demo: - state = gr.State() - - if not embed_mode: - gr.Markdown(title_markdown) - - with gr.Row(): - with gr.Column(scale=3): - with gr.Row(elem_id="model_selector_row"): - model_selector = gr.Dropdown( - choices=models, - value=models[0] if len(models) > 0 else "", - interactive=True, - show_label=False, - container=False) - - imagebox = gr.Image(type="pil") - image_process_mode = gr.Radio( - ["Crop", "Resize", "Pad", "Default"], - value="Default", - label="Preprocess for non-square image", visible=False) - - cur_dir = os.path.dirname(os.path.abspath(__file__)) - gr.Examples(examples=[ - [f"{cur_dir}/examples/extreme_ironing.jpg", "What is unusual about this image?"], - [f"{cur_dir}/examples/Rebecca_(1939_poster)_Small.jpeg", "What is the name of the movie in the poster?"], - ], inputs=[imagebox, textbox]) - - with gr.Accordion("Parameters", open=True) as parameter_row: - temperature = gr.Slider(minimum=0.0, maximum=1.0, value=0.2, step=0.1, interactive=True, label="Temperature",) - top_p = gr.Slider(minimum=0.0, maximum=1.0, value=0.7, step=0.1, interactive=True, label="Top P",) - max_output_tokens = gr.Slider(minimum=0, maximum=1024, value=512, step=64, interactive=True, label="Max output tokens",) - - with gr.Column(scale=8): - chatbot = gr.Chatbot(elem_id="Chatbot", label="mPLUG-Owl2 Chatbot", height=600) - with gr.Row(): - with gr.Column(scale=8): - textbox.render() - with gr.Column(scale=1, min_width=50): - submit_btn = gr.Button(value="Send", variant="primary") - with gr.Row(elem_id="buttons") as button_row: - upvote_btn = gr.Button(value="👍 Upvote", interactive=False) - downvote_btn = gr.Button(value="👎 Downvote", interactive=False) - flag_btn = gr.Button(value="⚠️ Flag", interactive=False) - #stop_btn = gr.Button(value="⏹️ Stop Generation", interactive=False) - regenerate_btn = gr.Button(value="🔄 Regenerate", interactive=False) - clear_btn = gr.Button(value="🗑️ Clear", interactive=False) - - if not embed_mode: - gr.Markdown(tos_markdown) - gr.Markdown(learn_more_markdown) - url_params = gr.JSON(visible=False) - - # Register 
listeners - btn_list = [upvote_btn, downvote_btn, flag_btn, regenerate_btn, clear_btn] - upvote_btn.click( - upvote_last_response, - [state, model_selector], - [textbox, upvote_btn, downvote_btn, flag_btn], - queue=False - ) - downvote_btn.click( - downvote_last_response, - [state, model_selector], - [textbox, upvote_btn, downvote_btn, flag_btn], - queue=False - ) - flag_btn.click( - flag_last_response, - [state, model_selector], - [textbox, upvote_btn, downvote_btn, flag_btn], - queue=False - ) - - regenerate_btn.click( - regenerate, - [state, image_process_mode], - [state, chatbot, textbox, imagebox] + btn_list, - queue=False - ).then( - http_bot, - [state, model_selector, temperature, top_p, max_output_tokens], - [state, chatbot] + btn_list - ) - - clear_btn.click( - clear_history, - None, - [state, chatbot, textbox, imagebox] + btn_list, - queue=False - ) - - textbox.submit( - add_text, - [state, textbox, imagebox, image_process_mode], - [state, chatbot, textbox, imagebox] + btn_list, - queue=False - ).then( - http_bot, - [state, model_selector, temperature, top_p, max_output_tokens], - [state, chatbot] + btn_list - ) - - submit_btn.click( - add_text, - [state, textbox, imagebox, image_process_mode], - [state, chatbot, textbox, imagebox] + btn_list, - queue=False - ).then( - http_bot, - [state, model_selector, temperature, top_p, max_output_tokens], - [state, chatbot] + btn_list - ) - - if args.model_list_mode == "once": - demo.load( - load_demo, - [url_params], - [state, model_selector], - _js=get_window_url_params, - queue=False - ) - elif args.model_list_mode == "reload": - demo.load( - load_demo_refresh_model_list, - None, - [state, model_selector], - queue=False - ) - else: - raise ValueError(f"Unknown model list mode: {args.model_list_mode}") - - return demo - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--host", type=str, default="0.0.0.0") - parser.add_argument("--port", type=int) - parser.add_argument("--controller-url", type=str, default="http://localhost:21001") - parser.add_argument("--concurrency-count", type=int, default=10) - parser.add_argument("--model-list-mode", type=str, default="once", - choices=["once", "reload"]) - parser.add_argument("--share", action="store_true") - parser.add_argument("--moderate", action="store_true") - parser.add_argument("--embed", action="store_true") - args = parser.parse_args() - logger.info(f"args: {args}") - - models = get_model_list() - - logger.info(args) - demo = build_demo(args.embed) - demo.queue( - concurrency_count=args.concurrency_count, - api_open=False - ).launch( - server_name=args.host, - server_port=args.port, - share=False - ) \ No newline at end of file diff --git a/spaces/MUmairAB/BreastCancerDetector-app/README.md b/spaces/MUmairAB/BreastCancerDetector-app/README.md deleted file mode 100644 index ea743fda49f003ca06374b155456becfc44b2d56..0000000000000000000000000000000000000000 --- a/spaces/MUmairAB/BreastCancerDetector-app/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: CanDetect -emoji: 🌖 -colorFrom: purple -colorTo: green -sdk: gradio -python_version: 3.10.11 -pip_version: 23.1.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/McClane-Lee/fnlp-moss-moon-003-base/README.md b/spaces/McClane-Lee/fnlp-moss-moon-003-base/README.md deleted file mode 100644 index cafdbe181027e438c70662716ddc4496fabac638..0000000000000000000000000000000000000000 --- 
a/spaces/McClane-Lee/fnlp-moss-moon-003-base/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Fnlp Moss Moon 003 Base -emoji: 👀 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MetaWabbit/Auto-GPT/CONTRIBUTING.md b/spaces/MetaWabbit/Auto-GPT/CONTRIBUTING.md deleted file mode 100644 index 79169a0c1951853303f73ffa1fddb3518685606a..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/CONTRIBUTING.md +++ /dev/null @@ -1,105 +0,0 @@ -# Contributing to ProjectName - -First of all, thank you for considering contributing to our project! We appreciate your time and effort, and we value any contribution, whether it's reporting a bug, suggesting a new feature, or submitting a pull request. - -This document provides guidelines and best practices to help you contribute effectively. - -## Table of Contents - -- [Code of Conduct](#code-of-conduct) -- [Getting Started](#getting-started) -- [How to Contribute](#how-to-contribute) - - [Reporting Bugs](#reporting-bugs) - - [Suggesting Enhancements](#suggesting-enhancements) - - [Submitting Pull Requests](#submitting-pull-requests) -- [Style Guidelines](#style-guidelines) - - [Code Formatting](#code-formatting) - - [Pre-Commit Hooks](#pre-commit-hooks) - -## Code of Conduct - -By participating in this project, you agree to abide by our [Code of Conduct](CODE_OF_CONDUCT.md). Please read it to understand the expectations we have for everyone who contributes to this project. - -## 📢 A Quick Word -Right now we will not be accepting any Contributions that add non-essential commands to Auto-GPT. - -However, you absolutely can still add these commands to Auto-GPT in the form of plugins. Please check out this [template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template). -> ⚠️ Plugin support is expected to ship within the week. You can follow PR #757 for more updates! - -## Getting Started - -To start contributing, follow these steps: - -1. Fork the repository and clone your fork. -2. Create a new branch for your changes (use a descriptive name, such as `fix-bug-123` or `add-new-feature`). -3. Make your changes in the new branch. -4. Test your changes thoroughly. -5. Commit and push your changes to your fork. -6. Create a pull request following the guidelines in the [Submitting Pull Requests](#submitting-pull-requests) section. - -## How to Contribute - -### Reporting Bugs - -If you find a bug in the project, please create an issue on GitHub with the following information: - -- A clear, descriptive title for the issue. -- A description of the problem, including steps to reproduce the issue. -- Any relevant logs, screenshots, or other supporting information. - -### Suggesting Enhancements - -If you have an idea for a new feature or improvement, please create an issue on GitHub with the following information: - -- A clear, descriptive title for the issue. -- A detailed description of the proposed enhancement, including any benefits and potential drawbacks. -- Any relevant examples, mockups, or supporting information. - -### Submitting Pull Requests - -When submitting a pull request, please ensure that your changes meet the following criteria: - -- Your pull request should be atomic and focus on a single change. -- Your pull request should include tests for your change. -- You should have thoroughly tested your changes with multiple different prompts. 
-- You should have considered potential risks and mitigations for your changes. -- You should have documented your changes clearly and comprehensively. -- You should not include any unrelated or "extra" small tweaks or changes. - -## Style Guidelines - -### Code Formatting - -We use the `black` code formatter to maintain a consistent coding style across the project. Please ensure that your code is formatted using `black` before submitting a pull request. You can install `black` using `pip`: - -```bash -pip install black -``` - -To format your code, run the following command in the project's root directory: - -```bash -black . -``` -### Pre-Commit Hooks -We use pre-commit hooks to ensure that code formatting and other checks are performed automatically before each commit. To set up pre-commit hooks for this project, follow these steps: - -Install the pre-commit package using pip: -```bash -pip install pre-commit -``` - -Run the following command in the project's root directory to install the pre-commit hooks: -```bash -pre-commit install -``` - -Now, the pre-commit hooks will run automatically before each commit, checking your code formatting and other requirements. - -If you encounter any issues or have questions, feel free to reach out to the maintainers or open a new issue on GitHub. We're here to help and appreciate your efforts to contribute to the project. - -Happy coding, and once again, thank you for your contributions! - -Maintainers will look at PR that have no merge conflicts when deciding what to add to the project. Make sure your PR shows up here: - -https://github.com/Torantulino/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-is%3Aconflict+ \ No newline at end of file diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/workspace.py b/spaces/MetaWabbit/Auto-GPT/autogpt/workspace.py deleted file mode 100644 index 6fb0e3113eb2c1338edf7f86c6e162fc27c61e50..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/autogpt/workspace.py +++ /dev/null @@ -1,47 +0,0 @@ -from __future__ import annotations - -import os -from pathlib import Path - -from autogpt.config import Config - -CFG = Config() - -# Set a dedicated folder for file I/O -WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace" - -# Create the directory if it doesn't exist -if not os.path.exists(WORKSPACE_PATH): - os.makedirs(WORKSPACE_PATH) - - -def path_in_workspace(relative_path: str | Path) -> Path: - """Get full path for item in workspace - - Parameters: - relative_path (str | Path): Path to translate into the workspace - - Returns: - Path: Absolute path for the given path in the workspace - """ - return safe_path_join(WORKSPACE_PATH, relative_path) - - -def safe_path_join(base: Path, *paths: str | Path) -> Path: - """Join one or more path components, asserting the resulting path is within the workspace. - - Args: - base (Path): The base path - *paths (str): The paths to join to the base path - - Returns: - Path: The joined path - """ - joined_path = base.joinpath(*paths).resolve() - - if CFG.restrict_to_workspace and not joined_path.is_relative_to(base): - raise ValueError( - f"Attempted to access path '{joined_path}' outside of workspace '{base}'." 
- ) - - return joined_path diff --git a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/gl/framework.py b/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/gl/framework.py deleted file mode 100644 index a4375b659a91267d3db9278f72bd1f0b030a4655..0000000000000000000000000000000000000000 --- a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/gl/framework.py +++ /dev/null @@ -1,90 +0,0 @@ -# Mario Rosasco, 2016 -# adapted from framework.cpp, Copyright (C) 2010-2012 by Jason L. McKesson -# This file is licensed under the MIT License. -# -# NB: Unlike in the framework.cpp organization, the main loop is contained -# in the tutorial files, not in this framework file. Additionally, a copy of -# this module file must exist in the same directory as the tutorial files -# to be imported properly. - -import os -from OpenGL.GL import * - -# Function that creates and compiles shaders according to the given type (a GL enum value) and -# shader program (a file containing a GLSL program). -def loadShader(shaderType, shaderFile): - # check if file exists, get full path name - strFilename = findFileOrThrow(shaderFile) - shaderData = None - with open(strFilename, 'r') as f: - shaderData = f.read() - - shader = glCreateShader(shaderType) - glShaderSource(shader, shaderData) # note that this is a simpler function call than in C - - # This shader compilation is more explicit than the one used in - # framework.cpp, which relies on a glutil wrapper function. - # This is made explicit here mainly to decrease dependence on pyOpenGL - # utilities and wrappers, which docs caution may change in future versions. - glCompileShader(shader) - - status = glGetShaderiv(shader, GL_COMPILE_STATUS) - if status == GL_FALSE: - # Note that getting the error log is much simpler in Python than in C/C++ - # and does not require explicit handling of the string buffer - strInfoLog = glGetShaderInfoLog(shader) - strShaderType = "" - if shaderType is GL_VERTEX_SHADER: - strShaderType = "vertex" - elif shaderType is GL_GEOMETRY_SHADER: - strShaderType = "geometry" - elif shaderType is GL_FRAGMENT_SHADER: - strShaderType = "fragment" - - print("Compilation failure for " + strShaderType + " shader:\n" + str(strInfoLog)) - - return shader - - -# Function that accepts a list of shaders, compiles them, and returns a handle to the compiled program -def createProgram(shaderList): - program = glCreateProgram() - - for shader in shaderList: - glAttachShader(program, shader) - - glLinkProgram(program) - - status = glGetProgramiv(program, GL_LINK_STATUS) - if status == GL_FALSE: - # Note that getting the error log is much simpler in Python than in C/C++ - # and does not require explicit handling of the string buffer - strInfoLog = glGetProgramInfoLog(program) - print("Linker failure: \n" + str(strInfoLog)) - - for shader in shaderList: - glDetachShader(program, shader) - - return program - - -# Helper function to locate and open the target file (passed in as a string). -# Returns the full path to the file as a string. -def findFileOrThrow(strBasename): - # Keep constant names in C-style convention, for readability - # when comparing to C(/C++) code. 
- if os.path.isfile(strBasename): - return strBasename - - LOCAL_FILE_DIR = "data" + os.sep - GLOBAL_FILE_DIR = os.path.dirname(os.path.abspath(__file__)) + os.sep + "data" + os.sep - - strFilename = LOCAL_FILE_DIR + strBasename - if os.path.isfile(strFilename): - return strFilename - - strFilename = GLOBAL_FILE_DIR + strBasename - if os.path.isfile(strFilename): - return strFilename - - raise IOError('Could not find target file ' + strBasename) \ No newline at end of file diff --git a/spaces/Moxxie-nolastname/Not-Moxxie-Proxy/README.md b/spaces/Moxxie-nolastname/Not-Moxxie-Proxy/README.md deleted file mode 100644 index 4c12528c1d1f3edf8118af005abf159e7de4e631..0000000000000000000000000000000000000000 --- a/spaces/Moxxie-nolastname/Not-Moxxie-Proxy/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Moxxie Proxy -emoji: 🐨 -colorFrom: blue -colorTo: purple -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MrD05/text-generation-webui-space/extensions/google_translate/script.py b/spaces/MrD05/text-generation-webui-space/extensions/google_translate/script.py deleted file mode 100644 index 68bc54b293086bed1a070a310d276060ee939d44..0000000000000000000000000000000000000000 --- a/spaces/MrD05/text-generation-webui-space/extensions/google_translate/script.py +++ /dev/null @@ -1,42 +0,0 @@ -import gradio as gr -from deep_translator import GoogleTranslator - -params = { - "language string": "ja", -} - -language_codes = {'Afrikaans': 'af', 'Albanian': 'sq', 'Amharic': 'am', 'Arabic': 'ar', 'Armenian': 'hy', 'Azerbaijani': 'az', 'Basque': 'eu', 'Belarusian': 'be', 'Bengali': 'bn', 'Bosnian': 'bs', 'Bulgarian': 'bg', 'Catalan': 'ca', 'Cebuano': 'ceb', 'Chinese (Simplified)': 'zh-CN', 'Chinese (Traditional)': 'zh-TW', 'Corsican': 'co', 'Croatian': 'hr', 'Czech': 'cs', 'Danish': 'da', 'Dutch': 'nl', 'English': 'en', 'Esperanto': 'eo', 'Estonian': 'et', 'Finnish': 'fi', 'French': 'fr', 'Frisian': 'fy', 'Galician': 'gl', 'Georgian': 'ka', 'German': 'de', 'Greek': 'el', 'Gujarati': 'gu', 'Haitian Creole': 'ht', 'Hausa': 'ha', 'Hawaiian': 'haw', 'Hebrew': 'iw', 'Hindi': 'hi', 'Hmong': 'hmn', 'Hungarian': 'hu', 'Icelandic': 'is', 'Igbo': 'ig', 'Indonesian': 'id', 'Irish': 'ga', 'Italian': 'it', 'Japanese': 'ja', 'Javanese': 'jw', 'Kannada': 'kn', 'Kazakh': 'kk', 'Khmer': 'km', 'Korean': 'ko', 'Kurdish': 'ku', 'Kyrgyz': 'ky', 'Lao': 'lo', 'Latin': 'la', 'Latvian': 'lv', 'Lithuanian': 'lt', 'Luxembourgish': 'lb', 'Macedonian': 'mk', 'Malagasy': 'mg', 'Malay': 'ms', 'Malayalam': 'ml', 'Maltese': 'mt', 'Maori': 'mi', 'Marathi': 'mr', 'Mongolian': 'mn', 'Myanmar (Burmese)': 'my', 'Nepali': 'ne', 'Norwegian': 'no', 'Nyanja (Chichewa)': 'ny', 'Pashto': 'ps', 'Persian': 'fa', 'Polish': 'pl', 'Portuguese (Portugal, Brazil)': 'pt', 'Punjabi': 'pa', 'Romanian': 'ro', 'Russian': 'ru', 'Samoan': 'sm', 'Scots Gaelic': 'gd', 'Serbian': 'sr', 'Sesotho': 'st', 'Shona': 'sn', 'Sindhi': 'sd', 'Sinhala (Sinhalese)': 'si', 'Slovak': 'sk', 'Slovenian': 'sl', 'Somali': 'so', 'Spanish': 'es', 'Sundanese': 'su', 'Swahili': 'sw', 'Swedish': 'sv', 'Tagalog (Filipino)': 'tl', 'Tajik': 'tg', 'Tamil': 'ta', 'Telugu': 'te', 'Thai': 'th', 'Turkish': 'tr', 'Ukrainian': 'uk', 'Urdu': 'ur', 'Uzbek': 'uz', 'Vietnamese': 'vi', 'Welsh': 'cy', 'Xhosa': 'xh', 'Yiddish': 'yi', 'Yoruba': 'yo', 'Zulu': 'zu'} - -def input_modifier(string): - """ - This function is applied to your text inputs before - they are fed into the model. 
- """ - - return GoogleTranslator(source=params['language string'], target='en').translate(string) - -def output_modifier(string): - """ - This function is applied to the model outputs. - """ - - return GoogleTranslator(source='en', target=params['language string']).translate(string) - -def bot_prefix_modifier(string): - """ - This function is only applied in chat mode. It modifies - the prefix text for the Bot and can be used to bias its - behavior. - """ - - return string - -def ui(): - # Finding the language name from the language code to use as the default value - language_name = list(language_codes.keys())[list(language_codes.values()).index(params['language string'])] - - # Gradio elements - language = gr.Dropdown(value=language_name, choices=[k for k in language_codes], label='Language') - - # Event functions to update the parameters in the backend - language.change(lambda x: params.update({"language string": language_codes[x]}), language, None) diff --git a/spaces/MrTitanicus/rvc-models/app.py b/spaces/MrTitanicus/rvc-models/app.py deleted file mode 100644 index 8f1dd8103616f47920fdd5a43d91e847250a3833..0000000000000000000000000000000000000000 --- a/spaces/MrTitanicus/rvc-models/app.py +++ /dev/null @@ -1,188 +0,0 @@ -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if args.files: - audio, sr = librosa.load(input_audio, sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. 
If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--files", action="store_true", default=False, help="load audio from path") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = info['title'] - author = info.get("author", None) - cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "#
RVC Models (Outdated)\n" - "##
 The input audio should be a clean, pure voice without background music.\n"
-            "### 
Updated Repository: [NEW RVC Models](https://huggingface.co/spaces/ArkanDash/rvc-models-new).\n" - "####
Recommended to use the Google Colab version for more feature.\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=ArkanDash.Rvc-Models)\n\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1hx6kKvIuv5XNY1Gai2PEuZhpO5z6xpVh?usp=sharing)\n\n" - "[![Original Repo](https://badgen.net/badge/icon/github?icon=github&label=Original%20Repo)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)" - ) - with gr.Tabs(): - for (name, title, author, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
' - f'
{title}
\n'+ - (f'
Model author: {author}
' if author else "")+ - (f'' if cover else "")+ - '
' - ) - with gr.Row(): - with gr.Column(): - if args.files: - vc_input = gr.Textbox(label="Input audio path") - else: - vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share) \ No newline at end of file diff --git a/spaces/NATSpeech/DiffSpeech/data_gen/tts/wav_processors/__init__.py b/spaces/NATSpeech/DiffSpeech/data_gen/tts/wav_processors/__init__.py deleted file mode 100644 index 4be97b377dcb95a0e6bceb876ac0ce93c8290249..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/DiffSpeech/data_gen/tts/wav_processors/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from . import base_processor -from . 
import common_processors diff --git a/spaces/NATSpeech/PortaSpeech/modules/commons/normalizing_flow/res_flow.py b/spaces/NATSpeech/PortaSpeech/modules/commons/normalizing_flow/res_flow.py deleted file mode 100644 index d0d13285704543ec28fe37d82346011240bdcaf8..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/modules/commons/normalizing_flow/res_flow.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch import nn -from modules.commons.conv import ConditionalConvBlocks -from modules.commons.wavenet import WN - - -class FlipLayer(nn.Module): - def forward(self, x, nonpadding, cond=None, reverse=False): - x = torch.flip(x, [1]) - return x - - -class CouplingLayer(nn.Module): - def __init__(self, c_in, hidden_size, kernel_size, n_layers, p_dropout=0, c_in_g=0, nn_type='wn'): - super().__init__() - self.channels = c_in - self.hidden_size = hidden_size - self.kernel_size = kernel_size - self.n_layers = n_layers - self.c_half = c_in // 2 - - self.pre = nn.Conv1d(self.c_half, hidden_size, 1) - if nn_type == 'wn': - self.enc = WN(hidden_size, kernel_size, 1, n_layers, p_dropout=p_dropout, - c_cond=c_in_g) - elif nn_type == 'conv': - self.enc = ConditionalConvBlocks( - hidden_size, c_in_g, hidden_size, None, kernel_size, - layers_in_block=1, is_BTC=False, num_layers=n_layers) - self.post = nn.Conv1d(hidden_size, self.c_half, 1) - - def forward(self, x, nonpadding, cond=None, reverse=False): - x0, x1 = x[:, :self.c_half], x[:, self.c_half:] - x_ = self.pre(x0) * nonpadding - x_ = self.enc(x_, nonpadding=nonpadding, cond=cond) - m = self.post(x_) - x1 = m + x1 if not reverse else x1 - m - x = torch.cat([x0, x1], 1) - return x * nonpadding - - -class ResFlow(nn.Module): - def __init__(self, - c_in, - hidden_size, - kernel_size, - n_flow_layers, - n_flow_steps=4, - c_cond=0, - nn_type='wn'): - super().__init__() - self.flows = nn.ModuleList() - for i in range(n_flow_steps): - self.flows.append( - CouplingLayer(c_in, hidden_size, kernel_size, n_flow_layers, c_in_g=c_cond, nn_type=nn_type)) - self.flows.append(FlipLayer()) - - def forward(self, x, nonpadding, cond=None, reverse=False): - for flow in (self.flows if not reverse else reversed(self.flows)): - x = flow(x, nonpadding, cond=cond, reverse=reverse) - return x diff --git a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/model.py b/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/model.py deleted file mode 100644 index c633c5c39a0463c026cc944218cd2cc0ea7ebfb0..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/model.py +++ /dev/null @@ -1,583 +0,0 @@ -# Copyright 2017 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Functions to build the Attention OCR model. - -Usage example: - ocr_model = model.Model(num_char_classes, seq_length, num_of_views) - - data = ... 
# create namedtuple InputEndpoints - endpoints = model.create_base(data.images, data.labels_one_hot) - # endpoints.predicted_chars is a tensor with predicted character codes. - total_loss = model.create_loss(data, endpoints) -""" -import sys -import collections -import logging -import tensorflow as tf -from tensorflow.contrib import slim -from tensorflow.contrib.slim.nets import inception - -import metrics -import sequence_layers -import utils - -OutputEndpoints = collections.namedtuple('OutputEndpoints', [ - 'chars_logit', 'chars_log_prob', 'predicted_chars', 'predicted_scores', - 'predicted_text' -]) - -# TODO(gorban): replace with tf.HParams when it is released. -ModelParams = collections.namedtuple('ModelParams', [ - 'num_char_classes', 'seq_length', 'num_views', 'null_code' -]) - -ConvTowerParams = collections.namedtuple('ConvTowerParams', ['final_endpoint']) - -SequenceLogitsParams = collections.namedtuple('SequenceLogitsParams', [ - 'use_attention', 'use_autoregression', 'num_lstm_units', 'weight_decay', - 'lstm_state_clip_value' -]) - -SequenceLossParams = collections.namedtuple('SequenceLossParams', [ - 'label_smoothing', 'ignore_nulls', 'average_across_timesteps' -]) - -EncodeCoordinatesParams = collections.namedtuple('EncodeCoordinatesParams', [ - 'enabled' -]) - - -def _dict_to_array(id_to_char, default_character): - num_char_classes = max(id_to_char.keys()) + 1 - array = [default_character] * num_char_classes - for k, v in id_to_char.items(): - array[k] = v - return array - - -class CharsetMapper(object): - """A simple class to map tensor ids into strings. - - It works only when the character set is 1:1 mapping between individual - characters and individual ids. - - Make sure you call tf.tables_initializer().run() as part of the init op. - """ - - def __init__(self, charset, default_character='?'): - """Creates a lookup table. - - Args: - charset: a dictionary with id-to-character mapping. - """ - mapping_strings = tf.constant(_dict_to_array(charset, default_character)) - self.table = tf.contrib.lookup.index_to_string_table_from_tensor( - mapping=mapping_strings, default_value=default_character) - - def get_text(self, ids): - """Returns a string corresponding to a sequence of character ids. - - Args: - ids: a tensor with shape [batch_size, max_sequence_length] - """ - return tf.reduce_join( - self.table.lookup(tf.to_int64(ids)), reduction_indices=1) - - -def get_softmax_loss_fn(label_smoothing): - """Returns sparse or dense loss function depending on the label_smoothing. - - Args: - label_smoothing: weight for label smoothing - - Returns: - a function which takes labels and predictions as arguments and returns - a softmax loss for the selected type of labels (sparse or dense). - """ - if label_smoothing > 0: - - def loss_fn(labels, logits): - return (tf.nn.softmax_cross_entropy_with_logits( - logits=logits, labels=labels)) - else: - - def loss_fn(labels, logits): - return tf.nn.sparse_softmax_cross_entropy_with_logits( - logits=logits, labels=labels) - - return loss_fn - - -class Model(object): - """Class to create the Attention OCR Model.""" - - def __init__(self, - num_char_classes, - seq_length, - num_views, - null_code, - mparams=None, - charset=None): - """Initialized model parameters. - - Args: - num_char_classes: size of character set. - seq_length: number of characters in a sequence. - num_views: Number of views (conv towers) to use. - null_code: A character code corresponding to a character which - indicates end of a sequence. 
- mparams: a dictionary with hyper parameters for methods, keys - - function names, values - corresponding namedtuples. - charset: an optional dictionary with a mapping between character ids and - utf8 strings. If specified the OutputEndpoints.predicted_text will - utf8 encoded strings corresponding to the character ids returned by - OutputEndpoints.predicted_chars (by default the predicted_text contains - an empty vector). - NOTE: Make sure you call tf.tables_initializer().run() if the charset - specified. - """ - super(Model, self).__init__() - self._params = ModelParams( - num_char_classes=num_char_classes, - seq_length=seq_length, - num_views=num_views, - null_code=null_code) - self._mparams = self.default_mparams() - if mparams: - self._mparams.update(mparams) - self._charset = charset - - def default_mparams(self): - return { - 'conv_tower_fn': - ConvTowerParams(final_endpoint='Mixed_5d'), - 'sequence_logit_fn': - SequenceLogitsParams( - use_attention=True, - use_autoregression=True, - num_lstm_units=256, - weight_decay=0.00004, - lstm_state_clip_value=10.0), - 'sequence_loss_fn': - SequenceLossParams( - label_smoothing=0.1, - ignore_nulls=True, - average_across_timesteps=False), - 'encode_coordinates_fn': EncodeCoordinatesParams(enabled=False) - } - - def set_mparam(self, function, **kwargs): - self._mparams[function] = self._mparams[function]._replace(**kwargs) - - def conv_tower_fn(self, images, is_training=True, reuse=None): - """Computes convolutional features using the InceptionV3 model. - - Args: - images: A tensor of shape [batch_size, height, width, channels]. - is_training: whether is training or not. - reuse: whether or not the network and its variables should be reused. To - be able to reuse 'scope' must be given. - - Returns: - A tensor of shape [batch_size, OH, OW, N], where OWxOH is resolution of - output feature map and N is number of output features (depends on the - network architecture). - """ - mparams = self._mparams['conv_tower_fn'] - logging.debug('Using final_endpoint=%s', mparams.final_endpoint) - with tf.variable_scope('conv_tower_fn/INCE'): - if reuse: - tf.get_variable_scope().reuse_variables() - with slim.arg_scope(inception.inception_v3_arg_scope()): - with slim.arg_scope([slim.batch_norm, slim.dropout], - is_training=is_training): - net, _ = inception.inception_v3_base( - images, final_endpoint=mparams.final_endpoint) - return net - - def _create_lstm_inputs(self, net): - """Splits an input tensor into a list of tensors (features). - - Args: - net: A feature map of shape [batch_size, num_features, feature_size]. - - Raises: - AssertionError: if num_features is less than seq_length. - - Returns: - A list with seq_length tensors of shape [batch_size, feature_size] - """ - num_features = net.get_shape().dims[1].value - if num_features < self._params.seq_length: - raise AssertionError('Incorrect dimension #1 of input tensor' - ' %d should be bigger than %d (shape=%s)' % - (num_features, self._params.seq_length, - net.get_shape())) - elif num_features > self._params.seq_length: - logging.warning('Ignoring some features: use %d of %d (shape=%s)', - self._params.seq_length, num_features, net.get_shape()) - net = tf.slice(net, [0, 0, 0], [-1, self._params.seq_length, -1]) - - return tf.unstack(net, axis=1) - - def sequence_logit_fn(self, net, labels_one_hot): - mparams = self._mparams['sequence_logit_fn'] - # TODO(gorban): remove /alias suffixes from the scopes. 
- with tf.variable_scope('sequence_logit_fn/SQLR'): - layer_class = sequence_layers.get_layer_class(mparams.use_attention, - mparams.use_autoregression) - layer = layer_class(net, labels_one_hot, self._params, mparams) - return layer.create_logits() - - def max_pool_views(self, nets_list): - """Max pool across all nets in spatial dimensions. - - Args: - nets_list: A list of 4D tensors with identical size. - - Returns: - A tensor with the same size as any input tensors. - """ - batch_size, height, width, num_features = [ - d.value for d in nets_list[0].get_shape().dims - ] - xy_flat_shape = (batch_size, 1, height * width, num_features) - nets_for_merge = [] - with tf.variable_scope('max_pool_views', values=nets_list): - for net in nets_list: - nets_for_merge.append(tf.reshape(net, xy_flat_shape)) - merged_net = tf.concat(nets_for_merge, 1) - net = slim.max_pool2d( - merged_net, kernel_size=[len(nets_list), 1], stride=1) - net = tf.reshape(net, (batch_size, height, width, num_features)) - return net - - def pool_views_fn(self, nets): - """Combines output of multiple convolutional towers into a single tensor. - - It stacks towers one on top another (in height dim) in a 4x1 grid. - The order is arbitrary design choice and shouldn't matter much. - - Args: - nets: list of tensors of shape=[batch_size, height, width, num_features]. - - Returns: - A tensor of shape [batch_size, seq_length, features_size]. - """ - with tf.variable_scope('pool_views_fn/STCK'): - net = tf.concat(nets, 1) - batch_size = net.get_shape().dims[0].value - feature_size = net.get_shape().dims[3].value - return tf.reshape(net, [batch_size, -1, feature_size]) - - def char_predictions(self, chars_logit): - """Returns confidence scores (softmax values) for predicted characters. - - Args: - chars_logit: chars logits, a tensor with shape - [batch_size x seq_length x num_char_classes] - - Returns: - A tuple (ids, log_prob, scores), where: - ids - predicted characters, a int32 tensor with shape - [batch_size x seq_length]; - log_prob - a log probability of all characters, a float tensor with - shape [batch_size, seq_length, num_char_classes]; - scores - corresponding confidence scores for characters, a float - tensor - with shape [batch_size x seq_length]. - """ - log_prob = utils.logits_to_log_prob(chars_logit) - ids = tf.to_int32(tf.argmax(log_prob, axis=2), name='predicted_chars') - mask = tf.cast( - slim.one_hot_encoding(ids, self._params.num_char_classes), tf.bool) - all_scores = tf.nn.softmax(chars_logit) - selected_scores = tf.boolean_mask(all_scores, mask, name='char_scores') - scores = tf.reshape(selected_scores, shape=(-1, self._params.seq_length)) - return ids, log_prob, scores - - def encode_coordinates_fn(self, net): - """Adds one-hot encoding of coordinates to different views in the networks. - - For each "pixel" of a feature map it adds a onehot encoded x and y - coordinates. - - Args: - net: a tensor of shape=[batch_size, height, width, num_features] - - Returns: - a tensor with the same height and width, but altered feature_size. 
- """ - mparams = self._mparams['encode_coordinates_fn'] - if mparams.enabled: - batch_size, h, w, _ = net.shape.as_list() - x, y = tf.meshgrid(tf.range(w), tf.range(h)) - w_loc = slim.one_hot_encoding(x, num_classes=w) - h_loc = slim.one_hot_encoding(y, num_classes=h) - loc = tf.concat([h_loc, w_loc], 2) - loc = tf.tile(tf.expand_dims(loc, 0), [batch_size, 1, 1, 1]) - return tf.concat([net, loc], 3) - else: - return net - - def create_base(self, - images, - labels_one_hot, - scope='AttentionOcr_v1', - reuse=None): - """Creates a base part of the Model (no gradients, losses or summaries). - - Args: - images: A tensor of shape [batch_size, height, width, channels]. - labels_one_hot: Optional (can be None) one-hot encoding for ground truth - labels. If provided the function will create a model for training. - scope: Optional variable_scope. - reuse: whether or not the network and its variables should be reused. To - be able to reuse 'scope' must be given. - - Returns: - A named tuple OutputEndpoints. - """ - logging.debug('images: %s', images) - is_training = labels_one_hot is not None - with tf.variable_scope(scope, reuse=reuse): - views = tf.split( - value=images, num_or_size_splits=self._params.num_views, axis=2) - logging.debug('Views=%d single view: %s', len(views), views[0]) - - nets = [ - self.conv_tower_fn(v, is_training, reuse=(i != 0)) - for i, v in enumerate(views) - ] - logging.debug('Conv tower: %s', nets[0]) - - nets = [self.encode_coordinates_fn(net) for net in nets] - logging.debug('Conv tower w/ encoded coordinates: %s', nets[0]) - - net = self.pool_views_fn(nets) - logging.debug('Pooled views: %s', net) - - chars_logit = self.sequence_logit_fn(net, labels_one_hot) - logging.debug('chars_logit: %s', chars_logit) - - predicted_chars, chars_log_prob, predicted_scores = ( - self.char_predictions(chars_logit)) - if self._charset: - character_mapper = CharsetMapper(self._charset) - predicted_text = character_mapper.get_text(predicted_chars) - else: - predicted_text = tf.constant([]) - return OutputEndpoints( - chars_logit=chars_logit, - chars_log_prob=chars_log_prob, - predicted_chars=predicted_chars, - predicted_scores=predicted_scores, - predicted_text=predicted_text) - - def create_loss(self, data, endpoints): - """Creates all losses required to train the model. - - Args: - data: InputEndpoints namedtuple. - endpoints: Model namedtuple. - - Returns: - Total loss. - """ - # NOTE: the return value of ModelLoss is not used directly for the - # gradient computation because under the hood it calls slim.losses.AddLoss, - # which registers the loss in an internal collection and later returns it - # as part of GetTotalLoss. We need to use total loss because model may have - # multiple losses including regularization losses. - self.sequence_loss_fn(endpoints.chars_logit, data.labels) - total_loss = slim.losses.get_total_loss() - tf.summary.scalar('TotalLoss', total_loss) - return total_loss - - def label_smoothing_regularization(self, chars_labels, weight=0.1): - """Applies a label smoothing regularization. - - Uses the same method as in https://arxiv.org/abs/1512.00567. - - Args: - chars_labels: ground truth ids of charactes, - shape=[batch_size, seq_length]; - weight: label-smoothing regularization weight. - - Returns: - A sensor with the same shape as the input. 
- """ - one_hot_labels = tf.one_hot( - chars_labels, depth=self._params.num_char_classes, axis=-1) - pos_weight = 1.0 - weight - neg_weight = weight / self._params.num_char_classes - return one_hot_labels * pos_weight + neg_weight - - def sequence_loss_fn(self, chars_logits, chars_labels): - """Loss function for char sequence. - - Depending on values of hyper parameters it applies label smoothing and can - also ignore all null chars after the first one. - - Args: - chars_logits: logits for predicted characters, - shape=[batch_size, seq_length, num_char_classes]; - chars_labels: ground truth ids of characters, - shape=[batch_size, seq_length]; - mparams: method hyper parameters. - - Returns: - A Tensor with shape [batch_size] - the log-perplexity for each sequence. - """ - mparams = self._mparams['sequence_loss_fn'] - with tf.variable_scope('sequence_loss_fn/SLF'): - if mparams.label_smoothing > 0: - smoothed_one_hot_labels = self.label_smoothing_regularization( - chars_labels, mparams.label_smoothing) - labels_list = tf.unstack(smoothed_one_hot_labels, axis=1) - else: - # NOTE: in case of sparse softmax we are not using one-hot - # encoding. - labels_list = tf.unstack(chars_labels, axis=1) - - batch_size, seq_length, _ = chars_logits.shape.as_list() - if mparams.ignore_nulls: - weights = tf.ones((batch_size, seq_length), dtype=tf.float32) - else: - # Suppose that reject character is the last in the charset. - reject_char = tf.constant( - self._params.num_char_classes - 1, - shape=(batch_size, seq_length), - dtype=tf.int64) - known_char = tf.not_equal(chars_labels, reject_char) - weights = tf.to_float(known_char) - - logits_list = tf.unstack(chars_logits, axis=1) - weights_list = tf.unstack(weights, axis=1) - loss = tf.contrib.legacy_seq2seq.sequence_loss( - logits_list, - labels_list, - weights_list, - softmax_loss_function=get_softmax_loss_fn(mparams.label_smoothing), - average_across_timesteps=mparams.average_across_timesteps) - tf.losses.add_loss(loss) - return loss - - def create_summaries(self, data, endpoints, charset, is_training): - """Creates all summaries for the model. - - Args: - data: InputEndpoints namedtuple. - endpoints: OutputEndpoints namedtuple. - charset: A dictionary with mapping between character codes and - unicode characters. Use the one provided by a dataset.charset. - is_training: If True will create summary prefixes for training job, - otherwise - for evaluation. - - Returns: - A list of evaluation ops - """ - - def sname(label): - prefix = 'train' if is_training else 'eval' - return '%s/%s' % (prefix, label) - - max_outputs = 4 - # TODO(gorban): uncomment, when tf.summary.text released. 
- # charset_mapper = CharsetMapper(charset) - # pr_text = charset_mapper.get_text( - # endpoints.predicted_chars[:max_outputs,:]) - # tf.summary.text(sname('text/pr'), pr_text) - # gt_text = charset_mapper.get_text(data.labels[:max_outputs,:]) - # tf.summary.text(sname('text/gt'), gt_text) - tf.summary.image(sname('image'), data.images, max_outputs=max_outputs) - - if is_training: - tf.summary.image( - sname('image/orig'), data.images_orig, max_outputs=max_outputs) - for var in tf.trainable_variables(): - tf.summary.histogram(var.op.name, var) - return None - - else: - names_to_values = {} - names_to_updates = {} - - def use_metric(name, value_update_tuple): - names_to_values[name] = value_update_tuple[0] - names_to_updates[name] = value_update_tuple[1] - - use_metric('CharacterAccuracy', - metrics.char_accuracy( - endpoints.predicted_chars, - data.labels, - streaming=True, - rej_char=self._params.null_code)) - # Sequence accuracy computed by cutting sequence at the first null char - use_metric('SequenceAccuracy', - metrics.sequence_accuracy( - endpoints.predicted_chars, - data.labels, - streaming=True, - rej_char=self._params.null_code)) - - for name, value in names_to_values.items(): - summary_name = 'eval/' + name - tf.summary.scalar(summary_name, tf.Print(value, [value], summary_name)) - return list(names_to_updates.values()) - - def create_init_fn_to_restore(self, master_checkpoint, - inception_checkpoint=None): - """Creates an init operations to restore weights from various checkpoints. - - Args: - master_checkpoint: path to a checkpoint which contains all weights for - the whole model. - inception_checkpoint: path to a checkpoint which contains weights for the - inception part only. - - Returns: - a function to run initialization ops. - """ - all_assign_ops = [] - all_feed_dict = {} - - def assign_from_checkpoint(variables, checkpoint): - logging.info('Request to re-store %d weights from %s', - len(variables), checkpoint) - if not variables: - logging.error('Can\'t find any variables to restore.') - sys.exit(1) - assign_op, feed_dict = slim.assign_from_checkpoint(checkpoint, variables) - all_assign_ops.append(assign_op) - all_feed_dict.update(feed_dict) - - logging.info('variables_to_restore:\n%s' % utils.variables_to_restore().keys()) - logging.info('moving_average_variables:\n%s' % [v.op.name for v in tf.moving_average_variables()]) - logging.info('trainable_variables:\n%s' % [v.op.name for v in tf.trainable_variables()]) - if master_checkpoint: - assign_from_checkpoint(utils.variables_to_restore(), master_checkpoint) - - if inception_checkpoint: - variables = utils.variables_to_restore( - 'AttentionOcr_v1/conv_tower_fn/INCE', strip_scope=True) - assign_from_checkpoint(variables, inception_checkpoint) - - def init_assign_fn(sess): - logging.info('Restoring checkpoint(s)') - sess.run(all_assign_ops, all_feed_dict) - - return init_assign_fn diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/audio/data_cfg.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/audio/data_cfg.py deleted file mode 100644 index 95b403ad9c617afb5656131693c92b9cc3befd3b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/audio/data_cfg.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from pathlib import Path -from typing import Dict, Optional - - -class S2TDataConfig(object): - """Wrapper class for data config YAML""" - - def __init__(self, yaml_path: Path): - try: - import yaml - except ImportError: - print("Please install PyYAML: pip install PyYAML") - self.config = {} - if yaml_path.is_file(): - try: - with open(yaml_path) as f: - self.config = yaml.load(f, Loader=yaml.FullLoader) - except Exception as e: - raise Exception( - f"Failed to load config from {yaml_path.as_posix()}: {e}" - ) - else: - raise FileNotFoundError(f"{yaml_path.as_posix()} not found") - self.root = yaml_path.parent - - def _auto_convert_to_abs_path(self, x): - if isinstance(x, str): - if not Path(x).exists() and (self.root / x).exists(): - return (self.root / x).as_posix() - elif isinstance(x, dict): - return {k: self._auto_convert_to_abs_path(v) for k, v in x.items()} - return x - - @property - def vocab_filename(self): - """fairseq vocabulary file under data root""" - return self.config.get("vocab_filename", "dict.txt") - - @property - def speaker_set_filename(self): - """fairseq vocabulary file under data root""" - return self.config.get("speaker_set_filename", None) - - @property - def shuffle(self) -> bool: - """Shuffle dataset samples before batching""" - return self.config.get("shuffle", False) - - @property - def pre_tokenizer(self) -> Dict: - """Pre-tokenizer to apply before subword tokenization. Returning - a dictionary with `tokenizer` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. - Tokenizers are defined in `fairseq.data.encoders.*`""" - tokenizer = self.config.get("pre_tokenizer", {"tokenizer": None}) - return self._auto_convert_to_abs_path(tokenizer) - - @property - def bpe_tokenizer(self) -> Dict: - """Subword tokenizer to apply after pre-tokenization. Returning - a dictionary with `bpe` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. - Tokenizers are defined in `fairseq.data.encoders.*`""" - tokenizer = self.config.get("bpe_tokenizer", {"bpe": None}) - return self._auto_convert_to_abs_path(tokenizer) - - @property - def prepend_tgt_lang_tag(self) -> bool: - """Prepend target lang ID token as the target BOS (e.g. for to-many - multilingual setting). During inference, this requires `--prefix-size 1` - to force BOS to be lang ID token.""" - return self.config.get("prepend_tgt_lang_tag", False) - - @property - def input_feat_per_channel(self): - """The dimension of input features (per audio channel)""" - return self.config.get("input_feat_per_channel", 80) - - @property - def input_channels(self): - """The number of channels in the input audio""" - return self.config.get("input_channels", 1) - - @property - def sample_rate(self): - return self.config.get("sample_rate", 16_000) - - @property - def sampling_alpha(self): - """Hyper-parameter alpha = 1/T for temperature-based resampling. - (alpha = 1 for no resampling)""" - return self.config.get("sampling_alpha", 1.0) - - @property - def use_audio_input(self): - """Needed by the dataset loader to see if the model requires - raw audio as inputs.""" - return self.config.get("use_audio_input", False) - - @property - def use_sample_rate(self): - """Needed by the dataset loader to see if the model requires - raw audio with specific sample rate as inputs.""" - return self.config.get("use_sample_rate", 16000) - - @property - def audio_root(self): - """Audio paths in the manifest TSV can be relative and this provides - the root path. 
Set this to empty string when using absolute paths.""" - return self.config.get("audio_root", "") - - def get_feature_transforms(self, split, is_train): - """Split-specific feature transforms. Allowing train set - wildcard `_train`, evaluation set wildcard `_eval` and general - wildcard `*` for matching.""" - from copy import deepcopy - - cfg = deepcopy(self.config) - _cur = cfg.get("transforms", {}) - cur = _cur.get(split) - cur = _cur.get("_train") if cur is None and is_train else cur - cur = _cur.get("_eval") if cur is None and not is_train else cur - cur = _cur.get("*") if cur is None else cur - cfg["transforms"] = cur - return cfg - - @property - def global_cmvn_stats_npz(self) -> Optional[str]: - path = self.config.get("global_cmvn", {}).get("stats_npz_path", None) - return self._auto_convert_to_abs_path(path) - - @property - def vocoder(self) -> Optional[Dict[str, str]]: - return self.config.get("vocoder", None) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/__init__.py deleted file mode 100644 index 5b3dbc023aa4a6f7bfb8403b8204d71ca432f79c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -import importlib -import os - -from fairseq import registry -from fairseq.optim.lr_scheduler.fairseq_lr_scheduler import ( # noqa - FairseqLRScheduler, - LegacyFairseqLRScheduler, -) -from omegaconf import DictConfig - - -( - build_lr_scheduler_, - register_lr_scheduler, - LR_SCHEDULER_REGISTRY, - LR_SCHEDULER_DATACLASS_REGISTRY, -) = registry.setup_registry( - "--lr-scheduler", base_class=FairseqLRScheduler, default="fixed" -) - - -def build_lr_scheduler(cfg: DictConfig, optimizer): - return build_lr_scheduler_(cfg, optimizer) - - -# automatically import any Python files in the optim/lr_scheduler/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - file_name = file[: file.find(".py")] - importlib.import_module("fairseq.optim.lr_scheduler." + file_name) diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/latent_depth/latent_depth_src/__init__.py b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/latent_depth/latent_depth_src/__init__.py deleted file mode 100644 index c5fa76039ff98c18d3c14b5f4a8f73ffe644de11..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/latent_depth/latent_depth_src/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . 
import multilingual_translation_latent_depth # noqa -from .loss import latent_depth # noqa -from .models import latent_multilingual_transformer # noqa -from .modules import latent_layers # noqa diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/modules/monotonic_transformer_layer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/modules/monotonic_transformer_layer.py deleted file mode 100644 index 94bd71fb9c46a64a8b6e1960f47dfc43b78dda43..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/modules/monotonic_transformer_layer.py +++ /dev/null @@ -1,182 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.modules import TransformerDecoderLayer, TransformerEncoderLayer - -from . import build_monotonic_attention - -from typing import Dict, Optional, List - -from torch import Tensor -import torch - - -class TransformerMonotonicEncoderLayer(TransformerEncoderLayer): - def forward(self, x, encoder_padding_mask): - seq_len, _, _ = x.size() - attn_mask = x.new_ones([seq_len, seq_len]).triu(1) - attn_mask = attn_mask.masked_fill(attn_mask.bool(), float("-inf")) - return super().forward(x, encoder_padding_mask, attn_mask) - - -class TransformerMonotonicDecoderLayer(TransformerDecoderLayer): - def __init__(self, args): - super().__init__(args) - - assert args.simul_type is not None, "A --simul-type is needed." - self.encoder_attn = build_monotonic_attention(args) - - def prune_incremental_state( - self, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ): - input_buffer = self.self_attn._get_input_buffer(incremental_state) - for key in ["prev_key", "prev_value"]: - input_buffer_key = input_buffer[key] - assert input_buffer_key is not None - if input_buffer_key.size(2) > 1: - input_buffer[key] = input_buffer_key[:, :, :-1, :] - else: - typed_empty_dict: Dict[str, Optional[Tensor]] = {} - input_buffer = typed_empty_dict - break - assert incremental_state is not None - self.self_attn._set_input_buffer(incremental_state, input_buffer) - - def forward( - self, - x, - encoder_out: Optional[Tensor] = None, - encoder_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - prev_self_attn_state: Optional[List[Tensor]] = None, - prev_attn_state: Optional[List[Tensor]] = None, - self_attn_mask: Optional[Tensor] = None, - self_attn_padding_mask: Optional[Tensor] = None, - need_attn: bool = False, - need_head_weights: bool = False, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor, optional): binary - ByteTensor of shape `(batch, src_len)` where padding - elements are indicated by ``1``. - need_attn (bool, optional): return attention weights - need_head_weights (bool, optional): return attention weights - for each head (default: return average over heads). 
- - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - if need_head_weights: - need_attn = True - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - if prev_self_attn_state is not None: - prev_key, prev_value = prev_self_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_self_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_self_attn_state[2] - assert incremental_state is not None - self.self_attn._set_input_buffer(incremental_state, saved_state) - _self_attn_input_buffer = self.self_attn._get_input_buffer(incremental_state) - if self.cross_self_attention and not ( - incremental_state is not None - and _self_attn_input_buffer is not None - and "prev_key" in _self_attn_input_buffer - ): - if self_attn_mask is not None: - assert encoder_out is not None - self_attn_mask = torch.cat( - (x.new_zeros(x.size(0), encoder_out.size(0)), self_attn_mask), dim=1 - ) - if self_attn_padding_mask is not None: - if encoder_padding_mask is None: - assert encoder_out is not None - encoder_padding_mask = self_attn_padding_mask.new_zeros( - encoder_out.size(1), encoder_out.size(0) - ) - self_attn_padding_mask = torch.cat( - (encoder_padding_mask, self_attn_padding_mask), dim=1 - ) - assert encoder_out is not None - y = torch.cat((encoder_out, x), dim=0) - else: - y = x - - x, attn = self.self_attn( - query=x, - key=y, - value=y, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - need_weights=False, - attn_mask=self_attn_mask, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - assert self.encoder_attn is not None - residual = x - if self.normalize_before: - x = self.encoder_attn_layer_norm(x) - if prev_attn_state is not None: - prev_key, prev_value = prev_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_attn_state[2] - assert incremental_state is not None - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=need_attn or (not self.training and self.need_attn), - need_head_weights=need_head_weights, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.encoder_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - if self.onnx_trace and incremental_state is not None: - saved_state = self.self_attn._get_input_buffer(incremental_state) - assert saved_state is not None - if self_attn_padding_mask is not None: - self_attn_state = [ - saved_state["prev_key"], - saved_state["prev_value"], - saved_state["prev_key_padding_mask"], - ] - else: - self_attn_state = [saved_state["prev_key"], saved_state["prev_value"]] - return x, attn, self_attn_state - return x, attn, None diff --git 
a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/structures/__init__.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/structures/__init__.py deleted file mode 100644 index f3ee6057e3ec2731984ce8203c6eaf5348d08260..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/structures/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .boxes import Boxes, BoxMode, pairwise_iou, pairwise_ioa, pairwise_point_box_distance -from .image_list import ImageList - -from .instances import Instances -from .keypoints import Keypoints, heatmaps_to_keypoints -from .masks import BitMasks, PolygonMasks, polygons_to_bitmask, ROIMasks -from .rotated_boxes import RotatedBoxes -from .rotated_boxes import pairwise_iou as pairwise_iou_rotated - -__all__ = [k for k in globals().keys() if not k.startswith("_")] - - -from detectron2.utils.env import fixup_module_metadata - -fixup_module_metadata(__name__, globals(), __all__) -del fixup_module_metadata diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/README.md b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/README.md deleted file mode 100644 index 8531cafd4d1aae0267f4fc5e7212f7db5ed90686..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/README.md +++ /dev/null @@ -1,15 +0,0 @@ -# Read the docs: - -The latest documentation built from this directory is available at [detectron2.readthedocs.io](https://detectron2.readthedocs.io/). -Documents in this directory are not meant to be read on github. - -# Build the docs: - -1. Install detectron2 according to [INSTALL.md](../INSTALL.md). -2. Install additional libraries required to build docs: - - docutils==0.16 - - Sphinx==3.2.0 - - recommonmark==0.6.0 - - sphinx_rtd_theme - -3. Run `make html` from this directory. diff --git a/spaces/OptimalScale/Robin-33b/README.md b/spaces/OptimalScale/Robin-33b/README.md deleted file mode 100644 index c6904e694dd84286141fb42b8b1ca6194b5d0e93..0000000000000000000000000000000000000000 --- a/spaces/OptimalScale/Robin-33b/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Robin 33b -emoji: 🔥 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/PAIR/PAIR-Diffusion/ldm/modules/diffusionmodules/util.py b/spaces/PAIR/PAIR-Diffusion/ldm/modules/diffusionmodules/util.py deleted file mode 100644 index 637363dfe34799e70cfdbcd11445212df9d9ca1f..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/ldm/modules/diffusionmodules/util.py +++ /dev/null @@ -1,270 +0,0 @@ -# adopted from -# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py -# and -# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -# and -# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py -# -# thanks! 
- - -import os -import math -import torch -import torch.nn as nn -import numpy as np -from einops import repeat - -from ldm.util import instantiate_from_config - - -def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if schedule == "linear": - betas = ( - torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2 - ) - - elif schedule == "cosine": - timesteps = ( - torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s - ) - alphas = timesteps / (1 + cosine_s) * np.pi / 2 - alphas = torch.cos(alphas).pow(2) - alphas = alphas / alphas[0] - betas = 1 - alphas[1:] / alphas[:-1] - betas = np.clip(betas, a_min=0, a_max=0.999) - - elif schedule == "sqrt_linear": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) - elif schedule == "sqrt": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5 - else: - raise ValueError(f"schedule '{schedule}' unknown.") - return betas.numpy() - - -def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True): - if ddim_discr_method == 'uniform': - c = num_ddpm_timesteps // num_ddim_timesteps - ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c))) - elif ddim_discr_method == 'quad': - ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int) - else: - raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"') - - # assert ddim_timesteps.shape[0] == num_ddim_timesteps - # add one to get the final alpha values right (the ones from first scale to data during sampling) - steps_out = ddim_timesteps + 1 - if verbose: - print(f'Selected timesteps for ddim sampler: {steps_out}') - return steps_out - - -def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True): - # select alphas for computing the variance schedule - alphas = alphacums[ddim_timesteps] - alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist()) - - # according the the formula provided in https://arxiv.org/abs/2010.02502 - sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev)) - if verbose: - print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}') - print(f'For the chosen value of eta, which is {eta}, ' - f'this results in the following sigma_t schedule for ddim sampler {sigmas}') - return sigmas, alphas, alphas_prev - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. 
- """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -def extract_into_tensor(a, t, x_shape): - b, *_ = t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - -def checkpoint(func, inputs, params, flag): - """ - Evaluate a function without caching intermediate activations, allowing for - reduced memory at the expense of extra compute in the backward pass. - :param func: the function to evaluate. - :param inputs: the argument sequence to pass to `func`. - :param params: a sequence of parameters `func` depends on but does not - explicitly take as arguments. - :param flag: if False, disable gradient checkpointing. - """ - if flag: - args = tuple(inputs) + tuple(params) - return CheckpointFunction.apply(func, len(inputs), *args) - else: - return func(*inputs) - - -class CheckpointFunction(torch.autograd.Function): - @staticmethod - def forward(ctx, run_function, length, *args): - ctx.run_function = run_function - ctx.input_tensors = list(args[:length]) - ctx.input_params = list(args[length:]) - ctx.gpu_autocast_kwargs = {"enabled": torch.is_autocast_enabled(), - "dtype": torch.get_autocast_gpu_dtype(), - "cache_enabled": torch.is_autocast_cache_enabled()} - with torch.no_grad(): - output_tensors = ctx.run_function(*ctx.input_tensors) - return output_tensors - - @staticmethod - def backward(ctx, *output_grads): - ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors] - with torch.enable_grad(), \ - torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs): - # Fixes a bug where the first op in run_function modifies the - # Tensor storage in place, which is not allowed for detach()'d - # Tensors. - shallow_copies = [x.view_as(x) for x in ctx.input_tensors] - output_tensors = ctx.run_function(*shallow_copies) - input_grads = torch.autograd.grad( - output_tensors, - ctx.input_tensors + ctx.input_params, - output_grads, - allow_unused=True, - ) - del ctx.input_tensors - del ctx.input_params - del output_tensors - return (None, None) + input_grads - - -def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False): - """ - Create sinusoidal timestep embeddings. - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. - """ - if not repeat_only: - half = dim // 2 - freqs = torch.exp( - -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].float() * freqs[None] - embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) - if dim % 2: - embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1) - else: - embedding = repeat(timesteps, 'b -> b d', d=dim) - return embedding - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def scale_module(module, scale): - """ - Scale the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().mul_(scale) - return module - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. 
- """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def normalization(channels): - """ - Make a standard normalization layer. - :param channels: number of input channels. - :return: an nn.Module for normalization. - """ - return GroupNorm32(32, channels) - - -# PyTorch 1.7 has SiLU, but we support PyTorch 1.5. -class SiLU(nn.Module): - def forward(self, x): - return x * torch.sigmoid(x) - - -class GroupNorm32(nn.GroupNorm): - def forward(self, x): - return super().forward(x.float()).type(x.dtype) - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module. - """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def linear(*args, **kwargs): - """ - Create a linear module. - """ - return nn.Linear(*args, **kwargs) - - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. - """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -class HybridConditioner(nn.Module): - - def __init__(self, c_concat_config, c_crossattn_config): - super().__init__() - self.concat_conditioner = instantiate_from_config(c_concat_config) - self.crossattn_conditioner = instantiate_from_config(c_crossattn_config) - - def forward(self, c_concat, c_crossattn): - c_concat = self.concat_conditioner(c_concat) - c_crossattn = self.crossattn_conditioner(c_crossattn) - return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]} - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() \ No newline at end of file diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/profiler.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/profiler.py deleted file mode 100644 index b70236997eec59c2209ef351ae38863b4112d0ec..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/profiler.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from typing import Callable, List, Optional, Union - -import torch - -from ..dist_utils import master_only -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class ProfilerHook(Hook): - """Profiler to analyze performance during training. - - PyTorch Profiler is a tool that allows the collection of the performance - metrics during the training. More details on Profiler can be found at - https://pytorch.org/docs/1.8.1/profiler.html#torch.profiler.profile - - Args: - by_epoch (bool): Profile performance by epoch or by iteration. - Default: True. - profile_iters (int): Number of iterations for profiling. - If ``by_epoch=True``, profile_iters indicates that they are the - first profile_iters epochs at the beginning of the - training, otherwise it indicates the first profile_iters - iterations. Default: 1. - activities (list[str]): List of activity groups (CPU, CUDA) to use in - profiling. Default: ['cpu', 'cuda']. - schedule (dict, optional): Config of generating the callable schedule. 
- if schedule is None, profiler will not add step markers into the - trace and table view. Default: None. - on_trace_ready (callable, dict): Either a handler or a dict of generate - handler. Default: None. - record_shapes (bool): Save information about operator's input shapes. - Default: False. - profile_memory (bool): Track tensor memory allocation/deallocation. - Default: False. - with_stack (bool): Record source information (file and line number) - for the ops. Default: False. - with_flops (bool): Use formula to estimate the FLOPS of specific - operators (matrix multiplication and 2D convolution). - Default: False. - json_trace_path (str, optional): Exports the collected trace in Chrome - JSON format. Default: None. - - Example: - >>> runner = ... # instantiate a Runner - >>> # tensorboard trace - >>> trace_config = dict(type='tb_trace', dir_name='work_dir') - >>> profiler_config = dict(on_trace_ready=trace_config) - >>> runner.register_profiler_hook(profiler_config) - >>> runner.run(data_loaders=[trainloader], workflow=[('train', 1)]) - """ - - def __init__(self, - by_epoch: bool = True, - profile_iters: int = 1, - activities: List[str] = ['cpu', 'cuda'], - schedule: Optional[dict] = None, - on_trace_ready: Optional[Union[Callable, dict]] = None, - record_shapes: bool = False, - profile_memory: bool = False, - with_stack: bool = False, - with_flops: bool = False, - json_trace_path: Optional[str] = None) -> None: - try: - from torch import profiler # torch version >= 1.8.1 - except ImportError: - raise ImportError('profiler is the new feature of torch1.8.1, ' - f'but your version is {torch.__version__}') - - assert isinstance(by_epoch, bool), '``by_epoch`` should be a boolean.' - self.by_epoch = by_epoch - - if profile_iters < 1: - raise ValueError('profile_iters should be greater than 0, but got ' - f'{profile_iters}') - self.profile_iters = profile_iters - - if not isinstance(activities, list): - raise ValueError( - f'activities should be list, but got {type(activities)}') - self.activities = [] - for activity in activities: - activity = activity.lower() - if activity == 'cpu': - self.activities.append(profiler.ProfilerActivity.CPU) - elif activity == 'cuda': - self.activities.append(profiler.ProfilerActivity.CUDA) - else: - raise ValueError( - f'activity should be "cpu" or "cuda", but got {activity}') - - if schedule is not None: - self.schedule = profiler.schedule(**schedule) - else: - self.schedule = None - - self.on_trace_ready = on_trace_ready - self.record_shapes = record_shapes - self.profile_memory = profile_memory - self.with_stack = with_stack - self.with_flops = with_flops - self.json_trace_path = json_trace_path - - @master_only - def before_run(self, runner): - if self.by_epoch and runner.max_epochs < self.profile_iters: - raise ValueError('self.profile_iters should not be greater than ' - f'{runner.max_epochs}') - - if not self.by_epoch and runner.max_iters < self.profile_iters: - raise ValueError('self.profile_iters should not be greater than ' - f'{runner.max_iters}') - - if callable(self.on_trace_ready): # handler - _on_trace_ready = self.on_trace_ready - elif isinstance(self.on_trace_ready, dict): # config of handler - trace_cfg = self.on_trace_ready.copy() - trace_type = trace_cfg.pop('type') # log_trace handler - if trace_type == 'log_trace': - - def _log_handler(prof): - print(prof.key_averages().table(**trace_cfg)) - - _on_trace_ready = _log_handler - elif trace_type == 'tb_trace': # tensorboard_trace handler - try: - import torch_tb_profiler # noqa: F401 - 
except ImportError: - raise ImportError('please run "pip install ' - 'torch-tb-profiler" to install ' - 'torch_tb_profiler') - _on_trace_ready = torch.profiler.tensorboard_trace_handler( - **trace_cfg) - else: - raise ValueError('trace_type should be "log_trace" or ' - f'"tb_trace", but got {trace_type}') - elif self.on_trace_ready is None: - _on_trace_ready = None # type: ignore - else: - raise ValueError('on_trace_ready should be handler, dict or None, ' - f'but got {type(self.on_trace_ready)}') - - if runner.max_epochs > 1: - warnings.warn(f'profiler will profile {runner.max_epochs} epochs ' - 'instead of 1 epoch. Since profiler will slow down ' - 'the training, it is recommended to train 1 epoch ' - 'with ProfilerHook and adjust your setting according' - ' to the profiler summary. During normal training ' - '(epoch > 1), you may disable the ProfilerHook.') - - self.profiler = torch.profiler.profile( - activities=self.activities, - schedule=self.schedule, - on_trace_ready=_on_trace_ready, - record_shapes=self.record_shapes, - profile_memory=self.profile_memory, - with_stack=self.with_stack, - with_flops=self.with_flops) - - self.profiler.__enter__() - runner.logger.info('profiler is profiling...') - - @master_only - def after_train_epoch(self, runner): - if self.by_epoch and runner.epoch == self.profile_iters - 1: - runner.logger.info('profiler may take a few minutes...') - self.profiler.__exit__(None, None, None) - if self.json_trace_path is not None: - self.profiler.export_chrome_trace(self.json_trace_path) - - @master_only - def after_train_iter(self, runner): - self.profiler.step() - if not self.by_epoch and runner.iter == self.profile_iters - 1: - runner.logger.info('profiler may take a few minutes...') - self.profiler.__exit__(None, None, None) - if self.json_trace_path is not None: - self.profiler.export_chrome_trace(self.json_trace_path) diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/backbones/skip/concat.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/backbones/skip/concat.py deleted file mode 100644 index 8798bd42c6ab7d6dc978106b46a9ae615826190f..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/backbones/skip/concat.py +++ /dev/null @@ -1,39 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn - - -class Concat(nn.Module): - def __init__(self, dim, *args): - super(Concat, self).__init__() - self.dim = dim - - for idx, module in enumerate(args): - self.add_module(str(idx), module) - - def forward(self, input): - inputs = [] - for module in self._modules.values(): - inputs.append(module(input)) - - inputs_shapes2 = [x.shape[2] for x in inputs] - inputs_shapes3 = [x.shape[3] for x in inputs] - - if np.all(np.array(inputs_shapes2) == min(inputs_shapes2)) and np.all( - np.array(inputs_shapes3) == min(inputs_shapes3) - ): - inputs_ = inputs - else: - target_shape2 = min(inputs_shapes2) - target_shape3 = min(inputs_shapes3) - - inputs_ = [] - for inp in inputs: - diff2 = (inp.size(2) - target_shape2) // 2 - diff3 = (inp.size(3) - target_shape3) // 2 - inputs_.append(inp[:, :, diff2 : diff2 + target_shape2, diff3 : diff3 + target_shape3]) - - return torch.cat(inputs_, dim=self.dim) - - def __len__(self): - return len(self._modules) diff --git "a/spaces/ParagKesharDas360/MovieRecommadationApp/pages/3_\360\237\224\221_Login Page.py" "b/spaces/ParagKesharDas360/MovieRecommadationApp/pages/3_\360\237\224\221_Login Page.py" deleted file mode 100644 index 
0be2fce2f6b0ba89601ae38f20ad2f212cb5c0e8..0000000000000000000000000000000000000000 --- "a/spaces/ParagKesharDas360/MovieRecommadationApp/pages/3_\360\237\224\221_Login Page.py" +++ /dev/null @@ -1,59 +0,0 @@ -import streamlit as st -import subprocess -import csv -from streamlit import cache -# @st.cache(suppress_st_warning=True, allow_output_mutation=True) - -def run_app(app_file, username, user_id=None): - args = ["streamlit", "run", app_file, "--", "--username", username] - if user_id: - args += ["--user_id", user_id] - subprocess.Popen(args) - -def main(): - st.set_page_config(page_title="Login Page") - - st.title("Login Page") - st.write('Welcome to the Movie Recommendation App! Please login or register to access its features.') - # Create input fields for username and password - username = st.text_input("Username") - password = st.text_input("Password", type="password") - - st.session_state["UserName"] = "" - st.session_state["UserID"] = "" - - - # Create a login button - if st.button("Login"): - # Open the login.csv file - with open('login.csv', encoding="cp437", errors='ignore') as login_file: - csv_reader = csv.reader(login_file) - found_match = False - - # Loop through the rows in the login.csv file - for row in csv_reader: - # Check if the username and password match - if row[1] == username and row[2] == password: - st.success("Logged in successfully!") - user_id = row[0] - found_match = True - break - - if not found_match: - st.error("Invalid username or password") - else: - st.session_state["UserID"] = user_id - st.session_state["UserName"]=username - - if st.checkbox("Admin"): - password1 = st.text_input("Admin Password", type="password") - if st.button("Show Login Details"): - if password1=="Admin@123": - # Display the login.csv file as a table - with open('login.csv', mode='r') as login_file: - csv_reader = csv.reader(login_file) - st.table(csv_reader) - - -if __name__ == '__main__': - main() diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/occam-channel.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/occam-channel.go deleted file mode 100644 index 9d01f078769e5a47cecd8a1843130fb9591e5241..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/occam-channel.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/oop/goops/composite-slot.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/oop/goops/composite-slot.go deleted file mode 100644 index 8fb46037d33380d4234eb5b4368de115b42cbbb6..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/oop/goops/composite-slot.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/lint.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/lint.go deleted file mode 100644 index 1a0ddb7a4d30cae719c6f867b15b860895f1232b..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/lint.go and /dev/null differ diff --git a/spaces/Pranjal12345/Text_to_Speech/tortoise/models/clvp.py b/spaces/Pranjal12345/Text_to_Speech/tortoise/models/clvp.py deleted file mode 100644 index 00f5011a053f28b53a363bcd696e6267c8924c3b..0000000000000000000000000000000000000000 --- 
a/spaces/Pranjal12345/Text_to_Speech/tortoise/models/clvp.py +++ /dev/null @@ -1,155 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import einsum - -from tortoise.models.arch_util import CheckpointedXTransformerEncoder -from tortoise.models.transformer import Transformer -from tortoise.models.xtransformers import Encoder - - -def exists(val): - return val is not None - - -def masked_mean(t, mask, dim = 1): - t = t.masked_fill(~mask[:, :, None], 0.) - return t.sum(dim = 1) / mask.sum(dim = 1)[..., None] - -class CLVP(nn.Module): - """ - CLIP model retrofitted for performing contrastive evaluation between tokenized audio data and the corresponding - transcribed text. - - Originally from https://github.com/lucidrains/DALLE-pytorch/blob/main/dalle_pytorch/dalle_pytorch.py - """ - - def __init__( - self, - *, - dim_text=512, - dim_speech=512, - dim_latent=512, - num_text_tokens=256, - text_enc_depth=6, - text_seq_len=120, - text_heads=8, - num_speech_tokens=8192, - speech_enc_depth=6, - speech_heads=8, - speech_seq_len=250, - text_mask_percentage=0, - voice_mask_percentage=0, - wav_token_compression=1024, - use_xformers=False, - ): - super().__init__() - self.text_emb = nn.Embedding(num_text_tokens, dim_text) - self.to_text_latent = nn.Linear(dim_text, dim_latent, bias=False) - - self.speech_emb = nn.Embedding(num_speech_tokens, dim_speech) - self.to_speech_latent = nn.Linear(dim_speech, dim_latent, bias=False) - - if use_xformers: - self.text_transformer = CheckpointedXTransformerEncoder( - needs_permute=False, - exit_permute=False, - max_seq_len=-1, - attn_layers=Encoder( - dim=dim_text, - depth=text_enc_depth, - heads=text_heads, - ff_dropout=.1, - ff_mult=2, - attn_dropout=.1, - use_rmsnorm=True, - ff_glu=True, - rotary_pos_emb=True, - )) - self.speech_transformer = CheckpointedXTransformerEncoder( - needs_permute=False, - exit_permute=False, - max_seq_len=-1, - attn_layers=Encoder( - dim=dim_speech, - depth=speech_enc_depth, - heads=speech_heads, - ff_dropout=.1, - ff_mult=2, - attn_dropout=.1, - use_rmsnorm=True, - ff_glu=True, - rotary_pos_emb=True, - )) - else: - self.text_transformer = Transformer(causal=False, seq_len=text_seq_len, dim=dim_text, depth=text_enc_depth, - heads=text_heads) - self.speech_transformer = Transformer(causal=False, seq_len=speech_seq_len, dim=dim_speech, - depth=speech_enc_depth, heads=speech_heads) - - self.temperature = nn.Parameter(torch.tensor(1.)) - self.text_mask_percentage = text_mask_percentage - self.voice_mask_percentage = voice_mask_percentage - self.wav_token_compression = wav_token_compression - self.xformers = use_xformers - if not use_xformers: - self.text_pos_emb = nn.Embedding(text_seq_len, dim_text) - self.speech_pos_emb = nn.Embedding(num_speech_tokens, dim_speech) - - def forward( - self, - text, - speech_tokens, - return_loss=False - ): - b, device = text.shape[0], text.device - if self.training: - text_mask = torch.rand_like(text.float()) > self.text_mask_percentage - voice_mask = torch.rand_like(speech_tokens.float()) > self.voice_mask_percentage - else: - text_mask = torch.ones_like(text.float()).bool() - voice_mask = torch.ones_like(speech_tokens.float()).bool() - - text_emb = self.text_emb(text) - speech_emb = self.speech_emb(speech_tokens) - - if not self.xformers: - text_emb += self.text_pos_emb(torch.arange(text.shape[1], device=device)) - speech_emb += self.speech_pos_emb(torch.arange(speech_emb.shape[1], device=device)) - - enc_text = self.text_transformer(text_emb, mask=text_mask) - 
enc_speech = self.speech_transformer(speech_emb, mask=voice_mask) - - text_latents = masked_mean(enc_text, text_mask, dim=1) - speech_latents = masked_mean(enc_speech, voice_mask, dim=1) - - text_latents = self.to_text_latent(text_latents) - speech_latents = self.to_speech_latent(speech_latents) - - text_latents, speech_latents = map(lambda t: F.normalize(t, p=2, dim=-1), (text_latents, speech_latents)) - - temp = self.temperature.exp() - - if not return_loss: - sim = einsum('n d, n d -> n', text_latents, speech_latents) * temp - return sim - - sim = einsum('i d, j d -> i j', text_latents, speech_latents) * temp - labels = torch.arange(b, device=device) - loss = (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels)) / 2 - return loss - - -if __name__ == '__main__': - clip = CLVP(text_mask_percentage=.2, voice_mask_percentage=.2) - clip(torch.randint(0,256,(2,120)), - torch.tensor([50,100]), - torch.randint(0,8192,(2,250)), - torch.tensor([101,102]), - return_loss=True) - nonloss = clip(torch.randint(0,256,(2,120)), - torch.tensor([50,100]), - torch.randint(0,8192,(2,250)), - torch.tensor([101,102]), - return_loss=False) - print(nonloss.shape) \ No newline at end of file diff --git a/spaces/PushkarA07/Cover-Gen-audio2image/README.md b/spaces/PushkarA07/Cover-Gen-audio2image/README.md deleted file mode 100644 index f0fccf88c8fea8e180947b302847c430d4945a43..0000000000000000000000000000000000000000 --- a/spaces/PushkarA07/Cover-Gen-audio2image/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Cover Gen Audio2image -emoji: 😻 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/QuanLingZ/ChatResponse/get_paper_from_pdf.py b/spaces/QuanLingZ/ChatResponse/get_paper_from_pdf.py deleted file mode 100644 index 7bae3b4b7c64e691208c221c869d6a06c3023652..0000000000000000000000000000000000000000 --- a/spaces/QuanLingZ/ChatResponse/get_paper_from_pdf.py +++ /dev/null @@ -1,193 +0,0 @@ -import fitz, io, os -from PIL import Image -from collections import Counter -import json -import re - -class Paper: - def __init__(self, path, title='', url='', abs='', authors=[]): - # 初始化函数,根据pdf路径初始化Paper对象 - self.url = url # 文章链接 - self.path = path # pdf路径 - self.section_names = [] # 段落标题 - self.section_texts = {} # 段落内容 - self.abs = abs - self.title_page = 0 - if title == '': - self.pdf = fitz.open(self.path) # pdf文档 - self.title = self.get_title() - self.parse_pdf() - else: - self.title = title - self.authors = authors - self.roman_num = ["I", "II", 'III', "IV", "V", "VI", "VII", "VIII", "IIX", "IX", "X"] - self.digit_num = [str(d + 1) for d in range(10)] - self.first_image = '' - - def parse_pdf(self): - self.pdf = fitz.open(self.path) # pdf文档 - self.text_list = [page.get_text() for page in self.pdf] - self.all_text = ' '.join(self.text_list) - self.extract_section_infomation() - self.section_texts.update({"title": self.title}) - self.pdf.close() - - # 定义一个函数,根据字体的大小,识别每个章节名称,并返回一个列表 - def get_chapter_names(self, ): - # # 打开一个pdf文件 - doc = fitz.open(self.path) # pdf文档 - text_list = [page.get_text() for page in doc] - all_text = '' - for text in text_list: - all_text += text - # # 创建一个空列表,用于存储章节名称 - chapter_names = [] - for line in all_text.split('\n'): - line_list = line.split(' ') - if '.' 
in line: - point_split_list = line.split('.') - space_split_list = line.split(' ') - if 1 < len(space_split_list) < 5: - if 1 < len(point_split_list) < 5 and ( - point_split_list[0] in self.roman_num or point_split_list[0] in self.digit_num): - # print("line:", line) - chapter_names.append(line) - - return chapter_names - - def get_title(self): - doc = self.pdf # 打开pdf文件 - max_font_size = 0 # 初始化最大字体大小为0 - max_string = "" # 初始化最大字体大小对应的字符串为空 - max_font_sizes = [0] - for page_index, page in enumerate(doc): # 遍历每一页 - text = page.get_text("dict") # 获取页面上的文本信息 - blocks = text["blocks"] # 获取文本块列表 - for block in blocks: # 遍历每个文本块 - if block["type"] == 0 and len(block['lines']): # 如果是文字类型 - if len(block["lines"][0]["spans"]): - font_size = block["lines"][0]["spans"][0]["size"] # 获取第一行第一段文字的字体大小 - max_font_sizes.append(font_size) - if font_size > max_font_size: # 如果字体大小大于当前最大值 - max_font_size = font_size # 更新最大值 - max_string = block["lines"][0]["spans"][0]["text"] # 更新最大值对应的字符串 - max_font_sizes.sort() - # print("max_font_sizes", max_font_sizes[-10:]) - cur_title = '' - for page_index, page in enumerate(doc): # 遍历每一页 - text = page.get_text("dict") # 获取页面上的文本信息 - blocks = text["blocks"] # 获取文本块列表 - for block in blocks: # 遍历每个文本块 - if block["type"] == 0 and len(block['lines']): # 如果是文字类型 - if len(block["lines"][0]["spans"]): - cur_string = block["lines"][0]["spans"][0]["text"] # 更新最大值对应的字符串 - font_flags = block["lines"][0]["spans"][0]["flags"] # 获取第一行第一段文字的字体特征 - font_size = block["lines"][0]["spans"][0]["size"] # 获取第一行第一段文字的字体大小 - # print(font_size) - if abs(font_size - max_font_sizes[-1]) < 0.3 or abs(font_size - max_font_sizes[-2]) < 0.3: - # print("The string is bold.", max_string, "font_size:", font_size, "font_flags:", font_flags) - if len(cur_string) > 4 and "arXiv" not in cur_string: - # print("The string is bold.", max_string, "font_size:", font_size, "font_flags:", font_flags) - if cur_title == '': - cur_title += cur_string - else: - cur_title += ' ' + cur_string - self.title_page = page_index - # break - title = cur_title.replace('\n', ' ') - return title - - def extract_section_infomation(self): - doc = fitz.open(self.path) - - # 获取文档中所有字体大小 - font_sizes = [] - for page in doc: - blocks = page.get_text("dict")["blocks"] - for block in blocks: - if 'lines' not in block: - continue - lines = block["lines"] - for line in lines: - for span in line["spans"]: - font_sizes.append(span["size"]) - most_common_size, _ = Counter(font_sizes).most_common(1)[0] - - # 按照最频繁的字体大小确定标题字体大小的阈值 - threshold = most_common_size * 1 - - section_dict = {} - last_heading = None - subheadings = [] - heading_font = -1 - # 遍历每一页并查找子标题 - found_abstract = False - upper_heading = False - font_heading = False - for page in doc: - blocks = page.get_text("dict")["blocks"] - for block in blocks: - if not found_abstract: - try: - text = json.dumps(block) - except: - continue - if re.search(r"\bAbstract\b", text, re.IGNORECASE): - found_abstract = True - last_heading = "Abstract" - section_dict["Abstract"] = "" - if found_abstract: - if 'lines' not in block: - continue - lines = block["lines"] - for line in lines: - for span in line["spans"]: - # 如果当前文本是子标题 - if not font_heading and span["text"].isupper() and sum(1 for c in span["text"] if c.isupper() and ('A' <= c <='Z')) > 4: # 针对一些标题大小一样,但是全大写的论文 - upper_heading = True - heading = span["text"].strip() - if "References" in heading: # reference 以后的内容不考虑 - self.section_names = subheadings - self.section_texts = section_dict - return - subheadings.append(heading) - if 
last_heading is not None: - section_dict[last_heading] = section_dict[last_heading].strip() - section_dict[heading] = "" - last_heading = heading - if not upper_heading and span["size"] > threshold and re.match( # 正常情况下,通过字体大小判断 - r"[A-Z][a-z]+(?:\s[A-Z][a-z]+)*", - span["text"].strip()): - font_heading = True - if heading_font == -1: - heading_font = span["size"] - elif heading_font != span["size"]: - continue - heading = span["text"].strip() - if "References" in heading: # reference 以后的内容不考虑 - self.section_names = subheadings - self.section_texts = section_dict - return - subheadings.append(heading) - if last_heading is not None: - section_dict[last_heading] = section_dict[last_heading].strip() - section_dict[heading] = "" - last_heading = heading - # 否则将当前文本添加到上一个子标题的文本中 - elif last_heading is not None: - section_dict[last_heading] += " " + span["text"].strip() - self.section_names = subheadings - self.section_texts = section_dict - - -def main(): - path = r'demo.pdf' - paper = Paper(path=path) - paper.parse_pdf() - # for key, value in paper.section_text_dict.items(): - # print(key, value) - # print("*"*40) - - -if __name__ == '__main__': - main() diff --git a/spaces/RGBD-SOD/bbsnet/app.py b/spaces/RGBD-SOD/bbsnet/app.py deleted file mode 100644 index e0995fbb03c1d5a66f12f5e2441a83468da791df..0000000000000000000000000000000000000000 --- a/spaces/RGBD-SOD/bbsnet/app.py +++ /dev/null @@ -1,26 +0,0 @@ -import gradio as gr - -from inference import inference -from prepare_samples import prepare_samples - -TITLE = "BBS-Net Demo" -DESCRIPTION = "Gradio demo for BBS-Net: RGB-D salient object detection with a bifurcated backbone strategy network." -examples = prepare_samples() - -demo = gr.Interface( - fn=inference, - inputs=[ - gr.inputs.Image(label="RGB", type="pil"), - gr.inputs.Image(label="Depth", type="pil"), - ], - outputs=[ - gr.outputs.Image(label="Prediction", type="pil"), - ], - title=TITLE, - examples=examples, - description=DESCRIPTION, -) - - -if __name__ == "__main__": - demo.launch(server_name="0.0.0.0") diff --git a/spaces/RamAnanth1/T2I-Adapter/ldm/modules/ema.py b/spaces/RamAnanth1/T2I-Adapter/ldm/modules/ema.py deleted file mode 100644 index c8c75af43565f6e140287644aaaefa97dd6e67c5..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/T2I-Adapter/ldm/modules/ema.py +++ /dev/null @@ -1,76 +0,0 @@ -import torch -from torch import nn - - -class LitEma(nn.Module): - def __init__(self, model, decay=0.9999, use_num_upates=True): - super().__init__() - if decay < 0.0 or decay > 1.0: - raise ValueError('Decay must be between 0 and 1') - - self.m_name2s_name = {} - self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32)) - self.register_buffer('num_updates', torch.tensor(0,dtype=torch.int) if use_num_upates - else torch.tensor(-1,dtype=torch.int)) - - for name, p in model.named_parameters(): - if p.requires_grad: - #remove as '.'-character is not allowed in buffers - s_name = name.replace('.','') - self.m_name2s_name.update({name:s_name}) - self.register_buffer(s_name,p.clone().detach().data) - - self.collected_params = [] - - def forward(self,model): - decay = self.decay - - if self.num_updates >= 0: - self.num_updates += 1 - decay = min(self.decay,(1 + self.num_updates) / (10 + self.num_updates)) - - one_minus_decay = 1.0 - decay - - with torch.no_grad(): - m_param = dict(model.named_parameters()) - shadow_params = dict(self.named_buffers()) - - for key in m_param: - if m_param[key].requires_grad: - sname = self.m_name2s_name[key] - shadow_params[sname] = 
shadow_params[sname].type_as(m_param[key]) - shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key])) - else: - assert not key in self.m_name2s_name - - def copy_to(self, model): - m_param = dict(model.named_parameters()) - shadow_params = dict(self.named_buffers()) - for key in m_param: - if m_param[key].requires_grad: - m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data) - else: - assert not key in self.m_name2s_name - - def store(self, parameters): - """ - Save the current parameters for restoring later. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - temporarily stored. - """ - self.collected_params = [param.clone() for param in parameters] - - def restore(self, parameters): - """ - Restore the parameters stored with the `store` method. - Useful to validate the model with EMA parameters without affecting the - original optimization process. Store the parameters before the - `copy_to` method. After validation (or model saving), use this to - restore the former parameters. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - updated with the stored parameters. - """ - for c_param, param in zip(self.collected_params, parameters): - param.data.copy_(c_param.data) diff --git a/spaces/Ramos-Ramos/albef-vqa/app.py b/spaces/Ramos-Ramos/albef-vqa/app.py deleted file mode 100644 index 39cc660200980b649327c5d32b4e1c4e17bce30f..0000000000000000000000000000000000000000 --- a/spaces/Ramos-Ramos/albef-vqa/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import json -import torch -from PIL import Image -from ruamel import yaml -from model import albef_model_for_vqa -from data.transforms import ALBEFTextTransform, testing_image_transform -import gradio as gr - -data_dir = "./" - -config = yaml.load(open("./configs/vqa.yaml", "r"), Loader=yaml.Loader) -model = albef_model_for_vqa(config) - -checkpoint_url = "https://download.pytorch.org/models/multimodal/albef/finetuned_vqa_checkpoint.pt" -checkpoint = torch.hub.load_state_dict_from_url(checkpoint_url, map_location='cpu') -model.load_state_dict(checkpoint) - -image_transform = testing_image_transform() -question_transform = ALBEFTextTransform(add_end_token=False) -answer_transform = ALBEFTextTransform(do_pre_process=False) - -vqa_data = json.load(open(data_dir + "vqa_data.json", "r")) -answer_list = json.load(open(data_dir + "answer_list.json", "r")) - -examples = [[data['image'], data['question']] for data in vqa_data] - -title = 'VQA with ALBEF' -description = 'VQA with [ALBEF](https://arxiv.org/abs/2107.07651), adapted from the [torchmultimodal example notebook](https://github.com/facebookresearch/multimodal/blob/main/examples/albef/vqa_with_albef.ipynb).' 
-article = '''```bibtex -@article{li2021align, - title={Align before fuse: Vision and language representation learning with momentum distillation}, - author={Li, Junnan and Selvaraju, Ramprasaath and Gotmare, Akhilesh and Joty, Shafiq and Xiong, Caiming and Hoi, Steven Chu Hong}, - journal={Advances in neural information processing systems}, - volume={34}, - pages={9694--9705}, - year={2021} -} -```''' - -def infer(image, question): - images = [image] - image_input = [image_transform(image) for image in images] - image_input = torch.stack(image_input, dim=0) - question_input = question_transform([question]) - question_atts = (question_input != 0).type(torch.long) - answer_input = answer_transform(answer_list) - answer_atts = (answer_input != 0).type(torch.long) - - answer_ids, _ = model( - image_input, - question_input, - question_atts, - answer_input, - answer_atts, - k=1, - is_train=False, - ) - - predicted_answer_id = answer_ids[0] - predicted_answer = answer_list[predicted_answer_id] - - return predicted_answer - -demo = gr.Interface( - fn=infer, - inputs=[gr.Image(label='image', type='pil', image_mode='RGB'), gr.Text(label='question')], - outputs=gr.Text(label='answer'), - examples=examples, - title=title, - description=description, - article=article -) - -demo.launch() \ No newline at end of file diff --git a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/benchmarks/megadepth_pose_estimation_benchmark.py b/spaces/Realcat/image-matching-webui/third_party/Roma/roma/benchmarks/megadepth_pose_estimation_benchmark.py deleted file mode 100644 index 5d936a07d550763d0378a23ea83c79cec5d373fe..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/benchmarks/megadepth_pose_estimation_benchmark.py +++ /dev/null @@ -1,148 +0,0 @@ -import numpy as np -import torch -from roma.utils import * -from PIL import Image -from tqdm import tqdm -import torch.nn.functional as F -import roma -import kornia.geometry.epipolar as kepi - - -class MegaDepthPoseEstimationBenchmark: - def __init__(self, data_root="data/megadepth", scene_names=None) -> None: - if scene_names is None: - self.scene_names = [ - "0015_0.1_0.3.npz", - "0015_0.3_0.5.npz", - "0022_0.1_0.3.npz", - "0022_0.3_0.5.npz", - "0022_0.5_0.7.npz", - ] - else: - self.scene_names = scene_names - self.scenes = [ - np.load(f"{data_root}/{scene}", allow_pickle=True) - for scene in self.scene_names - ] - self.data_root = data_root - - def benchmark( - self, - model, - model_name=None, - resolution=None, - scale_intrinsics=True, - calibrated=True, - ): - H, W = model.get_output_resolution() - with torch.no_grad(): - data_root = self.data_root - tot_e_t, tot_e_R, tot_e_pose = [], [], [] - thresholds = [5, 10, 20] - for scene_ind in range(len(self.scenes)): - import os - - scene_name = os.path.splitext(self.scene_names[scene_ind])[0] - scene = self.scenes[scene_ind] - pairs = scene["pair_infos"] - intrinsics = scene["intrinsics"] - poses = scene["poses"] - im_paths = scene["image_paths"] - pair_inds = range(len(pairs)) - for pairind in tqdm(pair_inds): - idx1, idx2 = pairs[pairind][0] - K1 = intrinsics[idx1].copy() - T1 = poses[idx1].copy() - R1, t1 = T1[:3, :3], T1[:3, 3] - K2 = intrinsics[idx2].copy() - T2 = poses[idx2].copy() - R2, t2 = T2[:3, :3], T2[:3, 3] - R, t = compute_relative_pose(R1, t1, R2, t2) - T1_to_2 = np.concatenate((R, t[:, None]), axis=-1) - im_A_path = f"{data_root}/{im_paths[idx1]}" - im_B_path = f"{data_root}/{im_paths[idx2]}" - dense_matches, dense_certainty = model.match( - 
im_A_path, im_B_path, K1.copy(), K2.copy(), T1_to_2.copy() - ) - sparse_matches, _ = model.sample( - dense_matches, dense_certainty, 5000 - ) - - im_A = Image.open(im_A_path) - w1, h1 = im_A.size - im_B = Image.open(im_B_path) - w2, h2 = im_B.size - - if scale_intrinsics: - scale1 = 1200 / max(w1, h1) - scale2 = 1200 / max(w2, h2) - w1, h1 = scale1 * w1, scale1 * h1 - w2, h2 = scale2 * w2, scale2 * h2 - K1, K2 = K1.copy(), K2.copy() - K1[:2] = K1[:2] * scale1 - K2[:2] = K2[:2] * scale2 - - kpts1 = sparse_matches[:, :2] - kpts1 = np.stack( - ( - w1 * (kpts1[:, 0] + 1) / 2, - h1 * (kpts1[:, 1] + 1) / 2, - ), - axis=-1, - ) - kpts2 = sparse_matches[:, 2:] - kpts2 = np.stack( - ( - w2 * (kpts2[:, 0] + 1) / 2, - h2 * (kpts2[:, 1] + 1) / 2, - ), - axis=-1, - ) - - for _ in range(5): - shuffling = np.random.permutation(np.arange(len(kpts1))) - kpts1 = kpts1[shuffling] - kpts2 = kpts2[shuffling] - try: - threshold = 0.5 - if calibrated: - norm_threshold = threshold / ( - np.mean(np.abs(K1[:2, :2])) - + np.mean(np.abs(K2[:2, :2])) - ) - R_est, t_est, mask = estimate_pose( - kpts1, - kpts2, - K1, - K2, - norm_threshold, - conf=0.99999, - ) - T1_to_2_est = np.concatenate((R_est, t_est), axis=-1) # - e_t, e_R = compute_pose_error(T1_to_2_est, R, t) - e_pose = max(e_t, e_R) - except Exception as e: - print(repr(e)) - e_t, e_R = 90, 90 - e_pose = max(e_t, e_R) - tot_e_t.append(e_t) - tot_e_R.append(e_R) - tot_e_pose.append(e_pose) - tot_e_pose = np.array(tot_e_pose) - auc = pose_auc(tot_e_pose, thresholds) - acc_5 = (tot_e_pose < 5).mean() - acc_10 = (tot_e_pose < 10).mean() - acc_15 = (tot_e_pose < 15).mean() - acc_20 = (tot_e_pose < 20).mean() - map_5 = acc_5 - map_10 = np.mean([acc_5, acc_10]) - map_20 = np.mean([acc_5, acc_10, acc_15, acc_20]) - print(f"{model_name} auc: {auc}") - return { - "auc_5": auc[0], - "auc_10": auc[1], - "auc_20": auc[2], - "map_5": map_5, - "map_10": map_10, - "map_20": map_20, - } diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/ld_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/ld_head.py deleted file mode 100644 index 501e1f7befa086f0b2f818531807411fc383d7bd..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/ld_head.py +++ /dev/null @@ -1,261 +0,0 @@ -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import (bbox2distance, bbox_overlaps, distance2bbox, - multi_apply, reduce_mean) -from ..builder import HEADS, build_loss -from .gfl_head import GFLHead - - -@HEADS.register_module() -class LDHead(GFLHead): - """Localization distillation Head. (Short description) - - It utilizes the learned bbox distributions to transfer the localization - dark knowledge from teacher to student. Original paper: `Localization - Distillation for Object Detection. `_ - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - loss_ld (dict): Config of Localization Distillation Loss (LD), - T is the temperature for distillation. 
- """ - - def __init__(self, - num_classes, - in_channels, - loss_ld=dict( - type='LocalizationDistillationLoss', - loss_weight=0.25, - T=10), - **kwargs): - - super(LDHead, self).__init__(num_classes, in_channels, **kwargs) - self.loss_ld = build_loss(loss_ld) - - def loss_single(self, anchors, cls_score, bbox_pred, labels, label_weights, - bbox_targets, stride, soft_targets, num_total_samples): - """Compute loss of a single scale level. - - Args: - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - cls_score (Tensor): Cls and quality joint scores for each scale - level has shape (N, num_classes, H, W). - bbox_pred (Tensor): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). - label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor wight - shape (N, num_total_anchors, 4). - stride (tuple): Stride in this scale level. - num_total_samples (int): Number of positive samples that is - reduced over all GPUs. - - Returns: - dict[tuple, Tensor]: Loss components and weight targets. - """ - assert stride[0] == stride[1], 'h stride is not equal to w stride!' - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(-1, 4 * (self.reg_max + 1)) - soft_targets = soft_targets.permute(0, 2, 3, - 1).reshape(-1, - 4 * (self.reg_max + 1)) - - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - score = label_weights.new_zeros(labels.shape) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_anchor_centers = self.anchor_center(pos_anchors) / stride[0] - - weight_targets = cls_score.detach().sigmoid() - weight_targets = weight_targets.max(dim=1)[0][pos_inds] - pos_bbox_pred_corners = self.integral(pos_bbox_pred) - pos_decode_bbox_pred = distance2bbox(pos_anchor_centers, - pos_bbox_pred_corners) - pos_decode_bbox_targets = pos_bbox_targets / stride[0] - score[pos_inds] = bbox_overlaps( - pos_decode_bbox_pred.detach(), - pos_decode_bbox_targets, - is_aligned=True) - pred_corners = pos_bbox_pred.reshape(-1, self.reg_max + 1) - pos_soft_targets = soft_targets[pos_inds] - soft_corners = pos_soft_targets.reshape(-1, self.reg_max + 1) - - target_corners = bbox2distance(pos_anchor_centers, - pos_decode_bbox_targets, - self.reg_max).reshape(-1) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_decode_bbox_targets, - weight=weight_targets, - avg_factor=1.0) - - # dfl loss - loss_dfl = self.loss_dfl( - pred_corners, - target_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - - # ld loss - loss_ld = self.loss_ld( - pred_corners, - soft_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - - else: - loss_ld = bbox_pred.sum() * 0 - loss_bbox = bbox_pred.sum() * 0 - loss_dfl = bbox_pred.sum() * 0 - weight_targets = bbox_pred.new_tensor(0) - - # cls (qfl) loss - loss_cls = self.loss_cls( - cls_score, 
(labels, score), - weight=label_weights, - avg_factor=num_total_samples) - - return loss_cls, loss_bbox, loss_dfl, loss_ld, weight_targets.sum() - - def forward_train(self, - x, - out_teacher, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """ - Args: - x (list[Tensor]): Features from FPN. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - - Returns: - tuple[dict, list]: The loss components and proposals of each image. - - - losses (dict[str, Tensor]): A dictionary of loss components. - - proposal_list (list[Tensor]): Proposals of each image. - """ - outs = self(x) - soft_target = out_teacher[1] - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, soft_target, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, soft_target, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - if proposal_cfg is None: - return losses - else: - proposal_list = self.get_bboxes(*outs, img_metas, cfg=proposal_cfg) - return losses, proposal_list - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - soft_target, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Cls and quality scores for each scale - level has shape (N, num_classes, H, W). - bbox_preds (list[Tensor]): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - num_total_samples = reduce_mean( - torch.tensor(num_total_pos, dtype=torch.float, - device=device)).item() - num_total_samples = max(num_total_samples, 1.0) - - losses_cls, losses_bbox, losses_dfl, losses_ld, \ - avg_factor = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_targets_list, - self.anchor_generator.strides, - soft_target, - num_total_samples=num_total_samples) - - avg_factor = sum(avg_factor) + 1e-6 - avg_factor = reduce_mean(avg_factor).item() - losses_bbox = [x / avg_factor for x in losses_bbox] - losses_dfl = [x / avg_factor for x in losses_dfl] - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_dfl=losses_dfl, - loss_ld=losses_ld) diff --git a/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/models/source.py b/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/models/source.py deleted file mode 100644 index f2a006e53c0e2194036fd08ea9d6ed4d9a10d6cf..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/models/source.py +++ /dev/null @@ -1,538 +0,0 @@ -import torch -import numpy as np -import sys -import torch.nn.functional as torch_nn_func - - -class SineGen(torch.nn.Module): - """ Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__(self, samp_rate, harmonic_num=0, - sine_amp=0.1, noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - self.flag_for_pulse = flag_for_pulse - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def _f02sine(self, f0_values): - """ f0_values: (batchsize, length, dim) - where dim indicates fundamental tone and overtones - """ - # convert to F0 in rad. 
The interger part n can be ignored - # because 2 * np.pi * n doesn't affect phase - rad_values = (f0_values / self.sampling_rate) % 1 - - # initial phase noise (no noise for fundamental component) - rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \ - device=f0_values.device) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - - # instantanouse phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad) - if not self.flag_for_pulse: - # for normal case - - # To prevent torch.cumsum numerical overflow, - # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1. - # Buffer tmp_over_one_idx indicates the time step to add -1. - # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi - tmp_over_one = torch.cumsum(rad_values, 1) % 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - - sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) - * 2 * np.pi) - else: - # If necessary, make sure that the first time step of every - # voiced segments is sin(pi) or cos(0) - # This is used for pulse-train generation - - # identify the last time step in unvoiced segments - uv = self._f02uv(f0_values) - uv_1 = torch.roll(uv, shifts=-1, dims=1) - uv_1[:, -1, :] = 1 - u_loc = (uv < 1) * (uv_1 > 0) - - # get the instantanouse phase - tmp_cumsum = torch.cumsum(rad_values, dim=1) - # different batch needs to be processed differently - for idx in range(f0_values.shape[0]): - temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :] - temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :] - # stores the accumulation of i.phase within - # each voiced segments - tmp_cumsum[idx, :, :] = 0 - tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum - - # rad_values - tmp_cumsum: remove the accumulation of i.phase - # within the previous voiced segment. - i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1) - - # get the sines - sines = torch.cos(i_phase * 2 * np.pi) - return sines - - def forward(self, f0): - """ sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, - device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (idx + 2) - - # generate sine waveforms - sine_waves = self._f02sine(f0_buf) * self.sine_amp - - # generate uv signal - # uv = torch.ones(f0.shape) - # uv = uv * (f0 > self.voiced_threshold) - uv = self._f02uv(f0) - - # noise: for unvoiced should be similar to sine_amp - # std = self.sine_amp/3 -> max value ~ self.sine_amp - # . for voiced regions is self.noise_std - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - - # first: set the unvoiced part to 0 by uv - # then: additive noise - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class PulseGen(torch.nn.Module): - """ Definition of Pulse train generator - - There are many ways to implement pulse generator. - Here, PulseGen is based on SinGen. 
For a perfect - """ - def __init__(self, samp_rate, pulse_amp = 0.1, - noise_std = 0.003, voiced_threshold = 0): - super(PulseGen, self).__init__() - self.pulse_amp = pulse_amp - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - self.noise_std = noise_std - self.l_sinegen = SineGen(self.sampling_rate, harmonic_num=0, \ - sine_amp=self.pulse_amp, noise_std=0, \ - voiced_threshold=self.voiced_threshold, \ - flag_for_pulse=True) - - def forward(self, f0): - """ Pulse train generator - pulse_train, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output pulse_train: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - - Note: self.l_sine doesn't make sure that the initial phase of - a voiced segment is np.pi, the first pulse in a voiced segment - may not be at the first time step within a voiced segment - """ - with torch.no_grad(): - sine_wav, uv, noise = self.l_sinegen(f0) - - # sine without additive noise - pure_sine = sine_wav - noise - - # step t corresponds to a pulse if - # sine[t] > sine[t+1] & sine[t] > sine[t-1] - # & sine[t-1], sine[t+1], and sine[t] are voiced - # or - # sine[t] is voiced, sine[t-1] is unvoiced - # we use torch.roll to simulate sine[t+1] and sine[t-1] - sine_1 = torch.roll(pure_sine, shifts=1, dims=1) - uv_1 = torch.roll(uv, shifts=1, dims=1) - uv_1[:, 0, :] = 0 - sine_2 = torch.roll(pure_sine, shifts=-1, dims=1) - uv_2 = torch.roll(uv, shifts=-1, dims=1) - uv_2[:, -1, :] = 0 - - loc = (pure_sine > sine_1) * (pure_sine > sine_2) \ - * (uv_1 > 0) * (uv_2 > 0) * (uv > 0) \ - + (uv_1 < 1) * (uv > 0) - - # pulse train without noise - pulse_train = pure_sine * loc - - # additive noise to pulse train - # note that noise from sinegen is zero in voiced regions - pulse_noise = torch.randn_like(pure_sine) * self.noise_std - - # with additive noise on pulse, and unvoiced regions - pulse_train += pulse_noise * loc + pulse_noise * (1 - uv) - return pulse_train, sine_wav, uv, pulse_noise - - -class SignalsConv1d(torch.nn.Module): - """ Filtering input signal with time invariant filter - Note: FIRFilter conducted filtering given fixed FIR weight - SignalsConv1d convolves two signals - Note: this is based on torch.nn.functional.conv1d - - """ - - def __init__(self): - super(SignalsConv1d, self).__init__() - - def forward(self, signal, system_ir): - """ output = forward(signal, system_ir) - - signal: (batchsize, length1, dim) - system_ir: (length2, dim) - - output: (batchsize, length1, dim) - """ - if signal.shape[-1] != system_ir.shape[-1]: - print("Error: SignalsConv1d expects shape:") - print("signal (batchsize, length1, dim)") - print("system_id (batchsize, length2, dim)") - print("But received signal: {:s}".format(str(signal.shape))) - print(" system_ir: {:s}".format(str(system_ir.shape))) - sys.exit(1) - padding_length = system_ir.shape[0] - 1 - groups = signal.shape[-1] - - # pad signal on the left - signal_pad = torch_nn_func.pad(signal.permute(0, 2, 1), \ - (padding_length, 0)) - # prepare system impulse response as (dim, 1, length2) - # also flip the impulse response - ir = torch.flip(system_ir.unsqueeze(1).permute(2, 1, 0), \ - dims=[2]) - # convolute - output = torch_nn_func.conv1d(signal_pad, ir, groups=groups) - return output.permute(0, 2, 1) - - -class CyclicNoiseGen_v1(torch.nn.Module): - """ CyclicnoiseGen_v1 - Cyclic noise with a single parameter of beta. 
- Pytorch v1 implementation assumes f_t is also fixed - """ - - def __init__(self, samp_rate, - noise_std=0.003, voiced_threshold=0): - super(CyclicNoiseGen_v1, self).__init__() - self.samp_rate = samp_rate - self.noise_std = noise_std - self.voiced_threshold = voiced_threshold - - self.l_pulse = PulseGen(samp_rate, pulse_amp=1.0, - noise_std=noise_std, - voiced_threshold=voiced_threshold) - self.l_conv = SignalsConv1d() - - def noise_decay(self, beta, f0mean): - """ decayed_noise = noise_decay(beta, f0mean) - decayed_noise = n[t]exp(-t * f_mean / beta / samp_rate) - - beta: (dim=1) or (batchsize=1, 1, dim=1) - f0mean (batchsize=1, 1, dim=1) - - decayed_noise (batchsize=1, length, dim=1) - """ - with torch.no_grad(): - # exp(-1.0 n / T) < 0.01 => n > -log(0.01)*T = 4.60*T - # truncate the noise when decayed by -40 dB - length = 4.6 * self.samp_rate / f0mean - length = length.int() - time_idx = torch.arange(0, length, device=beta.device) - time_idx = time_idx.unsqueeze(0).unsqueeze(2) - time_idx = time_idx.repeat(beta.shape[0], 1, beta.shape[2]) - - noise = torch.randn(time_idx.shape, device=beta.device) - - # due to Pytorch implementation, use f0_mean as the f0 factor - decay = torch.exp(-time_idx * f0mean / beta / self.samp_rate) - return noise * self.noise_std * decay - - def forward(self, f0s, beta): - """ Producde cyclic-noise - """ - # pulse train - pulse_train, sine_wav, uv, noise = self.l_pulse(f0s) - pure_pulse = pulse_train - noise - - # decayed_noise (length, dim=1) - if (uv < 1).all(): - # all unvoiced - cyc_noise = torch.zeros_like(sine_wav) - else: - f0mean = f0s[uv > 0].mean() - - decayed_noise = self.noise_decay(beta, f0mean)[0, :, :] - # convolute - cyc_noise = self.l_conv(pure_pulse, decayed_noise) - - # add noise in invoiced segments - cyc_noise = cyc_noise + noise * (1.0 - uv) - return cyc_noise, pulse_train, sine_wav, uv, noise - - -class SineGen(torch.nn.Module): - """ Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__(self, samp_rate, harmonic_num=0, - sine_amp=0.1, noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - self.flag_for_pulse = flag_for_pulse - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def _f02sine(self, f0_values): - """ f0_values: (batchsize, length, dim) - where dim indicates fundamental tone and overtones - """ - # convert to F0 in rad. 
The interger part n can be ignored - # because 2 * np.pi * n doesn't affect phase - rad_values = (f0_values / self.sampling_rate) % 1 - - # initial phase noise (no noise for fundamental component) - rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \ - device=f0_values.device) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - - # instantanouse phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad) - if not self.flag_for_pulse: - # for normal case - - # To prevent torch.cumsum numerical overflow, - # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1. - # Buffer tmp_over_one_idx indicates the time step to add -1. - # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi - tmp_over_one = torch.cumsum(rad_values, 1) % 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - - sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) - * 2 * np.pi) - else: - # If necessary, make sure that the first time step of every - # voiced segments is sin(pi) or cos(0) - # This is used for pulse-train generation - - # identify the last time step in unvoiced segments - uv = self._f02uv(f0_values) - uv_1 = torch.roll(uv, shifts=-1, dims=1) - uv_1[:, -1, :] = 1 - u_loc = (uv < 1) * (uv_1 > 0) - - # get the instantanouse phase - tmp_cumsum = torch.cumsum(rad_values, dim=1) - # different batch needs to be processed differently - for idx in range(f0_values.shape[0]): - temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :] - temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :] - # stores the accumulation of i.phase within - # each voiced segments - tmp_cumsum[idx, :, :] = 0 - tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum - - # rad_values - tmp_cumsum: remove the accumulation of i.phase - # within the previous voiced segment. - i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1) - - # get the sines - sines = torch.cos(i_phase * 2 * np.pi) - return sines - - def forward(self, f0): - """ sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, \ - device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (idx + 2) - - # generate sine waveforms - sine_waves = self._f02sine(f0_buf) * self.sine_amp - - # generate uv signal - # uv = torch.ones(f0.shape) - # uv = uv * (f0 > self.voiced_threshold) - uv = self._f02uv(f0) - - # noise: for unvoiced should be similar to sine_amp - # std = self.sine_amp/3 -> max value ~ self.sine_amp - # . 
for voiced regions is self.noise_std - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - - # first: set the unvoiced part to 0 by uv - # then: additive noise - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleCycNoise_v1(torch.nn.Module): - """ SourceModuleCycNoise_v1 - SourceModule(sampling_rate, noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - - noise_std: std of Gaussian noise (default: 0.003) - voiced_threshold: threshold to set U/V given F0 (default: 0) - - cyc, noise, uv = SourceModuleCycNoise_v1(F0_upsampled, beta) - F0_upsampled (batchsize, length, 1) - beta (1) - cyc (batchsize, length, 1) - noise (batchsize, length, 1) - uv (batchsize, length, 1) - """ - - def __init__(self, sampling_rate, noise_std=0.003, voiced_threshod=0): - super(SourceModuleCycNoise_v1, self).__init__() - self.sampling_rate = sampling_rate - self.noise_std = noise_std - self.l_cyc_gen = CyclicNoiseGen_v1(sampling_rate, noise_std, - voiced_threshod) - - def forward(self, f0_upsamped, beta): - """ - cyc, noise, uv = SourceModuleCycNoise_v1(F0, beta) - F0_upsampled (batchsize, length, 1) - beta (1) - cyc (batchsize, length, 1) - noise (batchsize, length, 1) - uv (batchsize, length, 1) - """ - # source for harmonic branch - cyc, pulse, sine, uv, add_noi = self.l_cyc_gen(f0_upsamped, beta) - - # source for noise branch, in the same shape as uv - noise = torch.randn_like(uv) * self.noise_std / 3 - return cyc, noise, uv - - -class SourceModuleHnNSF(torch.nn.Module): - """ SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - - # to produce sine waveforms - self.l_sin_gen = SineGen(sampling_rate, harmonic_num, - sine_amp, add_noise_std, voiced_threshod) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x): - """ - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - """ - # source for harmonic branch - sine_wavs, uv, _ = self.l_sin_gen(x) - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - - # source for noise branch, in the same shape as uv - noise = torch.randn_like(uv) * self.sine_amp / 3 - return sine_merge, noise, uv - - -if __name__ == '__main__': - source = SourceModuleCycNoise_v1(24000) - x = torch.randn(16, 25600, 1) - - diff --git a/spaces/Rongjiehuang/ProDiff/modules/parallel_wavegan/layers/tf_layers.py b/spaces/Rongjiehuang/ProDiff/modules/parallel_wavegan/layers/tf_layers.py deleted file mode 100644 index 
c0f46bd755c161cda2ac904fe37f3f3c6357a88d..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/ProDiff/modules/parallel_wavegan/layers/tf_layers.py +++ /dev/null @@ -1,129 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2020 MINH ANH (@dathudeptrai) -# MIT License (https://opensource.org/licenses/MIT) - -"""Tensorflow Layer modules complatible with pytorch.""" - -import tensorflow as tf - - -class TFReflectionPad1d(tf.keras.layers.Layer): - """Tensorflow ReflectionPad1d module.""" - - def __init__(self, padding_size): - """Initialize TFReflectionPad1d module. - - Args: - padding_size (int): Padding size. - - """ - super(TFReflectionPad1d, self).__init__() - self.padding_size = padding_size - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensor: Padded tensor (B, T + 2 * padding_size, 1, C). - - """ - return tf.pad(x, [[0, 0], [self.padding_size, self.padding_size], [0, 0], [0, 0]], "REFLECT") - - -class TFConvTranspose1d(tf.keras.layers.Layer): - """Tensorflow ConvTranspose1d module.""" - - def __init__(self, channels, kernel_size, stride, padding): - """Initialize TFConvTranspose1d( module. - - Args: - channels (int): Number of channels. - kernel_size (int): kernel size. - strides (int): Stride width. - padding (str): Padding type ("same" or "valid"). - - """ - super(TFConvTranspose1d, self).__init__() - self.conv1d_transpose = tf.keras.layers.Conv2DTranspose( - filters=channels, - kernel_size=(kernel_size, 1), - strides=(stride, 1), - padding=padding, - ) - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensors: Output tensor (B, T', 1, C'). - - """ - x = self.conv1d_transpose(x) - return x - - -class TFResidualStack(tf.keras.layers.Layer): - """Tensorflow ResidualStack module.""" - - def __init__(self, - kernel_size, - channels, - dilation, - bias, - nonlinear_activation, - nonlinear_activation_params, - padding, - ): - """Initialize TFResidualStack module. - - Args: - kernel_size (int): Kernel size. - channles (int): Number of channels. - dilation (int): Dilation ine. - bias (bool): Whether to add bias parameter in convolution layers. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - padding (str): Padding type ("same" or "valid"). - - """ - super(TFResidualStack, self).__init__() - self.block = [ - getattr(tf.keras.layers, nonlinear_activation)(**nonlinear_activation_params), - TFReflectionPad1d(dilation), - tf.keras.layers.Conv2D( - filters=channels, - kernel_size=(kernel_size, 1), - dilation_rate=(dilation, 1), - use_bias=bias, - padding="valid", - ), - getattr(tf.keras.layers, nonlinear_activation)(**nonlinear_activation_params), - tf.keras.layers.Conv2D(filters=channels, kernel_size=1, use_bias=bias) - ] - self.shortcut = tf.keras.layers.Conv2D(filters=channels, kernel_size=1, use_bias=bias) - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensor: Output tensor (B, T, 1, C). 
- - """ - _x = tf.identity(x) - for i, layer in enumerate(self.block): - _x = layer(_x) - shortcut = self.shortcut(x) - return shortcut + _x diff --git a/spaces/SDXL-ME/stabilityai-stable-diffusion-xl-base-1.0/app.py b/spaces/SDXL-ME/stabilityai-stable-diffusion-xl-base-1.0/app.py deleted file mode 100644 index 9520517f687cf7229ddfab9d8c5f8af7f76b0bd4..0000000000000000000000000000000000000000 --- a/spaces/SDXL-ME/stabilityai-stable-diffusion-xl-base-1.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-xl-base-1.0").launch() \ No newline at end of file diff --git a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/text/english.py b/spaces/SQSora/VITS-Umamusume-voice-synthesizer/text/english.py deleted file mode 100644 index 6817392ba8a9eb830351de89fb7afc5ad72f5e42..0000000000000000000000000000000000000000 --- a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/text/english.py +++ /dev/null @@ -1,188 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - - -# Regular expression matching whitespace: - - -import re -import inflect -from unidecode import unidecode -import eng_to_ipa as ipa -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -# List of (ipa, lazy ipa) pairs: -_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('æ', 'e'), - ('ɑ', 'a'), - ('ɔ', 'o'), - ('ð', 'z'), - ('θ', 's'), - ('ɛ', 'e'), - ('ɪ', 'i'), - ('ʊ', 'u'), - ('ʒ', 'ʥ'), - ('ʤ', 'ʥ'), - ('ˈ', '↓'), -]] - -# List of (ipa, lazy ipa2) pairs: -_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ð', 'z'), - ('θ', 's'), - ('ʒ', 'ʑ'), - ('ʤ', 'dʑ'), - ('ˈ', '↓'), -]] - -# List of (ipa, ipa2) pairs -_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ʤ', 'dʒ'), - ('ʧ', 'tʃ') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def collapse_whitespace(text): - return re.sub(r'\s+', ' ', text) - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text - - -def mark_dark_l(text): - return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text) - - -def english_to_ipa(text): - text = unidecode(text).lower() - text = expand_abbreviations(text) - text = normalize_numbers(text) - phonemes = ipa.convert(text) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_to_lazy_ipa(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def english_to_ipa2(text): - text = english_to_ipa(text) - text = 
mark_dark_l(text) - for regex, replacement in _ipa_to_ipa2: - text = re.sub(regex, replacement, text) - return text.replace('...', '…') - - -def english_to_lazy_ipa2(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa2: - text = re.sub(regex, replacement, text) - return text diff --git a/spaces/SShaik/SS-01-H5-Play-Canvas-Sim-Physics/README.md b/spaces/SShaik/SS-01-H5-Play-Canvas-Sim-Physics/README.md deleted file mode 100644 index 13fe6f2b269c83150ee06e554549805537982d8c..0000000000000000000000000000000000000000 --- a/spaces/SShaik/SS-01-H5-Play-Canvas-Sim-Physics/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: SS 01 H5 Play Canvas Sim Physics -emoji: 🐠 -colorFrom: yellow -colorTo: gray -sdk: static -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Samood/whos_dat_doggo/README.md b/spaces/Samood/whos_dat_doggo/README.md deleted file mode 100644 index b0fed99f97040ce43451e55b37f4044ddbde99ca..0000000000000000000000000000000000000000 --- a/spaces/Samood/whos_dat_doggo/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Whos dat doggo -emoji: 🐶 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: unlicense ---- - -The dataset we are using is the Standford Dog Dataset (http://vision.stanford.edu/aditya86/ImageNetDogs/) -As this dataset were built using ImageNet images, the images can be normalized using the mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225] this will assure that the neural network will have and mean of 0 and standar deviation of 1. - -The input of the application is an image of a dog that you can insert through the UI built with Gradio. -The output of the application is a text describing the breed of the dog or the closest breed that the model predicted for the image. - -# Prediction Task -The task the application will be performing is classification. Specifically, the classification of images of dogs, with the objective of accurately classifying them according to their breed. As such, the input will be images of dogs, and the possible values for the output will be one of one hundred and twenty two different breeds; e.g. Poodle, Briard, Newfoundland, Bluetick, etc. - -# Data collection -The initial train dataset will be the Stanford Dogs dataset. The Stanford Dogs dataset contains images of 120 breeds of dogs from around the world. This dataset has been built using images and annotation from ImageNet for the task of fine-grained image categorization. Contents of this dataset: - -Number of categories: 120 -Number of images: 20,580 -Annotations: Class labels, Bounding boxes - -# Making Predictions -To make a prediction, the model will be available for interaction in the HuggingFace platform, over at https://huggingface.co/spaces/Samood/whos_that_doggo - -# Building models -The application is based on a single model, based on a pre-trained ANN, ResNet50. Additional layers have been added for the purpose of dog breed classification and the pre-trained layers were frozen. - -This repository has Three main files: -- notebook.ipynb: This notebook contains all the steps that we took to create the RestNet50 Classification Model. -- app.py: This file contains the Gradio UI and also loads the model to HugginFace. -- model.zip: The model itself, already trained on the notebook and ready to be loaded. 
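
The deleted README above describes the approach only in prose: ImageNet-style normalization (mean = [0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225]) and a pre-trained ResNet50 with its backbone frozen and a new classification head for the Stanford Dogs breeds. The removed `notebook.ipynb` and `model.zip` are not shown in this diff, so the following is only a minimal sketch of that setup using torchvision; the 224-pixel crop, the `build_breed_classifier` helper name, and the 120-way output (the category count quoted in the README's dataset section) are assumptions, not the repository's actual code.

```python
# Hypothetical sketch (not from the deleted repo): ImageNet normalization plus a
# frozen pre-trained ResNet50 with a fresh breed-classification head, as the
# README describes.
import torch.nn as nn
from torchvision import models, transforms

# ImageNet statistics quoted in the README
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),          # assumed input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_breed_classifier(num_breeds: int = 120) -> nn.Module:
    """Pre-trained ResNet50 with a frozen backbone and a trainable head."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for param in model.parameters():      # freeze the pre-trained layers
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_breeds)  # new head
    return model

model = build_breed_classifier()
model.eval()
# Usage: x = preprocess(pil_image).unsqueeze(0); logits = model(x)
```

With the backbone frozen, only the final linear layer is optimized, which matches the transfer-learning strategy the README outlines; whether the original notebook used exactly this head or additional layers cannot be confirmed from the diff.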
diff --git a/spaces/SankarSrin/image-matting-app/ppmatting/utils/utils.py b/spaces/SankarSrin/image-matting-app/ppmatting/utils/utils.py deleted file mode 100644 index 13513cb193757b63043f44a2c145b3e9b6fad82e..0000000000000000000000000000000000000000 --- a/spaces/SankarSrin/image-matting-app/ppmatting/utils/utils.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import os - - -def get_files(root_path): - res = [] - for root, dirs, files in os.walk(root_path, followlinks=True): - for f in files: - if f.endswith(('.jpg', '.png', '.jpeg', 'JPG')): - res.append(os.path.join(root, f)) - return res - - -def get_image_list(image_path): - """Get image list""" - valid_suffix = [ - '.JPEG', '.jpeg', '.JPG', '.jpg', '.BMP', '.bmp', '.PNG', '.png' - ] - image_list = [] - image_dir = None - if os.path.isfile(image_path): - image_dir = None - if os.path.splitext(image_path)[-1] in valid_suffix: - image_list.append(image_path) - else: - image_dir = os.path.dirname(image_path) - with open(image_path, 'r') as f: - for line in f: - line = line.strip() - if len(line.split()) > 1: - raise RuntimeError( - 'There should be only one image path per line in `image_path` file. Wrong line: {}' - .format(line)) - image_list.append(os.path.join(image_dir, line)) - elif os.path.isdir(image_path): - image_dir = image_path - for root, dirs, files in os.walk(image_path): - for f in files: - if '.ipynb_checkpoints' in root: - continue - if os.path.splitext(f)[-1] in valid_suffix: - image_list.append(os.path.join(root, f)) - image_list.sort() - else: - raise FileNotFoundError( - '`image_path` is not found. 
it should be an image file or a directory including images' - ) - - if len(image_list) == 0: - raise RuntimeError('There are not image file in `image_path`') - - return image_list, image_dir - - -def mkdir(path): - sub_dir = os.path.dirname(path) - if not os.path.exists(sub_dir): - os.makedirs(sub_dir) diff --git a/spaces/SkyYeXianer/vits-uma-genshin-honkai/utils.py b/spaces/SkyYeXianer/vits-uma-genshin-honkai/utils.py deleted file mode 100644 index ee4b01ddfbe8173965371b29f770f3e87615fe71..0000000000000000000000000000000000000000 --- a/spaces/SkyYeXianer/vits-uma-genshin-honkai/utils.py +++ /dev/null @@ -1,225 +0,0 @@ -import os -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -import librosa -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_audio_to_torch(full_path, target_sampling_rate): - audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True) - return 
torch.FloatTensor(audio.astype(np.float32)) - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/SophieTr/TextSummarizationDemo/README.md b/spaces/SophieTr/TextSummarizationDemo/README.md deleted file mode 100644 index a36e49c78c96b9ea2e65bdcf2da9e50aa0f41402..0000000000000000000000000000000000000000 --- a/spaces/SophieTr/TextSummarizationDemo/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: TextSummarizationDemo -emoji: 🚀 -colorFrom: red -colorTo: yellow -sdk: streamlit -app_file: app.py -pinned: false ---- -This is a demo to show the result from my fine-tuned Pegasus model for text-summarization task. The pre-trained model is "google/pegasus-large". The model was fine tuned on 1000 training data points from my customized dataset - "SophieTr/reddit_clean" \ No newline at end of file diff --git a/spaces/Stanford-CS236g/example-pokemon-gan/app.py b/spaces/Stanford-CS236g/example-pokemon-gan/app.py deleted file mode 100644 index f71f3ee5f1469eaf8d5da702534ae3290fcadda3..0000000000000000000000000000000000000000 --- a/spaces/Stanford-CS236g/example-pokemon-gan/app.py +++ /dev/null @@ -1,129 +0,0 @@ -import sys -import os -import gradio as gr -from PIL import Image - -os.system("git clone https://github.com/autonomousvision/projected_gan.git") - -sys.path.append("projected_gan") - - -"""Generate images using pretrained network pickle.""" - -import re -from typing import List, Optional, Tuple, Union - -import click -import dnnlib -import numpy as np -import PIL.Image -import torch - -import legacy - -from huggingface_hub import hf_hub_url - -#---------------------------------------------------------------------------- - -def parse_range(s: Union[str, List]) -> List[int]: - '''Parse a comma separated list of numbers or ranges and return a list of ints. - Example: '1,2,5-10' returns [1, 2, 5, 6, 7] - ''' - if isinstance(s, list): return s - ranges = [] - range_re = re.compile(r'^(\d+)-(\d+)$') - for p in s.split(','): - m = range_re.match(p) - if m: - ranges.extend(range(int(m.group(1)), int(m.group(2))+1)) - else: - ranges.append(int(p)) - return ranges - -#---------------------------------------------------------------------------- - -def parse_vec2(s: Union[str, Tuple[float, float]]) -> Tuple[float, float]: - '''Parse a floating point 2-vector of syntax 'a,b'. 
- Example: - '0,1' returns (0,1) - ''' - if isinstance(s, tuple): return s - parts = s.split(',') - if len(parts) == 2: - return (float(parts[0]), float(parts[1])) - raise ValueError(f'cannot parse 2-vector {s}') - -#---------------------------------------------------------------------------- - -def make_transform(translate: Tuple[float,float], angle: float): - m = np.eye(3) - s = np.sin(angle/360.0*np.pi*2) - c = np.cos(angle/360.0*np.pi*2) - m[0][0] = c - m[0][1] = s - m[0][2] = translate[0] - m[1][0] = -s - m[1][1] = c - m[1][2] = translate[1] - return m - -#---------------------------------------------------------------------------- - -device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') - -config_file_url = hf_hub_url("autonomousvision/Projected_GAN_Pokemon", filename="pokemon.pkl") -with dnnlib.util.open_url(config_file_url) as f: - G = legacy.load_network_pkl(f)['G_ema'].to(device) # type: ignore - -def generate_images(seeds): - """Generate images using pretrained network pickle. - Examples: - \b - # Generate an image using pre-trained AFHQv2 model ("Ours" in Figure 1, left). - python gen_images.py --outdir=out --trunc=1 --seeds=2 \\ - --network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-afhqv2-512x512.pkl - \b - # Generate uncurated images with truncation using the MetFaces-U dataset - python gen_images.py --outdir=out --trunc=0.7 --seeds=600-605 \\ - --network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-t-metfacesu-1024x1024.pkl - """ - - - - - # Labels. - label = torch.zeros([1, G.c_dim], device=device) - - - # Generate images. - for seed_idx, seed in enumerate(seeds): - print('Generating image for seed %d (%d/%d) ...' % (seed, seed_idx, len(seeds))) - z = torch.from_numpy(np.random.RandomState(seed).randn(1, G.z_dim)).to(device).float() - - # Construct an inverse rotation/translation matrix and pass to the generator. The - # generator expects this matrix as an inverse to avoid potentially failing numerical - # operations in the network. - if hasattr(G.synthesis, 'input'): - m = make_transform('0,0', 0) - m = np.linalg.inv(m) - G.synthesis.input.transform.copy_(torch.from_numpy(m)) - - img = G(z, label, truncation_psi=1, noise_mode='const') - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8) - pilimg = PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB') - return pilimg - - -def inference(seedin): - listseed = [int(seedin)] - output = generate_images(listseed) - return output - -title = "Example: Pokemon GAN" -description = "Gradio demo for Pokemon GAN. To use it, provide a seed, or click one of the examples to load them. Read more at the links below." - -article = "

Projected GANs Converge Faster | Github Repo

visitor badge
" - -gr.Interface(inference,gr.inputs.Slider(label="Seed",minimum=0, maximum=5000, step=1, default=0),"pil",title=title,description=description,article=article, allow_screenshot=False, allow_flagging="never", live=True, examples=[ - [0],[1],[10],[20],[30],[42],[50],[60],[77],[102] - ]).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_runfiles/pydev_runfiles.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_runfiles/pydev_runfiles.py deleted file mode 100644 index 9c199e175f3eb8cf9cf7ebada3a4549d4df6832c..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_runfiles/pydev_runfiles.py +++ /dev/null @@ -1,857 +0,0 @@ -from __future__ import nested_scopes - -import fnmatch -import os.path -from _pydev_runfiles.pydev_runfiles_coverage import start_coverage_support -from _pydevd_bundle.pydevd_constants import * # @UnusedWildImport -import re -import time - - -#======================================================================================================================= -# Configuration -#======================================================================================================================= -class Configuration: - - def __init__( - self, - files_or_dirs='', - verbosity=2, - include_tests=None, - tests=None, - port=None, - files_to_tests=None, - jobs=1, - split_jobs='tests', - coverage_output_dir=None, - coverage_include=None, - coverage_output_file=None, - exclude_files=None, - exclude_tests=None, - include_files=None, - django=False, - ): - self.files_or_dirs = files_or_dirs - self.verbosity = verbosity - self.include_tests = include_tests - self.tests = tests - self.port = port - self.files_to_tests = files_to_tests - self.jobs = jobs - self.split_jobs = split_jobs - self.django = django - - if include_tests: - assert isinstance(include_tests, (list, tuple)) - - if exclude_files: - assert isinstance(exclude_files, (list, tuple)) - - if exclude_tests: - assert isinstance(exclude_tests, (list, tuple)) - - self.exclude_files = exclude_files - self.include_files = include_files - self.exclude_tests = exclude_tests - - self.coverage_output_dir = coverage_output_dir - self.coverage_include = coverage_include - self.coverage_output_file = coverage_output_file - - def __str__(self): - return '''Configuration - - files_or_dirs: %s - - verbosity: %s - - tests: %s - - port: %s - - files_to_tests: %s - - jobs: %s - - split_jobs: %s - - - include_files: %s - - include_tests: %s - - - exclude_files: %s - - exclude_tests: %s - - - coverage_output_dir: %s - - coverage_include_dir: %s - - coverage_output_file: %s - - - django: %s -''' % ( - self.files_or_dirs, - self.verbosity, - self.tests, - self.port, - self.files_to_tests, - self.jobs, - self.split_jobs, - - self.include_files, - self.include_tests, - - self.exclude_files, - self.exclude_tests, - - self.coverage_output_dir, - self.coverage_include, - self.coverage_output_file, - - self.django, - ) - - -#======================================================================================================================= -# parse_cmdline -#======================================================================================================================= -def parse_cmdline(argv=None): - """ - Parses command line and returns test directories, verbosity, test filter and test suites - - 
usage: - runfiles.py -v|--verbosity -t|--tests dirs|files - - Multiprocessing options: - jobs=number (with the number of jobs to be used to run the tests) - split_jobs='module'|'tests' - if == module, a given job will always receive all the tests from a module - if == tests, the tests will be split independently of their originating module (default) - - --exclude_files = comma-separated list of patterns with files to exclude (fnmatch style) - --include_files = comma-separated list of patterns with files to include (fnmatch style) - --exclude_tests = comma-separated list of patterns with test names to exclude (fnmatch style) - - Note: if --tests is given, --exclude_files, --include_files and --exclude_tests are ignored! - """ - if argv is None: - argv = sys.argv - - verbosity = 2 - include_tests = None - tests = None - port = None - jobs = 1 - split_jobs = 'tests' - files_to_tests = {} - coverage_output_dir = None - coverage_include = None - exclude_files = None - exclude_tests = None - include_files = None - django = False - - from _pydev_bundle._pydev_getopt import gnu_getopt - optlist, dirs = gnu_getopt( - argv[1:], "", - [ - "verbosity=", - "tests=", - - "port=", - "config_file=", - - "jobs=", - "split_jobs=", - - "include_tests=", - "include_files=", - - "exclude_files=", - "exclude_tests=", - - "coverage_output_dir=", - "coverage_include=", - - "django=" - ] - ) - - for opt, value in optlist: - if opt in ("-v", "--verbosity"): - verbosity = value - - elif opt in ("-p", "--port"): - port = int(value) - - elif opt in ("-j", "--jobs"): - jobs = int(value) - - elif opt in ("-s", "--split_jobs"): - split_jobs = value - if split_jobs not in ('module', 'tests'): - raise AssertionError('Expected split to be either "module" or "tests". Was :%s' % (split_jobs,)) - - elif opt in ("-d", "--coverage_output_dir",): - coverage_output_dir = value.strip() - - elif opt in ("-i", "--coverage_include",): - coverage_include = value.strip() - - elif opt in ("-I", "--include_tests"): - include_tests = value.split(',') - - elif opt in ("-E", "--exclude_files"): - exclude_files = value.split(',') - - elif opt in ("-F", "--include_files"): - include_files = value.split(',') - - elif opt in ("-e", "--exclude_tests"): - exclude_tests = value.split(',') - - elif opt in ("-t", "--tests"): - tests = value.split(',') - - elif opt in ("--django",): - django = value.strip() in ['true', 'True', '1'] - - elif opt in ("-c", "--config_file"): - config_file = value.strip() - if os.path.exists(config_file): - f = open(config_file, 'r') - try: - config_file_contents = f.read() - finally: - f.close() - - if config_file_contents: - config_file_contents = config_file_contents.strip() - - if config_file_contents: - for line in config_file_contents.splitlines(): - file_and_test = line.split('|') - if len(file_and_test) == 2: - file, test = file_and_test - if file in files_to_tests: - files_to_tests[file].append(test) - else: - files_to_tests[file] = [test] - - else: - sys.stderr.write('Could not find config file: %s\n' % (config_file,)) - - if type([]) != type(dirs): - dirs = [dirs] - - ret_dirs = [] - for d in dirs: - if '|' in d: - # paths may come from the ide separated by | - ret_dirs.extend(d.split('|')) - else: - ret_dirs.append(d) - - verbosity = int(verbosity) - - if tests: - if verbosity > 4: - sys.stdout.write('--tests provided. 
Ignoring --exclude_files, --exclude_tests and --include_files\n') - exclude_files = exclude_tests = include_files = None - - config = Configuration( - ret_dirs, - verbosity, - include_tests, - tests, - port, - files_to_tests, - jobs, - split_jobs, - coverage_output_dir, - coverage_include, - exclude_files=exclude_files, - exclude_tests=exclude_tests, - include_files=include_files, - django=django, - ) - - if verbosity > 5: - sys.stdout.write(str(config) + '\n') - return config - - -#======================================================================================================================= -# PydevTestRunner -#======================================================================================================================= -class PydevTestRunner(object): - """ finds and runs a file or directory of files as a unit test """ - - __py_extensions = ["*.py", "*.pyw"] - __exclude_files = ["__init__.*"] - - # Just to check that only this attributes will be written to this file - __slots__ = [ - 'verbosity', # Always used - - 'files_to_tests', # If this one is given, the ones below are not used - - 'files_or_dirs', # Files or directories received in the command line - 'include_tests', # The filter used to collect the tests - 'tests', # Strings with the tests to be run - - 'jobs', # Integer with the number of jobs that should be used to run the test cases - 'split_jobs', # String with 'tests' or 'module' (how should the jobs be split) - - 'configuration', - 'coverage', - ] - - def __init__(self, configuration): - self.verbosity = configuration.verbosity - - self.jobs = configuration.jobs - self.split_jobs = configuration.split_jobs - - files_to_tests = configuration.files_to_tests - if files_to_tests: - self.files_to_tests = files_to_tests - self.files_or_dirs = list(files_to_tests.keys()) - self.tests = None - else: - self.files_to_tests = {} - self.files_or_dirs = configuration.files_or_dirs - self.tests = configuration.tests - - self.configuration = configuration - self.__adjust_path() - - def __adjust_path(self): - """ add the current file or directory to the python path """ - path_to_append = None - for n in range(len(self.files_or_dirs)): - dir_name = self.__unixify(self.files_or_dirs[n]) - if os.path.isdir(dir_name): - if not dir_name.endswith("/"): - self.files_or_dirs[n] = dir_name + "/" - path_to_append = os.path.normpath(dir_name) - elif os.path.isfile(dir_name): - path_to_append = os.path.dirname(dir_name) - else: - if not os.path.exists(dir_name): - block_line = '*' * 120 - sys.stderr.write('\n%s\n* PyDev test runner error: %s does not exist.\n%s\n' % (block_line, dir_name, block_line)) - return - msg = ("unknown type. \n%s\nshould be file or a directory.\n" % (dir_name)) - raise RuntimeError(msg) - if path_to_append is not None: - # Add it as the last one (so, first things are resolved against the default dirs and - # if none resolves, then we try a relative import). - sys.path.append(path_to_append) - - def __is_valid_py_file(self, fname): - """ tests that a particular file contains the proper file extension - and is not in the list of files to exclude """ - is_valid_fname = 0 - for invalid_fname in self.__class__.__exclude_files: - is_valid_fname += int(not fnmatch.fnmatch(fname, invalid_fname)) - if_valid_ext = 0 - for ext in self.__class__.__py_extensions: - if_valid_ext += int(fnmatch.fnmatch(fname, ext)) - return is_valid_fname > 0 and if_valid_ext > 0 - - def __unixify(self, s): - """ stupid windows. 
converts the backslash to forwardslash for consistency """ - return os.path.normpath(s).replace(os.sep, "/") - - def __importify(self, s, dir=False): - """ turns directory separators into dots and removes the ".py*" extension - so the string can be used as import statement """ - if not dir: - dirname, fname = os.path.split(s) - - if fname.count('.') > 1: - # if there's a file named xxx.xx.py, it is not a valid module, so, let's not load it... - return - - imp_stmt_pieces = [dirname.replace("\\", "/").replace("/", "."), os.path.splitext(fname)[0]] - - if len(imp_stmt_pieces[0]) == 0: - imp_stmt_pieces = imp_stmt_pieces[1:] - - return ".".join(imp_stmt_pieces) - - else: # handle dir - return s.replace("\\", "/").replace("/", ".") - - def __add_files(self, pyfiles, root, files): - """ if files match, appends them to pyfiles. used by os.path.walk fcn """ - for fname in files: - if self.__is_valid_py_file(fname): - name_without_base_dir = self.__unixify(os.path.join(root, fname)) - pyfiles.append(name_without_base_dir) - - def find_import_files(self): - """ return a list of files to import """ - if self.files_to_tests: - pyfiles = self.files_to_tests.keys() - else: - pyfiles = [] - - for base_dir in self.files_or_dirs: - if os.path.isdir(base_dir): - for root, dirs, files in os.walk(base_dir): - # Note: handling directories that should be excluded from the search because - # they don't have __init__.py - exclude = {} - for d in dirs: - for init in ['__init__.py', '__init__.pyo', '__init__.pyc', '__init__.pyw', '__init__$py.class']: - if os.path.exists(os.path.join(root, d, init).replace('\\', '/')): - break - else: - exclude[d] = 1 - - if exclude: - new = [] - for d in dirs: - if d not in exclude: - new.append(d) - - dirs[:] = new - - self.__add_files(pyfiles, root, files) - - elif os.path.isfile(base_dir): - pyfiles.append(base_dir) - - if self.configuration.exclude_files or self.configuration.include_files: - ret = [] - for f in pyfiles: - add = True - basename = os.path.basename(f) - if self.configuration.include_files: - add = False - - for pat in self.configuration.include_files: - if fnmatch.fnmatchcase(basename, pat): - add = True - break - - if not add: - if self.verbosity > 3: - sys.stdout.write('Skipped file: %s (did not match any include_files pattern: %s)\n' % (f, self.configuration.include_files)) - - elif self.configuration.exclude_files: - for pat in self.configuration.exclude_files: - if fnmatch.fnmatchcase(basename, pat): - if self.verbosity > 3: - sys.stdout.write('Skipped file: %s (matched exclude_files pattern: %s)\n' % (f, pat)) - - elif self.verbosity > 2: - sys.stdout.write('Skipped file: %s\n' % (f,)) - - add = False - break - - if add: - if self.verbosity > 3: - sys.stdout.write('Adding file: %s for test discovery.\n' % (f,)) - ret.append(f) - - pyfiles = ret - - return pyfiles - - def __get_module_from_str(self, modname, print_exception, pyfile): - """ Import the module in the given import path. 
- * Returns the "final" module, so importing "coilib40.subject.visu" - returns the "visu" module, not the "coilib40" as returned by __import__ """ - try: - mod = __import__(modname) - for part in modname.split('.')[1:]: - mod = getattr(mod, part) - return mod - except: - if print_exception: - from _pydev_runfiles import pydev_runfiles_xml_rpc - from _pydevd_bundle import pydevd_io - buf_err = pydevd_io.start_redirect(keep_original_redirection=True, std='stderr') - buf_out = pydevd_io.start_redirect(keep_original_redirection=True, std='stdout') - try: - import traceback;traceback.print_exc() - sys.stderr.write('ERROR: Module: %s could not be imported (file: %s).\n' % (modname, pyfile)) - finally: - pydevd_io.end_redirect('stderr') - pydevd_io.end_redirect('stdout') - - pydev_runfiles_xml_rpc.notifyTest( - 'error', buf_out.getvalue(), buf_err.getvalue(), pyfile, modname, 0) - - return None - - def remove_duplicates_keeping_order(self, seq): - seen = set() - seen_add = seen.add - return [x for x in seq if not (x in seen or seen_add(x))] - - def find_modules_from_files(self, pyfiles): - """ returns a list of modules given a list of files """ - # let's make sure that the paths we want are in the pythonpath... - imports = [(s, self.__importify(s)) for s in pyfiles] - - sys_path = [os.path.normpath(path) for path in sys.path] - sys_path = self.remove_duplicates_keeping_order(sys_path) - - system_paths = [] - for s in sys_path: - system_paths.append(self.__importify(s, True)) - - ret = [] - for pyfile, imp in imports: - if imp is None: - continue # can happen if a file is not a valid module - choices = [] - for s in system_paths: - if imp.startswith(s): - add = imp[len(s) + 1:] - if add: - choices.append(add) - # sys.stdout.write(' ' + add + ' ') - - if not choices: - sys.stdout.write('PYTHONPATH not found for file: %s\n' % imp) - else: - for i, import_str in enumerate(choices): - print_exception = i == len(choices) - 1 - mod = self.__get_module_from_str(import_str, print_exception, pyfile) - if mod is not None: - ret.append((pyfile, mod, import_str)) - break - - return ret - - #=================================================================================================================== - # GetTestCaseNames - #=================================================================================================================== - class GetTestCaseNames: - """Yes, we need a class for that (cannot use outer context on jython 2.1)""" - - def __init__(self, accepted_classes, accepted_methods): - self.accepted_classes = accepted_classes - self.accepted_methods = accepted_methods - - def __call__(self, testCaseClass): - """Return a sorted sequence of method names found within testCaseClass""" - testFnNames = [] - className = testCaseClass.__name__ - - if className in self.accepted_classes: - for attrname in dir(testCaseClass): - # If a class is chosen, we select all the 'test' methods' - if attrname.startswith('test') and hasattr(getattr(testCaseClass, attrname), '__call__'): - testFnNames.append(attrname) - - else: - for attrname in dir(testCaseClass): - # If we have the class+method name, we must do a full check and have an exact match. - if className + '.' 
+ attrname in self.accepted_methods: - if hasattr(getattr(testCaseClass, attrname), '__call__'): - testFnNames.append(attrname) - - # sorted() is not available in jython 2.1 - testFnNames.sort() - return testFnNames - - def _decorate_test_suite(self, suite, pyfile, module_name): - import unittest - if isinstance(suite, unittest.TestSuite): - add = False - suite.__pydev_pyfile__ = pyfile - suite.__pydev_module_name__ = module_name - - for t in suite._tests: - t.__pydev_pyfile__ = pyfile - t.__pydev_module_name__ = module_name - if self._decorate_test_suite(t, pyfile, module_name): - add = True - - return add - - elif isinstance(suite, unittest.TestCase): - return True - - else: - return False - - def find_tests_from_modules(self, file_and_modules_and_module_name): - """ returns the unittests given a list of modules """ - # Use our own suite! - from _pydev_runfiles import pydev_runfiles_unittest - import unittest - unittest.TestLoader.suiteClass = pydev_runfiles_unittest.PydevTestSuite - loader = unittest.TestLoader() - - ret = [] - if self.files_to_tests: - for pyfile, m, module_name in file_and_modules_and_module_name: - accepted_classes = {} - accepted_methods = {} - tests = self.files_to_tests[pyfile] - for t in tests: - accepted_methods[t] = t - - loader.getTestCaseNames = self.GetTestCaseNames(accepted_classes, accepted_methods) - - suite = loader.loadTestsFromModule(m) - if self._decorate_test_suite(suite, pyfile, module_name): - ret.append(suite) - return ret - - if self.tests: - accepted_classes = {} - accepted_methods = {} - - for t in self.tests: - splitted = t.split('.') - if len(splitted) == 1: - accepted_classes[t] = t - - elif len(splitted) == 2: - accepted_methods[t] = t - - loader.getTestCaseNames = self.GetTestCaseNames(accepted_classes, accepted_methods) - - for pyfile, m, module_name in file_and_modules_and_module_name: - suite = loader.loadTestsFromModule(m) - if self._decorate_test_suite(suite, pyfile, module_name): - ret.append(suite) - - return ret - - def filter_tests(self, test_objs, internal_call=False): - """ based on a filter name, only return those tests that have - the test case names that match """ - import unittest - if not internal_call: - if not self.configuration.include_tests and not self.tests and not self.configuration.exclude_tests: - # No need to filter if we have nothing to filter! - return test_objs - - if self.verbosity > 1: - if self.configuration.include_tests: - sys.stdout.write('Tests to include: %s\n' % (self.configuration.include_tests,)) - - if self.tests: - sys.stdout.write('Tests to run: %s\n' % (self.tests,)) - - if self.configuration.exclude_tests: - sys.stdout.write('Tests to exclude: %s\n' % (self.configuration.exclude_tests,)) - - test_suite = [] - for test_obj in test_objs: - - if isinstance(test_obj, unittest.TestSuite): - # Note: keep the suites as they are and just 'fix' the tests (so, don't use the iter_tests). - if test_obj._tests: - test_obj._tests = self.filter_tests(test_obj._tests, True) - if test_obj._tests: # Only add the suite if we still have tests there. 
- test_suite.append(test_obj) - - elif isinstance(test_obj, unittest.TestCase): - try: - testMethodName = test_obj._TestCase__testMethodName - except AttributeError: - # changed in python 2.5 - testMethodName = test_obj._testMethodName - - add = True - if self.configuration.exclude_tests: - for pat in self.configuration.exclude_tests: - if fnmatch.fnmatchcase(testMethodName, pat): - if self.verbosity > 3: - sys.stdout.write('Skipped test: %s (matched exclude_tests pattern: %s)\n' % (testMethodName, pat)) - - elif self.verbosity > 2: - sys.stdout.write('Skipped test: %s\n' % (testMethodName,)) - - add = False - break - - if add: - if self.__match_tests(self.tests, test_obj, testMethodName): - include = True - if self.configuration.include_tests: - include = False - for pat in self.configuration.include_tests: - if fnmatch.fnmatchcase(testMethodName, pat): - include = True - break - if include: - test_suite.append(test_obj) - else: - if self.verbosity > 3: - sys.stdout.write('Skipped test: %s (did not match any include_tests pattern %s)\n' % ( - testMethodName, self.configuration.include_tests,)) - return test_suite - - def iter_tests(self, test_objs): - # Note: not using yield because of Jython 2.1. - import unittest - tests = [] - for test_obj in test_objs: - if isinstance(test_obj, unittest.TestSuite): - tests.extend(self.iter_tests(test_obj._tests)) - - elif isinstance(test_obj, unittest.TestCase): - tests.append(test_obj) - return tests - - def list_test_names(self, test_objs): - names = [] - for tc in self.iter_tests(test_objs): - try: - testMethodName = tc._TestCase__testMethodName - except AttributeError: - # changed in python 2.5 - testMethodName = tc._testMethodName - names.append(testMethodName) - return names - - def __match_tests(self, tests, test_case, test_method_name): - if not tests: - return 1 - - for t in tests: - class_and_method = t.split('.') - if len(class_and_method) == 1: - # only class name - if class_and_method[0] == test_case.__class__.__name__: - return 1 - - elif len(class_and_method) == 2: - if class_and_method[0] == test_case.__class__.__name__ and class_and_method[1] == test_method_name: - return 1 - - return 0 - - def __match(self, filter_list, name): - """ returns whether a test name matches the test filter """ - if filter_list is None: - return 1 - for f in filter_list: - if re.match(f, name): - return 1 - return 0 - - def run_tests(self, handle_coverage=True): - """ runs all tests """ - sys.stdout.write("Finding files... ") - files = self.find_import_files() - if self.verbosity > 3: - sys.stdout.write('%s ... done.\n' % (self.files_or_dirs)) - else: - sys.stdout.write('done.\n') - sys.stdout.write("Importing test modules ... 
") - - if handle_coverage: - coverage_files, coverage = start_coverage_support(self.configuration) - - file_and_modules_and_module_name = self.find_modules_from_files(files) - sys.stdout.write("done.\n") - - all_tests = self.find_tests_from_modules(file_and_modules_and_module_name) - all_tests = self.filter_tests(all_tests) - - from _pydev_runfiles import pydev_runfiles_unittest - test_suite = pydev_runfiles_unittest.PydevTestSuite(all_tests) - from _pydev_runfiles import pydev_runfiles_xml_rpc - pydev_runfiles_xml_rpc.notifyTestsCollected(test_suite.countTestCases()) - - start_time = time.time() - - def run_tests(): - executed_in_parallel = False - if self.jobs > 1: - from _pydev_runfiles import pydev_runfiles_parallel - - # What may happen is that the number of jobs needed is lower than the number of jobs requested - # (e.g.: 2 jobs were requested for running 1 test) -- in which case execute_tests_in_parallel will - # return False and won't run any tests. - executed_in_parallel = pydev_runfiles_parallel.execute_tests_in_parallel( - all_tests, self.jobs, self.split_jobs, self.verbosity, coverage_files, self.configuration.coverage_include) - - if not executed_in_parallel: - # If in coverage, we don't need to pass anything here (coverage is already enabled for this execution). - runner = pydev_runfiles_unittest.PydevTextTestRunner(stream=sys.stdout, descriptions=1, verbosity=self.verbosity) - sys.stdout.write('\n') - runner.run(test_suite) - - if self.configuration.django: - get_django_test_suite_runner()(run_tests).run_tests([]) - else: - run_tests() - - if handle_coverage: - coverage.stop() - coverage.save() - - total_time = 'Finished in: %.2f secs.' % (time.time() - start_time,) - pydev_runfiles_xml_rpc.notifyTestRunFinished(total_time) - - -DJANGO_TEST_SUITE_RUNNER = None - - -def get_django_test_suite_runner(): - global DJANGO_TEST_SUITE_RUNNER - if DJANGO_TEST_SUITE_RUNNER: - return DJANGO_TEST_SUITE_RUNNER - try: - # django >= 1.8 - import django - from django.test.runner import DiscoverRunner - - class MyDjangoTestSuiteRunner(DiscoverRunner): - - def __init__(self, on_run_suite): - django.setup() - DiscoverRunner.__init__(self) - self.on_run_suite = on_run_suite - - def build_suite(self, *args, **kwargs): - pass - - def suite_result(self, *args, **kwargs): - pass - - def run_suite(self, *args, **kwargs): - self.on_run_suite() - - except: - # django < 1.8 - try: - from django.test.simple import DjangoTestSuiteRunner - except: - - class DjangoTestSuiteRunner: - - def __init__(self): - pass - - def run_tests(self, *args, **kwargs): - raise AssertionError("Unable to run suite with django.test.runner.DiscoverRunner nor django.test.simple.DjangoTestSuiteRunner because it couldn't be imported.") - - class MyDjangoTestSuiteRunner(DjangoTestSuiteRunner): - - def __init__(self, on_run_suite): - DjangoTestSuiteRunner.__init__(self) - self.on_run_suite = on_run_suite - - def build_suite(self, *args, **kwargs): - pass - - def suite_result(self, *args, **kwargs): - pass - - def run_suite(self, *args, **kwargs): - self.on_run_suite() - - DJANGO_TEST_SUITE_RUNNER = MyDjangoTestSuiteRunner - return DJANGO_TEST_SUITE_RUNNER - - -#======================================================================================================================= -# main -#======================================================================================================================= -def main(configuration): - PydevTestRunner(configuration).run_tests() diff --git 
a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/cnn/bricks/generalized_attention.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/cnn/bricks/generalized_attention.py deleted file mode 100644 index 988d9adf2f289ef223bd1c680a5ae1d3387f0269..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/cnn/bricks/generalized_attention.py +++ /dev/null @@ -1,412 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..utils import kaiming_init -from .registry import PLUGIN_LAYERS - - -@PLUGIN_LAYERS.register_module() -class GeneralizedAttention(nn.Module): - """GeneralizedAttention module. - - See 'An Empirical Study of Spatial Attention Mechanisms in Deep Networks' - (https://arxiv.org/abs/1711.07971) for details. - - Args: - in_channels (int): Channels of the input feature map. - spatial_range (int): The spatial range. -1 indicates no spatial range - constraint. Default: -1. - num_heads (int): The head number of empirical_attention module. - Default: 9. - position_embedding_dim (int): The position embedding dimension. - Default: -1. - position_magnitude (int): A multiplier acting on coord difference. - Default: 1. - kv_stride (int): The feature stride acting on key/value feature map. - Default: 2. - q_stride (int): The feature stride acting on query feature map. - Default: 1. - attention_type (str): A binary indicator string for indicating which - items in generalized empirical_attention module are used. - Default: '1111'. - - - '1000' indicates 'query and key content' (appr - appr) item, - - '0100' indicates 'query content and relative position' - (appr - position) item, - - '0010' indicates 'key content only' (bias - appr) item, - - '0001' indicates 'relative position only' (bias - position) item. 
- """ - - _abbr_ = 'gen_attention_block' - - def __init__(self, - in_channels, - spatial_range=-1, - num_heads=9, - position_embedding_dim=-1, - position_magnitude=1, - kv_stride=2, - q_stride=1, - attention_type='1111'): - - super(GeneralizedAttention, self).__init__() - - # hard range means local range for non-local operation - self.position_embedding_dim = ( - position_embedding_dim - if position_embedding_dim > 0 else in_channels) - - self.position_magnitude = position_magnitude - self.num_heads = num_heads - self.in_channels = in_channels - self.spatial_range = spatial_range - self.kv_stride = kv_stride - self.q_stride = q_stride - self.attention_type = [bool(int(_)) for _ in attention_type] - self.qk_embed_dim = in_channels // num_heads - out_c = self.qk_embed_dim * num_heads - - if self.attention_type[0] or self.attention_type[1]: - self.query_conv = nn.Conv2d( - in_channels=in_channels, - out_channels=out_c, - kernel_size=1, - bias=False) - self.query_conv.kaiming_init = True - - if self.attention_type[0] or self.attention_type[2]: - self.key_conv = nn.Conv2d( - in_channels=in_channels, - out_channels=out_c, - kernel_size=1, - bias=False) - self.key_conv.kaiming_init = True - - self.v_dim = in_channels // num_heads - self.value_conv = nn.Conv2d( - in_channels=in_channels, - out_channels=self.v_dim * num_heads, - kernel_size=1, - bias=False) - self.value_conv.kaiming_init = True - - if self.attention_type[1] or self.attention_type[3]: - self.appr_geom_fc_x = nn.Linear( - self.position_embedding_dim // 2, out_c, bias=False) - self.appr_geom_fc_x.kaiming_init = True - - self.appr_geom_fc_y = nn.Linear( - self.position_embedding_dim // 2, out_c, bias=False) - self.appr_geom_fc_y.kaiming_init = True - - if self.attention_type[2]: - stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2) - appr_bias_value = -2 * stdv * torch.rand(out_c) + stdv - self.appr_bias = nn.Parameter(appr_bias_value) - - if self.attention_type[3]: - stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2) - geom_bias_value = -2 * stdv * torch.rand(out_c) + stdv - self.geom_bias = nn.Parameter(geom_bias_value) - - self.proj_conv = nn.Conv2d( - in_channels=self.v_dim * num_heads, - out_channels=in_channels, - kernel_size=1, - bias=True) - self.proj_conv.kaiming_init = True - self.gamma = nn.Parameter(torch.zeros(1)) - - if self.spatial_range >= 0: - # only works when non local is after 3*3 conv - if in_channels == 256: - max_len = 84 - elif in_channels == 512: - max_len = 42 - - max_len_kv = int((max_len - 1.0) / self.kv_stride + 1) - local_constraint_map = np.ones( - (max_len, max_len, max_len_kv, max_len_kv), dtype=np.int) - for iy in range(max_len): - for ix in range(max_len): - local_constraint_map[ - iy, ix, - max((iy - self.spatial_range) // - self.kv_stride, 0):min((iy + self.spatial_range + - 1) // self.kv_stride + - 1, max_len), - max((ix - self.spatial_range) // - self.kv_stride, 0):min((ix + self.spatial_range + - 1) // self.kv_stride + - 1, max_len)] = 0 - - self.local_constraint_map = nn.Parameter( - torch.from_numpy(local_constraint_map).byte(), - requires_grad=False) - - if self.q_stride > 1: - self.q_downsample = nn.AvgPool2d( - kernel_size=1, stride=self.q_stride) - else: - self.q_downsample = None - - if self.kv_stride > 1: - self.kv_downsample = nn.AvgPool2d( - kernel_size=1, stride=self.kv_stride) - else: - self.kv_downsample = None - - self.init_weights() - - def get_position_embedding(self, - h, - w, - h_kv, - w_kv, - q_stride, - kv_stride, - device, - dtype, - feat_dim, - wave_length=1000): - # the default type 
of Tensor is float32, leading to type mismatch - # in fp16 mode. Cast it to support fp16 mode. - h_idxs = torch.linspace(0, h - 1, h).to(device=device, dtype=dtype) - h_idxs = h_idxs.view((h, 1)) * q_stride - - w_idxs = torch.linspace(0, w - 1, w).to(device=device, dtype=dtype) - w_idxs = w_idxs.view((w, 1)) * q_stride - - h_kv_idxs = torch.linspace(0, h_kv - 1, h_kv).to( - device=device, dtype=dtype) - h_kv_idxs = h_kv_idxs.view((h_kv, 1)) * kv_stride - - w_kv_idxs = torch.linspace(0, w_kv - 1, w_kv).to( - device=device, dtype=dtype) - w_kv_idxs = w_kv_idxs.view((w_kv, 1)) * kv_stride - - # (h, h_kv, 1) - h_diff = h_idxs.unsqueeze(1) - h_kv_idxs.unsqueeze(0) - h_diff *= self.position_magnitude - - # (w, w_kv, 1) - w_diff = w_idxs.unsqueeze(1) - w_kv_idxs.unsqueeze(0) - w_diff *= self.position_magnitude - - feat_range = torch.arange(0, feat_dim / 4).to( - device=device, dtype=dtype) - - dim_mat = torch.Tensor([wave_length]).to(device=device, dtype=dtype) - dim_mat = dim_mat**((4. / feat_dim) * feat_range) - dim_mat = dim_mat.view((1, 1, -1)) - - embedding_x = torch.cat( - ((w_diff / dim_mat).sin(), (w_diff / dim_mat).cos()), dim=2) - - embedding_y = torch.cat( - ((h_diff / dim_mat).sin(), (h_diff / dim_mat).cos()), dim=2) - - return embedding_x, embedding_y - - def forward(self, x_input): - num_heads = self.num_heads - - # use empirical_attention - if self.q_downsample is not None: - x_q = self.q_downsample(x_input) - else: - x_q = x_input - n, _, h, w = x_q.shape - - if self.kv_downsample is not None: - x_kv = self.kv_downsample(x_input) - else: - x_kv = x_input - _, _, h_kv, w_kv = x_kv.shape - - if self.attention_type[0] or self.attention_type[1]: - proj_query = self.query_conv(x_q).view( - (n, num_heads, self.qk_embed_dim, h * w)) - proj_query = proj_query.permute(0, 1, 3, 2) - - if self.attention_type[0] or self.attention_type[2]: - proj_key = self.key_conv(x_kv).view( - (n, num_heads, self.qk_embed_dim, h_kv * w_kv)) - - if self.attention_type[1] or self.attention_type[3]: - position_embed_x, position_embed_y = self.get_position_embedding( - h, w, h_kv, w_kv, self.q_stride, self.kv_stride, - x_input.device, x_input.dtype, self.position_embedding_dim) - # (n, num_heads, w, w_kv, dim) - position_feat_x = self.appr_geom_fc_x(position_embed_x).\ - view(1, w, w_kv, num_heads, self.qk_embed_dim).\ - permute(0, 3, 1, 2, 4).\ - repeat(n, 1, 1, 1, 1) - - # (n, num_heads, h, h_kv, dim) - position_feat_y = self.appr_geom_fc_y(position_embed_y).\ - view(1, h, h_kv, num_heads, self.qk_embed_dim).\ - permute(0, 3, 1, 2, 4).\ - repeat(n, 1, 1, 1, 1) - - position_feat_x /= math.sqrt(2) - position_feat_y /= math.sqrt(2) - - # accelerate for saliency only - if (np.sum(self.attention_type) == 1) and self.attention_type[2]: - appr_bias = self.appr_bias.\ - view(1, num_heads, 1, self.qk_embed_dim).\ - repeat(n, 1, 1, 1) - - energy = torch.matmul(appr_bias, proj_key).\ - view(n, num_heads, 1, h_kv * w_kv) - - h = 1 - w = 1 - else: - # (n, num_heads, h*w, h_kv*w_kv), query before key, 540mb for - if not self.attention_type[0]: - energy = torch.zeros( - n, - num_heads, - h, - w, - h_kv, - w_kv, - dtype=x_input.dtype, - device=x_input.device) - - # attention_type[0]: appr - appr - # attention_type[1]: appr - position - # attention_type[2]: bias - appr - # attention_type[3]: bias - position - if self.attention_type[0] or self.attention_type[2]: - if self.attention_type[0] and self.attention_type[2]: - appr_bias = self.appr_bias.\ - view(1, num_heads, 1, self.qk_embed_dim) - energy = torch.matmul(proj_query + 
appr_bias, proj_key).\ - view(n, num_heads, h, w, h_kv, w_kv) - - elif self.attention_type[0]: - energy = torch.matmul(proj_query, proj_key).\ - view(n, num_heads, h, w, h_kv, w_kv) - - elif self.attention_type[2]: - appr_bias = self.appr_bias.\ - view(1, num_heads, 1, self.qk_embed_dim).\ - repeat(n, 1, 1, 1) - - energy += torch.matmul(appr_bias, proj_key).\ - view(n, num_heads, 1, 1, h_kv, w_kv) - - if self.attention_type[1] or self.attention_type[3]: - if self.attention_type[1] and self.attention_type[3]: - geom_bias = self.geom_bias.\ - view(1, num_heads, 1, self.qk_embed_dim) - - proj_query_reshape = (proj_query + geom_bias).\ - view(n, num_heads, h, w, self.qk_embed_dim) - - energy_x = torch.matmul( - proj_query_reshape.permute(0, 1, 3, 2, 4), - position_feat_x.permute(0, 1, 2, 4, 3)) - energy_x = energy_x.\ - permute(0, 1, 3, 2, 4).unsqueeze(4) - - energy_y = torch.matmul( - proj_query_reshape, - position_feat_y.permute(0, 1, 2, 4, 3)) - energy_y = energy_y.unsqueeze(5) - - energy += energy_x + energy_y - - elif self.attention_type[1]: - proj_query_reshape = proj_query.\ - view(n, num_heads, h, w, self.qk_embed_dim) - proj_query_reshape = proj_query_reshape.\ - permute(0, 1, 3, 2, 4) - position_feat_x_reshape = position_feat_x.\ - permute(0, 1, 2, 4, 3) - position_feat_y_reshape = position_feat_y.\ - permute(0, 1, 2, 4, 3) - - energy_x = torch.matmul(proj_query_reshape, - position_feat_x_reshape) - energy_x = energy_x.permute(0, 1, 3, 2, 4).unsqueeze(4) - - energy_y = torch.matmul(proj_query_reshape, - position_feat_y_reshape) - energy_y = energy_y.unsqueeze(5) - - energy += energy_x + energy_y - - elif self.attention_type[3]: - geom_bias = self.geom_bias.\ - view(1, num_heads, self.qk_embed_dim, 1).\ - repeat(n, 1, 1, 1) - - position_feat_x_reshape = position_feat_x.\ - view(n, num_heads, w*w_kv, self.qk_embed_dim) - - position_feat_y_reshape = position_feat_y.\ - view(n, num_heads, h * h_kv, self.qk_embed_dim) - - energy_x = torch.matmul(position_feat_x_reshape, geom_bias) - energy_x = energy_x.view(n, num_heads, 1, w, 1, w_kv) - - energy_y = torch.matmul(position_feat_y_reshape, geom_bias) - energy_y = energy_y.view(n, num_heads, h, 1, h_kv, 1) - - energy += energy_x + energy_y - - energy = energy.view(n, num_heads, h * w, h_kv * w_kv) - - if self.spatial_range >= 0: - cur_local_constraint_map = \ - self.local_constraint_map[:h, :w, :h_kv, :w_kv].\ - contiguous().\ - view(1, 1, h*w, h_kv*w_kv) - - energy = energy.masked_fill_(cur_local_constraint_map, - float('-inf')) - - attention = F.softmax(energy, 3) - - proj_value = self.value_conv(x_kv) - proj_value_reshape = proj_value.\ - view((n, num_heads, self.v_dim, h_kv * w_kv)).\ - permute(0, 1, 3, 2) - - out = torch.matmul(attention, proj_value_reshape).\ - permute(0, 1, 3, 2).\ - contiguous().\ - view(n, self.v_dim * self.num_heads, h, w) - - out = self.proj_conv(out) - - # output is downsampled, upsample back to input size - if self.q_downsample is not None: - out = F.interpolate( - out, - size=x_input.shape[2:], - mode='bilinear', - align_corners=False) - - out = self.gamma * out + x_input - return out - - def init_weights(self): - for m in self.modules(): - if hasattr(m, 'kaiming_init') and m.kaiming_init: - kaiming_init( - m, - mode='fan_in', - nonlinearity='leaky_relu', - bias=0, - distribution='uniform', - a=1) diff --git a/spaces/Surn/UnlimitedMusicGen/audiocraft/data/zip.py b/spaces/Surn/UnlimitedMusicGen/audiocraft/data/zip.py deleted file mode 100644 index 
1f1154231da321dd38d151ff285dbcff5e38a6e0..0000000000000000000000000000000000000000 --- a/spaces/Surn/UnlimitedMusicGen/audiocraft/data/zip.py +++ /dev/null @@ -1,74 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing -import zipfile - -from dataclasses import dataclass -from functools import lru_cache -from typing_extensions import Literal - - -DEFAULT_SIZE = 32 -MODE = Literal['r', 'w', 'x', 'a'] - - -@dataclass(order=True) -class PathInZip: - """Class for holding a path of file within a zip file. - - Args: - path: The convention is : - Let's assume there is a zip file /some/location/foo.zip - and inside of it is a json file located at /data/file1.json, - Then we expect path = "/some/location/foo.zip:/data/file1.json" - """ - - INFO_PATH_SEP = ':' - zip_path: str - file_path: str - - def __init__(self, path: str) -> None: - split_path = path.split(self.INFO_PATH_SEP) - assert len(split_path) == 2 - self.zip_path, self.file_path = split_path - - @classmethod - def from_paths(cls, zip_path: str, file_path: str): - return cls(zip_path + cls.INFO_PATH_SEP + file_path) - - def __str__(self) -> str: - return self.zip_path + self.INFO_PATH_SEP + self.file_path - - -def _open_zip(path: str, mode: MODE = 'r'): - return zipfile.ZipFile(path, mode) - - -_cached_open_zip = lru_cache(DEFAULT_SIZE)(_open_zip) - - -def set_zip_cache_size(max_size: int): - """Sets the maximal LRU caching for zip file opening. - - Args: - max_size: the maximal LRU cache. - """ - global _cached_open_zip - _cached_open_zip = lru_cache(max_size)(_open_zip) - - -def open_file_in_zip(path_in_zip: PathInZip, mode: str = 'r') -> typing.IO: - """Opens a file stored inside a zip and returns a file-like object. - - Args: - path_in_zip: A PathInZip object representing the file to return a file-like object of. - mode: The mode in which to open the file with. - Returns: - A file-like object for PathInZip. 
- """ - zf = _cached_open_zip(path_in_zip.zip_path) - return zf.open(path_in_zip.file_path) diff --git a/spaces/TabPFN/TabPFNEvaluation/TabPFN/priors/fast_gp.py b/spaces/TabPFN/TabPFNEvaluation/TabPFN/priors/fast_gp.py deleted file mode 100644 index 1c4b061c547d9ec5c17135b9f58562bbb4a54da1..0000000000000000000000000000000000000000 --- a/spaces/TabPFN/TabPFNEvaluation/TabPFN/priors/fast_gp.py +++ /dev/null @@ -1,144 +0,0 @@ -import time - -import torch -from torch import nn -import gpytorch - -from .utils import get_batch_to_dataloader -from utils import default_device - - -# We will use the simplest form of GP model, exact inference -class ExactGPModel(gpytorch.models.ExactGP): - def __init__(self, train_x, train_y, likelihood): - super(ExactGPModel, self).__init__(train_x, train_y, likelihood) - self.mean_module = gpytorch.means.ConstantMean() - self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) - - def forward(self, x): - mean_x = self.mean_module(x) - covar_x = self.covar_module(x) - return gpytorch.distributions.MultivariateNormal(mean_x, covar_x) - - -def get_model(x, y, hyperparameters): - likelihood = gpytorch.likelihoods.GaussianLikelihood(noise_constraint=gpytorch.constraints.GreaterThan(1.e-9)) - model = ExactGPModel(x, y, likelihood) - model.likelihood.noise = torch.ones_like(model.likelihood.noise) * hyperparameters["noise"] - model.covar_module.outputscale = torch.ones_like(model.covar_module.outputscale) * hyperparameters["outputscale"] - model.covar_module.base_kernel.lengthscale = torch.ones_like(model.covar_module.base_kernel.lengthscale) * \ - hyperparameters["lengthscale"] - return model, likelihood - - -@torch.no_grad() -def get_batch(batch_size, seq_len, num_features, device=default_device, hyperparameters=None, - equidistant_x=False, fix_x=None, **kwargs): - if isinstance(hyperparameters, (tuple, list)): - hyperparameters = {"noise": hyperparameters[0] - , "outputscale": hyperparameters[1] - , "lengthscale": hyperparameters[2] - , "is_binary_classification": hyperparameters[3] - # , "num_features_used": hyperparameters[4] - , "normalize_by_used_features": hyperparameters[5] - , "order_y": hyperparameters[6] - , "sampling": hyperparameters[7] - } - elif hyperparameters is None: - hyperparameters = {"noise": .1, "outputscale": .1, "lengthscale": .1} - - if 'verbose' in hyperparameters and hyperparameters['verbose']: - print({"noise": hyperparameters['noise'], "outputscale": hyperparameters['outputscale'] - , "lengthscale": hyperparameters['lengthscale'], 'batch_size': batch_size, 'sampling': hyperparameters['sampling']}) - - # hyperparameters = {k: hyperparameters[k]() if callable(hyperparameters[k]) else hyperparameters[k] for k in - # hyperparameters.keys()} - assert not (equidistant_x and (fix_x is not None)) - - with gpytorch.settings.fast_computations(*hyperparameters.get('fast_computations', (True, True, True))): - if equidistant_x: - assert num_features == 1 - x = torch.linspace(0, 1., seq_len).unsqueeze(0).repeat(batch_size, 1).unsqueeze(-1) - elif fix_x is not None: - assert fix_x.shape == (seq_len, num_features) - x = fix_x.unsqueeze(0).repeat(batch_size, 1, 1).to(device) - else: - if hyperparameters.get('sampling','uniform') == 'uniform': - x = torch.rand(batch_size, seq_len, num_features, device=device) - else: - x = torch.randn(batch_size, seq_len, num_features, device=device) - model, likelihood = get_model(x, torch.Tensor(), hyperparameters) - model.to(device) - # trained_model = ExactGPModel(train_x, train_y, 
likelihood).cuda() - # trained_model.eval() - is_fitted = False - while not is_fitted: - try: - with gpytorch.settings.prior_mode(True): - model, likelihood = get_model(x, torch.Tensor(), hyperparameters) - model.to(device) - - d = model(x) - d = likelihood(d) - sample = d.sample().transpose(0, 1) - is_fitted = True - except RuntimeError: # This can happen when torch.linalg.eigh fails. Restart with new init resolves this. - print('GP Fitting unsuccessful, retrying.. ') - print(x) - print(hyperparameters) - - if bool(torch.any(torch.isnan(x)).detach().cpu().numpy()): - print({"noise": hyperparameters['noise'], "outputscale": hyperparameters['outputscale'] - , "lengthscale": hyperparameters['lengthscale'], 'batch_size': batch_size}) - - # TODO: Multi output - return x.transpose(0, 1), sample, sample # x.shape = (T,B,H) - -DataLoader = get_batch_to_dataloader(get_batch) -DataLoader.num_outputs = 1 - -def get_model_on_device(x,y,hyperparameters,device): - model, likelihood = get_model(x, y, hyperparameters) - model.to(device) - return model, likelihood - - -@torch.no_grad() -def evaluate(x, y, y_non_noisy, use_mse=False, hyperparameters={}, get_model_on_device=get_model_on_device, device=default_device, step_size=1, start_pos=0): - start_time = time.time() - losses_after_t = [.0] if start_pos == 0 else [] - all_losses_after_t = [] - - with gpytorch.settings.fast_computations(*hyperparameters.get('fast_computations',(True,True,True))), gpytorch.settings.fast_pred_var(False): - for t in range(max(start_pos, 1), len(x), step_size): - loss_sum = 0. - model, likelihood = get_model_on_device(x[:t].transpose(0, 1), y[:t].transpose(0, 1), hyperparameters, device) - - - model.eval() - # print([t.shape for t in model.train_inputs]) - # print(x[:t].transpose(0,1).shape, x[t].unsqueeze(1).shape, y[:t].transpose(0,1).shape) - f = model(x[t].unsqueeze(1)) - l = likelihood(f) - means = l.mean.squeeze() - varis = l.covariance_matrix.squeeze() - # print(l.variance.squeeze(), l.mean.squeeze(), y[t]) - - assert len(means.shape) == len(varis.shape) == 1 - assert len(means) == len(varis) == x.shape[1] - - if use_mse: - c = nn.MSELoss(reduction='none') - ls = c(means, y[t]) - else: - ls = -l.log_prob(y[t].unsqueeze(1)) - - losses_after_t.append(ls.mean()) - all_losses_after_t.append(ls.flatten()) - return torch.stack(all_losses_after_t).to('cpu'), torch.tensor(losses_after_t).to('cpu'), time.time() - start_time - -if __name__ == '__main__': - hps = (.1,.1,.1) - for redo_idx in range(1): - print( - evaluate(*get_batch(1000, 10, hyperparameters=hps, num_features=10), use_mse=False, hyperparameters=hps)) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/requirements.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/requirements.py deleted file mode 100644 index f34bfa85c8029f514f81f829fc24020e78b3786e..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/requirements.py +++ /dev/null @@ -1,95 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
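As an illustration of the Requirement parser defined just below, a minimal usage sketch (assuming the standalone packaging distribution is importable; the vendored copy here exposes the same interface, and the requirement string is hypothetical):

from packaging.requirements import InvalidRequirement, Requirement

# Hypothetical PEP 508 requirement string, used only for illustration.
req = Requirement('example-pkg[cli]>=1.0,<2.0; python_version >= "3.8"')
print(req.name, req.extras, req.specifier, req.marker)

try:
    Requirement("not a valid requirement !!")
except InvalidRequirement as exc:
    print("rejected:", exc)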
- -import urllib.parse -from typing import Any, List, Optional, Set - -from ._parser import parse_requirement as _parse_requirement -from ._tokenizer import ParserSyntaxError -from .markers import Marker, _normalize_extra_values -from .specifiers import SpecifierSet - - -class InvalidRequirement(ValueError): - """ - An invalid requirement was found, users should refer to PEP 508. - """ - - -class Requirement: - """Parse a requirement. - - Parse a given requirement string into its parts, such as name, specifier, - URL, and extras. Raises InvalidRequirement on a badly-formed requirement - string. - """ - - # TODO: Can we test whether something is contained within a requirement? - # If so how do we do that? Do we need to test against the _name_ of - # the thing as well as the version? What about the markers? - # TODO: Can we normalize the name and extra name? - - def __init__(self, requirement_string: str) -> None: - try: - parsed = _parse_requirement(requirement_string) - except ParserSyntaxError as e: - raise InvalidRequirement(str(e)) from e - - self.name: str = parsed.name - if parsed.url: - parsed_url = urllib.parse.urlparse(parsed.url) - if parsed_url.scheme == "file": - if urllib.parse.urlunparse(parsed_url) != parsed.url: - raise InvalidRequirement("Invalid URL given") - elif not (parsed_url.scheme and parsed_url.netloc) or ( - not parsed_url.scheme and not parsed_url.netloc - ): - raise InvalidRequirement(f"Invalid URL: {parsed.url}") - self.url: Optional[str] = parsed.url - else: - self.url = None - self.extras: Set[str] = set(parsed.extras if parsed.extras else []) - self.specifier: SpecifierSet = SpecifierSet(parsed.specifier) - self.marker: Optional[Marker] = None - if parsed.marker is not None: - self.marker = Marker.__new__(Marker) - self.marker._markers = _normalize_extra_values(parsed.marker) - - def __str__(self) -> str: - parts: List[str] = [self.name] - - if self.extras: - formatted_extras = ",".join(sorted(self.extras)) - parts.append(f"[{formatted_extras}]") - - if self.specifier: - parts.append(str(self.specifier)) - - if self.url: - parts.append(f"@ {self.url}") - if self.marker: - parts.append(" ") - - if self.marker: - parts.append(f"; {self.marker}") - - return "".join(parts) - - def __repr__(self) -> str: - return f"" - - def __hash__(self) -> int: - return hash((self.__class__.__name__, str(self))) - - def __eq__(self, other: Any) -> bool: - if not isinstance(other, Requirement): - return NotImplemented - - return ( - self.name == other.name - and self.extras == other.extras - and self.specifier == other.specifier - and self.url == other.url - and self.marker == other.marker - ) diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/video_visualizer.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/video_visualizer.py deleted file mode 100644 index 9d8a366d3ca78c1824eff62f6fe422542075f055..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/video_visualizer.py +++ /dev/null @@ -1,252 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -import pycocotools.mask as mask_util - -from detectron2.utils.visualizer import ( - ColorMode, - Visualizer, - _create_text_labels, - _PanopticPrediction, -) - -from .colormap import random_color - - -class _DetectedInstance: - """ - Used to store data about detected objects in video frame, - in order to transfer color to objects in the future frames. 
- - Attributes: - label (int): - bbox (tuple[float]): - mask_rle (dict): - color (tuple[float]): RGB colors in range (0, 1) - ttl (int): time-to-live for the instance. For example, if ttl=2, - the instance color can be transferred to objects in the next two frames. - """ - - __slots__ = ["label", "bbox", "mask_rle", "color", "ttl"] - - def __init__(self, label, bbox, mask_rle, color, ttl): - self.label = label - self.bbox = bbox - self.mask_rle = mask_rle - self.color = color - self.ttl = ttl - - -class VideoVisualizer: - def __init__(self, metadata, instance_mode=ColorMode.IMAGE): - """ - Args: - metadata (MetadataCatalog): image metadata. - """ - self.metadata = metadata - self._old_instances = [] - assert instance_mode in [ - ColorMode.IMAGE, - ColorMode.IMAGE_BW, - ], "Other mode not supported yet." - self._instance_mode = instance_mode - - def draw_instance_predictions(self, frame, predictions): - """ - Draw instance-level prediction results on an image. - - Args: - frame (ndarray): an RGB image of shape (H, W, C), in the range [0, 255]. - predictions (Instances): the output of an instance detection/segmentation - model. Following fields will be used to draw: - "pred_boxes", "pred_classes", "scores", "pred_masks" (or "pred_masks_rle"). - - Returns: - output (VisImage): image object with visualizations. - """ - frame_visualizer = Visualizer(frame, self.metadata) - num_instances = len(predictions) - if num_instances == 0: - return frame_visualizer.output - - boxes = predictions.pred_boxes.tensor.numpy() if predictions.has("pred_boxes") else None - scores = predictions.scores if predictions.has("scores") else None - classes = predictions.pred_classes.numpy() if predictions.has("pred_classes") else None - keypoints = predictions.pred_keypoints if predictions.has("pred_keypoints") else None - colors = predictions.COLOR if predictions.has("COLOR") else [None] * len(predictions) - durations = predictions.ID_duration if predictions.has("ID_duration") else None - duration_threshold = self.metadata.get("duration_threshold", 0) - visibilities = None if durations is None else [x > duration_threshold for x in durations] - - if predictions.has("pred_masks"): - masks = predictions.pred_masks - # mask IOU is not yet enabled - # masks_rles = mask_util.encode(np.asarray(masks.permute(1, 2, 0), order="F")) - # assert len(masks_rles) == num_instances - else: - masks = None - - detected = [ - _DetectedInstance(classes[i], boxes[i], mask_rle=None, color=colors[i], ttl=8) - for i in range(num_instances) - ] - if not predictions.has("COLOR"): - colors = self._assign_colors(detected) - - labels = _create_text_labels(classes, scores, self.metadata.get("thing_classes", None)) - - if self._instance_mode == ColorMode.IMAGE_BW: - # any() returns uint8 tensor - frame_visualizer.output.reset_image( - frame_visualizer._create_grayscale_image( - (masks.any(dim=0) > 0).numpy() if masks is not None else None - ) - ) - alpha = 0.3 - else: - alpha = 0.5 - - labels = ( - None - if labels is None - else [y[0] for y in filter(lambda x: x[1], zip(labels, visibilities))] - ) # noqa - assigned_colors = ( - None - if colors is None - else [y[0] for y in filter(lambda x: x[1], zip(colors, visibilities))] - ) # noqa - frame_visualizer.overlay_instances( - boxes=None if masks is not None else boxes[visibilities], # boxes are a bit distracting - masks=None if masks is None else masks[visibilities], - labels=labels, - keypoints=None if keypoints is None else keypoints[visibilities], - assigned_colors=assigned_colors, - alpha=alpha, - 
) - - return frame_visualizer.output - - def draw_sem_seg(self, frame, sem_seg, area_threshold=None): - """ - Args: - sem_seg (ndarray or Tensor): semantic segmentation of shape (H, W), - each value is the integer label. - area_threshold (Optional[int]): only draw segmentations larger than the threshold - """ - # don't need to do anything special - frame_visualizer = Visualizer(frame, self.metadata) - frame_visualizer.draw_sem_seg(sem_seg, area_threshold=None) - return frame_visualizer.output - - def draw_panoptic_seg_predictions( - self, frame, panoptic_seg, segments_info, area_threshold=None, alpha=0.5 - ): - frame_visualizer = Visualizer(frame, self.metadata) - pred = _PanopticPrediction(panoptic_seg, segments_info, self.metadata) - - if self._instance_mode == ColorMode.IMAGE_BW: - frame_visualizer.output.reset_image( - frame_visualizer._create_grayscale_image(pred.non_empty_mask()) - ) - - # draw mask for all semantic segments first i.e. "stuff" - for mask, sinfo in pred.semantic_masks(): - category_idx = sinfo["category_id"] - try: - mask_color = [x / 255 for x in self.metadata.stuff_colors[category_idx]] - except AttributeError: - mask_color = None - - frame_visualizer.draw_binary_mask( - mask, - color=mask_color, - text=self.metadata.stuff_classes[category_idx], - alpha=alpha, - area_threshold=area_threshold, - ) - - all_instances = list(pred.instance_masks()) - if len(all_instances) == 0: - return frame_visualizer.output - # draw mask for all instances second - masks, sinfo = list(zip(*all_instances)) - num_instances = len(masks) - masks_rles = mask_util.encode( - np.asarray(np.asarray(masks).transpose(1, 2, 0), dtype=np.uint8, order="F") - ) - assert len(masks_rles) == num_instances - - category_ids = [x["category_id"] for x in sinfo] - detected = [ - _DetectedInstance(category_ids[i], bbox=None, mask_rle=masks_rles[i], color=None, ttl=8) - for i in range(num_instances) - ] - colors = self._assign_colors(detected) - labels = [self.metadata.thing_classes[k] for k in category_ids] - - frame_visualizer.overlay_instances( - boxes=None, - masks=masks, - labels=labels, - keypoints=None, - assigned_colors=colors, - alpha=alpha, - ) - return frame_visualizer.output - - def _assign_colors(self, instances): - """ - Naive tracking heuristics to assign same color to the same instance, - will update the internal state of tracked instances. - - Returns: - list[tuple[float]]: list of colors. 
- """ - - # Compute iou with either boxes or masks: - is_crowd = np.zeros((len(instances),), dtype=np.bool) - if instances[0].bbox is None: - assert instances[0].mask_rle is not None - # use mask iou only when box iou is None - # because box seems good enough - rles_old = [x.mask_rle for x in self._old_instances] - rles_new = [x.mask_rle for x in instances] - ious = mask_util.iou(rles_old, rles_new, is_crowd) - threshold = 0.5 - else: - boxes_old = [x.bbox for x in self._old_instances] - boxes_new = [x.bbox for x in instances] - ious = mask_util.iou(boxes_old, boxes_new, is_crowd) - threshold = 0.6 - if len(ious) == 0: - ious = np.zeros((len(self._old_instances), len(instances)), dtype="float32") - - # Only allow matching instances of the same label: - for old_idx, old in enumerate(self._old_instances): - for new_idx, new in enumerate(instances): - if old.label != new.label: - ious[old_idx, new_idx] = 0 - - matched_new_per_old = np.asarray(ious).argmax(axis=1) - max_iou_per_old = np.asarray(ious).max(axis=1) - - # Try to find match for each old instance: - extra_instances = [] - for idx, inst in enumerate(self._old_instances): - if max_iou_per_old[idx] > threshold: - newidx = matched_new_per_old[idx] - if instances[newidx].color is None: - instances[newidx].color = inst.color - continue - # If an old instance does not match any new instances, - # keep it for the next frame in case it is just missed by the detector - inst.ttl -= 1 - if inst.ttl > 0: - extra_instances.append(inst) - - # Assign random color to newly-detected instances: - for inst in instances: - if inst.color is None: - inst.color = random_color(rgb=True, maximum=1) - self._old_instances = instances[:] + extra_instances - return [d.color for d in instances] diff --git a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/svm.py b/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/svm.py deleted file mode 100644 index c6d545da802009b4724318a260ef60804a9db80f..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/svm.py +++ /dev/null @@ -1,141 +0,0 @@ -#!/usr/local/bin/python3 - -# avenir-python: Machine Learning -# Author: Pranab Ghosh -# -# Licensed under the Apache License, Version 2.0 (the "License"); you -# may not use this file except in compliance with the License. You may -# obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or -# implied. See the License for the specific language governing -# permissions and limitations under the License. 
- -# Package imports -import os -import sys -import matplotlib.pyplot as plt -import numpy as np -import sklearn as sk -import sklearn.linear_model -import matplotlib -import random -import jprops -from random import randint -sys.path.append(os.path.abspath("../lib")) -from util import * -from mlutil import * -from pasearch import * -from bacl import * - -# gradient boosting classification -class SupportVectorMachine(BaseClassifier): - - def __init__(self, configFile): - defValues = {} - defValues["common.mode"] = ("train", None) - defValues["common.model.directory"] = ("model", None) - defValues["common.model.file"] = (None, None) - defValues["common.scale.file.path"] = (None, "missing scale file path") - defValues["common.preprocessing"] = (None, None) - defValues["common.verbose"] = (False, None) - defValues["train.data.file"] = (None, "missing training data file") - defValues["train.data.fields"] = (None, "missing training data field ordinals") - defValues["train.data.feature.fields"] = (None, "missing training data feature field ordinals") - defValues["train.data.class.field"] = (None, "missing class field ordinal") - defValues["train.validation"] = ("kfold", None) - defValues["train.num.folds"] = (5, None) - defValues["train.algorithm"] = ("svc", None) - defValues["train.kernel.function"] = ("rbf", None) - defValues["train.poly.degree"] = (3, None) - defValues["train.penalty"] = (1.0, None) - defValues["train.gamma"] = ("scale", None) - defValues["train.penalty.norm"] = ("l2", None) - defValues["train.loss"] = ("squared_hinge", None) - defValues["train.dual"] = (True, None) - defValues["train.shrinking"] = (True, None) - defValues["train.nu"] = (0.5, None) - defValues["train.predict.probability"] = (False, None) - defValues["train.print.sup.vectors"] = (False, None) - defValues["train.success.criterion"] = ("error", None) - defValues["train.model.save"] = (False, None) - defValues["train.score.method"] = ("accuracy", None) - defValues["train.search.param.strategy"] = (None, None) - defValues["train.search.params"] = (None, None) - defValues["predict.data.file"] = (None, None) - defValues["predict.data.fields"] = (None, "missing data field ordinals") - defValues["predict.data.feature.fields"] = (None, "missing data feature field ordinals") - defValues["predict.use.saved.model"] = (False, None) - defValues["validate.data.file"] = (None, "missing validation data file") - defValues["validate.data.fields"] = (None, "missing validation data field ordinals") - defValues["validate.data.feature.fields"] = (None, "missing validation data feature field ordinals") - defValues["validate.data.class.field"] = (None, "missing class field ordinal") - defValues["validate.use.saved.model"] = (False, None) - defValues["validate.score.method"] = ("accuracy", None) - - super(SupportVectorMachine, self).__init__(configFile, defValues, __name__) - - # builds model object - def buildModel(self): - self.logger.info("...building svm model") - algo = self.config.getStringConfig("train.algorithm")[0] - kernelFun = self.config.getStringConfig("train.kernel.function")[0] - penalty = self.config.getFloatConfig("train.penalty")[0] - polyDegree = self.config.getIntConfig("train.poly.degree")[0] - kernelCoeff = self.config.getStringConfig("train.gamma")[0] - kernelCoeff = typedValue(kernelCoeff) - penaltyNorm = self.config.getStringConfig("train.penalty.norm")[0] - trainLoss = self.config.getStringConfig("train.loss")[0] - dualOpt = self.config.getBooleanConfig("train.dual")[0] - shrinkHeuristic = 
self.config.getBooleanConfig("train.shrinking")[0] - predictProb = self.config.getBooleanConfig("train.predict.probability")[0] - supVecBound = self.config.getFloatConfig("train.nu")[0] - - if (algo == "svc"): - if kernelFun == "poly": - model = sk.svm.SVC(C=penalty,kernel=kernelFun,degree=polyDegree,gamma=kernelCoeff, shrinking=shrinkHeuristic, \ - probability=predictProb) - elif kernelFun == "rbf" or kernelFun == "sigmoid": - model = sk.svm.SVC(C=penalty,kernel=kernelFun,gamma=kernelCoeff, shrinking=shrinkHeuristic, probability=predictProb) - else: - model = sk.svm.SVC(C=penalty, kernel=kernelFun, shrinking=shrinkHeuristic, probability=predictProb) - elif (algo == "nusvc"): - if kernelFun == "poly": - model = sk.svm.NuSVC(nu=supVecBound, kernel=kernelFun,degree=polyDegree,gamma=kernelCoeff, shrinking=shrinkHeuristic, \ - probability=predictProb) - elif kernelFun == "rbf" or kernelFun == "sigmoid": - model = sk.svm.NuSVC(nu=supVecBound, kernel=kernelFun,gamma=kernelCoeff, shrinking=shrinkHeuristic, probability=predictProb) - else: - model = sk.svm.NuSVC(nu=supVecBound, kernel=kernelFun, shrinking=shrinkHeuristic, probability=predictProb) - elif (algo == "linearsvc"): - model = sk.svm.LinearSVC(penalty=penaltyNorm, loss=trainLoss, dual=dualOpt) - else: - self.logger.info("invalid svm algorithm") - sys.exit() - self.classifier = model - return self.classifier - - #predict probability with in memory data - def predictProb(self, recs): - # create model - self.prepModel() - - #input record - if type(recs) is str: - featData = self.prepStringPredictData(recs) - else: - featData = recs - if (featData.ndim == 1): - featData = featData.reshape(1, -1) - - #predict - self.logger.info("...predicting class probability") - clsData = self.classifier.predict_proba(featData) - return clsData - - - diff --git a/spaces/Toaster496/openaccess-ai-collective-manticore-13b/app.py b/spaces/Toaster496/openaccess-ai-collective-manticore-13b/app.py deleted file mode 100644 index 861ab19eb403993f8c6e517b658e2c6d96958344..0000000000000000000000000000000000000000 --- a/spaces/Toaster496/openaccess-ai-collective-manticore-13b/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/openaccess-ai-collective/manticore-13b").launch() \ No newline at end of file diff --git a/spaces/TungB/mini-photoshop/u2net.py b/spaces/TungB/mini-photoshop/u2net.py deleted file mode 100644 index 22de8e7001686982930671896efe6cae88998c41..0000000000000000000000000000000000000000 --- a/spaces/TungB/mini-photoshop/u2net.py +++ /dev/null @@ -1,307 +0,0 @@ -import math -import os -from collections import namedtuple -from os.path import abspath, dirname - -import cv2 -import gdown -import numpy as np -import torch -import torch.nn as nn - -from utils import norm_img, resize_max_size - -ModelConfig = namedtuple("ModelConfig", ["name", "url"]) - - -u2net_lite = ModelConfig( - name="u2net_lite", - url="https://drive.google.com/uc?id=1iACXN1N2AqvbdojnUwsQ9sz6BZj2zEXn", -) -u2net = ModelConfig( - name="u2net", - url="https://drive.google.com/uc?id=1st1TfjUn7wN0seWhWidmS80AYtNsCsyc", -) -u2net_human_seg = ModelConfig( - name="u2net_human_seg", - url="https://drive.google.com/uc?id=1fx5J0wV3CK5eNNzbU30P4fUXvKYbTrZH", -) - -u2net_list = [u2net_lite, u2net, u2net_human_seg] - - -def _upsample_like(x, size): - return nn.Upsample(size=size, mode="bilinear", align_corners=False)(x) - - -def _size_map(x, height): - # {height: size} for Upsample - size = list(x.shape[-2:]) - sizes = {} - for h in range(1, height): - sizes[h] = 
size - size = [math.ceil(w / 2) for w in size] - return sizes - - -class REBNCONV(nn.Module): - def __init__(self, in_ch=3, out_ch=3, dilate=1): - super(REBNCONV, self).__init__() - - self.conv_s1 = nn.Conv2d( - in_ch, out_ch, 3, padding=1 * dilate, dilation=1 * dilate - ) - self.bn_s1 = nn.BatchNorm2d(out_ch) - self.relu_s1 = nn.ReLU(inplace=True) - - def forward(self, x): - return self.relu_s1(self.bn_s1(self.conv_s1(x))) - - -class RSU(nn.Module): - def __init__(self, name, height, in_ch, mid_ch, out_ch, dilated=False): - super(RSU, self).__init__() - self.name = name - self.height = height - self.dilated = dilated - self._make_layers(height, in_ch, mid_ch, out_ch, dilated) - - def forward(self, x): - sizes = _size_map(x, self.height) - x = self.rebnconvin(x) - - # U-Net like symmetric encoder-decoder structure - def unet(x, height=1): - if height < self.height: - x1 = getattr(self, f"rebnconv{height}")(x) - if not self.dilated and height < self.height - 1: - x2 = unet(getattr(self, "downsample")(x1), height + 1) - else: - x2 = unet(x1, height + 1) - - x = getattr(self, f"rebnconv{height}d")(torch.cat((x2, x1), 1)) - return ( - _upsample_like(x, sizes[height - 1]) - if not self.dilated and height > 1 - else x - ) - else: - return getattr(self, f"rebnconv{height}")(x) - - return x + unet(x) - - def _make_layers(self, height, in_ch, mid_ch, out_ch, dilated=False): - self.add_module("rebnconvin", REBNCONV(in_ch, out_ch)) - self.add_module( - "downsample", nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True) - ) - - self.add_module("rebnconv1", REBNCONV(out_ch, mid_ch)) - self.add_module("rebnconv1d", REBNCONV(mid_ch * 2, out_ch)) - - for i in range(2, height): - dilate = 1 if not dilated else 2 ** (i - 1) - self.add_module( - f"rebnconv{i}", REBNCONV(mid_ch, mid_ch, dilate=dilate) - ) - self.add_module( - f"rebnconv{i}d", REBNCONV(mid_ch * 2, mid_ch, dilate=dilate) - ) - - dilate = 2 if not dilated else 2 ** (height - 1) - self.add_module( - f"rebnconv{height}", REBNCONV(mid_ch, mid_ch, dilate=dilate) - ) - - -class U2NET(nn.Module): - def __init__(self, cfgs, out_ch): - super(U2NET, self).__init__() - self.out_ch = out_ch - self._make_layers(cfgs) - - def forward(self, x): - sizes = _size_map(x, self.height) - maps = [] # storage for maps - - # side saliency map - def unet(x, height=1): - if height < 6: - x1 = getattr(self, f"stage{height}")(x) - x2 = unet(getattr(self, "downsample")(x1), height + 1) - x = getattr(self, f"stage{height}d")(torch.cat((x2, x1), 1)) - side(x, height) - return ( - _upsample_like(x, sizes[height - 1]) if height > 1 else x - ) - else: - x = getattr(self, f"stage{height}")(x) - side(x, height) - return _upsample_like(x, sizes[height - 1]) - - def side(x, h): - # side output saliency map (before sigmoid) - x = getattr(self, f"side{h}")(x) - x = _upsample_like(x, sizes[1]) - maps.append(x) - - def fuse(): - # fuse saliency probability maps - maps.reverse() - x = torch.cat(maps, 1) - x = getattr(self, "outconv")(x) - maps.insert(0, x) - return [torch.sigmoid(x) for x in maps] - - unet(x) - maps = fuse() - return maps - - def _make_layers(self, cfgs): - self.height = int((len(cfgs) + 1) / 2) - self.add_module( - "downsample", nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True) - ) - for k, v in cfgs.items(): - # build rsu block - self.add_module(k, RSU(v[0], *v[1])) - if v[2] > 0: - # build side layer - self.add_module( - f"side{v[0][-1]}", - nn.Conv2d(v[2], self.out_ch, 3, padding=1), - ) - # build fuse layer - self.add_module( - "outconv", - 
nn.Conv2d(int(self.height * self.out_ch), self.out_ch, 1), - ) - - -class U2Net: - def __init__(self, model_name, device): - """Init class - - Args: - model_name (str): Model name - device (str): device - """ - self.model_name = model_name - self.device = device - self.init_model() - - def init_model(self): - """Init model""" - model_cfg = None - for u2net_cfg in u2net_list: - if u2net_cfg.name == self.model_name: - model_cfg = u2net_cfg - break - - if not model_cfg: - model_cfg = u2net_lite - - parent_dir = dirname(abspath(__file__)) - model_path = f"{parent_dir}/{model_cfg.name}.pth" - - if not os.path.exists(model_path): - os.makedirs(dirname(model_path), exist_ok=True) - with open(model_path, "wb") as model_file: - gdown.download(url=model_cfg.url, output=model_file) - - if "_lite" in model_cfg.name: - layer_cfgs = { - # cfgs for building RSUs and sides - # {stage : [name, (height(L), in_ch, mid_ch, out_ch, dilated), side]} - "stage1": ["En_1", (7, 3, 16, 64), -1], - "stage2": ["En_2", (6, 64, 16, 64), -1], - "stage3": ["En_3", (5, 64, 16, 64), -1], - "stage4": ["En_4", (4, 64, 16, 64), -1], - "stage5": ["En_5", (4, 64, 16, 64, True), -1], - "stage6": ["En_6", (4, 64, 16, 64, True), 64], - "stage5d": ["De_5", (4, 128, 16, 64, True), 64], - "stage4d": ["De_4", (4, 128, 16, 64), 64], - "stage3d": ["De_3", (5, 128, 16, 64), 64], - "stage2d": ["De_2", (6, 128, 16, 64), 64], - "stage1d": ["De_1", (7, 128, 16, 64), 64], - } - else: - layer_cfgs = { - # cfgs for building RSUs and sides - # {stage : [name, (height(L), in_ch, mid_ch, out_ch, dilated), side]} - "stage1": ["En_1", (7, 3, 32, 64), -1], - "stage2": ["En_2", (6, 64, 32, 128), -1], - "stage3": ["En_3", (5, 128, 64, 256), -1], - "stage4": ["En_4", (4, 256, 128, 512), -1], - "stage5": ["En_5", (4, 512, 256, 512, True), -1], - "stage6": ["En_6", (4, 512, 256, 512, True), 512], - "stage5d": ["De_5", (4, 1024, 256, 512, True), 512], - "stage4d": ["De_4", (4, 1024, 128, 256), 256], - "stage3d": ["De_3", (5, 512, 64, 128), 128], - "stage2d": ["De_2", (6, 256, 32, 64), 64], - "stage1d": ["De_1", (7, 128, 16, 64), 64], - } - - model = U2NET(cfgs=layer_cfgs, out_ch=1) - model.load_state_dict( - state_dict=torch.load(model_path, map_location="cpu") - ) - model = model.to(self.device) - model.eval() - self.model = model - - def norm_pred(self, d): - """Normalize prediction - - Args: - d (torch.Tensor): Decoder Output (prediction) - - Returns: - torch.Tensor: Normalized prediction - """ - max = torch.max(d) - min = torch.min(d) - dn = (d - min) / (max - min) - return dn - - def forward(self, image): - """Forward image to model - - Args: - image (np.ndarray): Image - - Returns: - np.ndarray: Mask - """ - image = norm_img(image) - image = image = torch.from_numpy(image).unsqueeze(0).to(self.device) - - d1, d2, d3, d4, d5, d6, d7 = self.model(image) - mask = self.norm_pred(d1[:, 0, :, :]) - - mask = mask.squeeze().cpu().data.numpy() - return mask - - @torch.no_grad() - def __call__(self, image, resize_limit=512): - """ - Args: - image (np.ndarray): Image - resize_limit (int, optional): max size. Defaults to 512. 
- - Returns: - np.ndarray: Mask - """ - if resize_limit and max(image.shape) > resize_limit: - origin_size = image.shape[:2] - downsize_image = resize_max_size(image, size_limit=resize_limit) - mask = self.forward(downsize_image) - mask = cv2.resize( - mask, - dsize=(origin_size[1], origin_size[0]), - interpolation=cv2.INTER_CUBIC, - ) - else: - mask = self.forward(image) - mask = np.rint(mask).astype(np.uint8) - return mask diff --git a/spaces/XzJosh/Bekki-Bert-VITS2/text/english_bert_mock.py b/spaces/XzJosh/Bekki-Bert-VITS2/text/english_bert_mock.py deleted file mode 100644 index 3b894ced5b6d619a18d6bdd7d7606ba9e6532050..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Bekki-Bert-VITS2/text/english_bert_mock.py +++ /dev/null @@ -1,5 +0,0 @@ -import torch - - -def get_bert_feature(norm_text, word2ph): - return torch.zeros(1024, sum(word2ph)) diff --git a/spaces/XzJosh/maimai-Bert-VITS2/text/english_bert_mock.py b/spaces/XzJosh/maimai-Bert-VITS2/text/english_bert_mock.py deleted file mode 100644 index 3b894ced5b6d619a18d6bdd7d7606ba9e6532050..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/maimai-Bert-VITS2/text/english_bert_mock.py +++ /dev/null @@ -1,5 +0,0 @@ -import torch - - -def get_bert_feature(norm_text, word2ph): - return torch.zeros(1024, sum(word2ph)) diff --git a/spaces/YotamNitzan/domain-expansion/torch_utils/ops/__init__.py b/spaces/YotamNitzan/domain-expansion/torch_utils/ops/__init__.py deleted file mode 100644 index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000 --- a/spaces/YotamNitzan/domain-expansion/torch_utils/ops/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2/README.md b/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2/README.md deleted file mode 100644 index 02f8a07dbd3d5dc8cd2827ffd118d0e2c60a578a..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Gradio_YOLOv5_Det_v0.2 -emoji: 🚀 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference - -🚀 项目主页:https://gitee.com/CV_Lab/gradio_yolov5_det diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/evaluation/eval_hooks.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/evaluation/eval_hooks.py deleted file mode 100644 index 6fb932eae1ccb23a2b687a05a6cb9525200de718..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/evaluation/eval_hooks.py +++ /dev/null @@ -1,303 +0,0 @@ -import os.path as osp -import warnings -from math import inf - -import mmcv -import torch.distributed as dist -from mmcv.runner import Hook -from torch.nn.modules.batchnorm import _BatchNorm -from torch.utils.data import DataLoader - -from mmdet.utils import get_root_logger - - -class EvalHook(Hook): - """Evaluation hook. 
- - Notes: - If new arguments are added for EvalHook, tools/test.py, - tools/analysis_tools/eval_metric.py may be effected. - - Attributes: - dataloader (DataLoader): A PyTorch dataloader. - start (int, optional): Evaluation starting epoch. It enables evaluation - before the training starts if ``start`` <= the resuming epoch. - If None, whether to evaluate is merely decided by ``interval``. - Default: None. - interval (int): Evaluation interval (by epochs). Default: 1. - save_best (str, optional): If a metric is specified, it would measure - the best checkpoint during evaluation. The information about best - checkpoint would be save in best.json. - Options are the evaluation metrics to the test dataset. e.g., - ``bbox_mAP``, ``segm_mAP`` for bbox detection and instance - segmentation. ``AR@100`` for proposal recall. If ``save_best`` is - ``auto``, the first key will be used. The interval of - ``CheckpointHook`` should device EvalHook. Default: None. - rule (str, optional): Comparison rule for best score. If set to None, - it will infer a reasonable rule. Keys such as 'mAP' or 'AR' will - be inferred by 'greater' rule. Keys contain 'loss' will be inferred - by 'less' rule. Options are 'greater', 'less'. Default: None. - **eval_kwargs: Evaluation arguments fed into the evaluate function of - the dataset. - """ - - rule_map = {'greater': lambda x, y: x > y, 'less': lambda x, y: x < y} - init_value_map = {'greater': -inf, 'less': inf} - greater_keys = ['mAP', 'AR'] - less_keys = ['loss'] - - def __init__(self, - dataloader, - start=None, - interval=1, - by_epoch=True, - save_best=None, - rule=None, - **eval_kwargs): - if not isinstance(dataloader, DataLoader): - raise TypeError('dataloader must be a pytorch DataLoader, but got' - f' {type(dataloader)}') - if not interval > 0: - raise ValueError(f'interval must be positive, but got {interval}') - if start is not None and start < 0: - warnings.warn( - f'The evaluation start epoch {start} is smaller than 0, ' - f'use 0 instead', UserWarning) - start = 0 - self.dataloader = dataloader - self.interval = interval - self.by_epoch = by_epoch - self.start = start - assert isinstance(save_best, str) or save_best is None - self.save_best = save_best - self.eval_kwargs = eval_kwargs - self.initial_epoch_flag = True - - self.logger = get_root_logger() - - if self.save_best is not None: - self._init_rule(rule, self.save_best) - - def _init_rule(self, rule, key_indicator): - """Initialize rule, key_indicator, comparison_func, and best score. - - Args: - rule (str | None): Comparison rule for best score. - key_indicator (str | None): Key indicator to determine the - comparison rule. - """ - if rule not in self.rule_map and rule is not None: - raise KeyError(f'rule must be greater, less or None, ' - f'but got {rule}.') - - if rule is None: - if key_indicator != 'auto': - if any(key in key_indicator for key in self.greater_keys): - rule = 'greater' - elif any(key in key_indicator for key in self.less_keys): - rule = 'less' - else: - raise ValueError(f'Cannot infer the rule for key ' - f'{key_indicator}, thus a specific rule ' - f'must be specified.') - self.rule = rule - self.key_indicator = key_indicator - if self.rule is not None: - self.compare_func = self.rule_map[self.rule] - - def before_run(self, runner): - if self.save_best is not None: - if runner.meta is None: - warnings.warn('runner.meta is None. 
Creating a empty one.') - runner.meta = dict() - runner.meta.setdefault('hook_msgs', dict()) - - def before_train_epoch(self, runner): - """Evaluate the model only at the start of training.""" - if not self.initial_epoch_flag: - return - if self.start is not None and runner.epoch >= self.start: - self.after_train_epoch(runner) - self.initial_epoch_flag = False - - def evaluation_flag(self, runner): - """Judge whether to perform_evaluation after this epoch. - - Returns: - bool: The flag indicating whether to perform evaluation. - """ - if self.start is None: - if not self.every_n_epochs(runner, self.interval): - # No evaluation during the interval epochs. - return False - elif (runner.epoch + 1) < self.start: - # No evaluation if start is larger than the current epoch. - return False - else: - # Evaluation only at epochs 3, 5, 7... if start==3 and interval==2 - if (runner.epoch + 1 - self.start) % self.interval: - return False - return True - - def after_train_epoch(self, runner): - if not self.by_epoch or not self.evaluation_flag(runner): - return - from mmdet.apis import single_gpu_test - results = single_gpu_test(runner.model, self.dataloader, show=False) - key_score = self.evaluate(runner, results) - if self.save_best: - self.save_best_checkpoint(runner, key_score) - - def after_train_iter(self, runner): - if self.by_epoch or not self.every_n_iters(runner, self.interval): - return - from mmdet.apis import single_gpu_test - results = single_gpu_test(runner.model, self.dataloader, show=False) - key_score = self.evaluate(runner, results) - if self.save_best: - self.save_best_checkpoint(runner, key_score) - - def save_best_checkpoint(self, runner, key_score): - best_score = runner.meta['hook_msgs'].get( - 'best_score', self.init_value_map[self.rule]) - if self.compare_func(key_score, best_score): - best_score = key_score - runner.meta['hook_msgs']['best_score'] = best_score - last_ckpt = runner.meta['hook_msgs']['last_ckpt'] - runner.meta['hook_msgs']['best_ckpt'] = last_ckpt - mmcv.symlink( - last_ckpt, - osp.join(runner.work_dir, f'best_{self.key_indicator}.pth')) - time_stamp = runner.epoch + 1 if self.by_epoch else runner.iter + 1 - self.logger.info(f'Now best checkpoint is epoch_{time_stamp}.pth.' - f'Best {self.key_indicator} is {best_score:0.4f}') - - def evaluate(self, runner, results): - eval_res = self.dataloader.dataset.evaluate( - results, logger=runner.logger, **self.eval_kwargs) - for name, val in eval_res.items(): - runner.log_buffer.output[name] = val - runner.log_buffer.ready = True - if self.save_best is not None: - if self.key_indicator == 'auto': - # infer from eval_results - self._init_rule(self.rule, list(eval_res.keys())[0]) - return eval_res[self.key_indicator] - else: - return None - - -class DistEvalHook(EvalHook): - """Distributed evaluation hook. - - Notes: - If new arguments are added, tools/test.py may be effected. - - Attributes: - dataloader (DataLoader): A PyTorch dataloader. - start (int, optional): Evaluation starting epoch. It enables evaluation - before the training starts if ``start`` <= the resuming epoch. - If None, whether to evaluate is merely decided by ``interval``. - Default: None. - interval (int): Evaluation interval (by epochs). Default: 1. - tmpdir (str | None): Temporary directory to save the results of all - processes. Default: None. - gpu_collect (bool): Whether to use gpu or cpu to collect results. - Default: False. - save_best (str, optional): If a metric is specified, it would measure - the best checkpoint during evaluation. 
The information about best - checkpoint would be save in best.json. - Options are the evaluation metrics to the test dataset. e.g., - ``bbox_mAP``, ``segm_mAP`` for bbox detection and instance - segmentation. ``AR@100`` for proposal recall. If ``save_best`` is - ``auto``, the first key will be used. The interval of - ``CheckpointHook`` should device EvalHook. Default: None. - rule (str | None): Comparison rule for best score. If set to None, - it will infer a reasonable rule. Default: 'None'. - broadcast_bn_buffer (bool): Whether to broadcast the - buffer(running_mean and running_var) of rank 0 to other rank - before evaluation. Default: True. - **eval_kwargs: Evaluation arguments fed into the evaluate function of - the dataset. - """ - - def __init__(self, - dataloader, - start=None, - interval=1, - by_epoch=True, - tmpdir=None, - gpu_collect=False, - save_best=None, - rule=None, - broadcast_bn_buffer=True, - **eval_kwargs): - super().__init__( - dataloader, - start=start, - interval=interval, - by_epoch=by_epoch, - save_best=save_best, - rule=rule, - **eval_kwargs) - self.broadcast_bn_buffer = broadcast_bn_buffer - self.tmpdir = tmpdir - self.gpu_collect = gpu_collect - - def _broadcast_bn_buffer(self, runner): - # Synchronization of BatchNorm's buffer (running_mean - # and running_var) is not supported in the DDP of pytorch, - # which may cause the inconsistent performance of models in - # different ranks, so we broadcast BatchNorm's buffers - # of rank 0 to other ranks to avoid this. - if self.broadcast_bn_buffer: - model = runner.model - for name, module in model.named_modules(): - if isinstance(module, - _BatchNorm) and module.track_running_stats: - dist.broadcast(module.running_var, 0) - dist.broadcast(module.running_mean, 0) - - def after_train_epoch(self, runner): - if not self.by_epoch or not self.evaluation_flag(runner): - return - - if self.broadcast_bn_buffer: - self._broadcast_bn_buffer(runner) - - from mmdet.apis import multi_gpu_test - tmpdir = self.tmpdir - if tmpdir is None: - tmpdir = osp.join(runner.work_dir, '.eval_hook') - results = multi_gpu_test( - runner.model, - self.dataloader, - tmpdir=tmpdir, - gpu_collect=self.gpu_collect) - if runner.rank == 0: - print('\n') - key_score = self.evaluate(runner, results) - if self.save_best: - self.save_best_checkpoint(runner, key_score) - - def after_train_iter(self, runner): - if self.by_epoch or not self.every_n_iters(runner, self.interval): - return - - if self.broadcast_bn_buffer: - self._broadcast_bn_buffer(runner) - - from mmdet.apis import multi_gpu_test - tmpdir = self.tmpdir - if tmpdir is None: - tmpdir = osp.join(runner.work_dir, '.eval_hook') - results = multi_gpu_test( - runner.model, - self.dataloader, - tmpdir=tmpdir, - gpu_collect=self.gpu_collect) - if runner.rank == 0: - print('\n') - key_score = self.evaluate(runner, results) - if self.save_best: - self.save_best_checkpoint(runner, key_score) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/dmnet_r50-d8.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/dmnet_r50-d8.py deleted file mode 100644 index d22ba52640bebd805b3b8d07025e276dfb023759..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/dmnet_r50-d8.py +++ /dev/null @@ -1,44 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - 
type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='DMHead', - in_channels=2048, - in_index=3, - channels=512, - filter_sizes=(1, 3, 5, 7), - dropout_ratio=0.1, - num_classes=19, - norm_cfg=dict(type='SyncBN', requires_grad=True), - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/adpro/dpt-depth04/README.md b/spaces/adpro/dpt-depth04/README.md deleted file mode 100644 index a2df32f52be298450622acdf691911580499139c..0000000000000000000000000000000000000000 --- a/spaces/adpro/dpt-depth04/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Dpt Depth Estimation -emoji: ⚡ -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 2.8.13 -app_file: app.py -pinned: false -duplicated_from: adpro/dpt-depth01 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/aijack/jojo/e4e/training/ranger.py b/spaces/aijack/jojo/e4e/training/ranger.py deleted file mode 100644 index 3d63264dda6df0ee40cac143440f0b5f8977a9ad..0000000000000000000000000000000000000000 --- a/spaces/aijack/jojo/e4e/training/ranger.py +++ /dev/null @@ -1,164 +0,0 @@ -# Ranger deep learning optimizer - RAdam + Lookahead + Gradient Centralization, combined into one optimizer. - -# https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer -# and/or -# https://github.com/lessw2020/Best-Deep-Learning-Optimizers - -# Ranger has now been used to capture 12 records on the FastAI leaderboard. - -# This version = 20.4.11 - -# Credits: -# Gradient Centralization --> https://arxiv.org/abs/2004.01461v2 (a new optimization technique for DNNs), github: https://github.com/Yonghongwei/Gradient-Centralization -# RAdam --> https://github.com/LiyuanLucasLiu/RAdam -# Lookahead --> rewritten by lessw2020, but big thanks to Github @LonePatient and @RWightman for ideas from their code. -# Lookahead paper --> MZhang,G Hinton https://arxiv.org/abs/1907.08610 - -# summary of changes: -# 4/11/20 - add gradient centralization option. Set new testing benchmark for accuracy with it, toggle with use_gc flag at init. -# full code integration with all updates at param level instead of group, moves slow weights into state dict (from generic weights), -# supports group learning rates (thanks @SHolderbach), fixes sporadic load from saved model issues. -# changes 8/31/19 - fix references to *self*.N_sma_threshold; -# changed eps to 1e-5 as better default than 1e-8. 
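The header above describes Ranger as a drop-in torch.optim-style optimizer (RAdam update plus a lookahead step every k iterations, with optional gradient centralization). A minimal usage sketch under that assumption; the model and data here are hypothetical placeholders:

import torch
import torch.nn as nn

# Hypothetical two-layer model and random batch, for illustration only.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = Ranger(model.parameters(), lr=1e-3, alpha=0.5, k=6, use_gc=True)

x, y = torch.randn(8, 16), torch.randn(8, 1)
for _ in range(5):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()  # RAdam step, slow-weight interpolation every k steps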
- -import math -import torch -from torch.optim.optimizer import Optimizer - - -class Ranger(Optimizer): - - def __init__(self, params, lr=1e-3, # lr - alpha=0.5, k=6, N_sma_threshhold=5, # Ranger options - betas=(.95, 0.999), eps=1e-5, weight_decay=0, # Adam options - use_gc=True, gc_conv_only=False - # Gradient centralization on or off, applied to conv layers only or conv + fc layers - ): - - # parameter checks - if not 0.0 <= alpha <= 1.0: - raise ValueError(f'Invalid slow update rate: {alpha}') - if not 1 <= k: - raise ValueError(f'Invalid lookahead steps: {k}') - if not lr > 0: - raise ValueError(f'Invalid Learning Rate: {lr}') - if not eps > 0: - raise ValueError(f'Invalid eps: {eps}') - - # parameter comments: - # beta1 (momentum) of .95 seems to work better than .90... - # N_sma_threshold of 5 seems better in testing than 4. - # In both cases, worth testing on your dataset (.90 vs .95, 4 vs 5) to make sure which works best for you. - - # prep defaults and init torch.optim base - defaults = dict(lr=lr, alpha=alpha, k=k, step_counter=0, betas=betas, N_sma_threshhold=N_sma_threshhold, - eps=eps, weight_decay=weight_decay) - super().__init__(params, defaults) - - # adjustable threshold - self.N_sma_threshhold = N_sma_threshhold - - # look ahead params - - self.alpha = alpha - self.k = k - - # radam buffer for state - self.radam_buffer = [[None, None, None] for ind in range(10)] - - # gc on or off - self.use_gc = use_gc - - # level of gradient centralization - self.gc_gradient_threshold = 3 if gc_conv_only else 1 - - def __setstate__(self, state): - super(Ranger, self).__setstate__(state) - - def step(self, closure=None): - loss = None - - # Evaluate averages and grad, update param tensors - for group in self.param_groups: - - for p in group['params']: - if p.grad is None: - continue - grad = p.grad.data.float() - - if grad.is_sparse: - raise RuntimeError('Ranger optimizer does not support sparse gradients') - - p_data_fp32 = p.data.float() - - state = self.state[p] # get state dict for this param - - if len(state) == 0: # if first time to run...init dictionary with our desired entries - # if self.first_run_check==0: - # self.first_run_check=1 - # print("Initializing slow buffer...should not see this at load from saved model!") - state['step'] = 0 - state['exp_avg'] = torch.zeros_like(p_data_fp32) - state['exp_avg_sq'] = torch.zeros_like(p_data_fp32) - - # look ahead weight storage now in state dict - state['slow_buffer'] = torch.empty_like(p.data) - state['slow_buffer'].copy_(p.data) - - else: - state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32) - state['exp_avg_sq'] = state['exp_avg_sq'].type_as(p_data_fp32) - - # begin computations - exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq'] - beta1, beta2 = group['betas'] - - # GC operation for Conv layers and FC layers - if grad.dim() > self.gc_gradient_threshold: - grad.add_(-grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)) - - state['step'] += 1 - - # compute variance mov avg - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - # compute mean moving avg - exp_avg.mul_(beta1).add_(1 - beta1, grad) - - buffered = self.radam_buffer[int(state['step'] % 10)] - - if state['step'] == buffered[0]: - N_sma, step_size = buffered[1], buffered[2] - else: - buffered[0] = state['step'] - beta2_t = beta2 ** state['step'] - N_sma_max = 2 / (1 - beta2) - 1 - N_sma = N_sma_max - 2 * state['step'] * beta2_t / (1 - beta2_t) - buffered[1] = N_sma - if N_sma > self.N_sma_threshhold: - step_size = math.sqrt( - (1 - beta2_t) * 
(N_sma - 4) / (N_sma_max - 4) * (N_sma - 2) / N_sma * N_sma_max / ( - N_sma_max - 2)) / (1 - beta1 ** state['step']) - else: - step_size = 1.0 / (1 - beta1 ** state['step']) - buffered[2] = step_size - - if group['weight_decay'] != 0: - p_data_fp32.add_(-group['weight_decay'] * group['lr'], p_data_fp32) - - # apply lr - if N_sma > self.N_sma_threshhold: - denom = exp_avg_sq.sqrt().add_(group['eps']) - p_data_fp32.addcdiv_(-step_size * group['lr'], exp_avg, denom) - else: - p_data_fp32.add_(-step_size * group['lr'], exp_avg) - - p.data.copy_(p_data_fp32) - - # integrated look ahead... - # we do it at the param level instead of group level - if state['step'] % group['k'] == 0: - slow_p = state['slow_buffer'] # get access to slow param tensor - slow_p.add_(self.alpha, p.data - slow_p) # (fast weights - slow weights) * alpha - p.data.copy_(slow_p) # copy interpolated weights to RAdam param tensor - - return loss \ No newline at end of file diff --git a/spaces/akhaliq/Mask2Former/mask2former/modeling/transformer_decoder/maskformer_transformer_decoder.py b/spaces/akhaliq/Mask2Former/mask2former/modeling/transformer_decoder/maskformer_transformer_decoder.py deleted file mode 100644 index 79f09fa43f2f5a33c3422a6bb999b20763ab8b5e..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former/modeling/transformer_decoder/maskformer_transformer_decoder.py +++ /dev/null @@ -1,188 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/detr.py -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import Conv2d -from detectron2.utils.registry import Registry - -from .position_encoding import PositionEmbeddingSine -from .transformer import Transformer - - -TRANSFORMER_DECODER_REGISTRY = Registry("TRANSFORMER_MODULE") -TRANSFORMER_DECODER_REGISTRY.__doc__ = """ -Registry for transformer module in MaskFormer. -""" - - -def build_transformer_decoder(cfg, in_channels, mask_classification=True): - """ - Build a instance embedding branch from `cfg.MODEL.INS_EMBED_HEAD.NAME`. - """ - name = cfg.MODEL.MASK_FORMER.TRANSFORMER_DECODER_NAME - return TRANSFORMER_DECODER_REGISTRY.get(name)(cfg, in_channels, mask_classification) - - -@TRANSFORMER_DECODER_REGISTRY.register() -class StandardTransformerDecoder(nn.Module): - @configurable - def __init__( - self, - in_channels, - mask_classification=True, - *, - num_classes: int, - hidden_dim: int, - num_queries: int, - nheads: int, - dropout: float, - dim_feedforward: int, - enc_layers: int, - dec_layers: int, - pre_norm: bool, - deep_supervision: bool, - mask_dim: int, - enforce_input_project: bool, - ): - """ - NOTE: this interface is experimental. 
- Args: - in_channels: channels of the input features - mask_classification: whether to add mask classifier or not - num_classes: number of classes - hidden_dim: Transformer feature dimension - num_queries: number of queries - nheads: number of heads - dropout: dropout in Transformer - dim_feedforward: feature dimension in feedforward network - enc_layers: number of Transformer encoder layers - dec_layers: number of Transformer decoder layers - pre_norm: whether to use pre-LayerNorm or not - deep_supervision: whether to add supervision to every decoder layers - mask_dim: mask feature dimension - enforce_input_project: add input project 1x1 conv even if input - channels and hidden dim is identical - """ - super().__init__() - - self.mask_classification = mask_classification - - # positional encoding - N_steps = hidden_dim // 2 - self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True) - - transformer = Transformer( - d_model=hidden_dim, - dropout=dropout, - nhead=nheads, - dim_feedforward=dim_feedforward, - num_encoder_layers=enc_layers, - num_decoder_layers=dec_layers, - normalize_before=pre_norm, - return_intermediate_dec=deep_supervision, - ) - - self.num_queries = num_queries - self.transformer = transformer - hidden_dim = transformer.d_model - - self.query_embed = nn.Embedding(num_queries, hidden_dim) - - if in_channels != hidden_dim or enforce_input_project: - self.input_proj = Conv2d(in_channels, hidden_dim, kernel_size=1) - weight_init.c2_xavier_fill(self.input_proj) - else: - self.input_proj = nn.Sequential() - self.aux_loss = deep_supervision - - # output FFNs - if self.mask_classification: - self.class_embed = nn.Linear(hidden_dim, num_classes + 1) - self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3) - - @classmethod - def from_config(cls, cfg, in_channels, mask_classification): - ret = {} - ret["in_channels"] = in_channels - ret["mask_classification"] = mask_classification - - ret["num_classes"] = cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES - ret["hidden_dim"] = cfg.MODEL.MASK_FORMER.HIDDEN_DIM - ret["num_queries"] = cfg.MODEL.MASK_FORMER.NUM_OBJECT_QUERIES - # Transformer parameters: - ret["nheads"] = cfg.MODEL.MASK_FORMER.NHEADS - ret["dropout"] = cfg.MODEL.MASK_FORMER.DROPOUT - ret["dim_feedforward"] = cfg.MODEL.MASK_FORMER.DIM_FEEDFORWARD - ret["enc_layers"] = cfg.MODEL.MASK_FORMER.ENC_LAYERS - ret["dec_layers"] = cfg.MODEL.MASK_FORMER.DEC_LAYERS - ret["pre_norm"] = cfg.MODEL.MASK_FORMER.PRE_NORM - ret["deep_supervision"] = cfg.MODEL.MASK_FORMER.DEEP_SUPERVISION - ret["enforce_input_project"] = cfg.MODEL.MASK_FORMER.ENFORCE_INPUT_PROJ - - ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM - - return ret - - def forward(self, x, mask_features, mask=None): - if mask is not None: - mask = F.interpolate(mask[None].float(), size=x.shape[-2:]).to(torch.bool)[0] - pos = self.pe_layer(x, mask) - - src = x - hs, memory = self.transformer(self.input_proj(src), mask, self.query_embed.weight, pos) - - if self.mask_classification: - outputs_class = self.class_embed(hs) - out = {"pred_logits": outputs_class[-1]} - else: - out = {} - - if self.aux_loss: - # [l, bs, queries, embed] - mask_embed = self.mask_embed(hs) - outputs_seg_masks = torch.einsum("lbqc,bchw->lbqhw", mask_embed, mask_features) - out["pred_masks"] = outputs_seg_masks[-1] - out["aux_outputs"] = self._set_aux_loss( - outputs_class if self.mask_classification else None, outputs_seg_masks - ) - else: - # FIXME h_boxes takes the last one computed, keep this in mind - # [bs, queries, embed] - mask_embed = 
self.mask_embed(hs[-1]) - outputs_seg_masks = torch.einsum("bqc,bchw->bqhw", mask_embed, mask_features) - out["pred_masks"] = outputs_seg_masks - return out - - @torch.jit.unused - def _set_aux_loss(self, outputs_class, outputs_seg_masks): - # this is a workaround to make torchscript happy, as torchscript - # doesn't support dictionary with non-homogeneous values, such - # as a dict having both a Tensor and a list. - if self.mask_classification: - return [ - {"pred_logits": a, "pred_masks": b} - for a, b in zip(outputs_class[:-1], outputs_seg_masks[:-1]) - ] - else: - return [{"pred_masks": b} for b in outputs_seg_masks[:-1]] - - -class MLP(nn.Module): - """Very simple multi-layer perceptron (also called FFN)""" - - def __init__(self, input_dim, hidden_dim, output_dim, num_layers): - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList( - nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]) - ) - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - return x diff --git a/spaces/akhaliq/lama/saicinpainting/evaluation/losses/lpips.py b/spaces/akhaliq/lama/saicinpainting/evaluation/losses/lpips.py deleted file mode 100644 index b5f19b747f2457902695213f7efcde4fdc306c1f..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/saicinpainting/evaluation/losses/lpips.py +++ /dev/null @@ -1,891 +0,0 @@ -############################################################ -# The contents below have been combined using files in the # -# following repository: # -# https://github.com/richzhang/PerceptualSimilarity # -############################################################ - -############################################################ -# __init__.py # -############################################################ - -import numpy as np -from skimage.metrics import structural_similarity -import torch - -from saicinpainting.utils import get_shape - - -class PerceptualLoss(torch.nn.Module): - def __init__(self, model='net-lin', net='alex', colorspace='rgb', model_path=None, spatial=False, use_gpu=True): - # VGG using our perceptually-learned weights (LPIPS metric) - # def __init__(self, model='net', net='vgg', use_gpu=True): # "default" way of using VGG as a perceptual loss - super(PerceptualLoss, self).__init__() - self.use_gpu = use_gpu - self.spatial = spatial - self.model = DistModel() - self.model.initialize(model=model, net=net, use_gpu=use_gpu, colorspace=colorspace, - model_path=model_path, spatial=self.spatial) - - def forward(self, pred, target, normalize=True): - """ - Pred and target are Variables. - If normalize is True, assumes the images are between [0,1] and then scales them between [-1,+1] - If normalize is False, assumes the images are already between [-1,+1] - Inputs pred and target are Nx3xHxW - Output pytorch Variable N long - """ - - if normalize: - target = 2 * target - 1 - pred = 2 * pred - 1 - - return self.model(target, pred) - - -def normalize_tensor(in_feat, eps=1e-10): - norm_factor = torch.sqrt(torch.sum(in_feat ** 2, dim=1, keepdim=True)) - return in_feat / (norm_factor + eps) - - -def l2(p0, p1, range=255.): - return .5 * np.mean((p0 / range - p1 / range) ** 2) - - -def psnr(p0, p1, peak=255.): - return 10 * np.log10(peak ** 2 / np.mean((1. * p0 - 1. * p1) ** 2)) - - -def dssim(p0, p1, range=255.): - return (1 - compare_ssim(p0, p1, data_range=range, multichannel=True)) / 2. 
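Given the PerceptualLoss docstring above (inputs are Nx3xHxW; with normalize=True the images are expected in [0, 1] and rescaled to [-1, 1]), a minimal usage sketch, assuming the pretrained LPIPS weights referenced by DistModel are available at the expected path:

import torch

# Hypothetical image batches in [0, 1], for illustration only.
pred = torch.rand(1, 3, 64, 64)
target = torch.rand(1, 3, 64, 64)

loss_fn = PerceptualLoss(model='net-lin', net='alex', use_gpu=False)
dist = loss_fn(pred, target, normalize=True)  # per-sample LPIPS distances
print(dist.shape, float(dist.mean()))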
- - -def rgb2lab(in_img, mean_cent=False): - from skimage import color - img_lab = color.rgb2lab(in_img) - if (mean_cent): - img_lab[:, :, 0] = img_lab[:, :, 0] - 50 - return img_lab - - -def tensor2np(tensor_obj): - # change dimension of a tensor object into a numpy array - return tensor_obj[0].cpu().float().numpy().transpose((1, 2, 0)) - - -def np2tensor(np_obj): - # change dimenion of np array into tensor array - return torch.Tensor(np_obj[:, :, :, np.newaxis].transpose((3, 2, 0, 1))) - - -def tensor2tensorlab(image_tensor, to_norm=True, mc_only=False): - # image tensor to lab tensor - from skimage import color - - img = tensor2im(image_tensor) - img_lab = color.rgb2lab(img) - if (mc_only): - img_lab[:, :, 0] = img_lab[:, :, 0] - 50 - if (to_norm and not mc_only): - img_lab[:, :, 0] = img_lab[:, :, 0] - 50 - img_lab = img_lab / 100. - - return np2tensor(img_lab) - - -def tensorlab2tensor(lab_tensor, return_inbnd=False): - from skimage import color - import warnings - warnings.filterwarnings("ignore") - - lab = tensor2np(lab_tensor) * 100. - lab[:, :, 0] = lab[:, :, 0] + 50 - - rgb_back = 255. * np.clip(color.lab2rgb(lab.astype('float')), 0, 1) - if (return_inbnd): - # convert back to lab, see if we match - lab_back = color.rgb2lab(rgb_back.astype('uint8')) - mask = 1. * np.isclose(lab_back, lab, atol=2.) - mask = np2tensor(np.prod(mask, axis=2)[:, :, np.newaxis]) - return (im2tensor(rgb_back), mask) - else: - return im2tensor(rgb_back) - - -def rgb2lab(input): - from skimage import color - return color.rgb2lab(input / 255.) - - -def tensor2im(image_tensor, imtype=np.uint8, cent=1., factor=255. / 2.): - image_numpy = image_tensor[0].cpu().float().numpy() - image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + cent) * factor - return image_numpy.astype(imtype) - - -def im2tensor(image, imtype=np.uint8, cent=1., factor=255. / 2.): - return torch.Tensor((image / factor - cent) - [:, :, :, np.newaxis].transpose((3, 2, 0, 1))) - - -def tensor2vec(vector_tensor): - return vector_tensor.data.cpu().numpy()[:, :, 0, 0] - - -def voc_ap(rec, prec, use_07_metric=False): - """ ap = voc_ap(rec, prec, [use_07_metric]) - Compute VOC AP given precision and recall. - If use_07_metric is true, uses the - VOC 07 11 point method (default:False). - """ - if use_07_metric: - # 11 point metric - ap = 0. - for t in np.arange(0., 1.1, 0.1): - if np.sum(rec >= t) == 0: - p = 0 - else: - p = np.max(prec[rec >= t]) - ap = ap + p / 11. - else: - # correct AP calculation - # first append sentinel values at the end - mrec = np.concatenate(([0.], rec, [1.])) - mpre = np.concatenate(([0.], prec, [0.])) - - # compute the precision envelope - for i in range(mpre.size - 1, 0, -1): - mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i]) - - # to calculate area under PR curve, look for points - # where X axis (recall) changes value - i = np.where(mrec[1:] != mrec[:-1])[0] - - # and sum (\Delta recall) * prec - ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) - return ap - - -def tensor2im(image_tensor, imtype=np.uint8, cent=1., factor=255. / 2.): - # def tensor2im(image_tensor, imtype=np.uint8, cent=1., factor=1.): - image_numpy = image_tensor[0].cpu().float().numpy() - image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + cent) * factor - return image_numpy.astype(imtype) - - -def im2tensor(image, imtype=np.uint8, cent=1., factor=255. 
/ 2.): - # def im2tensor(image, imtype=np.uint8, cent=1., factor=1.): - return torch.Tensor((image / factor - cent) - [:, :, :, np.newaxis].transpose((3, 2, 0, 1))) - - -############################################################ -# base_model.py # -############################################################ - - -class BaseModel(torch.nn.Module): - def __init__(self): - super().__init__() - - def name(self): - return 'BaseModel' - - def initialize(self, use_gpu=True): - self.use_gpu = use_gpu - - def forward(self): - pass - - def get_image_paths(self): - pass - - def optimize_parameters(self): - pass - - def get_current_visuals(self): - return self.input - - def get_current_errors(self): - return {} - - def save(self, label): - pass - - # helper saving function that can be used by subclasses - def save_network(self, network, path, network_label, epoch_label): - save_filename = '%s_net_%s.pth' % (epoch_label, network_label) - save_path = os.path.join(path, save_filename) - torch.save(network.state_dict(), save_path) - - # helper loading function that can be used by subclasses - def load_network(self, network, network_label, epoch_label): - save_filename = '%s_net_%s.pth' % (epoch_label, network_label) - save_path = os.path.join(self.save_dir, save_filename) - print('Loading network from %s' % save_path) - network.load_state_dict(torch.load(save_path, map_location='cpu')) - - def update_learning_rate(): - pass - - def get_image_paths(self): - return self.image_paths - - def save_done(self, flag=False): - np.save(os.path.join(self.save_dir, 'done_flag'), flag) - np.savetxt(os.path.join(self.save_dir, 'done_flag'), [flag, ], fmt='%i') - - -############################################################ -# dist_model.py # -############################################################ - -import os -from collections import OrderedDict -from scipy.ndimage import zoom -from tqdm import tqdm - - -class DistModel(BaseModel): - def name(self): - return self.model_name - - def initialize(self, model='net-lin', net='alex', colorspace='Lab', pnet_rand=False, pnet_tune=False, - model_path=None, - use_gpu=True, printNet=False, spatial=False, - is_train=False, lr=.0001, beta1=0.5, version='0.1'): - ''' - INPUTS - model - ['net-lin'] for linearly calibrated network - ['net'] for off-the-shelf network - ['L2'] for L2 distance in Lab colorspace - ['SSIM'] for ssim in RGB colorspace - net - ['squeeze','alex','vgg'] - model_path - if None, will look in weights/[NET_NAME].pth - colorspace - ['Lab','RGB'] colorspace to use for L2 and SSIM - use_gpu - bool - whether or not to use a GPU - printNet - bool - whether or not to print network architecture out - spatial - bool - whether to output an array containing varying distances across spatial dimensions - spatial_shape - if given, output spatial shape. if None then spatial shape is determined automatically via spatial_factor (see below). - spatial_factor - if given, specifies upsampling factor relative to the largest spatial extent of a convolutional layer. if None then resized to size of input images. - spatial_order - spline order of filter for upsampling in spatial mode, by default 1 (bilinear). 
- is_train - bool - [True] for training mode - lr - float - initial learning rate - beta1 - float - initial momentum term for adam - version - 0.1 for latest, 0.0 was original (with a bug) - ''' - BaseModel.initialize(self, use_gpu=use_gpu) - - self.model = model - self.net = net - self.is_train = is_train - self.spatial = spatial - self.model_name = '%s [%s]' % (model, net) - - if (self.model == 'net-lin'): # pretrained net + linear layer - self.net = PNetLin(pnet_rand=pnet_rand, pnet_tune=pnet_tune, pnet_type=net, - use_dropout=True, spatial=spatial, version=version, lpips=True) - kw = dict(map_location='cpu') - if (model_path is None): - import inspect - model_path = os.path.abspath( - os.path.join(os.path.dirname(__file__), '..', '..', '..', 'models', 'lpips_models', f'{net}.pth')) - - if (not is_train): - self.net.load_state_dict(torch.load(model_path, **kw), strict=False) - - elif (self.model == 'net'): # pretrained network - self.net = PNetLin(pnet_rand=pnet_rand, pnet_type=net, lpips=False) - elif (self.model in ['L2', 'l2']): - self.net = L2(use_gpu=use_gpu, colorspace=colorspace) # not really a network, only for testing - self.model_name = 'L2' - elif (self.model in ['DSSIM', 'dssim', 'SSIM', 'ssim']): - self.net = DSSIM(use_gpu=use_gpu, colorspace=colorspace) - self.model_name = 'SSIM' - else: - raise ValueError("Model [%s] not recognized." % self.model) - - self.trainable_parameters = list(self.net.parameters()) - - if self.is_train: # training mode - # extra network on top to go from distances (d0,d1) => predicted human judgment (h*) - self.rankLoss = BCERankingLoss() - self.trainable_parameters += list(self.rankLoss.net.parameters()) - self.lr = lr - self.old_lr = lr - self.optimizer_net = torch.optim.Adam(self.trainable_parameters, lr=lr, betas=(beta1, 0.999)) - else: # test mode - self.net.eval() - - # if (use_gpu): - # self.net.to(gpu_ids[0]) - # self.net = torch.nn.DataParallel(self.net, device_ids=gpu_ids) - # if (self.is_train): - # self.rankLoss = self.rankLoss.to(device=gpu_ids[0]) # just put this on GPU0 - - if (printNet): - print('---------- Networks initialized -------------') - print_network(self.net) - print('-----------------------------------------------') - - def forward(self, in0, in1, retPerLayer=False): - ''' Function computes the distance between image patches in0 and in1 - INPUTS - in0, in1 - torch.Tensor object of shape Nx3xXxY - image patch scaled to [-1,1] - OUTPUT - computed distances between in0 and in1 - ''' - - return self.net(in0, in1, retPerLayer=retPerLayer) - - # ***** TRAINING FUNCTIONS ***** - def optimize_parameters(self): - self.forward_train() - self.optimizer_net.zero_grad() - self.backward_train() - self.optimizer_net.step() - self.clamp_weights() - - def clamp_weights(self): - for module in self.net.modules(): - if (hasattr(module, 'weight') and module.kernel_size == (1, 1)): - module.weight.data = torch.clamp(module.weight.data, min=0) - - def set_input(self, data): - self.input_ref = data['ref'] - self.input_p0 = data['p0'] - self.input_p1 = data['p1'] - self.input_judge = data['judge'] - - # if (self.use_gpu): - # self.input_ref = self.input_ref.to(device=self.gpu_ids[0]) - # self.input_p0 = self.input_p0.to(device=self.gpu_ids[0]) - # self.input_p1 = self.input_p1.to(device=self.gpu_ids[0]) - # self.input_judge = self.input_judge.to(device=self.gpu_ids[0]) - - # self.var_ref = Variable(self.input_ref, requires_grad=True) - # self.var_p0 = Variable(self.input_p0, requires_grad=True) - # self.var_p1 = Variable(self.input_p1, 
requires_grad=True) - - def forward_train(self): # run forward pass - # print(self.net.module.scaling_layer.shift) - # print(torch.norm(self.net.module.net.slice1[0].weight).item(), torch.norm(self.net.module.lin0.model[1].weight).item()) - - assert False, "We shoud've not get here when using LPIPS as a metric" - - self.d0 = self(self.var_ref, self.var_p0) - self.d1 = self(self.var_ref, self.var_p1) - self.acc_r = self.compute_accuracy(self.d0, self.d1, self.input_judge) - - self.var_judge = Variable(1. * self.input_judge).view(self.d0.size()) - - self.loss_total = self.rankLoss(self.d0, self.d1, self.var_judge * 2. - 1.) - - return self.loss_total - - def backward_train(self): - torch.mean(self.loss_total).backward() - - def compute_accuracy(self, d0, d1, judge): - ''' d0, d1 are Variables, judge is a Tensor ''' - d1_lt_d0 = (d1 < d0).cpu().data.numpy().flatten() - judge_per = judge.cpu().numpy().flatten() - return d1_lt_d0 * judge_per + (1 - d1_lt_d0) * (1 - judge_per) - - def get_current_errors(self): - retDict = OrderedDict([('loss_total', self.loss_total.data.cpu().numpy()), - ('acc_r', self.acc_r)]) - - for key in retDict.keys(): - retDict[key] = np.mean(retDict[key]) - - return retDict - - def get_current_visuals(self): - zoom_factor = 256 / self.var_ref.data.size()[2] - - ref_img = tensor2im(self.var_ref.data) - p0_img = tensor2im(self.var_p0.data) - p1_img = tensor2im(self.var_p1.data) - - ref_img_vis = zoom(ref_img, [zoom_factor, zoom_factor, 1], order=0) - p0_img_vis = zoom(p0_img, [zoom_factor, zoom_factor, 1], order=0) - p1_img_vis = zoom(p1_img, [zoom_factor, zoom_factor, 1], order=0) - - return OrderedDict([('ref', ref_img_vis), - ('p0', p0_img_vis), - ('p1', p1_img_vis)]) - - def save(self, path, label): - if (self.use_gpu): - self.save_network(self.net.module, path, '', label) - else: - self.save_network(self.net, path, '', label) - self.save_network(self.rankLoss.net, path, 'rank', label) - - def update_learning_rate(self, nepoch_decay): - lrd = self.lr / nepoch_decay - lr = self.old_lr - lrd - - for param_group in self.optimizer_net.param_groups: - param_group['lr'] = lr - - print('update lr [%s] decay: %f -> %f' % (type, self.old_lr, lr)) - self.old_lr = lr - - -def score_2afc_dataset(data_loader, func, name=''): - ''' Function computes Two Alternative Forced Choice (2AFC) score using - distance function 'func' in dataset 'data_loader' - INPUTS - data_loader - CustomDatasetDataLoader object - contains a TwoAFCDataset inside - func - callable distance function - calling d=func(in0,in1) should take 2 - pytorch tensors with shape Nx3xXxY, and return numpy array of length N - OUTPUTS - [0] - 2AFC score in [0,1], fraction of time func agrees with human evaluators - [1] - dictionary with following elements - d0s,d1s - N arrays containing distances between reference patch to perturbed patches - gts - N array in [0,1], preferred patch selected by human evaluators - (closer to "0" for left patch p0, "1" for right patch p1, - "0.6" means 60pct people preferred right patch, 40pct preferred left) - scores - N array in [0,1], corresponding to what percentage function agreed with humans - CONSTS - N - number of test triplets in data_loader - ''' - - d0s = [] - d1s = [] - gts = [] - - for data in tqdm(data_loader.load_data(), desc=name): - d0s += func(data['ref'], data['p0']).data.cpu().numpy().flatten().tolist() - d1s += func(data['ref'], data['p1']).data.cpu().numpy().flatten().tolist() - gts += data['judge'].cpu().numpy().flatten().tolist() - - d0s = np.array(d0s) - d1s = 
np.array(d1s) - gts = np.array(gts) - scores = (d0s < d1s) * (1. - gts) + (d1s < d0s) * gts + (d1s == d0s) * .5 - - return (np.mean(scores), dict(d0s=d0s, d1s=d1s, gts=gts, scores=scores)) - - -def score_jnd_dataset(data_loader, func, name=''): - ''' Function computes JND score using distance function 'func' in dataset 'data_loader' - INPUTS - data_loader - CustomDatasetDataLoader object - contains a JNDDataset inside - func - callable distance function - calling d=func(in0,in1) should take 2 - pytorch tensors with shape Nx3xXxY, and return pytorch array of length N - OUTPUTS - [0] - JND score in [0,1], mAP score (area under precision-recall curve) - [1] - dictionary with following elements - ds - N array containing distances between two patches shown to human evaluator - sames - N array containing fraction of people who thought the two patches were identical - CONSTS - N - number of test triplets in data_loader - ''' - - ds = [] - gts = [] - - for data in tqdm(data_loader.load_data(), desc=name): - ds += func(data['p0'], data['p1']).data.cpu().numpy().tolist() - gts += data['same'].cpu().numpy().flatten().tolist() - - sames = np.array(gts) - ds = np.array(ds) - - sorted_inds = np.argsort(ds) - ds_sorted = ds[sorted_inds] - sames_sorted = sames[sorted_inds] - - TPs = np.cumsum(sames_sorted) - FPs = np.cumsum(1 - sames_sorted) - FNs = np.sum(sames_sorted) - TPs - - precs = TPs / (TPs + FPs) - recs = TPs / (TPs + FNs) - score = voc_ap(recs, precs) - - return (score, dict(ds=ds, sames=sames)) - - -############################################################ -# networks_basic.py # -############################################################ - -import torch.nn as nn -from torch.autograd import Variable -import numpy as np - - -def spatial_average(in_tens, keepdim=True): - return in_tens.mean([2, 3], keepdim=keepdim) - - -def upsample(in_tens, out_H=64): # assumes scale factor is same for H and W - in_H = in_tens.shape[2] - scale_factor = 1. 
* out_H / in_H - - return nn.Upsample(scale_factor=scale_factor, mode='bilinear', align_corners=False)(in_tens) - - -# Learned perceptual metric -class PNetLin(nn.Module): - def __init__(self, pnet_type='vgg', pnet_rand=False, pnet_tune=False, use_dropout=True, spatial=False, - version='0.1', lpips=True): - super(PNetLin, self).__init__() - - self.pnet_type = pnet_type - self.pnet_tune = pnet_tune - self.pnet_rand = pnet_rand - self.spatial = spatial - self.lpips = lpips - self.version = version - self.scaling_layer = ScalingLayer() - - if (self.pnet_type in ['vgg', 'vgg16']): - net_type = vgg16 - self.chns = [64, 128, 256, 512, 512] - elif (self.pnet_type == 'alex'): - net_type = alexnet - self.chns = [64, 192, 384, 256, 256] - elif (self.pnet_type == 'squeeze'): - net_type = squeezenet - self.chns = [64, 128, 256, 384, 384, 512, 512] - self.L = len(self.chns) - - self.net = net_type(pretrained=not self.pnet_rand, requires_grad=self.pnet_tune) - - if (lpips): - self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout) - self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout) - self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout) - self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout) - self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout) - self.lins = [self.lin0, self.lin1, self.lin2, self.lin3, self.lin4] - if (self.pnet_type == 'squeeze'): # 7 layers for squeezenet - self.lin5 = NetLinLayer(self.chns[5], use_dropout=use_dropout) - self.lin6 = NetLinLayer(self.chns[6], use_dropout=use_dropout) - self.lins += [self.lin5, self.lin6] - - def forward(self, in0, in1, retPerLayer=False): - # v0.0 - original release had a bug, where input was not scaled - in0_input, in1_input = (self.scaling_layer(in0), self.scaling_layer(in1)) if self.version == '0.1' else ( - in0, in1) - outs0, outs1 = self.net(in0_input), self.net(in1_input) - feats0, feats1, diffs = {}, {}, {} - - for kk in range(self.L): - feats0[kk], feats1[kk] = normalize_tensor(outs0[kk]), normalize_tensor(outs1[kk]) - diffs[kk] = (feats0[kk] - feats1[kk]) ** 2 - - if (self.lpips): - if (self.spatial): - res = [upsample(self.lins[kk].model(diffs[kk]), out_H=in0.shape[2]) for kk in range(self.L)] - else: - res = [spatial_average(self.lins[kk].model(diffs[kk]), keepdim=True) for kk in range(self.L)] - else: - if (self.spatial): - res = [upsample(diffs[kk].sum(dim=1, keepdim=True), out_H=in0.shape[2]) for kk in range(self.L)] - else: - res = [spatial_average(diffs[kk].sum(dim=1, keepdim=True), keepdim=True) for kk in range(self.L)] - - val = res[0] - for l in range(1, self.L): - val += res[l] - - if (retPerLayer): - return (val, res) - else: - return val - - -class ScalingLayer(nn.Module): - def __init__(self): - super(ScalingLayer, self).__init__() - self.register_buffer('shift', torch.Tensor([-.030, -.088, -.188])[None, :, None, None]) - self.register_buffer('scale', torch.Tensor([.458, .448, .450])[None, :, None, None]) - - def forward(self, inp): - return (inp - self.shift) / self.scale - - -class NetLinLayer(nn.Module): - ''' A single linear layer which does a 1x1 conv ''' - - def __init__(self, chn_in, chn_out=1, use_dropout=False): - super(NetLinLayer, self).__init__() - - layers = [nn.Dropout(), ] if (use_dropout) else [] - layers += [nn.Conv2d(chn_in, chn_out, 1, stride=1, padding=0, bias=False), ] - self.model = nn.Sequential(*layers) - - -class Dist2LogitLayer(nn.Module): - ''' takes 2 distances, puts through fc layers, spits out value between [0,1] (if use_sigmoid is True) ''' - - def 
__init__(self, chn_mid=32, use_sigmoid=True): - super(Dist2LogitLayer, self).__init__() - - layers = [nn.Conv2d(5, chn_mid, 1, stride=1, padding=0, bias=True), ] - layers += [nn.LeakyReLU(0.2, True), ] - layers += [nn.Conv2d(chn_mid, chn_mid, 1, stride=1, padding=0, bias=True), ] - layers += [nn.LeakyReLU(0.2, True), ] - layers += [nn.Conv2d(chn_mid, 1, 1, stride=1, padding=0, bias=True), ] - if (use_sigmoid): - layers += [nn.Sigmoid(), ] - self.model = nn.Sequential(*layers) - - def forward(self, d0, d1, eps=0.1): - return self.model(torch.cat((d0, d1, d0 - d1, d0 / (d1 + eps), d1 / (d0 + eps)), dim=1)) - - -class BCERankingLoss(nn.Module): - def __init__(self, chn_mid=32): - super(BCERankingLoss, self).__init__() - self.net = Dist2LogitLayer(chn_mid=chn_mid) - # self.parameters = list(self.net.parameters()) - self.loss = torch.nn.BCELoss() - - def forward(self, d0, d1, judge): - per = (judge + 1.) / 2. - self.logit = self.net(d0, d1) - return self.loss(self.logit, per) - - -# L2, DSSIM metrics -class FakeNet(nn.Module): - def __init__(self, use_gpu=True, colorspace='Lab'): - super(FakeNet, self).__init__() - self.use_gpu = use_gpu - self.colorspace = colorspace - - -class L2(FakeNet): - - def forward(self, in0, in1, retPerLayer=None): - assert (in0.size()[0] == 1) # currently only supports batchSize 1 - - if (self.colorspace == 'RGB'): - (N, C, X, Y) = in0.size() - value = torch.mean(torch.mean(torch.mean((in0 - in1) ** 2, dim=1).view(N, 1, X, Y), dim=2).view(N, 1, 1, Y), - dim=3).view(N) - return value - elif (self.colorspace == 'Lab'): - value = l2(tensor2np(tensor2tensorlab(in0.data, to_norm=False)), - tensor2np(tensor2tensorlab(in1.data, to_norm=False)), range=100.).astype('float') - ret_var = Variable(torch.Tensor((value,))) - # if (self.use_gpu): - # ret_var = ret_var.cuda() - return ret_var - - -class DSSIM(FakeNet): - - def forward(self, in0, in1, retPerLayer=None): - assert (in0.size()[0] == 1) # currently only supports batchSize 1 - - if (self.colorspace == 'RGB'): - value = dssim(1. * tensor2im(in0.data), 1. 
* tensor2im(in1.data), range=255.).astype('float') - elif (self.colorspace == 'Lab'): - value = dssim(tensor2np(tensor2tensorlab(in0.data, to_norm=False)), - tensor2np(tensor2tensorlab(in1.data, to_norm=False)), range=100.).astype('float') - ret_var = Variable(torch.Tensor((value,))) - # if (self.use_gpu): - # ret_var = ret_var.cuda() - return ret_var - - -def print_network(net): - num_params = 0 - for param in net.parameters(): - num_params += param.numel() - print('Network', net) - print('Total number of parameters: %d' % num_params) - - -############################################################ -# pretrained_networks.py # -############################################################ - -from collections import namedtuple -import torch -from torchvision import models as tv - - -class squeezenet(torch.nn.Module): - def __init__(self, requires_grad=False, pretrained=True): - super(squeezenet, self).__init__() - pretrained_features = tv.squeezenet1_1(pretrained=pretrained).features - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - self.slice6 = torch.nn.Sequential() - self.slice7 = torch.nn.Sequential() - self.N_slices = 7 - for x in range(2): - self.slice1.add_module(str(x), pretrained_features[x]) - for x in range(2, 5): - self.slice2.add_module(str(x), pretrained_features[x]) - for x in range(5, 8): - self.slice3.add_module(str(x), pretrained_features[x]) - for x in range(8, 10): - self.slice4.add_module(str(x), pretrained_features[x]) - for x in range(10, 11): - self.slice5.add_module(str(x), pretrained_features[x]) - for x in range(11, 12): - self.slice6.add_module(str(x), pretrained_features[x]) - for x in range(12, 13): - self.slice7.add_module(str(x), pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h = self.slice1(X) - h_relu1 = h - h = self.slice2(h) - h_relu2 = h - h = self.slice3(h) - h_relu3 = h - h = self.slice4(h) - h_relu4 = h - h = self.slice5(h) - h_relu5 = h - h = self.slice6(h) - h_relu6 = h - h = self.slice7(h) - h_relu7 = h - vgg_outputs = namedtuple("SqueezeOutputs", ['relu1', 'relu2', 'relu3', 'relu4', 'relu5', 'relu6', 'relu7']) - out = vgg_outputs(h_relu1, h_relu2, h_relu3, h_relu4, h_relu5, h_relu6, h_relu7) - - return out - - -class alexnet(torch.nn.Module): - def __init__(self, requires_grad=False, pretrained=True): - super(alexnet, self).__init__() - alexnet_pretrained_features = tv.alexnet(pretrained=pretrained).features - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - self.N_slices = 5 - for x in range(2): - self.slice1.add_module(str(x), alexnet_pretrained_features[x]) - for x in range(2, 5): - self.slice2.add_module(str(x), alexnet_pretrained_features[x]) - for x in range(5, 8): - self.slice3.add_module(str(x), alexnet_pretrained_features[x]) - for x in range(8, 10): - self.slice4.add_module(str(x), alexnet_pretrained_features[x]) - for x in range(10, 12): - self.slice5.add_module(str(x), alexnet_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h = self.slice1(X) - h_relu1 = h - h = self.slice2(h) - h_relu2 = h - h = self.slice3(h) - h_relu3 = h - h = self.slice4(h) - h_relu4 = h - h = 
self.slice5(h) - h_relu5 = h - alexnet_outputs = namedtuple("AlexnetOutputs", ['relu1', 'relu2', 'relu3', 'relu4', 'relu5']) - out = alexnet_outputs(h_relu1, h_relu2, h_relu3, h_relu4, h_relu5) - - return out - - -class vgg16(torch.nn.Module): - def __init__(self, requires_grad=False, pretrained=True): - super(vgg16, self).__init__() - vgg_pretrained_features = tv.vgg16(pretrained=pretrained).features - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - self.N_slices = 5 - for x in range(4): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(4, 9): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(9, 16): - self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(16, 23): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(23, 30): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h = self.slice1(X) - h_relu1_2 = h - h = self.slice2(h) - h_relu2_2 = h - h = self.slice3(h) - h_relu3_3 = h - h = self.slice4(h) - h_relu4_3 = h - h = self.slice5(h) - h_relu5_3 = h - vgg_outputs = namedtuple("VggOutputs", ['relu1_2', 'relu2_2', 'relu3_3', 'relu4_3', 'relu5_3']) - out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3, h_relu5_3) - - return out - - -class resnet(torch.nn.Module): - def __init__(self, requires_grad=False, pretrained=True, num=18): - super(resnet, self).__init__() - if (num == 18): - self.net = tv.resnet18(pretrained=pretrained) - elif (num == 34): - self.net = tv.resnet34(pretrained=pretrained) - elif (num == 50): - self.net = tv.resnet50(pretrained=pretrained) - elif (num == 101): - self.net = tv.resnet101(pretrained=pretrained) - elif (num == 152): - self.net = tv.resnet152(pretrained=pretrained) - self.N_slices = 5 - - self.conv1 = self.net.conv1 - self.bn1 = self.net.bn1 - self.relu = self.net.relu - self.maxpool = self.net.maxpool - self.layer1 = self.net.layer1 - self.layer2 = self.net.layer2 - self.layer3 = self.net.layer3 - self.layer4 = self.net.layer4 - - def forward(self, X): - h = self.conv1(X) - h = self.bn1(h) - h = self.relu(h) - h_relu1 = h - h = self.maxpool(h) - h = self.layer1(h) - h_conv2 = h - h = self.layer2(h) - h_conv3 = h - h = self.layer3(h) - h_conv4 = h - h = self.layer4(h) - h_conv5 = h - - outputs = namedtuple("Outputs", ['relu1', 'conv2', 'conv3', 'conv4', 'conv5']) - out = outputs(h_relu1, h_conv2, h_conv3, h_conv4, h_conv5) - - return out diff --git a/spaces/akhaliq/neural-waveshaping-synthesis/neural_waveshaping_synthesis/data/utils/loudness_extraction.py b/spaces/akhaliq/neural-waveshaping-synthesis/neural_waveshaping_synthesis/data/utils/loudness_extraction.py deleted file mode 100644 index 92360648bc5d2c20c9b3f265aa3a574af6c7079f..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/neural-waveshaping-synthesis/neural_waveshaping_synthesis/data/utils/loudness_extraction.py +++ /dev/null @@ -1,89 +0,0 @@ -from typing import Callable, Optional -import warnings - -import gin -import librosa -import numpy as np - -from .upsampling import linear_interpolation - - -def compute_power_spectrogram( - audio: np.ndarray, - n_fft: int, - hop_length: int, - window: str, - epsilon: float, -): - spectrogram = librosa.stft(audio, n_fft=n_fft, hop_length=hop_length, window=window) - 
magnitude_spectrogram = np.abs(spectrogram) - power_spectrogram = librosa.amplitude_to_db( - magnitude_spectrogram, ref=np.max, amin=epsilon - ) - return power_spectrogram - - -def perform_perceptual_weighting( - power_spectrogram_in_db: np.ndarray, sample_rate: float, n_fft: int -): - centre_frequencies = librosa.fft_frequencies(sample_rate, n_fft) - - # We know that we will get a log(0) warning here due to the DC component -- we can - # safely ignore as it is clipped to the default min dB value of -80.0 dB - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - weights = librosa.A_weighting(centre_frequencies) - - weights = np.expand_dims(weights, axis=1) - weighted_spectrogram = power_spectrogram_in_db # + weights - return weighted_spectrogram - - -@gin.configurable -def extract_perceptual_loudness( - audio: np.ndarray, - sample_rate: float = 16000, - n_fft: int = 2048, - hop_length: int = 512, - window: str = "hann", - epsilon: float = 1e-5, - interpolate_fn: Optional[Callable] = linear_interpolation, - normalise: bool = True, -): - power_spectrogram = compute_power_spectrogram( - audio, n_fft=n_fft, hop_length=hop_length, window=window, epsilon=epsilon - ) - perceptually_weighted_spectrogram = perform_perceptual_weighting( - power_spectrogram, sample_rate=sample_rate, n_fft=n_fft - ) - loudness = np.mean(perceptually_weighted_spectrogram, axis=0) - if interpolate_fn: - loudness = interpolate_fn( - loudness, n_fft, hop_length, original_length=audio.size - ) - - if normalise: - loudness = (loudness + 80) / 80 - - return loudness - - -@gin.configurable -def extract_rms( - audio: np.ndarray, - window_size: int = 2048, - hop_length: int = 512, - sample_rate: Optional[float] = 16000.0, - interpolate_fn: Optional[Callable] = linear_interpolation, -): - # pad audio to centre frames - padded_audio = np.pad(audio, (window_size // 2, window_size // 2)) - frames = librosa.util.frame(padded_audio, window_size, hop_length) - squared = frames ** 2 - mean = np.mean(squared, axis=0) - root = np.sqrt(mean) - if interpolate_fn: - assert sample_rate is not None, "Must provide sample rate if upsampling" - root = interpolate_fn(root, window_size, hop_length, original_length=audio.size) - - return root diff --git a/spaces/alicelouis/NSCLC_classification/app.py b/spaces/alicelouis/NSCLC_classification/app.py deleted file mode 100644 index 4ae48a36d06e37679f576acc0c965d1d043588de..0000000000000000000000000000000000000000 --- a/spaces/alicelouis/NSCLC_classification/app.py +++ /dev/null @@ -1,626 +0,0 @@ -import numpy as np -from transformers import BeitImageProcessor, BeitForImageClassification -from PIL import Image -import PIL.Image as Image -import csv - -from streamlit_echarts import st_echarts -from st_on_hover_tabs import on_hover_tabs -import streamlit as st - -st.set_page_config(layout="wide") - -import warnings -warnings.filterwarnings('ignore') -from torchvision import transforms -from datasets import load_dataset -from pytorch_grad_cam import run_dff_on_image, GradCAM -from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget -from pytorch_grad_cam.utils.image import show_cam_on_image -import cv2 -import torch -from torch import nn -from typing import List, Callable, Optional -import os -import pandas as pd -import pydicom - -labels = ["adenocarcinoma","large.cell","normal","squamous.cell"] -model_name_or_path = 'alicelouis/BeiT_NSCLC_lr2e-5' -st.markdown(''' - -''', unsafe_allow_html=True) - -@st.cache_resource(show_spinner=False,ttl=1800,max_entries=2) -def 
FeatureExtractor(model_name_or_path): - feature_extractor = BeitImageProcessor.from_pretrained(model_name_or_path) - return feature_extractor - - -@st.cache_resource(show_spinner=False,ttl=1800,max_entries=2) -def LoadModel(model_name_or_path): - model = BeitForImageClassification.from_pretrained( - model_name_or_path, - num_labels=len(labels), - id2label={int(i): c for i, c in enumerate(labels)}, - label2id={c: int(i) for i, c in enumerate(labels)}, - ignore_mismatched_sizes=True) - return model - - -# Model wrapper to return a tensor -class HuggingfaceToTensorModelWrapper(torch.nn.Module): - def __init__(self, model): - super(HuggingfaceToTensorModelWrapper, self).__init__() - self.model = model - - def forward(self, x): - return self.model(x).logits - -# """ Translate the category name to the category index. -# Some models aren't trained on Imagenet but on even larger "data"sets, -# so we can't just assume that 761 will always be remote-control. - -# """ -def category_name_to_index(model, category_name): - name_to_index = dict((v, k) for k, v in model.config.id2label.items()) - return name_to_index[category_name] - -# """ Helper function to run GradCAM on an image and create a visualization. -# (note to myself: this is probably useful enough to move into the package) -# If several targets are passed in targets_for_gradcam, -# e.g different categories, -# a visualization for each of them will be created. - -# """ -def print_top_categories(model, img_tensor, top_k=5): - feature_extractor = FeatureExtractor(model_name_or_path) - inputs = feature_extractor(images=img_tensor, return_tensors="pt") - outputs = model(**inputs) - logits = outputs.logits - indices = logits.cpu()[0, :].detach().numpy().argsort()[-top_k :][::-1] - probabilities = nn.functional.softmax(logits, dim=-1) - topK = dict() - for i in indices: - topK[model.config.id2label[i]] = probabilities[0][i].item()*100 - return topK - -def reshape_transform_vit_huggingface(x): - activations = x[:, 1:, :] - - activations = activations.view(activations.shape[0], - 14, 14, activations.shape[2]) - - activations = activations.transpose(2, 3).transpose(1, 2) - - return activations - - -def count_system(): - count_system = [] - with open('count_class.txt', 'r') as f: - for line in f: - if line.strip() == '0': - continue - else: - count_system.append(line.strip()) - f.close() - if len(count_system) != 0: - return int(len(count_system)) - elif len(count_system) == 0: - return int(0) - - -def count_class(count_classes): - a = 0 - b = 0 - c = 0 - d = 0 - for i in range(len(count_classes)): - if count_classes[i] == "Adeno": - a += 1 - elif count_classes[i] == "Normal": - b += 1 - elif count_classes[i] == "Large": - c += 1 - elif count_classes[i] == "Squamous": - d += 1 - count_classes = [] - count_classes.append(str(a)) - count_classes.append(str(b)) - count_classes.append(str(c)) - count_classes.append(str(d)) - with open("count_class.txt", "w") as f: - for count in count_classes: - f.write(count + "\n") - -# Define CSS styling for centering -centered_style = """ - display: flex; - justify-content: center; -""" - -st.markdown( - """ -
- 🏥 Lung Cancer Classification with Vision Transformer : จำแนกมะเร็งปอด 🫁
- """, unsafe_allow_html=True) - -with open("assets/css/style.css") as f: - st.markdown(f"",unsafe_allow_html=True) -with open("assets/webfonts/font.txt") as f: - st.markdown(f.read(),unsafe_allow_html=True) -# end def - -with st.sidebar: - tabs = on_hover_tabs(tabName=['Home','Upload', 'Analytics', 'More Information', 'Reset'], - iconName=['home','upload', 'analytics', 'informations', 'refresh'], - styles={'navtab': {'background-color': '#111', 'color': '#818181', 'font-size': '18px', - 'transition': '.3s', 'white-space': 'nowrap', 'text-transform': 'uppercase'}, - 'tabOptionsStyle': - {':hover :hover': {'color': 'red', 'cursor': 'pointer'}}, 'iconStyle': - {'position': 'fixed', 'left': '7.5px', 'text-align': 'left'}, 'tabStyle': - {'list-style-type': 'none', 'margin-bottom': '30px', 'padding-left': '30px'}}, - key="1",default_choice=0) - st.markdown( - """ -
ได้รับทุนสนับสนุน 2,000 บาท
National Software Contest ครั้งที่ 25
ประจำปีงบประมาณ 2566
- """, unsafe_allow_html=True) -data_base = [] -if tabs == 'Home': - st.image('How_to_use.png',use_column_width=True) -elif tabs == 'Upload': #and count_system () != 1: - uploaded_file = st.file_uploader("อัปโหลดไฟล์ภาพ", type=["jpg", "jpeg", "png", "dcm"], accept_multiple_files=True) - name_of_files = [] - name_of_files_new = [] - for n in uploaded_file: - file_name = n.name - name_of_files.append(file_name) - with open("save_name.txt", "w") as f: - for name in name_of_files: - f.write(name + "\n") - for j in range(len(name_of_files)): - if name_of_files[j].endswith('.dcm'): - name_of_files_new.append(name_of_files[j][:-4] + '.png') - else: - name_of_files_new.append(name_of_files[j]) - for i in range(len(uploaded_file)): - if name_of_files[i].endswith('.dcm'): - ds = pydicom.dcmread(uploaded_file[i]) - new_image = ds.pixel_array.astype(float) - scaled_image = (np.maximum(new_image, 0) / new_image.max()) * 255.0 - scaled_image = np.uint8(scaled_image) - gray_scale = Image.fromarray(scaled_image) - final_image = gray_scale.convert('RGB') - final_image.resize((200,200)) - final_image.save(r'./dcm_png/{}.png'.format(name_of_files[i])) - feature_extractor = FeatureExtractor(model_name_or_path) - model = LoadModel(model_name_or_path) - if name_of_files[i].endswith('.dcm'): - img = Image.open(r'./dcm_png/{}.png'.format(name_of_files[i])) - else: - img = Image.open(uploaded_file[i]) - img_out = img.resize((224,224)) - img_out = np.array(img_out) - # โหลดโมเดลที่เซฟ - image = img.resize((224,224)) - img_tensor = transforms.ToTensor()(image) - def run_grad_cam_on_image(model: torch.nn.Module, - target_layer: torch.nn.Module, - targets_for_gradcam: List[Callable], - reshape_transform: Optional[Callable], - input_tensor: torch.nn.Module=img_tensor, - input_image: Image=image, - method: Callable=GradCAM): - with method(model=HuggingfaceToTensorModelWrapper(model), - target_layers=[target_layer], - reshape_transform=reshape_transform) as cam: - # Replicate the tensor for each of the categories we want to create Grad-CAM for: - repeated_tensor = input_tensor[None, :].repeat(len(targets_for_gradcam), 1, 1, 1) - - batch_results = cam(input_tensor=repeated_tensor, - targets=targets_for_gradcam) - results = [] - for grayscale_cam in batch_results: - visualization = show_cam_on_image(np.float32(input_image)/255, - grayscale_cam, - use_rgb=True) - # Make it weight less in the notebook: - visualization = cv2.resize(visualization, - (visualization.shape[1]//2, visualization.shape[0]//2)) - results.append(visualization) - return np.hstack(results) - inputs = feature_extractor(images=image, return_tensors="pt") - targets_for_gradcam = [ClassifierOutputTarget(category_name_to_index(model, "adenocarcinoma")), - ClassifierOutputTarget(category_name_to_index(model, "large.cell")), - ClassifierOutputTarget(category_name_to_index(model, "normal")), - ClassifierOutputTarget(category_name_to_index(model, "squamous.cell")) - ] - target_layer_dff = model.beit.layernorm - target_layer_gradcam = model.beit.encoder.layer[-2].output - image_resized = image - tensor_resized = transforms.ToTensor()(image_resized) - outputs = model(**inputs) - logits = outputs.logits - # model predicts one of the 4 classes - predicted_class_idx = logits.argmax(-1).item() - className = labels[predicted_class_idx] - # display the images on streamlit - dff_image = Image.fromarray(run_dff_on_image(model=model, - target_layer=target_layer_dff, - classifier=model.classifier, - img_pil=image_resized, - img_tensor=tensor_resized, - 
reshape_transform=reshape_transform_vit_huggingface, - n_components=4, - top_k=4)) - # dff_image.save(r"./save_images/dff_image.png") - # gradcam_image.save(r"./save_images/gradcam_image.png") - topK = print_top_categories(model, tensor_resized) - df = pd.DataFrame.from_dict(topK, orient='index') - list_to_be_sorted= [] - for x, y in topK.items(): - dic = dict() - dic["value"] = y - dic["name"] = x - list_to_be_sorted.append(dic) - data_base.append(y) - - if list_to_be_sorted[0]['name'] == "adenocarcinoma": - dff_image.save(r"./Adenocarcinoma/{}".format(name_of_files_new[i])) - image_path = name_of_files_new[i] - with Image.open(r"./Adenocarcinoma/{}".format(image_path)) as image: - width, height = image.size - new_width = 2 * width // 3 - cropped_image = image.crop((0, 0, new_width, height)) - cropped_image.save(r"./Adenocarcinoma/{}".format(image_path)) - elif list_to_be_sorted[0]['name'] == "large.cell": - dff_image.save(r"./Large cell carcinoma/{}".format(name_of_files_new[i])) - image_path = name_of_files_new[i] - with Image.open(r"./Large cell carcinoma/{}".format(image_path)) as image: - width, height = image.size - new_width = 2 * width // 3 - cropped_image = image.crop((0, 0, new_width, height)) - cropped_image.save(r"./Large cell carcinoma/{}".format(image_path)) - #dff_image.save(r".\Large cell carcinoma\{}".format(name_of_files_new[i])) - elif list_to_be_sorted[0]['name'] == "normal": - dff_image.save(r"./Normal/{}".format(name_of_files_new[i])) - image_path = name_of_files_new[i] - with Image.open(r"./Normal/{}".format(image_path)) as image: - width, height = image.size - new_width = 2 * width // 3 - cropped_image = image.crop((0, 0, new_width, height)) - cropped_image.save(r"./Normal/{}".format(image_path)) - #dff_image.save(r"./Normal/{}".format(name_of_files_new[i])) - elif list_to_be_sorted[0]['name'] == "squamous.cell": - dff_image.save(r"./Squamous cell carcinoma/{}".format(name_of_files_new[i])) - image_path = name_of_files_new[i] - with Image.open(r"./Squamous cell carcinoma/{}".format(image_path)) as image: - width, height = image.size - new_width = 2 * width // 3 - cropped_image = image.crop((0, 0, new_width, height)) - cropped_image.save(r"./Squamous cell carcinoma/{}".format(image_path)) - #dff_image.save(r".\Squamous cell carcinoma\{}".format(name_of_files_new[i])) - # st.image(dff_image, use_column_width=True) - # st.image(gradcam_image, use_column_width=True) - st.balloons() - - # Create a container for the two columns - container = st.container() - # Create two columns within the container - col1, col2 = container.columns(2) - col3, col4 = container.columns(2) - col5, col6 = container.columns(2) - # Add the first subheader to the first column - count_classes = [] #Adenocarcinoma, Normal, Large cell carcinoma, Squamous cell carcinoma - with col1: - st.markdown("

Adenocarcinoma

".format(centered_style), unsafe_allow_html=True) - # Add the second subheader to the second column - folder_path = r"./Adenocarcinoma/" - image_files = [f for f in os.listdir(folder_path) if f.endswith('.png') or f.endswith('.jpg')] - # Display the images in a loop - for i in range(0, len(image_files), 2): - col7, col8 = st.columns([1, 1]) - with col7: - if i < len(image_files): - image1 = Image.open(os.path.join(folder_path, image_files[i])) - st.image(image1, use_column_width=True) - st.write(f"

{image_files[i]}

", unsafe_allow_html=True) - count_classes.append("Adeno") - with col8: - if i+1 < len(image_files): - image2 = Image.open(os.path.join(folder_path, image_files[i+1])) - st.image(image2, use_column_width=True) - st.write(f"

{image_files[i+1]}

", unsafe_allow_html=True) - count_classes.append("Adeno") - with col2: - st.markdown("

Normal

".format(centered_style), unsafe_allow_html=True) - folder_path = r"./Normal/" - image_files = [f for f in os.listdir(folder_path) if f.endswith('.png') or f.endswith('.jpg')] - # Display the images in a loop - for i in range(0, len(image_files), 2): - col9, col10 = st.columns([1, 1]) - with col9: - if i < len(image_files): - image1 = Image.open(os.path.join(folder_path, image_files[i])) - st.image(image1, use_column_width=True) - st.write(f"

{image_files[i]}

", unsafe_allow_html=True) - count_classes.append("Normal") - with col10: - if i+1 < len(image_files): - image2 = Image.open(os.path.join(folder_path, image_files[i+1])) - st.image(image2, use_column_width=True) - st.write(f"

{image_files[i+1]}

", unsafe_allow_html=True) - count_classes.append("Normal") - with col3: - st.markdown("") - with col4: - st.markdown("") - - with col5: - st.markdown("

Large cell carcinoma

".format(centered_style), unsafe_allow_html=True) - folder_path = r"./Large cell carcinoma/" - image_files = [f for f in os.listdir(folder_path) if f.endswith('.png') or f.endswith('.jpg')] - # Display the images in a loop - for i in range(0, len(image_files), 2): - col11, col12 = st.columns([1, 1]) - with col11: - if i < len(image_files): - image1 = Image.open(os.path.join(folder_path, image_files[i])) - st.image(image1, use_column_width=True) - st.write(f"

{image_files[i]}

", unsafe_allow_html=True) - count_classes.append("Large") - with col12: - if i+1 < len(image_files): - image2 = Image.open(os.path.join(folder_path, image_files[i+1])) - st.image(image2, use_column_width=True) - st.write(f"

{image_files[i+1]}

", unsafe_allow_html=True) - count_classes.append("Large") - with col6: - st.markdown("

Squamous cell carcinoma

".format(centered_style), unsafe_allow_html=True) - folder_path = r"./Squamous cell carcinoma/" - image_files = [f for f in os.listdir(folder_path) if f.endswith('.png') or f.endswith('.jpg')] - # Display the images in a loop - for i in range(0, len(image_files), 2): - col13, col14 = st.columns([1, 1]) - with col13: - if i < len(image_files): - image1 = Image.open(os.path.join(folder_path, image_files[i])) - st.image(image1, use_column_width=True) - st.write(f"

{image_files[i]}

", unsafe_allow_html=True) - count_classes.append("Squamous") - with col14: - if i+1 < len(image_files): - image2 = Image.open(os.path.join(folder_path, image_files[i+1])) - st.image(image2, use_column_width=True) - st.write(f"

{image_files[i+1]}

", unsafe_allow_html=True) - count_classes.append("Squamous") - count_class(count_classes) - -elif tabs == 'Analytics' and count_system() > 0: - data_base = [] - data_base_max = [] - #max_value = max(data_base) - #max_index = data_base.index(max_value) - with open('count_class.txt', 'r') as f: - for line in f: - data_base.append(line.strip()) - data_base_max.append(int(line.strip())) - max_value = max(data_base_max) # Find the maximum value in the list - max_index = data_base_max.index(max_value) - max_indices = [i for i, value in enumerate(data_base_max) if value == max_value] - if len(max_indices) > 1: - max_index = 4 - option = { - "tooltip": { - "trigger": 'axis', - "axisPointer": { - # Use axis to trigger tooltip - "type": 'shadow' # 'shadow' as default; can also be 'line' or 'shadow' - } - }, - "legend": {}, - "grid": { - "left": '3%', - "right": '4%', - "bottom": '3%', - "containLabel": True - }, - "xAxis": { - "type": 'value' - }, - "yAxis": { - "type": 'category', - "data": ['Results'] - }, - "series": [ - { - "name": 'Adenocarcinoma', - "type": 'bar', - "stack": 'total', - "label": { - "show": True - }, - "emphasis": { - "focus": 'series' - }, - "data": [data_base[0]] - }, - { - "name": 'Normal', - "type": 'bar', - "stack": 'total', - "label": { - "show": True - }, - "emphasis": { - "focus": 'series' - }, - "data": [data_base[1]] - }, - { - "name": 'Large.Cell', - "type": 'bar', - "stack": 'total', - "label": { - "show": True - }, - "emphasis": { - "focus": 'series' - }, - "data": [data_base[2]] - }, - { - "name": 'Squamous.Cell', - "type": 'bar', - "stack": 'total', - "label": { - "show": True - }, - "emphasis": { - "focus": 'series' - }, - "data": [data_base[3]] - }, - ] -} - st_echarts(options=option) - if max_index == 0: - st.markdown("

Adenocarcinoma

".format(centered_style), unsafe_allow_html=True) - elif max_index == 1: - st.markdown("

Normal

".format(centered_style), unsafe_allow_html=True) - elif max_index == 2: - st.markdown("

Large cell carcinoma

".format(centered_style), unsafe_allow_html=True) - elif max_index == 3: - st.markdown("

Squamous cell carcinoma

".format(centered_style), unsafe_allow_html=True) - -elif tabs == 'Analytics' and count_system() == 0: - st.markdown( - """ -
🖼️ Image Analytics Not Detected ❌
- """, unsafe_allow_html=True) - -elif tabs == 'More Information': - st.markdown( - """ -
💻 Organizers 🖱️
- """, unsafe_allow_html=True) - st.markdown( - """ -
- """, unsafe_allow_html=True) - st.markdown( - """ -
👑 Santipab Tongchan\nCall : 090-2471512 \n "stdm4522@pccbr.ac.th"
Phakkhaphon Artburai\nCall : 091-0197314 \n "stdm4321@pccbr.ac.th"
Natthawee Naewkumpol\nCall : 061-9487722 \n "stdm4605@pccbr.ac.th"
- """, unsafe_allow_html=True) - st.markdown( - """ -
Princess Chulabhorn Science High School Buriram
- """, unsafe_allow_html=True) - -elif tabs == 'Reset': - def clear_folder(folder_name): - # Check if the folder exists - if not os.path.exists(folder_name): - print(f"{folder_name} does not exist.") - return - # Get a list of all files in the folder and its subdirectories - files = [] - for dirpath, dirnames, filenames in os.walk(folder_name): - for filename in filenames: - files.append(os.path.join(dirpath, filename)) - - # Delete all files in the list - for file in files: - os.remove(file) - clear_folder('Adenocarcinoma') - clear_folder('Large cell carcinoma') - clear_folder('Normal') - clear_folder('Squamous cell carcinoma') - clear_folder('dcm_png') - #clear data in count_class - with open('count_class.txt', 'w') as file: - file.write('') - st.markdown( - """ -
🔃 The information has been cleared. ✅
- """, unsafe_allow_html=True) diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/src/os/unix/pa_unix_hostapis.c b/spaces/amarchheda/ChordDuplicate/portaudio/src/os/unix/pa_unix_hostapis.c deleted file mode 100644 index 7f1a51f9044e719816302cda4c47d8a78e2ba46b..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/src/os/unix/pa_unix_hostapis.c +++ /dev/null @@ -1,103 +0,0 @@ -/* - * $Id$ - * Portable Audio I/O Library UNIX initialization table - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 1999-2002 Ross Bencina, Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup unix_src -*/ - -#include "pa_hostapi.h" - -PaError PaJack_Initialize( PaUtilHostApiRepresentation **hostApi, PaHostApiIndex index ); -PaError PaAlsa_Initialize( PaUtilHostApiRepresentation **hostApi, PaHostApiIndex index ); -PaError PaOSS_Initialize( PaUtilHostApiRepresentation **hostApi, PaHostApiIndex index ); -/* Added for IRIX, Pieter, oct 2, 2003: */ -PaError PaSGI_Initialize( PaUtilHostApiRepresentation **hostApi, PaHostApiIndex index ); -/* Linux AudioScience HPI */ -PaError PaAsiHpi_Initialize( PaUtilHostApiRepresentation **hostApi, PaHostApiIndex index ); -PaError PaMacCore_Initialize( PaUtilHostApiRepresentation **hostApi, PaHostApiIndex index ); -PaError PaSkeleton_Initialize( PaUtilHostApiRepresentation **hostApi, PaHostApiIndex index ); - -/** Note that on Linux, ALSA is placed before OSS so that the former is preferred over the latter. 
- */ - -PaUtilHostApiInitializer *paHostApiInitializers[] = - { -#ifdef __linux__ - -#if PA_USE_ALSA - PaAlsa_Initialize, -#endif - -#if PA_USE_OSS - PaOSS_Initialize, -#endif - -#else /* __linux__ */ - -#if PA_USE_OSS - PaOSS_Initialize, -#endif - -#if PA_USE_ALSA - PaAlsa_Initialize, -#endif - -#endif /* __linux__ */ - -#if PA_USE_JACK - PaJack_Initialize, -#endif - /* Added for IRIX, Pieter, oct 2, 2003: */ -#if PA_USE_SGI - PaSGI_Initialize, -#endif - -#if PA_USE_ASIHPI - PaAsiHpi_Initialize, -#endif - -#if PA_USE_COREAUDIO - PaMacCore_Initialize, -#endif - -#if PA_USE_SKELETON - PaSkeleton_Initialize, -#endif - - 0 /* NULL terminated array */ - }; diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Bing.py b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Bing.py deleted file mode 100644 index 87e04ac82293c7e22068af431ac407bdee435a1b..0000000000000000000000000000000000000000 --- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Bing.py +++ /dev/null @@ -1,349 +0,0 @@ -import os -import json -import random -import json -import os -import uuid -import ssl -import certifi -import aiohttp -import asyncio - -import requests -from ...typing import sha256, Dict, get_type_hints - -url = 'https://bing.com/chat' -model = ['gpt-4'] -supports_stream = True -needs_auth = False - -ssl_context = ssl.create_default_context() -ssl_context.load_verify_locations(certifi.where()) - - -class optionsSets: - optionSet: dict = { - 'tone': str, - 'optionsSets': list - } - - jailbreak: dict = { - "optionsSets": [ - 'saharasugg', - 'enablenewsfc', - 'clgalileo', - 'gencontentv3', - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "h3precise" - # "harmonyv3", - "dtappid", - "cricinfo", - "cricinfov2", - "dv3sugg", - "nojbfedge" - ] - } - - -class Defaults: - delimiter = '\x1e' - ip_address = f'13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}' - - allowedMessageTypes = [ - 'Chat', - 'Disengaged', - 'AdsQuery', - 'SemanticSerp', - 'GenerateContentQuery', - 'SearchQuery', - 'ActionRequest', - 'Context', - 'Progress', - 'AdsQuery', - 'SemanticSerp' - ] - - sliceIds = [ - - # "222dtappid", - # "225cricinfo", - # "224locals0" - - 'winmuid3tf', - 'osbsdusgreccf', - 'ttstmout', - 'crchatrev', - 'winlongmsgtf', - 'ctrlworkpay', - 'norespwtf', - 'tempcacheread', - 'temptacache', - '505scss0', - '508jbcars0', - '515enbotdets0', - '5082tsports', - '515vaoprvs', - '424dagslnv1s0', - 'kcimgattcf', - '427startpms0' - ] - - location = { - 'locale': 'en-US', - 'market': 'en-US', - 'region': 'US', - 'locationHints': [ - { - 'country': 'United States', - 'state': 'California', - 'city': 'Los Angeles', - 'timezoneoffset': 8, - 'countryConfidence': 8, - 'Center': { - 'Latitude': 34.0536909, - 'Longitude': -118.242766 - }, - 'RegionType': 2, - 'SourceType': 1 - } - ], - } - - -def _format(msg: dict) -> str: - return json.dumps(msg, ensure_ascii=False) + Defaults.delimiter - - -async def create_conversation(): - for _ in range(5): - create = requests.get('https://www.bing.com/turing/conversation/create', - headers={ - 'authority': 'edgeservices.bing.com', - 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7', - 'accept-language': 'en-US,en;q=0.9', - 'cache-control': 'max-age=0', - 'sec-ch-ua': '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"', - 'sec-ch-ua-arch': '"x86"', - 
'sec-ch-ua-bitness': '"64"', - 'sec-ch-ua-full-version': '"110.0.1587.69"', - 'sec-ch-ua-full-version-list': '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-model': '""', - 'sec-ch-ua-platform': '"Windows"', - 'sec-ch-ua-platform-version': '"15.0.0"', - 'sec-fetch-dest': 'document', - 'sec-fetch-mode': 'navigate', - 'sec-fetch-site': 'none', - 'sec-fetch-user': '?1', - 'upgrade-insecure-requests': '1', - 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69', - 'x-edge-shopping-flag': '1', - 'x-forwarded-for': Defaults.ip_address - }) - - conversationId = create.json().get('conversationId') - clientId = create.json().get('clientId') - conversationSignature = create.json().get('conversationSignature') - - if not conversationId or not clientId or not conversationSignature and _ == 4: - raise Exception('Failed to create conversation.') - - return conversationId, clientId, conversationSignature - - -async def stream_generate(prompt: str, mode: optionsSets.optionSet = optionsSets.jailbreak, context: bool or str = False): - timeout = aiohttp.ClientTimeout(total=900) - session = aiohttp.ClientSession(timeout=timeout) - - conversationId, clientId, conversationSignature = await create_conversation() - - wss = await session.ws_connect('wss://sydney.bing.com/sydney/ChatHub', ssl=ssl_context, autoping=False, - headers={ - 'accept': 'application/json', - 'accept-language': 'en-US,en;q=0.9', - 'content-type': 'application/json', - 'sec-ch-ua': '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"', - 'sec-ch-ua-arch': '"x86"', - 'sec-ch-ua-bitness': '"64"', - 'sec-ch-ua-full-version': '"109.0.1518.78"', - 'sec-ch-ua-full-version-list': '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-model': '', - 'sec-ch-ua-platform': '"Windows"', - 'sec-ch-ua-platform-version': '"15.0.0"', - 'sec-fetch-dest': 'empty', - 'sec-fetch-mode': 'cors', - 'sec-fetch-site': 'same-origin', - 'x-ms-client-request-id': str(uuid.uuid4()), - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - 'Referer': 'https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx', - 'Referrer-Policy': 'origin-when-cross-origin', - 'x-forwarded-for': Defaults.ip_address - }) - - await wss.send_str(_format({'protocol': 'json', 'version': 1})) - await wss.receive(timeout=900) - - struct = { - 'arguments': [ - { - **mode, - 'source': 'cib', - 'allowedMessageTypes': Defaults.allowedMessageTypes, - 'sliceIds': Defaults.sliceIds, - 'traceId': os.urandom(16).hex(), - 'isStartOfSession': True, - 'message': Defaults.location | { - 'author': 'user', - 'inputMethod': 'Keyboard', - 'text': prompt, - 'messageType': 'Chat' - }, - 'conversationSignature': conversationSignature, - 'participant': { - 'id': clientId - }, - 'conversationId': conversationId - } - ], - 'invocationId': '0', - 'target': 'chat', - 'type': 4 - } - - if context: - struct['arguments'][0]['previousMessages'] = [ - { - "author": "user", - "description": context, - "contextType": "WebPage", - "messageType": "Context", - "messageId": "discover-web--page-ping-mriduna-----" - } - ] - - await wss.send_str(_format(struct)) - - final = False - draw = False - resp_txt = '' - result_text = '' - resp_txt_no_link = '' - cache_text = '' - - while not final: - msg = await 
wss.receive(timeout=900) - objects = msg.data.split(Defaults.delimiter) - - for obj in objects: - if obj is None or not obj: - continue - - response = json.loads(obj) - if response.get('type') == 1 and response['arguments'][0].get('messages',): - if not draw: - if (response['arguments'][0]['messages'][0]['contentOrigin'] != 'Apology') and not draw: - resp_txt = result_text + \ - response['arguments'][0]['messages'][0]['adaptiveCards'][0]['body'][0].get( - 'text', '') - resp_txt_no_link = result_text + \ - response['arguments'][0]['messages'][0].get( - 'text', '') - - if response['arguments'][0]['messages'][0].get('messageType',): - resp_txt = ( - resp_txt - + response['arguments'][0]['messages'][0]['adaptiveCards'][0]['body'][0]['inlines'][0].get('text') - + '\n' - ) - result_text = ( - result_text - + response['arguments'][0]['messages'][0]['adaptiveCards'][0]['body'][0]['inlines'][0].get('text') - + '\n' - ) - - if cache_text.endswith(' '): - final = True - if wss and not wss.closed: - await wss.close() - if session and not session.closed: - await session.close() - - yield (resp_txt.replace(cache_text, '')) - cache_text = resp_txt - - elif response.get('type') == 2: - if response['item']['result'].get('error'): - if wss and not wss.closed: - await wss.close() - if session and not session.closed: - await session.close() - - raise Exception( - f"{response['item']['result']['value']}: {response['item']['result']['message']}") - - if draw: - cache = response['item']['messages'][1]['adaptiveCards'][0]['body'][0]['text'] - response['item']['messages'][1]['adaptiveCards'][0]['body'][0]['text'] = ( - cache + resp_txt) - - if (response['item']['messages'][-1]['contentOrigin'] == 'Apology' and resp_txt): - response['item']['messages'][-1]['text'] = resp_txt_no_link - response['item']['messages'][-1]['adaptiveCards'][0]['body'][0]['text'] = resp_txt - - # print('Preserved the message from being deleted', file=sys.stderr) - - final = True - if wss and not wss.closed: - await wss.close() - if session and not session.closed: - await session.close() - - -def run(generator): - loop = asyncio.new_event_loop() - asyncio.set_event_loop(loop) - gen = generator.__aiter__() - - while True: - try: - next_val = loop.run_until_complete(gen.__anext__()) - yield next_val - - except StopAsyncIteration: - break - #print('Done') - -def convert(messages): - context = "" - - for message in messages: - context += "[%s](#message)\n%s\n\n" % (message['role'], - message['content']) - - return context - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - if len(messages) < 2: - prompt = messages[0]['content'] - context = False - - else: - prompt = messages[-1]['content'] - context = convert(messages[:-1]) - - response = run(stream_generate(prompt, optionsSets.jailbreak, context)) - for token in response: - yield (token) - - #print('Done') - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Phind.py b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Phind.py deleted file mode 100644 index 9fa8ec821f701d7841432e498a11ac9dd017978c..0000000000000000000000000000000000000000 --- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Phind.py +++ /dev/null @@ -1,36 +0,0 @@ -import os -import json 
-import time -import subprocess - -from ...typing import sha256, Dict, get_type_hints - -url = 'https://phind.com' -model = ['gpt-4'] -supports_stream = True - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - - path = os.path.dirname(os.path.realpath(__file__)) - config = json.dumps({ - 'model': model, - 'messages': messages}, separators=(',', ':')) - - cmd = ['python', f'{path}/helpers/phind.py', config] - - p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) - - for line in iter(p.stdout.readline, b''): - if b'Just a moment...' in line: - os.system('clear' if os.name == 'posix' else 'cls') - yield 'Clouflare error, please try again...' - os._exit(0) - - else: - if b'ping - 2023-' in line: - continue - - yield line.decode('cp1251') #[:-1] - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/antigonus/cosmos/README.md b/spaces/antigonus/cosmos/README.md deleted file mode 100644 index 13ba842d153b6f61b2453e8890512cb9e1af9380..0000000000000000000000000000000000000000 --- a/spaces/antigonus/cosmos/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Cosmos -emoji: 🚀 -colorFrom: purple -colorTo: indigo -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/anzorq/riffusion-demo/share_btn.py b/spaces/anzorq/riffusion-demo/share_btn.py deleted file mode 100644 index f0cfc111493df6b742a77c4d9e8203abafeb4906..0000000000000000000000000000000000000000 --- a/spaces/anzorq/riffusion-demo/share_btn.py +++ /dev/null @@ -1,87 +0,0 @@ -community_icon_html = """ """ - -loading_icon_html = """ """ - -share_js = """async () => { - - async function uploadFile(file) { - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getVideoFile(videoEl) { - const res = await fetch(videoEl.src); - const blob = await res.blob(); - const videoId = Date.now() % 200; - const fileName = `video-${{ videoId }}.mp4`; - return new File([blob], fileName, { type: 'video/mp4' }); - } - - async function getAudioFile(videoEl) { - const res = await fetch(videoEl.src); - const blob = await res.blob(); - const videoId = Date.now() % 200; - const fileName = `audio-${{ videoId }}.wav`; - return new File([blob], fileName, { type: 'audio/wav' }); - } - - - const gradioAppEL = document.querySelector('body > gradio-app'); - - const prompt_1 = gradioAppEL.querySelector('#riff-prompt_1 textarea').value; - const prompt_2 = gradioAppEL.querySelector('#riff-prompt_2 textarea').value; - const feel = gradioAppEL.querySelector("#riff-feel select").value; - const seed = gradioAppEL.querySelector("#riff-seed input").value; - - const videoEl = gradioAppEL.querySelector('#riff-video video'); - - const title = prompt_2 ? 
`From [${prompt_1}] to [${prompt_2}]` : prompt_1; - - const shareBtnEl = gradioAppEL.querySelector('#share-btn'); - const shareIconEl = gradioAppEL.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioAppEL.querySelector('#share-btn-loading-icon'); - - if (!videoEl) { - return; - }; - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const videoFile = await getVideoFile(videoEl); - const dataVideoFile = await uploadFile(videoFile); - - const descriptionMd = `Prompt: ${title} -Feel: ${feel} -Seed: ${seed} - -#### Video: -${dataVideoFile} -`; - - const params = new URLSearchParams({ - title: title, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/anzorq/riffusion-demo/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/arch-123/bingo/src/components/chat-scroll-anchor.tsx b/spaces/arch-123/bingo/src/components/chat-scroll-anchor.tsx deleted file mode 100644 index ac809f4486a48e134cb69314c3d0dae5e68d614e..0000000000000000000000000000000000000000 --- a/spaces/arch-123/bingo/src/components/chat-scroll-anchor.tsx +++ /dev/null @@ -1,29 +0,0 @@ -'use client' - -import * as React from 'react' -import { useInView } from 'react-intersection-observer' - -import { useAtBottom } from '@/lib/hooks/use-at-bottom' - -interface ChatScrollAnchorProps { - trackVisibility?: boolean -} - -export function ChatScrollAnchor({ trackVisibility }: ChatScrollAnchorProps) { - const isAtBottom = useAtBottom() - const { ref, entry, inView } = useInView({ - trackVisibility, - delay: 100, - rootMargin: '0px 0px -150px 0px' - }) - - React.useEffect(() => { - if (isAtBottom && trackVisibility && !inView) { - entry?.target.scrollIntoView({ - block: 'start' - }) - } - }, [inView, entry, isAtBottom, trackVisibility]) - - return
-}
diff --git a/spaces/artificialguybr/video-dubbing/whisper/CHANGELOG.md b/spaces/artificialguybr/video-dubbing/whisper/CHANGELOG.md
deleted file mode 100644
index 50c053687698c70d36b57cd3e81d4373e89cd638..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/whisper/CHANGELOG.md
+++ /dev/null
@@ -1,69 +0,0 @@
-# CHANGELOG
-
-## [v20230918](https://github.com/openai/whisper/releases/tag/v20230918)
-
-* Add .pre-commit-config.yaml ([#1528](https://github.com/openai/whisper/pull/1528))
-* fix doc of TextDecoder ([#1526](https://github.com/openai/whisper/pull/1526))
-* Update model-card.md ([#1643](https://github.com/openai/whisper/pull/1643))
-* word timing tweaks ([#1559](https://github.com/openai/whisper/pull/1559))
-* Avoid rearranging all caches ([#1483](https://github.com/openai/whisper/pull/1483))
-* Improve timestamp heuristics. ([#1461](https://github.com/openai/whisper/pull/1461))
-* fix condition_on_previous_text ([#1224](https://github.com/openai/whisper/pull/1224))
-* Fix numba depreceation notice ([#1233](https://github.com/openai/whisper/pull/1233))
-* Updated README.md to provide more insight on BLEU and specific appendices ([#1236](https://github.com/openai/whisper/pull/1236))
-* Avoid computing higher temperatures on no_speech segments ([#1279](https://github.com/openai/whisper/pull/1279))
-* Dropped unused execute bit from mel_filters.npz. ([#1254](https://github.com/openai/whisper/pull/1254))
-* Drop ffmpeg-python dependency and call ffmpeg directly. ([#1242](https://github.com/openai/whisper/pull/1242))
-* Python 3.11 ([#1171](https://github.com/openai/whisper/pull/1171))
-* Update decoding.py ([#1219](https://github.com/openai/whisper/pull/1219))
-* Update decoding.py ([#1155](https://github.com/openai/whisper/pull/1155))
-* Update README.md to reference tiktoken ([#1105](https://github.com/openai/whisper/pull/1105))
-* Implement max line width and max line count, and make word highlighting optional ([#1184](https://github.com/openai/whisper/pull/1184))
-* Squash long words at window and sentence boundaries. ([#1114](https://github.com/openai/whisper/pull/1114))
-* python-publish.yml: bump actions version to fix node warning ([#1211](https://github.com/openai/whisper/pull/1211))
-* Update tokenizer.py ([#1163](https://github.com/openai/whisper/pull/1163))
-
-## [v20230314](https://github.com/openai/whisper/releases/tag/v20230314)
-
-* abort find_alignment on empty input ([#1090](https://github.com/openai/whisper/pull/1090))
-* Fix truncated words list when the replacement character is decoded ([#1089](https://github.com/openai/whisper/pull/1089))
-* fix github language stats getting dominated by jupyter notebook ([#1076](https://github.com/openai/whisper/pull/1076))
-* Fix alignment between the segments and the list of words ([#1087](https://github.com/openai/whisper/pull/1087))
-* Use tiktoken ([#1044](https://github.com/openai/whisper/pull/1044))
-
-## [v20230308](https://github.com/openai/whisper/releases/tag/v20230308)
-
-* kwargs in decode() for convenience ([#1061](https://github.com/openai/whisper/pull/1061))
-* fix all_tokens handling that caused more repetitions and discrepancy in JSON ([#1060](https://github.com/openai/whisper/pull/1060))
-* fix typo in CHANGELOG.md
-
-## [v20230307](https://github.com/openai/whisper/releases/tag/v20230307)
-
-* Fix the repetition/hallucination issue identified in #1046 ([#1052](https://github.com/openai/whisper/pull/1052))
-* Use triton==2.0.0 ([#1053](https://github.com/openai/whisper/pull/1053))
-* Install triton in x86_64 linux only ([#1051](https://github.com/openai/whisper/pull/1051))
-* update setup.py to specify python >= 3.8 requirement
-
-## [v20230306](https://github.com/openai/whisper/releases/tag/v20230306)
-
-* remove auxiliary audio extension ([#1021](https://github.com/openai/whisper/pull/1021))
-* apply formatting with `black`, `isort`, and `flake8` ([#1038](https://github.com/openai/whisper/pull/1038))
-* word-level timestamps in `transcribe()` ([#869](https://github.com/openai/whisper/pull/869))
-* Decoding improvements ([#1033](https://github.com/openai/whisper/pull/1033))
-* Update README.md ([#894](https://github.com/openai/whisper/pull/894))
-* Fix infinite loop caused by incorrect timestamp tokens prediction ([#914](https://github.com/openai/whisper/pull/914))
-* drop python 3.7 support ([#889](https://github.com/openai/whisper/pull/889))
-
-## [v20230124](https://github.com/openai/whisper/releases/tag/v20230124)
-
-* handle printing even if sys.stdout.buffer is not available ([#887](https://github.com/openai/whisper/pull/887))
-* Add TSV formatted output in transcript, using integer start/end time in milliseconds ([#228](https://github.com/openai/whisper/pull/228))
-* Added `--output_format` option ([#333](https://github.com/openai/whisper/pull/333))
-* Handle `XDG_CACHE_HOME` properly for `download_root` ([#864](https://github.com/openai/whisper/pull/864))
-* use stdout for printing transcription progress ([#867](https://github.com/openai/whisper/pull/867))
-* Fix bug where mm is mistakenly replaced with hmm in e.g. 20mm ([#659](https://github.com/openai/whisper/pull/659))
-* print '?'
if a letter can't be encoded using the system default encoding ([#859](https://github.com/openai/whisper/pull/859)) - -## [v20230117](https://github.com/openai/whisper/releases/tag/v20230117) - -The first versioned release available on [PyPI](https://pypi.org/project/openai-whisper/) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/aiohttp/formdata.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/aiohttp/formdata.py deleted file mode 100644 index e7cd24ca9f7afb2bd31f1c653d9e15acb4fedc8b..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/aiohttp/formdata.py +++ /dev/null @@ -1,172 +0,0 @@ -import io -from typing import Any, Iterable, List, Optional -from urllib.parse import urlencode - -from multidict import MultiDict, MultiDictProxy - -from . import hdrs, multipart, payload -from .helpers import guess_filename -from .payload import Payload - -__all__ = ("FormData",) - - -class FormData: - """Helper class for form body generation. - - Supports multipart/form-data and application/x-www-form-urlencoded. - """ - - def __init__( - self, - fields: Iterable[Any] = (), - quote_fields: bool = True, - charset: Optional[str] = None, - ) -> None: - self._writer = multipart.MultipartWriter("form-data") - self._fields: List[Any] = [] - self._is_multipart = False - self._is_processed = False - self._quote_fields = quote_fields - self._charset = charset - - if isinstance(fields, dict): - fields = list(fields.items()) - elif not isinstance(fields, (list, tuple)): - fields = (fields,) - self.add_fields(*fields) - - @property - def is_multipart(self) -> bool: - return self._is_multipart - - def add_field( - self, - name: str, - value: Any, - *, - content_type: Optional[str] = None, - filename: Optional[str] = None, - content_transfer_encoding: Optional[str] = None, - ) -> None: - - if isinstance(value, io.IOBase): - self._is_multipart = True - elif isinstance(value, (bytes, bytearray, memoryview)): - if filename is None and content_transfer_encoding is None: - filename = name - - type_options: MultiDict[str] = MultiDict({"name": name}) - if filename is not None and not isinstance(filename, str): - raise TypeError( - "filename must be an instance of str. " "Got: %s" % filename - ) - if filename is None and isinstance(value, io.IOBase): - filename = guess_filename(value, name) - if filename is not None: - type_options["filename"] = filename - self._is_multipart = True - - headers = {} - if content_type is not None: - if not isinstance(content_type, str): - raise TypeError( - "content_type must be an instance of str. " "Got: %s" % content_type - ) - headers[hdrs.CONTENT_TYPE] = content_type - self._is_multipart = True - if content_transfer_encoding is not None: - if not isinstance(content_transfer_encoding, str): - raise TypeError( - "content_transfer_encoding must be an instance" - " of str. 
Got: %s" % content_transfer_encoding - ) - headers[hdrs.CONTENT_TRANSFER_ENCODING] = content_transfer_encoding - self._is_multipart = True - - self._fields.append((type_options, headers, value)) - - def add_fields(self, *fields: Any) -> None: - to_add = list(fields) - - while to_add: - rec = to_add.pop(0) - - if isinstance(rec, io.IOBase): - k = guess_filename(rec, "unknown") - self.add_field(k, rec) # type: ignore[arg-type] - - elif isinstance(rec, (MultiDictProxy, MultiDict)): - to_add.extend(rec.items()) - - elif isinstance(rec, (list, tuple)) and len(rec) == 2: - k, fp = rec - self.add_field(k, fp) # type: ignore[arg-type] - - else: - raise TypeError( - "Only io.IOBase, multidict and (name, file) " - "pairs allowed, use .add_field() for passing " - "more complex parameters, got {!r}".format(rec) - ) - - def _gen_form_urlencoded(self) -> payload.BytesPayload: - # form data (x-www-form-urlencoded) - data = [] - for type_options, _, value in self._fields: - data.append((type_options["name"], value)) - - charset = self._charset if self._charset is not None else "utf-8" - - if charset == "utf-8": - content_type = "application/x-www-form-urlencoded" - else: - content_type = "application/x-www-form-urlencoded; " "charset=%s" % charset - - return payload.BytesPayload( - urlencode(data, doseq=True, encoding=charset).encode(), - content_type=content_type, - ) - - def _gen_form_data(self) -> multipart.MultipartWriter: - """Encode a list of fields using the multipart/form-data MIME format""" - if self._is_processed: - raise RuntimeError("Form data has been processed already") - for dispparams, headers, value in self._fields: - try: - if hdrs.CONTENT_TYPE in headers: - part = payload.get_payload( - value, - content_type=headers[hdrs.CONTENT_TYPE], - headers=headers, - encoding=self._charset, - ) - else: - part = payload.get_payload( - value, headers=headers, encoding=self._charset - ) - except Exception as exc: - raise TypeError( - "Can not serialize value type: %r\n " - "headers: %r\n value: %r" % (type(value), headers, value) - ) from exc - - if dispparams: - part.set_content_disposition( - "form-data", quote_fields=self._quote_fields, **dispparams - ) - # FIXME cgi.FieldStorage doesn't likes body parts with - # Content-Length which were sent via chunked transfer encoding - assert part.headers is not None - part.headers.popall(hdrs.CONTENT_LENGTH, None) - - self._writer.append_payload(part) - - self._is_processed = True - return self._writer - - def __call__(self) -> Payload: - if self._is_multipart: - return self._gen_form_data() - else: - return self._gen_form_urlencoded() diff --git a/spaces/arxnov/anotest/text/mandarin.py b/spaces/arxnov/anotest/text/mandarin.py deleted file mode 100644 index 093d8826809aa2681f6088174427337a59e0c882..0000000000000000000000000000000000000000 --- a/spaces/arxnov/anotest/text/mandarin.py +++ /dev/null @@ -1,329 +0,0 @@ -import os -import sys -import re -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba -import cn2an -import logging - -logging.getLogger('jieba').setLevel(logging.WARNING) -jieba.initialize() - - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - 
('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (romaji, ipa) pairs: -_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ʃy', 'ʃ'), - ('ʧʰy', 'ʧʰ'), - ('ʧ⁼y', 'ʧ⁼'), - ('NN', 'n'), - ('Ng', 'ŋ'), - ('y', 'j'), - ('h', 'x') -]] - -# List of (bopomofo, ipa) pairs: -_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'x'), - ('ㄐ', 'tʃ⁼'), - ('ㄑ', 'tʃʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ts`⁼'), - ('ㄔ', 'ts`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ts⁼'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'ɥæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'ɥn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'əŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (bopomofo, ipa2) pairs: -_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'pwo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'tɕ'), - ('ㄑ', 'tɕʰ'), - ('ㄒ', 'ɕ'), - ('ㄓ', 'tʂ'), - ('ㄔ', 'tʂʰ'), - ('ㄕ', 'ʂ'), - ('ㄖ', 'ɻ'), - ('ㄗ', 'ts'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ɤ'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'yæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'yn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'ɤŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'y'), - ('ˉ', '˥'), - ('ˊ', '˧˥'), - ('ˇ', '˨˩˦'), - ('ˋ', '˥˩'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def number_to_chinese(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - return text - - -def chinese_to_bopomofo(text): - text = text.replace('、', ',').replace(';', ',').replace(':', ',') - words = jieba.lcut(text, cut_all=False) - 
text = '' - for word in words: - bopomofos = lazy_pinyin(word, BOPOMOFO) - if not re.search('[\u4e00-\u9fff]', word): - text += word - continue - for i in range(len(bopomofos)): - bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i]) - if text != '': - text += ' ' - text += ''.join(bopomofos) - return text - - -def latin_to_bopomofo(text): - for regex, replacement in _latin_to_bopomofo: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_romaji(text): - for regex, replacement in _bopomofo_to_romaji: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa(text): - for regex, replacement in _bopomofo_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa2(text): - for regex, replacement in _bopomofo_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_romaji(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_romaji(text) - text = re.sub('i([aoe])', r'y\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_lazy_ipa(text): - text = chinese_to_romaji(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_ipa(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa(text) - text = re.sub('i([aoe])', r'j\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_ipa2(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa2(text) - text = re.sub(r'i([aoe])', r'j\1', text) - text = re.sub(r'u([aoəe])', r'w\1', text) - text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text) - text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text) - return text \ No newline at end of file diff --git a/spaces/aryadytm/remove-photo-background/src/models/__init__.py b/spaces/aryadytm/remove-photo-background/src/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/asafAdge/Detic/tools/get_cc_tags.py b/spaces/asafAdge/Detic/tools/get_cc_tags.py deleted file mode 100644 index 00bd6180ab7c5a6cbb0533a8a174e6de2f3b19b7..0000000000000000000000000000000000000000 --- a/spaces/asafAdge/Detic/tools/get_cc_tags.py +++ /dev/null @@ -1,194 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import argparse -import json -from collections import defaultdict - -# This mapping is extracted from the official LVIS mapping: -# https://github.com/lvis-dataset/lvis-api/blob/master/data/coco_to_synset.json -COCO_SYNSET_CATEGORIES = [ - {"synset": "person.n.01", "coco_cat_id": 1}, - {"synset": "bicycle.n.01", "coco_cat_id": 2}, - {"synset": "car.n.01", "coco_cat_id": 3}, - {"synset": "motorcycle.n.01", "coco_cat_id": 4}, - {"synset": "airplane.n.01", "coco_cat_id": 5}, - {"synset": "bus.n.01", "coco_cat_id": 6}, - {"synset": "train.n.01", "coco_cat_id": 7}, - {"synset": "truck.n.01", "coco_cat_id": 8}, - {"synset": "boat.n.01", "coco_cat_id": 9}, - {"synset": "traffic_light.n.01", "coco_cat_id": 10}, - {"synset": "fireplug.n.01", "coco_cat_id": 11}, - {"synset": "stop_sign.n.01", "coco_cat_id": 13}, - {"synset": "parking_meter.n.01", "coco_cat_id": 14}, - {"synset": "bench.n.01", "coco_cat_id": 15}, - {"synset": "bird.n.01", "coco_cat_id": 16}, - {"synset": "cat.n.01", "coco_cat_id": 17}, - {"synset": "dog.n.01", "coco_cat_id": 18}, - {"synset": "horse.n.01", "coco_cat_id": 19}, - {"synset": "sheep.n.01", "coco_cat_id": 20}, - {"synset": "beef.n.01", "coco_cat_id": 21}, - {"synset": "elephant.n.01", "coco_cat_id": 22}, - {"synset": "bear.n.01", "coco_cat_id": 23}, - {"synset": "zebra.n.01", "coco_cat_id": 24}, - {"synset": "giraffe.n.01", "coco_cat_id": 25}, - {"synset": "backpack.n.01", "coco_cat_id": 27}, - {"synset": "umbrella.n.01", "coco_cat_id": 28}, - {"synset": "bag.n.04", "coco_cat_id": 31}, - {"synset": "necktie.n.01", "coco_cat_id": 32}, - {"synset": "bag.n.06", "coco_cat_id": 33}, - {"synset": "frisbee.n.01", "coco_cat_id": 34}, - {"synset": "ski.n.01", "coco_cat_id": 35}, - {"synset": "snowboard.n.01", "coco_cat_id": 36}, - {"synset": "ball.n.06", "coco_cat_id": 37}, - {"synset": "kite.n.03", "coco_cat_id": 38}, - {"synset": "baseball_bat.n.01", "coco_cat_id": 39}, - {"synset": "baseball_glove.n.01", "coco_cat_id": 40}, - {"synset": "skateboard.n.01", "coco_cat_id": 41}, - {"synset": "surfboard.n.01", "coco_cat_id": 42}, - {"synset": "tennis_racket.n.01", "coco_cat_id": 43}, - {"synset": "bottle.n.01", "coco_cat_id": 44}, - {"synset": "wineglass.n.01", "coco_cat_id": 46}, - {"synset": "cup.n.01", "coco_cat_id": 47}, - {"synset": "fork.n.01", "coco_cat_id": 48}, - {"synset": "knife.n.01", "coco_cat_id": 49}, - {"synset": "spoon.n.01", "coco_cat_id": 50}, - {"synset": "bowl.n.03", "coco_cat_id": 51}, - {"synset": "banana.n.02", "coco_cat_id": 52}, - {"synset": "apple.n.01", "coco_cat_id": 53}, - {"synset": "sandwich.n.01", "coco_cat_id": 54}, - {"synset": "orange.n.01", "coco_cat_id": 55}, - {"synset": "broccoli.n.01", "coco_cat_id": 56}, - {"synset": "carrot.n.01", "coco_cat_id": 57}, - # {"synset": "frank.n.02", "coco_cat_id": 58}, - {"synset": "sausage.n.01", "coco_cat_id": 58}, - {"synset": "pizza.n.01", "coco_cat_id": 59}, - {"synset": "doughnut.n.02", "coco_cat_id": 60}, - {"synset": "cake.n.03", "coco_cat_id": 61}, - {"synset": "chair.n.01", "coco_cat_id": 62}, - {"synset": "sofa.n.01", "coco_cat_id": 63}, - {"synset": "pot.n.04", "coco_cat_id": 64}, - {"synset": "bed.n.01", "coco_cat_id": 65}, - {"synset": "dining_table.n.01", "coco_cat_id": 67}, - {"synset": "toilet.n.02", "coco_cat_id": 70}, - {"synset": "television_receiver.n.01", "coco_cat_id": 72}, - {"synset": "laptop.n.01", "coco_cat_id": 73}, - {"synset": "mouse.n.04", "coco_cat_id": 74}, - {"synset": "remote_control.n.01", "coco_cat_id": 75}, - {"synset": "computer_keyboard.n.01", "coco_cat_id": 76}, - 
{"synset": "cellular_telephone.n.01", "coco_cat_id": 77}, - {"synset": "microwave.n.02", "coco_cat_id": 78}, - {"synset": "oven.n.01", "coco_cat_id": 79}, - {"synset": "toaster.n.02", "coco_cat_id": 80}, - {"synset": "sink.n.01", "coco_cat_id": 81}, - {"synset": "electric_refrigerator.n.01", "coco_cat_id": 82}, - {"synset": "book.n.01", "coco_cat_id": 84}, - {"synset": "clock.n.01", "coco_cat_id": 85}, - {"synset": "vase.n.01", "coco_cat_id": 86}, - {"synset": "scissors.n.01", "coco_cat_id": 87}, - {"synset": "teddy.n.01", "coco_cat_id": 88}, - {"synset": "hand_blower.n.01", "coco_cat_id": 89}, - {"synset": "toothbrush.n.01", "coco_cat_id": 90}, -] - -def map_name(x): - x = x.replace('_', ' ') - if '(' in x: - x = x[:x.find('(')] - return x.lower().strip() - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--cc_ann', default='datasets/cc3m/train_image_info.json') - parser.add_argument('--out_path', default='datasets/cc3m/train_image_info_tags.json') - parser.add_argument('--keep_images', action='store_true') - parser.add_argument('--allcaps', action='store_true') - parser.add_argument('--cat_path', default='') - parser.add_argument('--convert_caption', action='store_true') - # parser.add_argument('--lvis_ann', default='datasets/lvis/lvis_v1_val.json') - args = parser.parse_args() - - # lvis_data = json.load(open(args.lvis_ann, 'r')) - cc_data = json.load(open(args.cc_ann, 'r')) - if args.convert_caption: - num_caps = 0 - caps = defaultdict(list) - for x in cc_data['annotations']: - caps[x['image_id']].append(x['caption']) - for x in cc_data['images']: - x['captions'] = caps[x['id']] - num_caps += len(x['captions']) - print('# captions', num_caps) - - if args.cat_path != '': - print('Loading', args.cat_path) - cats = json.load(open(args.cat_path))['categories'] - if 'synonyms' not in cats[0]: - cocoid2synset = {x['coco_cat_id']: x['synset'] \ - for x in COCO_SYNSET_CATEGORIES} - synset2synonyms = {x['synset']: x['synonyms'] \ - for x in cc_data['categories']} - for x in cats: - synonyms = synset2synonyms[cocoid2synset[x['id']]] - x['synonyms'] = synonyms - x['frequency'] = 'f' - cc_data['categories'] = cats - - id2cat = {x['id']: x for x in cc_data['categories']} - class_count = {x['id']: 0 for x in cc_data['categories']} - class_data = {x['id']: [' ' + map_name(xx) + ' ' for xx in x['synonyms']] \ - for x in cc_data['categories']} - num_examples = 5 - examples = {x['id']: [] for x in cc_data['categories']} - - print('class_data', class_data) - - images = [] - for i, x in enumerate(cc_data['images']): - if i % 10000 == 0: - print(i, len(cc_data['images'])) - if args.allcaps: - caption = (' '.join(x['captions'])).lower() - else: - caption = x['captions'][0].lower() - x['pos_category_ids'] = [] - for cat_id, cat_names in class_data.items(): - find = False - for c in cat_names: - if c in caption or caption.startswith(c[1:]) \ - or caption.endswith(c[:-1]): - find = True - break - if find: - x['pos_category_ids'].append(cat_id) - class_count[cat_id] += 1 - if len(examples[cat_id]) < num_examples: - examples[cat_id].append(caption) - if len(x['pos_category_ids']) > 0 or args.keep_images: - images.append(x) - - zero_class = [] - for cat_id, count in class_count.items(): - print(id2cat[cat_id]['name'], count, end=', ') - if count == 0: - zero_class.append(id2cat[cat_id]) - print('==') - print('zero class', zero_class) - - # for freq in ['r', 'c', 'f']: - # print('#cats', freq, len([x for x in cc_data['categories'] \ - # if x['frequency'] == freq] and 
class_count[x['id']] > 0)) - - for freq in ['r', 'c', 'f']: - print('#Images', freq, sum([v for k, v in class_count.items() \ - if id2cat[k]['frequency'] == freq])) - - try: - out_data = {'images': images, 'categories': cc_data['categories'], \ - 'annotations': []} - for k, v in out_data.items(): - print(k, len(v)) - if args.keep_images and not args.out_path.endswith('_full.json'): - args.out_path = args.out_path[:-5] + '_full.json' - print('Writing to', args.out_path) - json.dump(out_data, open(args.out_path, 'w')) - except: - pass diff --git a/spaces/avivdm1/AutoGPT/autogpt/agent/agent.py b/spaces/avivdm1/AutoGPT/autogpt/agent/agent.py deleted file mode 100644 index ee7885f8844022597321fa6b492430ec34c0d6b9..0000000000000000000000000000000000000000 --- a/spaces/avivdm1/AutoGPT/autogpt/agent/agent.py +++ /dev/null @@ -1,197 +0,0 @@ -from colorama import Fore, Style - -from autogpt.app import execute_command, get_command -from autogpt.chat import chat_with_ai, create_chat_message -from autogpt.config import Config -from autogpt.json_utils.json_fix_llm import fix_json_using_multiple_techniques -from autogpt.json_utils.utilities import validate_json -from autogpt.logs import logger, print_assistant_thoughts -from autogpt.speech import say_text -from autogpt.spinner import Spinner -from autogpt.utils import clean_input - - -class Agent: - """Agent class for interacting with Auto-GPT. - - Attributes: - ai_name: The name of the agent. - memory: The memory object to use. - full_message_history: The full message history. - next_action_count: The number of actions to execute. - system_prompt: The system prompt is the initial prompt that defines everything the AI needs to know to achieve its task successfully. - Currently, the dynamic and customizable information in the system prompt are ai_name, description and goals. - - triggering_prompt: The last sentence the AI will see before answering. For Auto-GPT, this prompt is: - Determine which next command to use, and respond using the format specified above: - The triggering prompt is not part of the system prompt because between the system prompt and the triggering - prompt we have contextual information that can distract the AI and make it forget that its goal is to find the next task to achieve. - SYSTEM PROMPT - CONTEXTUAL INFORMATION (memory, previous conversations, anything relevant) - TRIGGERING PROMPT - - The triggering prompt reminds the AI about its short term meta task (defining the next task) - """ - - def __init__( - self, - ai_name, - memory, - full_message_history, - next_action_count, - system_prompt, - triggering_prompt, - ): - self.ai_name = ai_name - self.memory = memory - self.full_message_history = full_message_history - self.next_action_count = next_action_count - self.system_prompt = system_prompt - self.triggering_prompt = triggering_prompt - - def start_interaction_loop(self): - # Interaction Loop - cfg = Config() - loop_count = 0 - command_name = None - arguments = None - user_input = "" - - while True: - # Discontinue if continuous limit is reached - loop_count += 1 - if ( - cfg.continuous_mode - and cfg.continuous_limit > 0 - and loop_count > cfg.continuous_limit - ): - logger.typewriter_log( - "Continuous Limit Reached: ", Fore.YELLOW, f"{cfg.continuous_limit}" - ) - break - - # Send message to AI, get response - with Spinner("Thinking... 
"): - assistant_reply = chat_with_ai( - self.system_prompt, - self.triggering_prompt, - self.full_message_history, - self.memory, - cfg.fast_token_limit, - ) # TODO: This hardcodes the model to use GPT3.5. Make this an argument - - assistant_reply_json = fix_json_using_multiple_techniques(assistant_reply) - - # Print Assistant thoughts - if assistant_reply_json != {}: - validate_json(assistant_reply_json, "llm_response_format_1") - # Get command name and arguments - try: - print_assistant_thoughts(self.ai_name, assistant_reply_json) - command_name, arguments = get_command(assistant_reply_json) - # command_name, arguments = assistant_reply_json_valid["command"]["name"], assistant_reply_json_valid["command"]["args"] - if cfg.speak_mode: - say_text(f"I want to execute {command_name}") - except Exception as e: - logger.error("Error: \n", str(e)) - - if not cfg.continuous_mode and self.next_action_count == 0: - ### GET USER AUTHORIZATION TO EXECUTE COMMAND ### - # Get key press: Prompt the user to press enter to continue or escape - # to exit - logger.typewriter_log( - "NEXT ACTION: ", - Fore.CYAN, - f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL} " - f"ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}", - ) - print( - "Enter 'y' to authorise command, 'y -N' to run N continuous " - "commands, 'n' to exit program, or enter feedback for " - f"{self.ai_name}...", - flush=True, - ) - while True: - console_input = clean_input( - Fore.MAGENTA + "Input:" + Style.RESET_ALL - ) - if console_input.lower().strip() == "y": - user_input = "GENERATE NEXT COMMAND JSON" - break - elif console_input.lower().strip() == "": - print("Invalid input format.") - continue - elif console_input.lower().startswith("y -"): - try: - self.next_action_count = abs( - int(console_input.split(" ")[1]) - ) - user_input = "GENERATE NEXT COMMAND JSON" - except ValueError: - print( - "Invalid input format. Please enter 'y -n' where n is" - " the number of continuous tasks." 
- ) - continue - break - elif console_input.lower() == "n": - user_input = "EXIT" - break - else: - user_input = console_input - command_name = "human_feedback" - break - - if user_input == "GENERATE NEXT COMMAND JSON": - logger.typewriter_log( - "-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=", - Fore.MAGENTA, - "", - ) - elif user_input == "EXIT": - print("Exiting...", flush=True) - break - else: - # Print command - logger.typewriter_log( - "NEXT ACTION: ", - Fore.CYAN, - f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL}" - f" ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}", - ) - - # Execute command - if command_name is not None and command_name.lower().startswith("error"): - result = ( - f"Command {command_name} threw the following error: {arguments}" - ) - elif command_name == "human_feedback": - result = f"Human feedback: {user_input}" - else: - result = ( - f"Command {command_name} returned: " - f"{execute_command(command_name, arguments)}" - ) - if self.next_action_count > 0: - self.next_action_count -= 1 - - memory_to_add = ( - f"Assistant Reply: {assistant_reply} " - f"\nResult: {result} " - f"\nHuman Feedback: {user_input} " - ) - - self.memory.add(memory_to_add) - - # Check if there's a result from the command append it to the message - # history - if result is not None: - self.full_message_history.append(create_chat_message("system", result)) - logger.typewriter_log("SYSTEM: ", Fore.YELLOW, result) - else: - self.full_message_history.append( - create_chat_message("system", "Unable to execute command") - ) - logger.typewriter_log( - "SYSTEM: ", Fore.YELLOW, "Unable to execute command" - ) diff --git a/spaces/awacke1/Docker-Examples-Top-5-Demo/app.py b/spaces/awacke1/Docker-Examples-Top-5-Demo/app.py deleted file mode 100644 index c65b04dad3c575051c7182e1ff5c46efe4612019..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Docker-Examples-Top-5-Demo/app.py +++ /dev/null @@ -1,82 +0,0 @@ -import streamlit as st - - -st.markdown(""" - -Streamlit: https://huggingface.co/spaces/DockerTemplates/streamlit-docker-example -Gradio: https://huggingface.co/spaces/sayakpaul/demo-docker-gradio -HTTP w GO: https://huggingface.co/spaces/XciD/test-docker-go?q=Adrien -Secrets: https://huggingface.co/spaces/DockerTemplates/secret-example -Fast API: https://huggingface.co/spaces/DockerTemplates/fastapi_t5 - -# Github Actions Deploy to ACA: - -🐋 Create a Dockerfile for Gradio deployment 🐋 - -1️⃣ Start by specifying the base image for your container. For Python: - -FROM python:3.8-slim-buster - -2️⃣ Set the working directory for the container and copy the necessary files: - -WORKDIR /app -COPY . /app - -3️⃣ Install the necessary dependencies, including Gradio: - -RUN pip install --upgrade pip && \ - pip install gradio - -4️⃣ Specify the command to run when the container starts: - -CMD ["python", "app.py"] - -:rocket: Build and push your container image to Azure Container Registry :rocket: - -:green_book: Set up a GitHub Actions workflow for deployment :green_book: - -Use azure/login action for Azure authentication and azure/container-apps-deploy-action for deployment. Provide necessary inputs like container app name, Azure Container Registry name, and container image tag. 
- -Here's an example GitHub Actions workflow: -name: Deploy to Azure Container Apps - -on: push: branches: - main - -env: AZURE_CONTAINER_APP_NAME: AZURE_CONTAINER_REGISTRY_NAME: IMAGE_TAG: ${{ github.sha }} - -jobs: deploy: runs-on: ubuntu-latest steps: - name: Check out code uses: actions/checkout@v2 - - - name: Login to Azure - uses: azure/login@v1 - with: - creds: ${{ secrets.AZURE_CREDENTIALS }} - - - name: Deploy to Azure Container Apps - uses: azure/container-apps-deploy-action@v1 - with: - containerAppName: ${{ env.AZURE_CONTAINER_APP_NAME }} - imageName: ${{ env.AZURE_CONTAINER_REGISTRY_NAME }}.azurecr.io/myimage:${{ env.IMAGE_TAG }} - -:arrow_forward: **After your GitHub Actions workflow is set up, follow these steps to get your app running on Azure Container Apps** :arrow_forward: - -5️⃣ **Commit and push your changes** :file_folder: - -Once you've made all necessary changes to your Dockerfile and GitHub Actions workflow file, commit and push them to your repository. - -```bash -git add . -git commit -m "Setup Dockerfile and GitHub Actions workflow" -git push origin main -6️⃣ Watch your GitHub Actions workflow 👀 - -Go to the "Actions" tab in your GitHub repository to see your workflow in action. If everything is set up correctly, you should see your workflow running and completing without errors. - -7️⃣ Check your app on Azure Container Apps 🏁 - -Once the GitHub Actions workflow has completed, your app should be deployed to Azure Container Apps. You can check the status of your app in the Azure portal. - -8️⃣ Enjoy your Gradio app running smoothly on Azure Container Apps 🎉 - -You've successfully deployed your Gradio app to Azure Container Apps using a Docker container and GitHub Actions! - -""") \ No newline at end of file diff --git a/spaces/awacke1/Easy-Button-Zero-Shot-Text-Classifier-facebook-bart-large-mnli/app.py b/spaces/awacke1/Easy-Button-Zero-Shot-Text-Classifier-facebook-bart-large-mnli/app.py deleted file mode 100644 index cfacc07883617b067f228dae529087d3c61e2bc8..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Easy-Button-Zero-Shot-Text-Classifier-facebook-bart-large-mnli/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/facebook/bart-large-mnli").launch() \ No newline at end of file diff --git a/spaces/awacke1/Engineering-Magic-Picture-Dice-Vocabulary-Game/README.md b/spaces/awacke1/Engineering-Magic-Picture-Dice-Vocabulary-Game/README.md deleted file mode 100644 index d40865f7330b7479439fa82c715e558caab6f76e..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Engineering-Magic-Picture-Dice-Vocabulary-Game/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Engineering Magic Picture Dice Vocabulary Game -emoji: 📊 -colorFrom: gray -colorTo: yellow -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/SMART-FHIR-Assessment-Blood-Pressure/README.md b/spaces/awacke1/SMART-FHIR-Assessment-Blood-Pressure/README.md deleted file mode 100644 index 0e7cef91125f006982f9bc95a8c171ff5482808a..0000000000000000000000000000000000000000 --- a/spaces/awacke1/SMART-FHIR-Assessment-Blood-Pressure/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SMART FHIR Streamlit 1 -emoji: 💻 -colorFrom: blue -colorTo: purple -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/SuperSimple2LinerText2Speech/app.py b/spaces/awacke1/SuperSimple2LinerText2Speech/app.py deleted file mode 100644 index 624711103fff0eb591bc05f07ae20c47fbe03cd2..0000000000000000000000000000000000000000 --- a/spaces/awacke1/SuperSimple2LinerText2Speech/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/facebook/fastspeech2-en-ljspeech").launch() \ No newline at end of file diff --git a/spaces/awacke1/Text-to-Speech-facebook-fastspeech2-en-ljspeech/app.py b/spaces/awacke1/Text-to-Speech-facebook-fastspeech2-en-ljspeech/app.py deleted file mode 100644 index 624711103fff0eb591bc05f07ae20c47fbe03cd2..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Text-to-Speech-facebook-fastspeech2-en-ljspeech/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/facebook/fastspeech2-en-ljspeech").launch() \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/3MFLoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/3MFLoader.js deleted file mode 100644 index 31ce4066fcf411e2c975e2e3fcac591522ef789d..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/3MFLoader.js +++ /dev/null @@ -1,614 +0,0 @@ -/** - * @author technohippy / https://github.com/technohippy - */ - -THREE.ThreeMFLoader = function ( manager ) { - - this.manager = ( manager !== undefined ) ? manager : THREE.DefaultLoadingManager; - this.availableExtensions = []; - -}; - -THREE.ThreeMFLoader.prototype = { - - constructor: THREE.ThreeMFLoader, - - load: function ( url, onLoad, onProgress, onError ) { - - var scope = this; - var loader = new THREE.FileLoader( scope.manager ); - loader.setPath( scope.path ); - loader.setResponseType( 'arraybuffer' ); - loader.load( url, function ( buffer ) { - - onLoad( scope.parse( buffer ) ); - - }, onProgress, onError ); - - }, - - setPath: function ( value ) { - - this.path = value; - return this; - - }, - - parse: function ( data ) { - - var scope = this; - - function loadDocument( data ) { - - var zip = null; - var file = null; - - var relsName; - var modelPartNames = []; - var printTicketPartNames = []; - var texturesPartNames = []; - var otherPartNames = []; - - var rels; - var modelParts = {}; - var printTicketParts = {}; - var texturesParts = {}; - var otherParts = {}; - - try { - - zip = new JSZip( data ); // eslint-disable-line no-undef - - } catch ( e ) { - - if ( e instanceof ReferenceError ) { - - console.error( 'THREE.ThreeMFLoader: jszip missing and file is compressed.' 
); - return null; - - } - - } - - for ( file in zip.files ) { - - if ( file.match( /\.rels$/ ) ) { - - relsName = file; - - } else if ( file.match( /^3D\/.*\.model$/ ) ) { - - modelPartNames.push( file ); - - } else if ( file.match( /^3D\/Metadata\/.*\.xml$/ ) ) { - - printTicketPartNames.push( file ); - - } else if ( file.match( /^3D\/Textures\/.*/ ) ) { - - texturesPartNames.push( file ); - - } else if ( file.match( /^3D\/Other\/.*/ ) ) { - - otherPartNames.push( file ); - - } - - } - - var relsView = new Uint8Array( zip.file( relsName ).asArrayBuffer() ); - var relsFileText = THREE.LoaderUtils.decodeText( relsView ); - rels = parseRelsXml( relsFileText ); - - for ( var i = 0; i < modelPartNames.length; i ++ ) { - - var modelPart = modelPartNames[ i ]; - var view = new Uint8Array( zip.file( modelPart ).asArrayBuffer() ); - - var fileText = THREE.LoaderUtils.decodeText( view ); - var xmlData = new DOMParser().parseFromString( fileText, 'application/xml' ); - - if ( xmlData.documentElement.nodeName.toLowerCase() !== 'model' ) { - - console.error( 'THREE.ThreeMFLoader: Error loading 3MF - no 3MF document found: ', modelPart ); - - } - - var modelNode = xmlData.querySelector( 'model' ); - var extensions = {}; - - for ( var i = 0; i < modelNode.attributes.length; i ++ ) { - - var attr = modelNode.attributes[ i ]; - if ( attr.name.match( /^xmlns:(.+)$/ ) ) { - - extensions[ attr.value ] = RegExp.$1; - - } - - } - - var modelData = parseModelNode( modelNode ); - modelData[ 'xml' ] = modelNode; - - if ( 0 < Object.keys( extensions ).length ) { - - modelData[ 'extensions' ] = extensions; - - } - - modelParts[ modelPart ] = modelData; - - } - - for ( var i = 0; i < texturesPartNames.length; i ++ ) { - - var texturesPartName = texturesPartNames[ i ]; - texturesParts[ texturesPartName ] = zip.file( texturesPartName ).asBinary(); - - } - - return { - rels: rels, - model: modelParts, - printTicket: printTicketParts, - texture: texturesParts, - other: otherParts - }; - - } - - function parseRelsXml( relsFileText ) { - - var relsXmlData = new DOMParser().parseFromString( relsFileText, 'application/xml' ); - var relsNode = relsXmlData.querySelector( 'Relationship' ); - var target = relsNode.getAttribute( 'Target' ); - var id = relsNode.getAttribute( 'Id' ); - var type = relsNode.getAttribute( 'Type' ); - - return { - target: target, - id: id, - type: type - }; - - } - - function parseMetadataNodes( metadataNodes ) { - - var metadataData = {}; - - for ( var i = 0; i < metadataNodes.length; i ++ ) { - - var metadataNode = metadataNodes[ i ]; - var name = metadataNode.getAttribute( 'name' ); - var validNames = [ - 'Title', - 'Designer', - 'Description', - 'Copyright', - 'LicenseTerms', - 'Rating', - 'CreationDate', - 'ModificationDate' - ]; - - if ( 0 <= validNames.indexOf( name ) ) { - - metadataData[ name ] = metadataNode.textContent; - - } - - } - - return metadataData; - - } - - function parseBasematerialsNode( basematerialsNode ) { - } - - function parseMeshNode( meshNode, extensions ) { - - var meshData = {}; - - var vertices = []; - var vertexNodes = meshNode.querySelectorAll( 'vertices vertex' ); - - for ( var i = 0; i < vertexNodes.length; i ++ ) { - - var vertexNode = vertexNodes[ i ]; - var x = vertexNode.getAttribute( 'x' ); - var y = vertexNode.getAttribute( 'y' ); - var z = vertexNode.getAttribute( 'z' ); - - vertices.push( parseFloat( x ), parseFloat( y ), parseFloat( z ) ); - - } - - meshData[ 'vertices' ] = new Float32Array( vertices.length ); - - for ( var i = 0; i < vertices.length; i ++ 
) { - - meshData[ 'vertices' ][ i ] = vertices[ i ]; - - } - - var triangleProperties = []; - var triangles = []; - var triangleNodes = meshNode.querySelectorAll( 'triangles triangle' ); - - for ( var i = 0; i < triangleNodes.length; i ++ ) { - - var triangleNode = triangleNodes[ i ]; - var v1 = triangleNode.getAttribute( 'v1' ); - var v2 = triangleNode.getAttribute( 'v2' ); - var v3 = triangleNode.getAttribute( 'v3' ); - var p1 = triangleNode.getAttribute( 'p1' ); - var p2 = triangleNode.getAttribute( 'p2' ); - var p3 = triangleNode.getAttribute( 'p3' ); - var pid = triangleNode.getAttribute( 'pid' ); - - triangles.push( parseInt( v1, 10 ), parseInt( v2, 10 ), parseInt( v3, 10 ) ); - - var triangleProperty = {}; - - if ( p1 ) { - - triangleProperty[ 'p1' ] = parseInt( p1, 10 ); - - } - - if ( p2 ) { - - triangleProperty[ 'p2' ] = parseInt( p2, 10 ); - - } - - if ( p3 ) { - - triangleProperty[ 'p3' ] = parseInt( p3, 10 ); - - } - - if ( pid ) { - - triangleProperty[ 'pid' ] = pid; - - } - - if ( 0 < Object.keys( triangleProperty ).length ) { - - triangleProperties.push( triangleProperty ); - - } - - } - - meshData[ 'triangleProperties' ] = triangleProperties; - meshData[ 'triangles' ] = new Uint32Array( triangles.length ); - - for ( var i = 0; i < triangles.length; i ++ ) { - - meshData[ 'triangles' ][ i ] = triangles[ i ]; - - } - - return meshData; - - } - - function parseComponentsNode( componentsNode ) { - - } - - function parseObjectNode( objectNode ) { - - var objectData = { - type: objectNode.getAttribute( 'type' ) - }; - - var id = objectNode.getAttribute( 'id' ); - - if ( id ) { - - objectData[ 'id' ] = id; - - } - - var pid = objectNode.getAttribute( 'pid' ); - - if ( pid ) { - - objectData[ 'pid' ] = pid; - - } - - var pindex = objectNode.getAttribute( 'pindex' ); - - if ( pindex ) { - - objectData[ 'pindex' ] = pindex; - - } - - var thumbnail = objectNode.getAttribute( 'thumbnail' ); - - if ( thumbnail ) { - - objectData[ 'thumbnail' ] = thumbnail; - - } - - var partnumber = objectNode.getAttribute( 'partnumber' ); - - if ( partnumber ) { - - objectData[ 'partnumber' ] = partnumber; - - } - - var name = objectNode.getAttribute( 'name' ); - - if ( name ) { - - objectData[ 'name' ] = name; - - } - - var meshNode = objectNode.querySelector( 'mesh' ); - - if ( meshNode ) { - - objectData[ 'mesh' ] = parseMeshNode( meshNode ); - - } - - var componentsNode = objectNode.querySelector( 'components' ); - - if ( componentsNode ) { - - objectData[ 'components' ] = parseComponentsNode( componentsNode ); - - } - - return objectData; - - } - - function parseResourcesNode( resourcesNode ) { - - var resourcesData = {}; - var basematerialsNode = resourcesNode.querySelector( 'basematerials' ); - - if ( basematerialsNode ) { - - resourcesData[ 'basematerial' ] = parseBasematerialsNode( basematerialsNode ); - - } - - resourcesData[ 'object' ] = {}; - var objectNodes = resourcesNode.querySelectorAll( 'object' ); - - for ( var i = 0; i < objectNodes.length; i ++ ) { - - var objectNode = objectNodes[ i ]; - var objectData = parseObjectNode( objectNode ); - resourcesData[ 'object' ][ objectData[ 'id' ] ] = objectData; - - } - - return resourcesData; - - } - - function parseBuildNode( buildNode ) { - - var buildData = []; - var itemNodes = buildNode.querySelectorAll( 'item' ); - - for ( var i = 0; i < itemNodes.length; i ++ ) { - - var itemNode = itemNodes[ i ]; - var buildItem = { - objectid: itemNode.getAttribute( 'objectid' ) - }; - var transform = itemNode.getAttribute( 'transform' ); - - if ( 
transform ) { - - var t = []; - transform.split( ' ' ).forEach( function ( s ) { - - t.push( parseFloat( s ) ); - - } ); - var mat4 = new THREE.Matrix4(); - buildItem[ 'transform' ] = mat4.set( - t[ 0 ], t[ 3 ], t[ 6 ], t[ 9 ], - t[ 1 ], t[ 4 ], t[ 7 ], t[ 10 ], - t[ 2 ], t[ 5 ], t[ 8 ], t[ 11 ], - 0.0, 0.0, 0.0, 1.0 - ); - - } - - buildData.push( buildItem ); - - } - - return buildData; - - } - - function parseModelNode( modelNode ) { - - var modelData = { unit: modelNode.getAttribute( 'unit' ) || 'millimeter' }; - var metadataNodes = modelNode.querySelectorAll( 'metadata' ); - - if ( metadataNodes ) { - - modelData[ 'metadata' ] = parseMetadataNodes( metadataNodes ); - - } - - var resourcesNode = modelNode.querySelector( 'resources' ); - - if ( resourcesNode ) { - - modelData[ 'resources' ] = parseResourcesNode( resourcesNode ); - - } - - var buildNode = modelNode.querySelector( 'build' ); - - if ( buildNode ) { - - modelData[ 'build' ] = parseBuildNode( buildNode ); - - } - - return modelData; - - } - - function buildMesh( meshData, data3mf ) { - - var geometry = new THREE.BufferGeometry(); - geometry.setIndex( new THREE.BufferAttribute( meshData[ 'triangles' ], 1 ) ); - geometry.addAttribute( 'position', new THREE.BufferAttribute( meshData[ 'vertices' ], 3 ) ); - - if ( meshData[ 'colors' ] ) { - - geometry.addAttribute( 'color', new THREE.BufferAttribute( meshData[ 'colors' ], 3 ) ); - - } - - geometry.computeBoundingSphere(); - - var materialOpts = { - flatShading: true - }; - - if ( meshData[ 'colors' ] && 0 < meshData[ 'colors' ].length ) { - - materialOpts[ 'vertexColors' ] = THREE.VertexColors; - - } else { - - materialOpts[ 'color' ] = 0xaaaaff; - - } - - var material = new THREE.MeshPhongMaterial( materialOpts ); - return new THREE.Mesh( geometry, material ); - - } - - function applyExtensions( extensions, meshData, modelXml, data3mf ) { - - if ( ! 
extensions ) { - - return; - - } - - var availableExtensions = []; - var keys = Object.keys( extensions ); - - for ( var i = 0; i < keys.length; i ++ ) { - - var ns = keys[ i ]; - - for ( var j = 0; j < scope.availableExtensions.length; j ++ ) { - - var extension = scope.availableExtensions[ j ]; - - if ( extension.ns === ns ) { - - availableExtensions.push( extension ); - - } - - } - - } - - for ( var i = 0; i < availableExtensions.length; i ++ ) { - - var extension = availableExtensions[ i ]; - extension.apply( modelXml, extensions[ extension[ 'ns' ] ], meshData ); - - } - - } - - function buildMeshes( data3mf ) { - - var modelsData = data3mf.model; - var meshes = {}; - var modelsKeys = Object.keys( modelsData ); - - for ( var i = 0; i < modelsKeys.length; i ++ ) { - - var modelsKey = modelsKeys[ i ]; - var modelData = modelsData[ modelsKey ]; - var modelXml = modelData[ 'xml' ]; - var extensions = modelData[ 'extensions' ]; - - var objectIds = Object.keys( modelData[ 'resources' ][ 'object' ] ); - - for ( var j = 0; j < objectIds.length; j ++ ) { - - var objectId = objectIds[ j ]; - var objectData = modelData[ 'resources' ][ 'object' ][ objectId ]; - var meshData = objectData[ 'mesh' ]; - applyExtensions( extensions, meshData, modelXml, data3mf ); - meshes[ objectId ] = buildMesh( meshData, data3mf ); - - } - - } - - return meshes; - - } - - function build( meshes, refs, data3mf ) { - - var group = new THREE.Group(); - var buildData = data3mf.model[ refs[ 'target' ].substring( 1 ) ][ 'build' ]; - - for ( var i = 0; i < buildData.length; i ++ ) { - - var buildItem = buildData[ i ]; - var mesh = meshes[ buildItem[ 'objectid' ] ]; - - if ( buildItem[ 'transform' ] ) { - - mesh.geometry.applyMatrix( buildItem[ 'transform' ] ); - - } - - group.add( mesh ); - - } - - return group; - - } - - var data3mf = loadDocument( data ); - var meshes = buildMeshes( data3mf ); - - return build( meshes, data3mf[ 'rels' ], data3mf ); - - }, - - addExtension: function ( extension ) { - - this.availableExtensions.push( extension ); - - } - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/renderers/RaytracingWorker.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/renderers/RaytracingWorker.js deleted file mode 100644 index 831e02dc61ca3655f944ef458d9afc4927d9b5c4..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/renderers/RaytracingWorker.js +++ /dev/null @@ -1,546 +0,0 @@ -var BLOCK = 128; -var startX, startY; - -var scene, camera, renderer, loader, sceneId; - -importScripts( '../../../build/three.js' ); - - -self.onmessage = function ( e ) { - - var data = e.data; - if ( ! data ) return; - - if ( data.init ) { - - var - width = data.init[ 0 ], - height = data.init[ 1 ]; - - BLOCK = data.blockSize; - - if ( ! renderer ) renderer = new THREE.RaytracingRendererWorker(); - if ( ! loader ) loader = new THREE.ObjectLoader(); - - renderer.setSize( width, height ); - - // TODO fix passing maxRecursionDepth as parameter. - // if (data.maxRecursionDepth) maxRecursionDepth = data.maxRecursionDepth; - - } - - if ( data.scene ) { - - scene = loader.parse( data.scene ); - camera = loader.parse( data.camera ); - - var meta = data.annex; - scene.traverse( function ( o ) { - - if ( o.isPointLight ) { - - o.physicalAttenuation = true; - - } - - var mat = o.material; - - if ( ! 
mat ) return; - - var material = meta[ mat.uuid ]; - - for ( var m in material ) { - - mat[ m ] = material[ m ]; - - } - - } ); - - sceneId = data.sceneId; - - } - - if ( data.render && scene && camera ) { - - startX = data.x; - startY = data.y; - renderer.render( scene, camera ); - - } - -}; - -/** - * DOM-less version of Raytracing Renderer - * @author mrdoob / http://mrdoob.com/ - * @author alteredq / http://alteredqualia.com/ - * @author zz95 / http://github.com/zz85 - */ - -THREE.RaytracingRendererWorker = function () { - - console.log( 'THREE.RaytracingRendererWorker', THREE.REVISION ); - - var maxRecursionDepth = 3; - - var canvasWidth, canvasHeight; - var canvasWidthHalf, canvasHeightHalf; - var origin = new THREE.Vector3(); - var direction = new THREE.Vector3(); - - var cameraPosition = new THREE.Vector3(); - - var raycaster = new THREE.Raycaster( origin, direction ); - var ray = raycaster.ray; - - var raycasterLight = new THREE.Raycaster(); - var rayLight = raycasterLight.ray; - - var perspective; - var cameraNormalMatrix = new THREE.Matrix3(); - - var objects; - var lights = []; - var cache = {}; - - this.setSize = function ( width, height ) { - - canvasWidth = width; - canvasHeight = height; - - canvasWidthHalf = Math.floor( canvasWidth / 2 ); - canvasHeightHalf = Math.floor( canvasHeight / 2 ); - - }; - - // - - var spawnRay = ( function () { - - var diffuseColor = new THREE.Color(); - var specularColor = new THREE.Color(); - var lightColor = new THREE.Color(); - var schlick = new THREE.Color(); - - var lightContribution = new THREE.Color(); - - var eyeVector = new THREE.Vector3(); - var lightVector = new THREE.Vector3(); - var normalVector = new THREE.Vector3(); - var halfVector = new THREE.Vector3(); - - var localPoint = new THREE.Vector3(); - var reflectionVector = new THREE.Vector3(); - - var tmpVec = new THREE.Vector3(); - - var tmpColor = []; - - for ( var i = 0; i < maxRecursionDepth; i ++ ) { - - tmpColor[ i ] = new THREE.Color(); - - } - - return function spawnRay( rayOrigin, rayDirection, outputColor, recursionDepth ) { - - outputColor.setRGB( 0, 0, 0 ); - - // - - ray.origin = rayOrigin; - ray.direction = rayDirection; - - var intersections = raycaster.intersectObjects( objects, true ); - - // ray didn't find anything - // (here should come setting of background color?) 
- - if ( intersections.length === 0 ) return; - - // ray hit - - var intersection = intersections[ 0 ]; - - var point = intersection.point; - var object = intersection.object; - var material = object.material; - var face = intersection.face; - - var geometry = object.geometry; - - // - - var _object = cache[ object.id ]; - - eyeVector.subVectors( ray.origin, point ).normalize(); - - // resolve pixel diffuse color - - if ( material.isMeshLambertMaterial || - material.isMeshPhongMaterial || - material.isMeshBasicMaterial ) { - - diffuseColor.copyGammaToLinear( material.color ); - - } else { - - diffuseColor.setRGB( 1, 1, 1 ); - - } - - if ( material.vertexColors === THREE.FaceColors ) { - - diffuseColor.multiply( face.color ); - - } - - // compute light shading - - rayLight.origin.copy( point ); - - if ( material.isMeshBasicMaterial ) { - - for ( var i = 0, l = lights.length; i < l; i ++ ) { - - var light = lights[ i ]; - - lightVector.setFromMatrixPosition( light.matrixWorld ); - lightVector.sub( point ); - - rayLight.direction.copy( lightVector ).normalize(); - - var intersections = raycasterLight.intersectObjects( objects, true ); - - // point in shadow - - if ( intersections.length > 0 ) continue; - - // point visible - - outputColor.add( diffuseColor ); - - } - - } else if ( material.isMeshLambertMaterial || material.isMeshPhongMaterial ) { - - var normalComputed = false; - - for ( var i = 0, l = lights.length; i < l; i ++ ) { - - var light = lights[ i ]; - - lightVector.setFromMatrixPosition( light.matrixWorld ); - lightVector.sub( point ); - - rayLight.direction.copy( lightVector ).normalize(); - - var intersections = raycasterLight.intersectObjects( objects, true ); - - // point in shadow - - if ( intersections.length > 0 ) continue; - - // point lit - - if ( normalComputed === false ) { - - // the same normal can be reused for all lights - // (should be possible to cache even more) - - localPoint.copy( point ).applyMatrix4( _object.inverseMatrix ); - computePixelNormal( normalVector, localPoint, material.flatShading, face, geometry ); - normalVector.applyMatrix3( _object.normalMatrix ).normalize(); - - normalComputed = true; - - } - - lightColor.copyGammaToLinear( light.color ); - - // compute attenuation - - var attenuation = 1.0; - - if ( light.physicalAttenuation === true ) { - - attenuation = lightVector.length(); - attenuation = 1.0 / ( attenuation * attenuation ); - - } - - lightVector.normalize(); - - // compute diffuse - - var dot = Math.max( normalVector.dot( lightVector ), 0 ); - var diffuseIntensity = dot * light.intensity; - - lightContribution.copy( diffuseColor ); - lightContribution.multiply( lightColor ); - lightContribution.multiplyScalar( diffuseIntensity * attenuation ); - - outputColor.add( lightContribution ); - - // compute specular - - if ( material.isMeshPhongMaterial ) { - - halfVector.addVectors( lightVector, eyeVector ).normalize(); - - var dotNormalHalf = Math.max( normalVector.dot( halfVector ), 0.0 ); - var specularIntensity = Math.max( Math.pow( dotNormalHalf, material.shininess ), 0.0 ) * diffuseIntensity; - - var specularNormalization = ( material.shininess + 2.0 ) / 8.0; - - specularColor.copyGammaToLinear( material.specular ); - - var alpha = Math.pow( Math.max( 1.0 - lightVector.dot( halfVector ), 0.0 ), 5.0 ); - - schlick.r = specularColor.r + ( 1.0 - specularColor.r ) * alpha; - schlick.g = specularColor.g + ( 1.0 - specularColor.g ) * alpha; - schlick.b = specularColor.b + ( 1.0 - specularColor.b ) * alpha; - - lightContribution.copy( schlick 
); - lightContribution.multiply( lightColor ); - lightContribution.multiplyScalar( specularNormalization * specularIntensity * attenuation ); - - outputColor.add( lightContribution ); - - } - - } - - } - - // reflection / refraction - - var reflectivity = material.reflectivity; - - if ( ( material.mirror || material.glass ) && reflectivity > 0 && recursionDepth < maxRecursionDepth ) { - - if ( material.mirror ) { - - reflectionVector.copy( rayDirection ); - reflectionVector.reflect( normalVector ); - - } else if ( material.glass ) { - - var eta = material.refractionRatio; - - var dotNI = rayDirection.dot( normalVector ); - var k = 1.0 - eta * eta * ( 1.0 - dotNI * dotNI ); - - if ( k < 0.0 ) { - - reflectionVector.set( 0, 0, 0 ); - - } else { - - reflectionVector.copy( rayDirection ); - reflectionVector.multiplyScalar( eta ); - - var alpha = eta * dotNI + Math.sqrt( k ); - tmpVec.copy( normalVector ); - tmpVec.multiplyScalar( alpha ); - reflectionVector.sub( tmpVec ); - - } - - } - - var theta = Math.max( eyeVector.dot( normalVector ), 0.0 ); - var rf0 = reflectivity; - var fresnel = rf0 + ( 1.0 - rf0 ) * Math.pow( ( 1.0 - theta ), 5.0 ); - - var weight = fresnel; - - var zColor = tmpColor[ recursionDepth ]; - - spawnRay( point, reflectionVector, zColor, recursionDepth + 1 ); - - if ( material.specular !== undefined ) { - - zColor.multiply( material.specular ); - - } - - zColor.multiplyScalar( weight ); - outputColor.multiplyScalar( 1 - weight ); - outputColor.add( zColor ); - - } - - }; - - }() ); - - var computePixelNormal = ( function () { - - var vA = new THREE.Vector3(); - var vB = new THREE.Vector3(); - var vC = new THREE.Vector3(); - - var tmpVec1 = new THREE.Vector3(); - var tmpVec2 = new THREE.Vector3(); - var tmpVec3 = new THREE.Vector3(); - - return function computePixelNormal( outputVector, point, flatShading, face, geometry ) { - - var faceNormal = face.normal; - - if ( flatShading === true ) { - - outputVector.copy( faceNormal ); - - } else { - - var positions = geometry.attributes.position; - var normals = geometry.attributes.normal; - - vA.fromBufferAttribute( positions, face.a ); - vB.fromBufferAttribute( positions, face.b ); - vC.fromBufferAttribute( positions, face.c ); - - // compute barycentric coordinates - - tmpVec3.crossVectors( tmpVec1.subVectors( vB, vA ), tmpVec2.subVectors( vC, vA ) ); - var areaABC = faceNormal.dot( tmpVec3 ); - - tmpVec3.crossVectors( tmpVec1.subVectors( vB, point ), tmpVec2.subVectors( vC, point ) ); - var areaPBC = faceNormal.dot( tmpVec3 ); - var a = areaPBC / areaABC; - - tmpVec3.crossVectors( tmpVec1.subVectors( vC, point ), tmpVec2.subVectors( vA, point ) ); - var areaPCA = faceNormal.dot( tmpVec3 ); - var b = areaPCA / areaABC; - - var c = 1.0 - a - b; - - // compute interpolated vertex normal - - tmpVec1.fromBufferAttribute( normals, face.a ); - tmpVec2.fromBufferAttribute( normals, face.b ); - tmpVec3.fromBufferAttribute( normals, face.c ); - - tmpVec1.multiplyScalar( a ); - tmpVec2.multiplyScalar( b ); - tmpVec3.multiplyScalar( c ); - - outputVector.addVectors( tmpVec1, tmpVec2 ); - outputVector.add( tmpVec3 ); - - } - - }; - - }() ); - - var renderBlock = ( function () { - - var blockSize = BLOCK; - - var data = new Uint8ClampedArray( blockSize * blockSize * 4 ); - - var pixelColor = new THREE.Color(); - - return function renderBlock( blockX, blockY ) { - - var index = 0; - - for ( var y = 0; y < blockSize; y ++ ) { - - for ( var x = 0; x < blockSize; x ++, index += 4 ) { - - // spawn primary ray at pixel position - - origin.copy( 
cameraPosition ); - - direction.set( x + blockX - canvasWidthHalf, - ( y + blockY - canvasHeightHalf ), - perspective ); - direction.applyMatrix3( cameraNormalMatrix ).normalize(); - - spawnRay( origin, direction, pixelColor, 0 ); - - // convert from linear to gamma - - data[ index + 0 ] = Math.sqrt( pixelColor.r ) * 255; - data[ index + 1 ] = Math.sqrt( pixelColor.g ) * 255; - data[ index + 2 ] = Math.sqrt( pixelColor.b ) * 255; - data[ index + 3 ] = 255; - - } - - } - - // Use transferable objects! :) - self.postMessage( { - data: data.buffer, - blockX: blockX, - blockY: blockY, - blockSize: blockSize, - sceneId: sceneId, - time: Date.now(), // time for this renderer - }, [ data.buffer ] ); - - data = new Uint8ClampedArray( blockSize * blockSize * 4 ); - - }; - - }() ); - - this.render = function ( scene, camera ) { - - // update scene graph - - if ( scene.autoUpdate === true ) scene.updateMatrixWorld(); - - // update camera matrices - - if ( camera.parent === null ) camera.updateMatrixWorld(); - - cameraPosition.setFromMatrixPosition( camera.matrixWorld ); - - // - - cameraNormalMatrix.getNormalMatrix( camera.matrixWorld ); - - perspective = 0.5 / Math.tan( THREE.Math.degToRad( camera.fov * 0.5 ) ) * canvasHeight; - - objects = scene.children; - - // collect lights and set up object matrices - - lights.length = 0; - - scene.traverse( function ( object ) { - - if ( object.isPointLight ) { - - lights.push( object ); - - } - - if ( cache[ object.id ] === undefined ) { - - cache[ object.id ] = { - normalMatrix: new THREE.Matrix3(), - inverseMatrix: new THREE.Matrix4() - }; - - } - - var _object = cache[ object.id ]; - - _object.normalMatrix.getNormalMatrix( object.matrixWorld ); - _object.inverseMatrix.getInverse( object.matrixWorld ); - - } ); - - renderBlock( startX, startY ); - - }; - -}; - -Object.assign( THREE.RaytracingRendererWorker.prototype, THREE.EventDispatcher.prototype ); diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/editings/sefa.py b/spaces/bankholdup/stylegan_petbreeder/e4e/editings/sefa.py deleted file mode 100644 index db7083ce463b765a7cf452807883a3b85fb63fa5..0000000000000000000000000000000000000000 --- a/spaces/bankholdup/stylegan_petbreeder/e4e/editings/sefa.py +++ /dev/null @@ -1,46 +0,0 @@ -import torch -import numpy as np -from tqdm import tqdm - - -def edit(generator, latents, indices, semantics=1, start_distance=-15.0, end_distance=15.0, num_samples=1, step=11): - - layers, boundaries, values = factorize_weight(generator, indices) - codes = latents.detach().cpu().numpy() # (1,18,512) - - # Generate visualization pages. 
- distances = np.linspace(start_distance, end_distance, step) - num_sam = num_samples - num_sem = semantics - - edited_latents = [] - for sem_id in tqdm(range(num_sem), desc='Semantic ', leave=False): - boundary = boundaries[sem_id:sem_id + 1] - for sam_id in tqdm(range(num_sam), desc='Sample ', leave=False): - code = codes[sam_id:sam_id + 1] - for col_id, d in enumerate(distances, start=1): - temp_code = code.copy() - temp_code[:, layers, :] += boundary * d - edited_latents.append(torch.from_numpy(temp_code).float().cuda()) - return torch.cat(edited_latents) - - -def factorize_weight(g_ema, layers='all'): - - weights = [] - if layers == 'all' or 0 in layers: - weight = g_ema.conv1.conv.modulation.weight.T - weights.append(weight.cpu().detach().numpy()) - - if layers == 'all': - layers = list(range(g_ema.num_layers - 1)) - else: - layers = [l - 1 for l in layers if l != 0] - - for idx in layers: - weight = g_ema.convs[idx].conv.modulation.weight.T - weights.append(weight.cpu().detach().numpy()) - weight = np.concatenate(weights, axis=1).astype(np.float32) - weight = weight / np.linalg.norm(weight, axis=0, keepdims=True) - eigen_values, eigen_vectors = np.linalg.eig(weight.dot(weight.T)) - return layers, eigen_vectors.T, eigen_values diff --git a/spaces/bigcode/bigcode-editor/share_btn.py b/spaces/bigcode/bigcode-editor/share_btn.py deleted file mode 100644 index 2587a360a189c4cc488d23b48c3cf1ca7151b04c..0000000000000000000000000000000000000000 --- a/spaces/bigcode/bigcode-editor/share_btn.py +++ /dev/null @@ -1,112 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - - // const gradioEl = document.querySelector('body > gradio-app'); - const gradioEl = document.querySelector("gradio-app"); - const inputTxt = gradioEl.querySelector('#q-input textarea').value; - let outputTxt = gradioEl.querySelector('#q-output .codemirror-wrapper .cm-scroller > div:nth-of-type(2)').innerText; - outputTxt = `
${outputTxt}
` - - const titleLength = 150; - let titleTxt = inputTxt; - if(titleTxt.length > titleLength){ - titleTxt = titleTxt.slice(0, titleLength) + ' ...'; - } - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - if(!inputTxt || !outputTxt){ - return; - }; - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const descriptionMd = `### Question: -${inputTxt} - -### Answer: - -${outputTxt}`; - - const params = { - title: titleTxt, - description: descriptionMd, - }; - - const paramsStr = Object.entries(params) - .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`) - .join('&'); - - window.open(`https://huggingface.co/spaces/bigcode/bigcode-playground/discussions/new?${paramsStr}`, '_blank'); - - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" - -share_btn_css = """ -a {text-decoration-line: underline; font-weight: 600;} -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { transform: rotate(0deg); } - to { transform: rotate(360deg); } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} -""" \ No newline at end of file diff --git a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/hybrid_video.py b/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/hybrid_video.py deleted file mode 100644 index 76401712387cbda1bb29dbd6669fc9f774903c7e..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/hybrid_video.py +++ /dev/null @@ -1,436 +0,0 @@ -import cv2 -import os -import pathlib -import numpy as np -import random -from PIL import Image, ImageChops, ImageOps, ImageEnhance -from .video_audio_utilities import vid2frames, get_quick_vid_info, get_frame_name, get_next_frame -from .human_masking import video2humanmasks - -def delete_all_imgs_in_folder(folder_path): - files = list(pathlib.Path(folder_path).glob('*.jpg')) - files.extend(list(pathlib.Path(folder_path).glob('*.png'))) - for f in files: os.remove(f) - -def hybrid_generation(args, anim_args, root): - video_in_frame_path = os.path.join(args.outdir, 'inputframes') - hybrid_frame_path = os.path.join(args.outdir, 'hybridframes') - human_masks_path = os.path.join(args.outdir, 'human_masks') - - if anim_args.hybrid_generate_inputframes: - # create folders for the video input frames and optional hybrid frames to live in - os.makedirs(video_in_frame_path, exist_ok=True) - os.makedirs(hybrid_frame_path, exist_ok=True) - - # delete frames if overwrite = true - if anim_args.overwrite_extracted_frames: - 
delete_all_imgs_in_folder(hybrid_frame_path) - - # save the video frames from input video - print(f"Video to extract: {anim_args.video_init_path}") - print(f"Extracting video (1 every {anim_args.extract_nth_frame}) frames to {video_in_frame_path}...") - video_fps = vid2frames(video_path=anim_args.video_init_path, video_in_frame_path=video_in_frame_path, n=anim_args.extract_nth_frame, overwrite=anim_args.overwrite_extracted_frames, extract_from_frame=anim_args.extract_from_frame, extract_to_frame=anim_args.extract_to_frame) - - # extract alpha masks of humans from the extracted input video imgs - if anim_args.hybrid_generate_human_masks != "None": - # create a folder for the human masks imgs to live in - print(f"Checking /creating a folder for the human masks") - os.makedirs(human_masks_path, exist_ok=True) - - # delete frames if overwrite = true - if anim_args.overwrite_extracted_frames: - delete_all_imgs_in_folder(human_masks_path) - - # in case that generate_input_frames isn't selected, we won't get the video fps rate as vid2frames isn't called, So we'll check the video fps in here instead - if not anim_args.hybrid_generate_inputframes: - _, video_fps, _ = get_quick_vid_info(anim_args.video_init_path) - - # calculate the correct fps of the masked video according to the original video fps and 'extract_nth_frame' - output_fps = video_fps/anim_args.extract_nth_frame - - # generate the actual alpha masks from the input imgs - print(f"Extracting alpha humans masks from the input frames") - video2humanmasks(video_in_frame_path, human_masks_path, anim_args.hybrid_generate_human_masks, output_fps) - - # determine max frames from length of input frames - anim_args.max_frames = len([f for f in pathlib.Path(video_in_frame_path).glob('*.jpg')]) - print(f"Using {anim_args.max_frames} input frames from {video_in_frame_path}...") - - # get sorted list of inputfiles - inputfiles = sorted(pathlib.Path(video_in_frame_path).glob('*.jpg')) - - # use first frame as init - if anim_args.hybrid_use_first_frame_as_init_image: - for f in inputfiles: - args.init_image = str(f) - args.use_init = True - print(f"Using init_image from video: {args.init_image}") - break - - return args, anim_args, inputfiles - -def hybrid_composite(args, anim_args, frame_idx, prev_img, depth_model, hybrid_comp_schedules, root): - video_frame = os.path.join(args.outdir, 'inputframes', get_frame_name(anim_args.video_init_path) + f"{frame_idx:05}.jpg") - video_depth_frame = os.path.join(args.outdir, 'hybridframes', get_frame_name(anim_args.video_init_path) + f"_vid_depth{frame_idx:05}.jpg") - depth_frame = os.path.join(args.outdir, f"{args.timestring}_depth_{frame_idx-1:05}.png") - mask_frame = os.path.join(args.outdir, 'hybridframes', get_frame_name(anim_args.video_init_path) + f"_mask{frame_idx:05}.jpg") - comp_frame = os.path.join(args.outdir, 'hybridframes', get_frame_name(anim_args.video_init_path) + f"_comp{frame_idx:05}.jpg") - prev_frame = os.path.join(args.outdir, 'hybridframes', get_frame_name(anim_args.video_init_path) + f"_prev{frame_idx:05}.jpg") - prev_img = cv2.cvtColor(prev_img, cv2.COLOR_BGR2RGB) - prev_img_hybrid = Image.fromarray(prev_img) - video_image = Image.open(video_frame) - video_image = video_image.resize((args.W, args.H), Image.Resampling.LANCZOS) - hybrid_mask = None - - # composite mask types - if anim_args.hybrid_comp_mask_type == 'Depth': # get depth from last generation - hybrid_mask = Image.open(depth_frame) - elif anim_args.hybrid_comp_mask_type == 'Video Depth': # get video depth - video_depth = 
depth_model.predict(np.array(video_image), anim_args, root.half_precision) - depth_model.save(video_depth_frame, video_depth) - hybrid_mask = Image.open(video_depth_frame) - elif anim_args.hybrid_comp_mask_type == 'Blend': # create blend mask image - hybrid_mask = Image.blend(ImageOps.grayscale(prev_img_hybrid), ImageOps.grayscale(video_image), hybrid_comp_schedules['mask_blend_alpha']) - elif anim_args.hybrid_comp_mask_type == 'Difference': # create difference mask image - hybrid_mask = ImageChops.difference(ImageOps.grayscale(prev_img_hybrid), ImageOps.grayscale(video_image)) - - # optionally invert mask, if mask type is defined - if anim_args.hybrid_comp_mask_inverse and anim_args.hybrid_comp_mask_type != "None": - hybrid_mask = ImageOps.invert(hybrid_mask) - - # if a mask type is selected, make composition - if hybrid_mask == None: - hybrid_comp = video_image - else: - # ensure grayscale - hybrid_mask = ImageOps.grayscale(hybrid_mask) - # equalization before - if anim_args.hybrid_comp_mask_equalize in ['Before', 'Both']: - hybrid_mask = ImageOps.equalize(hybrid_mask) - # contrast - hybrid_mask = ImageEnhance.Contrast(hybrid_mask).enhance(hybrid_comp_schedules['mask_contrast']) - # auto contrast with cutoffs lo/hi - if anim_args.hybrid_comp_mask_auto_contrast: - hybrid_mask = autocontrast_grayscale(np.array(hybrid_mask), hybrid_comp_schedules['mask_auto_contrast_cutoff_low'], hybrid_comp_schedules['mask_auto_contrast_cutoff_high']) - hybrid_mask = Image.fromarray(hybrid_mask) - hybrid_mask = ImageOps.grayscale(hybrid_mask) - if anim_args.hybrid_comp_save_extra_frames: - hybrid_mask.save(mask_frame) - # equalization after - if anim_args.hybrid_comp_mask_equalize in ['After', 'Both']: - hybrid_mask = ImageOps.equalize(hybrid_mask) - # do compositing and save - hybrid_comp = Image.composite(prev_img_hybrid, video_image, hybrid_mask) - if anim_args.hybrid_comp_save_extra_frames: - hybrid_comp.save(comp_frame) - - # final blend of composite with prev_img, or just a blend if no composite is selected - hybrid_blend = Image.blend(prev_img_hybrid, hybrid_comp, hybrid_comp_schedules['alpha']) - if anim_args.hybrid_comp_save_extra_frames: - hybrid_blend.save(prev_frame) - - prev_img = cv2.cvtColor(np.array(hybrid_blend), cv2.COLOR_RGB2BGR) - - # restore to np array and return - return args, prev_img - -def get_matrix_for_hybrid_motion(frame_idx, dimensions, inputfiles, hybrid_motion): - img1 = cv2.cvtColor(get_resized_image_from_filename(str(inputfiles[frame_idx-1]), dimensions), cv2.COLOR_BGR2GRAY) - img2 = cv2.cvtColor(get_resized_image_from_filename(str(inputfiles[frame_idx]), dimensions), cv2.COLOR_BGR2GRAY) - matrix = get_transformation_matrix_from_images(img1, img2, hybrid_motion) - print(f"Calculating {hybrid_motion} RANSAC matrix for frames {frame_idx} to {frame_idx+1}") - return matrix - -def get_matrix_for_hybrid_motion_prev(frame_idx, dimensions, inputfiles, prev_img, hybrid_motion): - # first handle invalid images from cadence by returning default matrix - height, width = prev_img.shape[:2] - if height == 0 or width == 0 or prev_img != np.uint8: - return get_hybrid_motion_default_matrix(hybrid_motion) - else: - prev_img_gray = cv2.cvtColor(prev_img, cv2.COLOR_BGR2GRAY) - img = cv2.cvtColor(get_resized_image_from_filename(str(inputfiles[frame_idx]), dimensions), cv2.COLOR_BGR2GRAY) - matrix = get_transformation_matrix_from_images(prev_img_gray, img, hybrid_motion) - print(f"Calculating {hybrid_motion} RANSAC matrix for frames {frame_idx} to {frame_idx+1}") - return matrix - -def 
get_flow_for_hybrid_motion(frame_idx, dimensions, inputfiles, hybrid_frame_path, method, do_flow_visualization=False): - print(f"Calculating {method} optical flow for frames {frame_idx} to {frame_idx+1}") - i1 = get_resized_image_from_filename(str(inputfiles[frame_idx]), dimensions) - i2 = get_resized_image_from_filename(str(inputfiles[frame_idx+1]), dimensions) - flow = get_flow_from_images(i1, i2, method) - if do_flow_visualization: - save_flow_visualization(frame_idx, dimensions, flow, inputfiles, hybrid_frame_path) - return flow - -def get_flow_for_hybrid_motion_prev(frame_idx, dimensions, inputfiles, hybrid_frame_path, prev_img, method, do_flow_visualization=False): - print(f"Calculating {method} optical flow for frames {frame_idx} to {frame_idx+1}") - # first handle invalid images from cadence by returning default matrix - height, width = prev_img.shape[:2] - if height == 0 or width == 0: - flow = get_hybrid_motion_default_flow(dimensions) - else: - i1 = prev_img.astype(np.uint8) - i2 = get_resized_image_from_filename(str(inputfiles[frame_idx]), dimensions) - flow = get_flow_from_images(i1, i2, method) - if do_flow_visualization: - save_flow_visualization(frame_idx, dimensions, flow, inputfiles, hybrid_frame_path) - return flow - -def image_transform_ransac(image_cv2, xform, hybrid_motion, border_mode=cv2.BORDER_REPLICATE): - if hybrid_motion == "Perspective": - return image_transform_perspective(image_cv2, xform, border_mode=border_mode) - else: # Affine - return image_transform_affine(image_cv2, xform, border_mode=border_mode) - -def image_transform_optical_flow(img, flow, border_mode=cv2.BORDER_REPLICATE, flow_reverse=False): - if not flow_reverse: - flow = -flow - h, w = img.shape[:2] - flow[:, :, 0] += np.arange(w) - flow[:, :, 1] += np.arange(h)[:,np.newaxis] - return remap(img, flow, border_mode) - -def image_transform_affine(image_cv2, xform, border_mode=cv2.BORDER_REPLICATE): - return cv2.warpAffine( - image_cv2, - xform, - (image_cv2.shape[1],image_cv2.shape[0]), - borderMode=border_mode - ) - -def image_transform_perspective(image_cv2, xform, border_mode=cv2.BORDER_REPLICATE): - return cv2.warpPerspective( - image_cv2, - xform, - (image_cv2.shape[1], image_cv2.shape[0]), - borderMode=border_mode - ) - -def get_hybrid_motion_default_matrix(hybrid_motion): - if hybrid_motion == "Perspective": - arr = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) - else: - arr = np.array([[1., 0., 0.], [0., 1., 0.]]) - return arr - -def get_hybrid_motion_default_flow(dimensions): - cols, rows = dimensions - flow = np.zeros((rows, cols, 2), np.float32) - return flow - -def get_transformation_matrix_from_images(img1, img2, hybrid_motion, max_corners=200, quality_level=0.01, min_distance=30, block_size=3): - # Detect feature points in previous frame - prev_pts = cv2.goodFeaturesToTrack(img1, - maxCorners=max_corners, - qualityLevel=quality_level, - minDistance=min_distance, - blockSize=block_size) - - if prev_pts is None or len(prev_pts) < 8 or img1 is None or img2 is None: - return get_hybrid_motion_default_matrix(hybrid_motion) - - # Get optical flow - curr_pts, status, err = cv2.calcOpticalFlowPyrLK(img1, img2, prev_pts, None) - - # Filter only valid points - idx = np.where(status==1)[0] - prev_pts = prev_pts[idx] - curr_pts = curr_pts[idx] - - if len(prev_pts) < 8 or len(curr_pts) < 8: - return get_hybrid_motion_default_matrix(hybrid_motion) - - if hybrid_motion == "Perspective": # Perspective - Find the transformation between points - transformation_matrix, mask = 
cv2.findHomography(prev_pts, curr_pts, cv2.RANSAC, 5.0) - return transformation_matrix - else: # Affine - Compute a rigid transformation (without depth, only scale + rotation + translation) - transformation_rigid_matrix, rigid_mask = cv2.estimateAffinePartial2D(prev_pts, curr_pts) - return transformation_rigid_matrix - -def get_flow_from_images(i1, i2, method): - if method =="DIS Medium": - r = get_flow_from_images_DIS(i1, i2, cv2.DISOPTICAL_FLOW_PRESET_MEDIUM) - elif method =="DIS Fast": - r = get_flow_from_images_DIS(i1, i2, cv2.DISOPTICAL_FLOW_PRESET_FAST) - elif method =="DIS UltraFast": - r = get_flow_from_images_DIS(i1, i2, cv2.DISOPTICAL_FLOW_PRESET_ULTRAFAST) - elif method == "DenseRLOF": # requires running opencv-contrib-python (full opencv) INSTEAD of opencv-python - r = get_flow_from_images_Dense_RLOF(i1, i2) - elif method == "SF": # requires running opencv-contrib-python (full opencv) INSTEAD of opencv-python - r = get_flow_from_images_SF(i1, i2) - elif method =="Farneback Fine": - r = get_flow_from_images_Farneback(i1, i2, 'fine') - else: # Farneback Normal: - r = get_flow_from_images_Farneback(i1, i2) - return r - -def get_flow_from_images_DIS(i1, i2, preset): - i1 = cv2.cvtColor(i1, cv2.COLOR_BGR2GRAY) - i2 = cv2.cvtColor(i2, cv2.COLOR_BGR2GRAY) - dis=cv2.DISOpticalFlow_create(preset) - return dis.calc(i1, i2, None) - -def get_flow_from_images_Dense_RLOF(i1, i2, last_flow=None): - return cv2.optflow.calcOpticalFlowDenseRLOF(i1, i2, flow = last_flow) - -def get_flow_from_images_SF(i1, i2, last_flow=None, layers = 3, averaging_block_size = 2, max_flow = 4): - return cv2.optflow.calcOpticalFlowSF(i1, i2, layers, averaging_block_size, max_flow) - -def get_flow_from_images_Farneback(i1, i2, preset="normal", last_flow=None, pyr_scale = 0.5, levels = 3, winsize = 15, iterations = 3, poly_n = 5, poly_sigma = 1.2, flags = 0): - flags = cv2.OPTFLOW_FARNEBACK_GAUSSIAN # Specify the operation flags - pyr_scale = 0.5 # The image scale (<1) to build pyramids for each image - if preset == "fine": - levels = 13 # The number of pyramid layers, including the initial image - winsize = 77 # The averaging window size - iterations = 13 # The number of iterations at each pyramid level - poly_n = 15 # The size of the pixel neighborhood used to find polynomial expansion in each pixel - poly_sigma = 0.8 # The standard deviation of the Gaussian used to smooth derivatives used as a basis for the polynomial expansion - else: # "normal" - levels = 5 # The number of pyramid layers, including the initial image - winsize = 21 # The averaging window size - iterations = 5 # The number of iterations at each pyramid level - poly_n = 7 # The size of the pixel neighborhood used to find polynomial expansion in each pixel - poly_sigma = 1.2 # The standard deviation of the Gaussian used to smooth derivatives used as a basis for the polynomial expansion - i1 = cv2.cvtColor(i1, cv2.COLOR_BGR2GRAY) - i2 = cv2.cvtColor(i2, cv2.COLOR_BGR2GRAY) - flags = 0 # flags = cv2.OPTFLOW_USE_INITIAL_FLOW - flow = cv2.calcOpticalFlowFarneback(i1, i2, last_flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags) - return flow - -def save_flow_visualization(frame_idx, dimensions, flow, inputfiles, hybrid_frame_path): - flow_img_file = os.path.join(hybrid_frame_path, f"flow{frame_idx:05}.jpg") - flow_img = cv2.imread(str(inputfiles[frame_idx])) - flow_img = cv2.resize(flow_img, (dimensions[0], dimensions[1]), cv2.INTER_AREA) - flow_img = cv2.cvtColor(flow_img, cv2.COLOR_RGB2GRAY) - flow_img = cv2.cvtColor(flow_img, 
cv2.COLOR_GRAY2BGR) - flow_img = draw_flow_lines_in_grid_in_color(flow_img, flow) - flow_img = cv2.cvtColor(flow_img, cv2.COLOR_BGR2RGB) - cv2.imwrite(flow_img_file, flow_img) - print(f"Saved optical flow visualization: {flow_img_file}") - -def draw_flow_lines_in_grid_in_color(img, flow, step=8, magnitude_multiplier=1, min_magnitude = 1, max_magnitude = 10000): - flow = flow * magnitude_multiplier - h, w = img.shape[:2] - y, x = np.mgrid[step/2:h:step, step/2:w:step].reshape(2,-1).astype(int) - fx, fy = flow[y,x].T - lines = np.vstack([x, y, x+fx, y+fy]).T.reshape(-1, 2, 2) - lines = np.int32(lines + 0.5) - vis = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - vis = cv2.cvtColor(vis, cv2.COLOR_GRAY2BGR) - - mag, ang = cv2.cartToPolar(flow[...,0], flow[...,1]) - hsv = np.zeros((flow.shape[0], flow.shape[1], 3), dtype=np.uint8) - hsv[...,0] = ang*180/np.pi/2 - hsv[...,1] = 255 - hsv[...,2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX) - bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR) - vis = cv2.add(vis, bgr) - - # Iterate through the lines - for (x1, y1), (x2, y2) in lines: - # Calculate the magnitude of the line - magnitude = np.sqrt((x2 - x1)**2 + (y2 - y1)**2) - - # Only draw the line if it falls within the magnitude range - if min_magnitude <= magnitude <= max_magnitude: - b = int(bgr[y1, x1, 0]) - g = int(bgr[y1, x1, 1]) - r = int(bgr[y1, x1, 2]) - color = (b, g, r) - cv2.arrowedLine(vis, (x1, y1), (x2, y2), color, thickness=1, tipLength=0.1) - return vis - -def draw_flow_lines_in_color(img, flow, threshold=3, magnitude_multiplier=1, min_magnitude = 0, max_magnitude = 10000): - # h, w = img.shape[:2] - vis = img.copy() # Create a copy of the input image - - # Find the locations in the flow field where the magnitude of the flow is greater than the threshold - mag, ang = cv2.cartToPolar(flow[...,0], flow[...,1]) - idx = np.where(mag > threshold) - - # Create HSV image - hsv = np.zeros((flow.shape[0], flow.shape[1], 3), dtype=np.uint8) - hsv[...,0] = ang*180/np.pi/2 - hsv[...,1] = 255 - hsv[...,2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX) - - # Convert HSV image to BGR - bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR) - - # Add color from bgr - vis = cv2.add(vis, bgr) - - # Draw an arrow at each of these locations to indicate the direction of the flow - for i, (y, x) in enumerate(zip(idx[0], idx[1])): - # Calculate the magnitude of the line - x2 = x + magnitude_multiplier * int(flow[y, x, 0]) - y2 = y + magnitude_multiplier * int(flow[y, x, 1]) - magnitude = np.sqrt((x2 - x)**2 + (y2 - y)**2) - - # Only draw the line if it falls within the magnitude range - if min_magnitude <= magnitude <= max_magnitude: - if i % random.randint(100, 200) == 0: - b = int(bgr[y, x, 0]) - g = int(bgr[y, x, 1]) - r = int(bgr[y, x, 2]) - color = (b, g, r) - cv2.arrowedLine(vis, (x, y), (x2, y2), color, thickness=1, tipLength=0.25) - - return vis - -def autocontrast_grayscale(image, low_cutoff=0, high_cutoff=100): - # Perform autocontrast on a grayscale np array image. 
- # Find the minimum and maximum values in the image - min_val = np.percentile(image, low_cutoff) - max_val = np.percentile(image, high_cutoff) - - # Scale the image so that the minimum value is 0 and the maximum value is 255 - image = 255 * (image - min_val) / (max_val - min_val) - - # Clip values that fall outside the range [0, 255] - image = np.clip(image, 0, 255) - - return image - -def get_resized_image_from_filename(im, dimensions): - img = cv2.imread(im) - return cv2.resize(img, (dimensions[0], dimensions[1]), cv2.INTER_AREA) - -def remap(img, flow, border_mode = cv2.BORDER_REFLECT_101): - # copyMakeBorder doesn't support wrap, but supports replicate. Replaces wrap with reflect101. - if border_mode == cv2.BORDER_WRAP: - border_mode = cv2.BORDER_REFLECT_101 - h, w = img.shape[:2] - displacement = int(h * 0.25), int(w * 0.25) - larger_img = cv2.copyMakeBorder(img, displacement[0], displacement[0], displacement[1], displacement[1], border_mode) - lh, lw = larger_img.shape[:2] - larger_flow = extend_flow(flow, lw, lh) - remapped_img = cv2.remap(larger_img, larger_flow, None, cv2.INTER_LINEAR, border_mode) - output_img = center_crop_image(remapped_img, w, h) - return output_img - -def center_crop_image(img, w, h): - y, x, _ = img.shape - width_indent = int((x - w) / 2) - height_indent = int((y - h) / 2) - cropped_img = img[height_indent:y-height_indent, width_indent:x-width_indent] - return cropped_img - -def extend_flow(flow, w, h): - # Get the shape of the original flow image - flow_h, flow_w = flow.shape[:2] - # Calculate the position of the image in the new image - x_offset = int((w - flow_w) / 2) - y_offset = int((h - flow_h) / 2) - # Generate the X and Y grids - x_grid, y_grid = np.meshgrid(np.arange(w), np.arange(h)) - # Create the new flow image and set it to the X and Y grids - new_flow = np.dstack((x_grid, y_grid)).astype(np.float32) - # Shift the values of the original flow by the size of the border - flow[:,:,0] += x_offset - flow[:,:,1] += y_offset - # Overwrite the middle of the grid with the original flow - new_flow[y_offset:y_offset+flow_h, x_offset:x_offset+flow_w, :] = flow - # Return the extended image - return new_flow - \ No newline at end of file diff --git a/spaces/biingshanak/vits-uma-genshin-honkai/text/__init__.py b/spaces/biingshanak/vits-uma-genshin-honkai/text/__init__.py deleted file mode 100644 index 663c4b6416affb53c9dc56dddbc8b2b65d4bf518..0000000000000000000000000000000000000000 --- a/spaces/biingshanak/vits-uma-genshin-honkai/text/__init__.py +++ /dev/null @@ -1,57 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence, clean_text - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/bilgeyucel/captionate/README.md b/spaces/bilgeyucel/captionate/README.md deleted file mode 100644 index 13c7bdfda170c27541e43540e690b6d27e9fc5c8..0000000000000000000000000000000000000000 --- a/spaces/bilgeyucel/captionate/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Captionate -emoji: 📸 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bingbing520/ChatGPT2/README.md b/spaces/bingbing520/ChatGPT2/README.md deleted file mode 100644 index fdc5804a8146ffc4f5ad7bec20ca5d26d4583f42..0000000000000000000000000000000000000000 --- a/spaces/bingbing520/ChatGPT2/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChuanhuChatGPT -emoji: 🐯 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.25.0 -app_file: ChuanhuChatbot.py -pinned: false -license: gpl-3.0 -duplicated_from: bingbing520/ChatGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Download HOT!easybinder20.md b/spaces/bioriAsaeru/text-to-voice/Download HOT!easybinder20.md deleted file mode 100644 index 025c13758da31d18b81ac0efba4fb6dfecfdda92..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Download HOT!easybinder20.md +++ /dev/null @@ -1,7 +0,0 @@ -
-

https://coub.com/stories/3138262-downloadeasybinder20-marryso-129311 https://coub.com/stories/3138261-cm-2007-editor-v0-3-new https://palinddecap.sokuhou.wiki/d/Downloadeasybinder20 https://acpradathe.playing.wiki/d/Download%20R09%20Syllabus%20Book%20Jntu%20Hyderabad%20Contact

-

Downloadeasybinder20


Download Ziphttps://urloso.com/2uyPvi



-

Downloadeasybinder20 For Access Point and HUB hann i direkt maken downloaden picklesumfrau-180-villa etikett. Downloadeasybinder20 XTU vous permet de modifier plus rapidement, même dans des secteurs difficiles tels que le BIM. Receive the latest deals, news, and status messages. Tap Privacy, Select iCloud, and then Tap Use aveure to Save Photos on https://wintermarathon.de/advert/downloadeasybinder20-new/

-

Downloadeasybinder20 das dreckige Iphone herunterladen Partition-Lösung für Windows Releasing Canadian Yamaha MX pro 2. Mac nera junge rama video downloadeasybinder20, downloadeasybinder20 apk download, dowloe easy binder20, dowloe easy binder20 for android, dowloeeasybinder20 für android, dowloeeasybinder20 deutsch, dowloeeasybinder20 für android, dowloeeasybinder20 harcore deutsch, dowloeeasybinder20 for android.The outcome of patients treated with antibiotics in the intensive care unit is determined by the presence of ventilator-associated pneumonia (VAP) at the time of first antibiotic administration. Here, we use a mathematical model to investigate the long-term effect of antibiotics in reducing VAP occurrence. Specifically, we simulate a cohort of inpatients who are receiving empiric antibiotics for possible VAP at time zero. This cohort is followed over time, with patients moving between disease status states. In one disease state, patients are susceptible to antibiotics, while in the other, they have non-susceptibility to antibiotics. Antibiotic treatments are administered to these patients in an "on" or "off" manner. We analyze the model to determine the prevalence of VAP within this cohort, and also to determine the probability of developing antibiotic resistance. This work was performed in an effort to better understand the extent to which antibiotics reduce the occurrence of ventilator-associated pneumonia. This work was performed in an effort to better understand the extent to which antibiotics reduce the occurrence of ventilator-associated pneumonia.Sorry for my ignorance. I have been buying and playing game for a long time. But I did not know some of the terms used in this game. So I tried to google them. I found some terms I did not know. But I am not sure if I can use them or not. In this game, there are a series of maps. The player start from an island or continent and he has some buildings on that continent. For example, the player can build some houses on the continent. These houses can only be built on land.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Download-LINK-Fmrte-2016-Full-Crack-26.md b/spaces/bioriAsaeru/text-to-voice/Download-LINK-Fmrte-2016-Full-Crack-26.md deleted file mode 100644 index cf822f302e7894c12bc394a26d12fee63d76c218..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Download-LINK-Fmrte-2016-Full-Crack-26.md +++ /dev/null @@ -1,120 +0,0 @@ -## Download Fmrte 2016 Full Crack 26 - - - - - - - - - -**Download Fmrte 2016 Full Crack 26 [https://tinourl.com/2txnK3](https://tinourl.com/2txnK3)** - - - - - - - - - - - - I can try to help you with that. Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Download Fmrte 2016 Full Crack 26": - -# How to Download Fmrte 2016 Full Crack 26 for Free - - - -Fmrte 2016 is a popular football manager editor that allows you to customize various aspects of the game, such as players, staff, tactics, transfers, and more. However, the official version of Fmrte 2016 requires a license key that costs $5.99. If you want to download Fmrte 2016 full crack 26 for free, you will need to follow these steps: - - - -1. Download the Fmrte 2016 setup file from [this link](https://byltly.com/2tgqTZ). This is a trusted source that has been verified by many users. - -2. Install Fmrte 2016 on your computer. You can choose the default settings or customize them according to your preferences. - -3. Download the Fmrte 2016 crack file from [this link](https://blltly.com/2tactD). This is a patched version of Fmrte 2016 that bypasses the license key verification. - -4. Copy and paste the crack file into the Fmrte 2016 installation folder. This is usually located at C:\Program Files (x86)\FMRTE 16. - -5. Run Fmrte 2016 as administrator. You should see a message that says "License Activated". - -6. Enjoy using Fmrte 2016 full crack 26 for free! - - - -Note: This method is only for educational purposes and we do not condone piracy or illegal downloading. Please support the developers of Fmrte 2016 by purchasing a license key if you like their product. - -Here are a few more paragraphs for the article: - -## Why Use Fmrte 2016 Full Crack 26? - - - -Fmrte 2016 full crack 26 is a powerful tool that can enhance your gaming experience in Football Manager 2016. With Fmrte 2016, you can edit almost anything in the game, such as: - - - -- Players attributes, skills, traits, positions, contracts, injuries, bans, etc. - -- Staff attributes, roles, contracts, etc. - -- Clubs finances, reputation, facilities, sponsors, kits, etc. - -- Nations rankings, coefficients, stadiums, etc. - -- Competitions rules, stages, fixtures, etc. - -- And much more! - - - -Fmrte 2016 also has some exclusive features that are not available in the official editor or in-game editor. For example: - - - -- Wonderkid generator - create your own wonderkids with random or customized attributes. - -- Freeze player/staff - prevent any changes to a player or staff attributes during the game. - -- Inspire team - boost your team morale and confidence before a match. - -- Swap players - exchange players between clubs without any restrictions. - -- Mass edit - apply changes to multiple players or staff at once. - - - -Fmrte 2016 full crack 26 is easy to use and has a user-friendly interface. You can access Fmrte 2016 anytime during the game and make changes on the fly. You can also save and load your own presets or use the ones provided by Fmrte community. - - - -## How to Use Fmrte 2016 Full Crack 26 Safely? 
- - - -Fmrte 2016 full crack 26 is an unofficial and unlicensed software that modifies the game data. Therefore, it may cause some issues or errors if not used properly. Here are some tips on how to use Fmrte 2016 safely: - - - -- Backup your save game before using Fmrte 2016. You can do this by copying the save game file to another location or using the backup feature in Fmrte 2016. - -- Use Fmrte 2016 only when you need it. Do not leave it running in the background or make unnecessary changes to the game data. - -- Do not make unrealistic or excessive changes to the game data. For example, do not give a player 200 in all attributes or change a club's finances to billions of dollars. - -- Do not use Fmrte 2016 online or in multiplayer mode. This may cause desynchronization or ban from Steam or other platforms. - -- Do not update the game or Fmrte 2016 without checking if they are compatible. Every time the game is updated, Fmrte 2016 needs to be updated as well. You can check the compatibility status on Fmrte website or forum. - - - -Note: This method is only for educational purposes and we do not condone piracy or illegal downloading. Please support the developers of Fmrte 2016 by purchasing a license key if you like their product. - - dfd1c89656 - - - - - diff --git a/spaces/bioriAsaeru/text-to-voice/Dream Zindagi Tamil Movie Mp4 Video Songs Free Dow angelina aufnahmen g Listen to the Melodious Songs of Roshni Saha and Others.md b/spaces/bioriAsaeru/text-to-voice/Dream Zindagi Tamil Movie Mp4 Video Songs Free Dow angelina aufnahmen g Listen to the Melodious Songs of Roshni Saha and Others.md deleted file mode 100644 index d2e01327f4be862da48fa9e00da0b816430b2a78..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Dream Zindagi Tamil Movie Mp4 Video Songs Free Dow angelina aufnahmen g Listen to the Melodious Songs of Roshni Saha and Others.md +++ /dev/null @@ -1,6 +0,0 @@ -

Dream Zindagi Tamil Movie Mp4 Video Songs Free Dow angelina aufnahmen g


Download Filehttps://urloso.com/2uyOE0



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/bipin/mltwitter/utils.py b/spaces/bipin/mltwitter/utils.py deleted file mode 100644 index 13c549ecb5fed461b20f9c47c7e2756378728aa5..0000000000000000000000000000000000000000 --- a/spaces/bipin/mltwitter/utils.py +++ /dev/null @@ -1,29 +0,0 @@ -import requests -import streamlit as st -import streamlit.components.v1 as components - - -@st.cache -def get_tweet(url): - api = f"https://publish.twitter.com/oembed?url={url}&maxwidth=400&theme=dark" - content = requests.get(api).json() - return content - -def display_page(urls_path): - columns = st.columns([1, 1, 1]) - - with open(urls_path, "r") as f: - urls = f.readlines() - - for i in range(0, len(urls)-3, 3): - with columns[0]: - st.write("-"*10) - components.html(get_tweet(urls[i])['html'], height=283, scrolling=True) - - with columns[1]: - st.write("-"*10) - components.html(get_tweet(urls[i+1])['html'], height=283, scrolling=True) - - with columns[2]: - st.write("-"*10) - components.html(get_tweet(urls[i+2])['html'], height=283, scrolling=True) \ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/musicgen/_explorers.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/musicgen/_explorers.py deleted file mode 100644 index 334836b72559a120feb8a15eef3fe96ce88a4edb..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/musicgen/_explorers.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import treetable as tt - -from .._base_explorers import BaseExplorer - - -class LMExplorer(BaseExplorer): - eval_metrics: tp.List[str] = [] - - def stages(self) -> tp.List[str]: - return ['train', 'valid'] - - def get_grid_metrics(self): - """Return the metrics that should be displayed in the tracking table.""" - return [ - tt.group( - 'train', - [ - tt.leaf('epoch'), - tt.leaf('duration', '.1f'), # duration in minutes - tt.leaf('ping'), - tt.leaf('ce', '.4f'), # cross entropy - tt.leaf("ppl", '.3f'), # perplexity - ], - align='>', - ), - tt.group( - 'valid', - [ - tt.leaf('ce', '.4f'), - tt.leaf('ppl', '.3f'), - tt.leaf('best_ppl', '.3f'), - ], - align='>', - ), - ] - - def process_sheep(self, sheep, history): - parts = super().process_sheep(sheep, history) - - track_by = {'ppl': 'lower'} # values should be in ['lower', 'higher'] - best_metrics = {k: (1 if v == 'lower' else -1) * float('inf') for k, v in track_by.items()} - - def comparator(mode, a, b): - return a < b if mode == 'lower' else a > b - - for metrics in history: - for key, sub in metrics.items(): - for metric in track_by: - # for the validation set, keep track of best metrics (ppl in this example) - # this is so we can conveniently compare metrics between runs in the grid - if key == 'valid' and metric in sub and comparator( - track_by[metric], sub[metric], best_metrics[metric] - ): - best_metrics[metric] = sub[metric] - - if 'valid' in parts: - parts['valid'].update({f'best_{k}': v for k, v in best_metrics.items()}) - return parts - - -class GenerationEvalExplorer(BaseExplorer): - eval_metrics: tp.List[str] = [] - - def stages(self) -> tp.List[str]: - return ['evaluate'] - - def get_grid_metrics(self): - """Return the metrics that should be displayed in the tracking table.""" - return [ - tt.group( - 'evaluate', - [ - 
tt.leaf('epoch', '.3f'), - tt.leaf('duration', '.1f'), - tt.leaf('ping'), - tt.leaf('ce', '.4f'), - tt.leaf('ppl', '.3f'), - tt.leaf('fad', '.3f'), - tt.leaf('kld', '.3f'), - tt.leaf('text_consistency', '.3f'), - tt.leaf('chroma_cosine', '.3f'), - ], - align='>', - ), - ] diff --git a/spaces/brainblow/MusiCreator/tests/data/__init__.py b/spaces/brainblow/MusiCreator/tests/data/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/brainblow/MusiCreator/tests/data/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/camenduru-com/RabbitMQ/Dockerfile b/spaces/camenduru-com/RabbitMQ/Dockerfile deleted file mode 100644 index 67e036e68f41311410ef2cb4593778e25c3b7f4d..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/RabbitMQ/Dockerfile +++ /dev/null @@ -1,10 +0,0 @@ -FROM rabbitmq:3.8.9-alpine - -RUN rabbitmq-plugins enable --offline rabbitmq_web_mqtt -RUN rabbitmq-plugins disable --offline rabbitmq_prometheus -RUN rabbitmq-plugins disable --offline rabbitmq_web_dispatch -RUN rabbitmq-plugins disable --offline rabbitmq_management_agent - -ADD start.sh /start.sh -RUN chmod +x /start.sh -CMD /start.sh \ No newline at end of file diff --git a/spaces/camenduru-com/seamless/Dockerfile b/spaces/camenduru-com/seamless/Dockerfile deleted file mode 100644 index 6dfd39e298329069d7421d636020109016f78f33..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/seamless/Dockerfile +++ /dev/null @@ -1,27 +0,0 @@ -FROM nginx:latest - -# Copy the website files to the default nginx directory -COPY . /usr/share/nginx/html - -# Set the working directory -WORKDIR /usr/share/nginx/html - -# Expose port 80 for the webserver -EXPOSE 7860 - -COPY nginx.conf /etc/nginx/nginx.conf - -RUN touch /var/run/nginx.pid -RUN chmod 777 /var/run/nginx.pid - -RUN chown -R 1000:1000 /usr -RUN chmod -R 777 /usr -RUN chown -R 1000:1000 /var -RUN chmod -R 777 /var -RUN chown -R 1000:1000 /etc/nginx -RUN chmod -R 777 /etc/nginx - -#CMD cat /etc/nginx/nginx.conf - -# Start nginx when the container is run -CMD ["nginx", "-g", "daemon off;"] \ No newline at end of file diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/FtexImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/FtexImagePlugin.py deleted file mode 100644 index c46b2f28ba6c889d643db3bcf8fccade7fdc6d2d..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/FtexImagePlugin.py +++ /dev/null @@ -1,113 +0,0 @@ -""" -A Pillow loader for .ftc and .ftu files (FTEX) -Jerome Leclanche - -The contents of this file are hereby released in the public domain (CC0) -Full text of the CC0 license: - https://creativecommons.org/publicdomain/zero/1.0/ - -Independence War 2: Edge Of Chaos - Texture File Format - 16 October 2001 - -The textures used for 3D objects in Independence War 2: Edge Of Chaos are in a -packed custom format called FTEX. This file format uses file extensions FTC -and FTU. -* FTC files are compressed textures (using standard texture compression). -* FTU files are not compressed. -Texture File Format -The FTC and FTU texture files both use the same format. 
This -has the following structure: -{header} -{format_directory} -{data} -Where: -{header} = { - u32:magic, - u32:version, - u32:width, - u32:height, - u32:mipmap_count, - u32:format_count -} - -* The "magic" number is "FTEX". -* "width" and "height" are the dimensions of the texture. -* "mipmap_count" is the number of mipmaps in the texture. -* "format_count" is the number of texture formats (different versions of the -same texture) in this file. - -{format_directory} = format_count * { u32:format, u32:where } - -The format value is 0 for DXT1 compressed textures and 1 for 24-bit RGB -uncompressed textures. -The texture data for a format starts at the position "where" in the file. - -Each set of texture data in the file has the following structure: -{data} = format_count * { u32:mipmap_size, mipmap_size * { u8 } } -* "mipmap_size" is the number of bytes in that mip level. For compressed -textures this is the size of the texture data compressed with DXT1. For 24 bit -uncompressed textures, this is 3 * width * height. Following this are the image -bytes for that mipmap level. - -Note: All data is stored in little-Endian (Intel) byte order. -""" - -import struct -from enum import IntEnum -from io import BytesIO - -from . import Image, ImageFile - -MAGIC = b"FTEX" - - -class Format(IntEnum): - DXT1 = 0 - UNCOMPRESSED = 1 - - -class FtexImageFile(ImageFile.ImageFile): - format = "FTEX" - format_description = "Texture File Format (IW2:EOC)" - - def _open(self): - if not _accept(self.fp.read(4)): - msg = "not an FTEX file" - raise SyntaxError(msg) - struct.unpack("1, whether to put stride in the - first 1x1 convolution or the bottleneck 3x3 convolution. - dilation (int): the dilation rate of the 3x3 conv layer. - """ - super().__init__(in_channels, out_channels, stride) - - if in_channels != out_channels: - self.shortcut = Conv2d( - in_channels, - out_channels, - kernel_size=1, - stride=stride, - bias=False, - norm=get_norm(norm, out_channels), - ) - else: - self.shortcut = None - - # The original MSRA ResNet models have stride in the first 1x1 conv - # The subsequent fb.torch.resnet and Caffe2 ResNe[X]t implementations have - # stride in the 3x3 conv - stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride) - - self.conv1 = Conv2d( - in_channels, - bottleneck_channels, - kernel_size=1, - stride=stride_1x1, - bias=False, - norm=get_norm(norm, bottleneck_channels), - ) - - self.conv2 = Conv2d( - bottleneck_channels, - bottleneck_channels, - kernel_size=3, - stride=stride_3x3, - padding=1 * dilation, - bias=False, - groups=num_groups, - dilation=dilation, - norm=get_norm(norm, bottleneck_channels), - ) - - self.conv3 = Conv2d( - bottleneck_channels, - out_channels, - kernel_size=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - - for layer in [self.conv1, self.conv2, self.conv3, self.shortcut]: - if layer is not None: # shortcut can be None - weight_init.c2_msra_fill(layer) - - # Zero-initialize the last normalization in each residual branch, - # so that at the beginning, the residual branch starts with zeros, - # and each residual block behaves like an identity. - # See Sec 5.1 in "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour": - # "For BN layers, the learnable scaling coefficient γ is initialized - # to be 1, except for each residual block's last BN - # where γ is initialized to be 0." - - # nn.init.constant_(self.conv3.norm.weight, 0) - # TODO this somehow hurts performance when training GN models from scratch. 
- # Add it as an option when we need to use this code to train a backbone. - - def forward(self, x): - out = self.conv1(x) - out = F.relu_(out) - - out = self.conv2(out) - out = F.relu_(out) - - out = self.conv3(out) - - if self.shortcut is not None: - shortcut = self.shortcut(x) - else: - shortcut = x - - out += shortcut - out = F.relu_(out) - return out - - -class DeformBottleneckBlock(CNNBlockBase): - """ - Similar to :class:`BottleneckBlock`, but with :paper:`deformable conv ` - in the 3x3 convolution. - """ - - def __init__( - self, - in_channels, - out_channels, - *, - bottleneck_channels, - stride=1, - num_groups=1, - norm="BN", - stride_in_1x1=False, - dilation=1, - deform_modulated=False, - deform_num_groups=1, - ): - super().__init__(in_channels, out_channels, stride) - self.deform_modulated = deform_modulated - - if in_channels != out_channels: - self.shortcut = Conv2d( - in_channels, - out_channels, - kernel_size=1, - stride=stride, - bias=False, - norm=get_norm(norm, out_channels), - ) - else: - self.shortcut = None - - stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride) - - self.conv1 = Conv2d( - in_channels, - bottleneck_channels, - kernel_size=1, - stride=stride_1x1, - bias=False, - norm=get_norm(norm, bottleneck_channels), - ) - - if deform_modulated: - deform_conv_op = ModulatedDeformConv - # offset channels are 2 or 3 (if with modulated) * kernel_size * kernel_size - offset_channels = 27 - else: - deform_conv_op = DeformConv - offset_channels = 18 - - self.conv2_offset = Conv2d( - bottleneck_channels, - offset_channels * deform_num_groups, - kernel_size=3, - stride=stride_3x3, - padding=1 * dilation, - dilation=dilation, - ) - self.conv2 = deform_conv_op( - bottleneck_channels, - bottleneck_channels, - kernel_size=3, - stride=stride_3x3, - padding=1 * dilation, - bias=False, - groups=num_groups, - dilation=dilation, - deformable_groups=deform_num_groups, - norm=get_norm(norm, bottleneck_channels), - ) - - self.conv3 = Conv2d( - bottleneck_channels, - out_channels, - kernel_size=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - - for layer in [self.conv1, self.conv2, self.conv3, self.shortcut]: - if layer is not None: # shortcut can be None - weight_init.c2_msra_fill(layer) - - nn.init.constant_(self.conv2_offset.weight, 0) - nn.init.constant_(self.conv2_offset.bias, 0) - - def forward(self, x): - out = self.conv1(x) - out = F.relu_(out) - - if self.deform_modulated: - offset_mask = self.conv2_offset(out) - offset_x, offset_y, mask = torch.chunk(offset_mask, 3, dim=1) - offset = torch.cat((offset_x, offset_y), dim=1) - mask = mask.sigmoid() - out = self.conv2(out, offset, mask) - else: - offset = self.conv2_offset(out) - out = self.conv2(out, offset) - out = F.relu_(out) - - out = self.conv3(out) - - if self.shortcut is not None: - shortcut = self.shortcut(x) - else: - shortcut = x - - out += shortcut - out = F.relu_(out) - return out - - -class BasicStem(CNNBlockBase): - """ - The standard ResNet stem (layers before the first residual block), - with a conv, relu and max_pool. - """ - - def __init__(self, in_channels=3, out_channels=64, norm="BN"): - """ - Args: - norm (str or callable): norm after the first conv layer. - See :func:`layers.get_norm` for supported format. 
- """ - super().__init__(in_channels, out_channels, 4) - self.in_channels = in_channels - self.conv1 = Conv2d( - in_channels, - out_channels, - kernel_size=7, - stride=2, - padding=3, - bias=False, - norm=get_norm(norm, out_channels), - ) - weight_init.c2_msra_fill(self.conv1) - - def forward(self, x): - x = self.conv1(x) - x = F.relu_(x) - x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1) - return x - - -class ResNet(Backbone): - """ - Implement :paper:`ResNet`. - """ - - def __init__(self, stem, stages, num_classes=None, out_features=None, freeze_at=0): - """ - Args: - stem (nn.Module): a stem module - stages (list[list[CNNBlockBase]]): several (typically 4) stages, - each contains multiple :class:`CNNBlockBase`. - num_classes (None or int): if None, will not perform classification. - Otherwise, will create a linear layer. - out_features (list[str]): name of the layers whose outputs should - be returned in forward. Can be anything in "stem", "linear", or "res2" ... - If None, will return the output of the last layer. - freeze_at (int): The number of stages at the beginning to freeze. - see :meth:`freeze` for detailed explanation. - """ - super().__init__() - self.stem = stem - self.num_classes = num_classes - - current_stride = self.stem.stride - self._out_feature_strides = {"stem": current_stride} - self._out_feature_channels = {"stem": self.stem.out_channels} - - self.stage_names, self.stages = [], [] - - if out_features is not None: - # Avoid keeping unused layers in this module. They consume extra memory - # and may cause allreduce to fail - num_stages = max( - [{"res2": 1, "res3": 2, "res4": 3, "res5": 4}.get(f, 0) for f in out_features] - ) - stages = stages[:num_stages] - for i, blocks in enumerate(stages): - assert len(blocks) > 0, len(blocks) - for block in blocks: - assert isinstance(block, CNNBlockBase), block - - name = "res" + str(i + 2) - stage = nn.Sequential(*blocks) - - self.add_module(name, stage) - self.stage_names.append(name) - self.stages.append(stage) - - self._out_feature_strides[name] = current_stride = int( - current_stride * np.prod([k.stride for k in blocks]) - ) - self._out_feature_channels[name] = curr_channels = blocks[-1].out_channels - self.stage_names = tuple(self.stage_names) # Make it static for scripting - - if num_classes is not None: - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.linear = nn.Linear(curr_channels, num_classes) - - # Sec 5.1 in "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour": - # "The 1000-way fully-connected layer is initialized by - # drawing weights from a zero-mean Gaussian with standard deviation of 0.01." - nn.init.normal_(self.linear.weight, std=0.01) - name = "linear" - - if out_features is None: - out_features = [name] - self._out_features = out_features - assert len(self._out_features) - children = [x[0] for x in self.named_children()] - for out_feature in self._out_features: - assert out_feature in children, "Available children: {}".format(", ".join(children)) - self.freeze(freeze_at) - - def forward(self, x): - """ - Args: - x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``. - - Returns: - dict[str->Tensor]: names and the corresponding features - """ - assert x.dim() == 4, f"ResNet takes an input of shape (N, C, H, W). Got {x.shape} instead!" 
- outputs = {} - x = self.stem(x) - if "stem" in self._out_features: - outputs["stem"] = x - for name, stage in zip(self.stage_names, self.stages): - x = stage(x) - if name in self._out_features: - outputs[name] = x - if self.num_classes is not None: - x = self.avgpool(x) - x = torch.flatten(x, 1) - x = self.linear(x) - if "linear" in self._out_features: - outputs["linear"] = x - return outputs - - def output_shape(self): - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } - - def freeze(self, freeze_at=0): - """ - Freeze the first several stages of the ResNet. Commonly used in - fine-tuning. - - Layers that produce the same feature map spatial size are defined as one - "stage" by :paper:`FPN`. - - Args: - freeze_at (int): number of stages to freeze. - `1` means freezing the stem. `2` means freezing the stem and - one residual stage, etc. - - Returns: - nn.Module: this ResNet itself - """ - if freeze_at >= 1: - self.stem.freeze() - for idx, stage in enumerate(self.stages, start=2): - if freeze_at >= idx: - for block in stage.children(): - block.freeze() - return self - - @staticmethod - def make_stage(block_class, num_blocks, *, in_channels, out_channels, **kwargs): - """ - Create a list of blocks of the same type that forms one ResNet stage. - - Args: - block_class (type): a subclass of CNNBlockBase that's used to create all blocks in this - stage. A module of this type must not change spatial resolution of inputs unless its - stride != 1. - num_blocks (int): number of blocks in this stage - in_channels (int): input channels of the entire stage. - out_channels (int): output channels of **every block** in the stage. - kwargs: other arguments passed to the constructor of - `block_class`. If the argument name is "xx_per_block", the - argument is a list of values to be passed to each block in the - stage. Otherwise, the same argument is passed to every block - in the stage. - - Returns: - list[CNNBlockBase]: a list of block module. - - Examples: - :: - stage = ResNet.make_stage( - BottleneckBlock, 3, in_channels=16, out_channels=64, - bottleneck_channels=16, num_groups=1, - stride_per_block=[2, 1, 1], - dilations_per_block=[1, 1, 2] - ) - - Usually, layers that produce the same feature map spatial size are defined as one - "stage" (in :paper:`FPN`). Under such definition, ``stride_per_block[1:]`` should - all be 1. - """ - blocks = [] - for i in range(num_blocks): - curr_kwargs = {} - for k, v in kwargs.items(): - if k.endswith("_per_block"): - assert len(v) == num_blocks, ( - f"Argument '{k}' of make_stage should have the " - f"same length as num_blocks={num_blocks}." - ) - newk = k[: -len("_per_block")] - assert newk not in kwargs, f"Cannot call make_stage with both {k} and {newk}!" - curr_kwargs[newk] = v[i] - else: - curr_kwargs[k] = v - - blocks.append( - block_class(in_channels=in_channels, out_channels=out_channels, **curr_kwargs) - ) - in_channels = out_channels - return blocks - - @staticmethod - def make_default_stages(depth, block_class=None, **kwargs): - """ - Created list of ResNet stages from pre-defined depth (one of 18, 34, 50, 101, 152). - If it doesn't create the ResNet variant you need, please use :meth:`make_stage` - instead for fine-grained customization. - - Args: - depth (int): depth of ResNet - block_class (type): the CNN block class. Has to accept - `bottleneck_channels` argument for depth > 50. - By default it is BasicBlock or BottleneckBlock, based on the - depth. 
- kwargs: - other arguments to pass to `make_stage`. Should not contain - stride and channels, as they are predefined for each depth. - - Returns: - list[list[CNNBlockBase]]: modules in all stages; see arguments of - :class:`ResNet.__init__`. - """ - num_blocks_per_stage = { - 18: [2, 2, 2, 2], - 34: [3, 4, 6, 3], - 50: [3, 4, 6, 3], - 101: [3, 4, 23, 3], - 152: [3, 8, 36, 3], - }[depth] - if block_class is None: - block_class = BasicBlock if depth < 50 else BottleneckBlock - if depth < 50: - in_channels = [64, 64, 128, 256] - out_channels = [64, 128, 256, 512] - else: - in_channels = [64, 256, 512, 1024] - out_channels = [256, 512, 1024, 2048] - ret = [] - for (n, s, i, o) in zip(num_blocks_per_stage, [1, 2, 2, 2], in_channels, out_channels): - if depth >= 50: - kwargs["bottleneck_channels"] = o // 4 - ret.append( - ResNet.make_stage( - block_class=block_class, - num_blocks=n, - stride_per_block=[s] + [1] * (n - 1), - in_channels=i, - out_channels=o, - **kwargs, - ) - ) - return ret - - -ResNetBlockBase = CNNBlockBase -""" -Alias for backward compatibiltiy. -""" - - -def make_stage(*args, **kwargs): - """ - Deprecated alias for backward compatibiltiy. - """ - return ResNet.make_stage(*args, **kwargs) - - -@BACKBONE_REGISTRY.register() -def build_resnet_backbone(cfg, input_shape): - """ - Create a ResNet instance from config. - - Returns: - ResNet: a :class:`ResNet` instance. - """ - # need registration of new blocks/stems? - norm = cfg.MODEL.RESNETS.NORM - stem = BasicStem( - in_channels=input_shape.channels, - out_channels=cfg.MODEL.RESNETS.STEM_OUT_CHANNELS, - norm=norm, - ) - - # fmt: off - freeze_at = cfg.MODEL.BACKBONE.FREEZE_AT - out_features = cfg.MODEL.RESNETS.OUT_FEATURES - depth = cfg.MODEL.RESNETS.DEPTH - num_groups = cfg.MODEL.RESNETS.NUM_GROUPS - width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP - bottleneck_channels = num_groups * width_per_group - in_channels = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS - out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS - stride_in_1x1 = cfg.MODEL.RESNETS.STRIDE_IN_1X1 - res5_dilation = cfg.MODEL.RESNETS.RES5_DILATION - deform_on_per_stage = cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE - deform_modulated = cfg.MODEL.RESNETS.DEFORM_MODULATED - deform_num_groups = cfg.MODEL.RESNETS.DEFORM_NUM_GROUPS - # fmt: on - assert res5_dilation in {1, 2}, "res5_dilation cannot be {}.".format(res5_dilation) - - num_blocks_per_stage = { - 18: [2, 2, 2, 2], - 34: [3, 4, 6, 3], - 50: [3, 4, 6, 3], - 101: [3, 4, 23, 3], - 152: [3, 8, 36, 3], - }[depth] - - if depth in [18, 34]: - assert out_channels == 64, "Must set MODEL.RESNETS.RES2_OUT_CHANNELS = 64 for R18/R34" - assert not any( - deform_on_per_stage - ), "MODEL.RESNETS.DEFORM_ON_PER_STAGE unsupported for R18/R34" - assert res5_dilation == 1, "Must set MODEL.RESNETS.RES5_DILATION = 1 for R18/R34" - assert num_groups == 1, "Must set MODEL.RESNETS.NUM_GROUPS = 1 for R18/R34" - - stages = [] - - for idx, stage_idx in enumerate(range(2, 6)): - # res5_dilation is used this way as a convention in R-FCN & Deformable Conv paper - dilation = res5_dilation if stage_idx == 5 else 1 - first_stride = 1 if idx == 0 or (stage_idx == 5 and dilation == 2) else 2 - stage_kargs = { - "num_blocks": num_blocks_per_stage[idx], - "stride_per_block": [first_stride] + [1] * (num_blocks_per_stage[idx] - 1), - "in_channels": in_channels, - "out_channels": out_channels, - "norm": norm, - } - # Use BasicBlock for R18 and R34. 
- if depth in [18, 34]: - stage_kargs["block_class"] = BasicBlock - else: - stage_kargs["bottleneck_channels"] = bottleneck_channels - stage_kargs["stride_in_1x1"] = stride_in_1x1 - stage_kargs["dilation"] = dilation - stage_kargs["num_groups"] = num_groups - if deform_on_per_stage[idx]: - stage_kargs["block_class"] = DeformBottleneckBlock - stage_kargs["deform_modulated"] = deform_modulated - stage_kargs["deform_num_groups"] = deform_num_groups - else: - stage_kargs["block_class"] = BottleneckBlock - blocks = ResNet.make_stage(**stage_kargs) - in_channels = out_channels - out_channels *= 2 - bottleneck_channels *= 2 - stages.append(blocks) - return ResNet(stem, stages, out_features=out_features, freeze_at=freeze_at) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/predictors/chart.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/predictors/chart.py deleted file mode 100644 index 3bcd13f7c592e37c2751556cda1f6e9cd3400b73..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/predictors/chart.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import torch -from torch import nn - -from detectron2.config import CfgNode -from detectron2.layers import ConvTranspose2d, interpolate - -from ...structures import DensePoseChartPredictorOutput -from ..utils import initialize_module_params -from .registry import DENSEPOSE_PREDICTOR_REGISTRY - - -@DENSEPOSE_PREDICTOR_REGISTRY.register() -class DensePoseChartPredictor(nn.Module): - """ - Predictor (last layers of a DensePose model) that takes DensePose head outputs as an input - and produces 4 tensors which represent DensePose results for predefined body parts - (patches / charts): - * coarse segmentation, a tensor of shape [N, K, Hout, Wout] - * fine segmentation, a tensor of shape [N, C, Hout, Wout] - * U coordinates, a tensor of shape [N, C, Hout, Wout] - * V coordinates, a tensor of shape [N, C, Hout, Wout] - where - - N is the number of instances - - K is the number of coarse segmentation channels ( - 2 = foreground / background, - 15 = one of 14 body parts / background) - - C is the number of fine segmentation channels ( - 24 fine body parts / background) - - Hout and Wout are height and width of predictions - """ - - def __init__(self, cfg: CfgNode, input_channels: int): - """ - Initialize predictor using configuration options - - Args: - cfg (CfgNode): configuration options - input_channels (int): input tensor size along the channel dimension - """ - super().__init__() - dim_in = input_channels - n_segm_chan = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_COARSE_SEGM_CHANNELS - dim_out_patches = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_PATCHES + 1 - kernel_size = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECONV_KERNEL - # coarse segmentation - self.ann_index_lowres = ConvTranspose2d( - dim_in, n_segm_chan, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - # fine segmentation - self.index_uv_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - # U - self.u_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - # V - self.v_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - self.scale_factor = cfg.MODEL.ROI_DENSEPOSE_HEAD.UP_SCALE - initialize_module_params(self) - 
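The four `*_lowres` heads above share one transposed-convolution recipe, and the `interp2d` helper defined just below finishes the job with a bilinear upscale. A small sketch of the resulting shape arithmetic, assuming the common config values `DECONV_KERNEL = 4` and `UP_SCALE = 2` (neither value is fixed by this file), shows why each head roughly quadruples the spatial resolution of the incoming RoI features:

```python
# Sketch only: mirrors the predictor's two-stage upsampling under assumed
# config values (DECONV_KERNEL=4, UP_SCALE=2); channel counts are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

dim_in, dim_out_patches, kernel_size, up_scale = 256, 25, 4, 2

# One "lowres" head: a stride-2 transposed conv doubles H and W
# ((H - 1) * 2 - 2 * 1 + 4 = 2 * H, since padding = kernel_size / 2 - 1 = 1).
u_lowres = nn.ConvTranspose2d(
    dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1)
)

features = torch.randn(1, dim_in, 14, 14)    # a typical RoI feature map
low = u_lowres(features)                      # -> (1, 25, 28, 28)

# interp2d then applies a bilinear upscale by UP_SCALE, doubling H and W again.
out = F.interpolate(low, scale_factor=up_scale, mode="bilinear", align_corners=False)
print(low.shape, out.shape)                   # (1, 25, 28, 28) (1, 25, 56, 56)
```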
- def interp2d(self, tensor_nchw: torch.Tensor): - """ - Bilinear interpolation method to be used for upscaling - - Args: - tensor_nchw (tensor): tensor of shape (N, C, H, W) - Return: - tensor of shape (N, C, Hout, Wout), where Hout and Wout are computed - by applying the scale factor to H and W - """ - return interpolate( - tensor_nchw, scale_factor=self.scale_factor, mode="bilinear", align_corners=False - ) - - def forward(self, head_outputs: torch.Tensor): - """ - Perform forward step on DensePose head outputs - - Args: - head_outputs (tensor): DensePose head outputs, tensor of shape [N, D, H, W] - Return: - An instance of DensePoseChartPredictorOutput - """ - return DensePoseChartPredictorOutput( - coarse_segm=self.interp2d(self.ann_index_lowres(head_outputs)), - fine_segm=self.interp2d(self.index_uv_lowres(head_outputs)), - u=self.interp2d(self.u_lowres(head_outputs)), - v=self.interp2d(self.v_lowres(head_outputs)), - ) diff --git a/spaces/chansung/LLaMA-13B/llama/tokenizer.py b/spaces/chansung/LLaMA-13B/llama/tokenizer.py deleted file mode 100644 index e76e0ac11e77bda798e01b4b27f8a87d8e1acdc8..0000000000000000000000000000000000000000 --- a/spaces/chansung/LLaMA-13B/llama/tokenizer.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# This software may be used and distributed according to the terms of the GNU General Public License version 3. - -from sentencepiece import SentencePieceProcessor -from logging import getLogger -from typing import List -import os - - -logger = getLogger() - - -class Tokenizer: - def __init__(self, model_path: str): - # reload tokenizer - assert os.path.isfile(model_path), model_path - self.sp_model = SentencePieceProcessor(model_file=model_path) - logger.info(f"Reloaded SentencePiece model from {model_path}") - - # BOS / EOS token IDs - self.n_words: int = self.sp_model.vocab_size() - self.bos_id: int = self.sp_model.bos_id() - self.eos_id: int = self.sp_model.eos_id() - self.pad_id: int = self.sp_model.pad_id() - logger.info( - f"#words: {self.n_words} - BOS ID: {self.bos_id} - EOS ID: {self.eos_id}" - ) - assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() - - def encode(self, s: str, bos: bool, eos: bool) -> List[int]: - assert type(s) is str - t = self.sp_model.encode(s) - if bos: - t = [self.bos_id] + t - if eos: - t = t + [self.eos_id] - return t - - def decode(self, t: List[int]) -> str: - return self.sp_model.decode(t) \ No newline at end of file diff --git a/spaces/chasemcdo/hf_localai/pkg/tts/generate_unsupported.go b/spaces/chasemcdo/hf_localai/pkg/tts/generate_unsupported.go deleted file mode 100644 index 30926953b73e41b68bdb44de608a1da1873a2687..0000000000000000000000000000000000000000 --- a/spaces/chasemcdo/hf_localai/pkg/tts/generate_unsupported.go +++ /dev/null @@ -1,10 +0,0 @@ -//go:build !tts -// +build !tts - -package tts - -import "fmt" - -func tts(text, model, assetDir, arLib, dst string) error { - return fmt.Errorf("this version of LocalAI was built without the tts tag") -} diff --git a/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/coco_metric.py b/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/coco_metric.py deleted file mode 100644 index db0e9a4154e9e72788f2353dec59ea0ee32187b8..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/coco_metric.py +++ /dev/null @@ -1,23 +0,0 @@ -from pycocoevalcap.eval import COCOEvalCap -from pycocotools.coco import COCO -import json - - -def compute_cider( 
- result_path, - annotations_path, -): - # create coco object and coco_result object - coco = COCO(annotations_path) - coco_result = coco.loadRes(result_path) - - # create coco_eval object by taking coco and coco_result - coco_eval = COCOEvalCap(coco, coco_result) - coco_eval.params["image_id"] = coco_result.getImgIds() - coco_eval.evaluate() - - return coco_eval.eval - - -def postprocess_captioning_generation(predictions): - return predictions diff --git a/spaces/chendl/compositional_test/transformers/examples/flax/image-captioning/create_model_from_encoder_decoder_models.py b/spaces/chendl/compositional_test/transformers/examples/flax/image-captioning/create_model_from_encoder_decoder_models.py deleted file mode 100644 index c5ce0e4ce133c4f2485f8bed008af8317954e3b1..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/flax/image-captioning/create_model_from_encoder_decoder_models.py +++ /dev/null @@ -1,116 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2022 The HuggingFace Team All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Create a VisionEncoderDecoderModel instance from pretrained encoder/decoder models. - -The cross-attention will be randomly initialized. -""" - -from dataclasses import dataclass, field -from typing import Optional - -from transformers import AutoConfig, AutoImageProcessor, AutoTokenizer, FlaxVisionEncoderDecoderModel, HfArgumentParser - - -@dataclass -class ModelArguments: - """ - Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch. - """ - - output_dir: str = field( - metadata={"help": "The output directory where the model will be written."}, - ) - encoder_model_name_or_path: str = field( - metadata={ - "help": ( - "The encoder model checkpoint for weights initialization." - "Don't set if you want to train an encoder model from scratch." - ) - }, - ) - decoder_model_name_or_path: str = field( - metadata={ - "help": ( - "The decoder model checkpoint for weights initialization." - "Don't set if you want to train a decoder model from scratch." 
- ) - }, - ) - encoder_config_name: Optional[str] = field( - default=None, metadata={"help": "Pretrained encoder config name or path if not the same as encoder_model_name"} - ) - decoder_config_name: Optional[str] = field( - default=None, metadata={"help": "Pretrained decoder config name or path if not the same as decoder_model_name"} - ) - - -def main(): - parser = HfArgumentParser((ModelArguments,)) - (model_args,) = parser.parse_args_into_dataclasses() - - # Load pretrained model and tokenizer - - # Use explicit specified encoder config - if model_args.encoder_config_name: - encoder_config = AutoConfig.from_pretrained(model_args.encoder_config_name) - # Use pretrained encoder model's config - else: - encoder_config = AutoConfig.from_pretrained(model_args.encoder_model_name_or_path) - - # Use explicit specified decoder config - if model_args.decoder_config_name: - decoder_config = AutoConfig.from_pretrained(model_args.decoder_config_name) - # Use pretrained decoder model's config - else: - decoder_config = AutoConfig.from_pretrained(model_args.decoder_model_name_or_path) - - # necessary for `from_encoder_decoder_pretrained` when `decoder_config` is passed - decoder_config.is_decoder = True - decoder_config.add_cross_attention = True - - model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained( - encoder_pretrained_model_name_or_path=model_args.encoder_model_name_or_path, - decoder_pretrained_model_name_or_path=model_args.decoder_model_name_or_path, - encoder_config=encoder_config, - decoder_config=decoder_config, - ) - - # GPT2 only has bos/eos tokens but not decoder_start/pad tokens - decoder_start_token_id = decoder_config.decoder_start_token_id - pad_token_id = decoder_config.pad_token_id - if decoder_start_token_id is None: - decoder_start_token_id = decoder_config.bos_token_id - if pad_token_id is None: - pad_token_id = decoder_config.eos_token_id - - # This is necessary to make Flax's generate() work - model.config.eos_token_id = decoder_config.eos_token_id - model.config.decoder_start_token_id = decoder_start_token_id - model.config.pad_token_id = pad_token_id - - image_processor = AutoImageProcessor.from_pretrained(model_args.encoder_model_name_or_path) - - tokenizer = AutoTokenizer.from_pretrained(model_args.decoder_model_name_or_path) - tokenizer.pad_token = tokenizer.convert_ids_to_tokens(model.config.pad_token_id) - - model.save_pretrained(model_args.output_dir) - image_processor.save_pretrained(model_args.output_dir) - tokenizer.save_pretrained(model_args.output_dir) - - -if __name__ == "__main__": - main() diff --git a/spaces/chongjie/PoseDiffusion_MVP/models/pose_diffusion_model.py b/spaces/chongjie/PoseDiffusion_MVP/models/pose_diffusion_model.py deleted file mode 100644 index 2ea937efc76cd6de9bd4b415e6c9df8a8e5de421..0000000000000000000000000000000000000000 --- a/spaces/chongjie/PoseDiffusion_MVP/models/pose_diffusion_model.py +++ /dev/null @@ -1,126 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
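The image-captioning helper script above (the one ending in `main()`) parses its `ModelArguments` with `HfArgumentParser`, so each dataclass field surfaces as a `--flag` on the command line. A hypothetical round trip, with a ViT encoder, a GPT-2 decoder and an output path chosen purely for illustration, might look like:

```python
# Hypothetical invocation (checkpoint names and output path are illustrative):
#   python create_model_from_encoder_decoder_models.py \
#       --output_dir ./vit-gpt2-captioning \
#       --encoder_model_name_or_path google/vit-base-patch16-224-in21k \
#       --decoder_model_name_or_path gpt2
#
# The directory written by the script can then be reloaded as a single model:
from transformers import AutoImageProcessor, AutoTokenizer, FlaxVisionEncoderDecoderModel

model = FlaxVisionEncoderDecoderModel.from_pretrained("./vit-gpt2-captioning")
image_processor = AutoImageProcessor.from_pretrained("./vit-gpt2-captioning")
tokenizer = AutoTokenizer.from_pretrained("./vit-gpt2-captioning")

# The script wired these up (falling back to bos/eos when the decoder lacks them),
# which is what makes generate() usable on the combined model.
assert model.config.decoder_start_token_id is not None
assert model.config.pad_token_id is not None
```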
- -# Standard library imports -import base64 -import io -import logging -import math -import pickle -import warnings -from collections import defaultdict -from dataclasses import field, dataclass -from typing import Any, Dict, List, Optional, Tuple, Union - -# Third-party library imports -import numpy as np -import torch -import torch.nn as nn -from PIL import Image - -from pytorch3d.renderer.cameras import CamerasBase -from pytorch3d.transforms import ( - se3_exp_map, - se3_log_map, - Transform3d, - so3_relative_angle, -) -from util.camera_transform import pose_encoding_to_camera - -import models -from hydra.utils import instantiate -from pytorch3d.renderer.cameras import PerspectiveCameras - - -logger = logging.getLogger(__name__) - - -class PoseDiffusionModel(nn.Module): - def __init__( - self, - pose_encoding_type: str, - IMAGE_FEATURE_EXTRACTOR: Dict, - DIFFUSER: Dict, - DENOISER: Dict, - ): - """Initializes a PoseDiffusion model. - - Args: - pose_encoding_type (str): - Defines the encoding type for extrinsics and intrinsics - Currently, only `"absT_quaR_logFL"` is supported - - a concatenation of the translation vector, - rotation quaternion, and logarithm of focal length. - image_feature_extractor_cfg (Dict): - Configuration for the image feature extractor. - diffuser_cfg (Dict): - Configuration for the diffuser. - denoiser_cfg (Dict): - Configuration for the denoiser. - """ - - super().__init__() - - self.pose_encoding_type = pose_encoding_type - - self.image_feature_extractor = instantiate( - IMAGE_FEATURE_EXTRACTOR, _recursive_=False - ) - self.diffuser = instantiate(DIFFUSER, _recursive_=False) - - denoiser = instantiate(DENOISER, _recursive_=False) - self.diffuser.model = denoiser - - self.target_dim = denoiser.target_dim - - def forward( - self, - image: torch.Tensor, - gt_cameras: Optional[CamerasBase] = None, - sequence_name: Optional[List[str]] = None, - cond_fn=None, - cond_start_step=0, - ): - """ - Forward pass of the PoseDiffusionModel. - - Args: - image (torch.Tensor): - Input image tensor, Bx3xHxW. - gt_cameras (Optional[CamerasBase], optional): - Camera object. Defaults to None. - sequence_name (Optional[List[str]], optional): - List of sequence names. Defaults to None. - cond_fn ([type], optional): - Conditional function. Wrapper for GGS or other functions. - cond_start_step (int, optional): - The sampling step to start using conditional function. - - Returns: - PerspectiveCameras: PyTorch3D camera object. 
- """ - - z = self.image_feature_extractor(image) - - z = z.unsqueeze(0) - - B, N, _ = z.shape - target_shape = [B, N, self.target_dim] - - # sampling - pose_encoding, pose_encoding_diffusion_samples = self.diffuser.sample( - shape=target_shape, - z=z, - cond_fn=cond_fn, - cond_start_step=cond_start_step, - ) - - # convert the encoded representation to PyTorch3D cameras - pred_cameras = pose_encoding_to_camera( - pose_encoding, pose_encoding_type=self.pose_encoding_type - ) - - return pred_cameras diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/implementations/ftp.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/implementations/ftp.py deleted file mode 100644 index 7e79877ebdd287e0ab2938345d448f52ab92dc90..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/implementations/ftp.py +++ /dev/null @@ -1,380 +0,0 @@ -import os -import sys -import uuid -import warnings -from ftplib import FTP, Error, error_perm -from typing import Any - -from ..spec import AbstractBufferedFile, AbstractFileSystem -from ..utils import infer_storage_options, isfilelike - - -class FTPFileSystem(AbstractFileSystem): - """A filesystem over classic FTP""" - - root_marker = "/" - cachable = False - protocol = "ftp" - - def __init__( - self, - host, - port=21, - username=None, - password=None, - acct=None, - block_size=None, - tempdir=None, - timeout=30, - encoding="utf-8", - **kwargs, - ): - """ - You can use _get_kwargs_from_urls to get some kwargs from - a reasonable FTP url. - - Authentication will be anonymous if username/password are not - given. - - Parameters - ---------- - host: str - The remote server name/ip to connect to - port: int - Port to connect with - username: str or None - If authenticating, the user's identifier - password: str of None - User's password on the server, if using - acct: str or None - Some servers also need an "account" string for auth - block_size: int or None - If given, the read-ahead or write buffer size. 
- tempdir: str - Directory on remote to put temporary files when in a transaction - timeout: int - Timeout of the ftp connection in seconds - encoding: str - Encoding to use for directories and filenames in FTP connection - """ - super(FTPFileSystem, self).__init__(**kwargs) - self.host = host - self.port = port - self.tempdir = tempdir or "/tmp" - self.cred = username, password, acct - self.timeout = timeout - self.encoding = encoding - if block_size is not None: - self.blocksize = block_size - else: - self.blocksize = 2**16 - self._connect() - - def _connect(self): - if sys.version_info >= (3, 9): - self.ftp = FTP(timeout=self.timeout, encoding=self.encoding) - elif self.encoding: - warnings.warn("`encoding` not supported for python<3.9, ignoring") - self.ftp = FTP(timeout=self.timeout) - else: - self.ftp = FTP(timeout=self.timeout) - self.ftp.connect(self.host, self.port) - self.ftp.login(*self.cred) - - @classmethod - def _strip_protocol(cls, path): - return "/" + infer_storage_options(path)["path"].lstrip("/").rstrip("/") - - @staticmethod - def _get_kwargs_from_urls(urlpath): - out = infer_storage_options(urlpath) - out.pop("path", None) - out.pop("protocol", None) - return out - - def ls(self, path, detail=True, **kwargs): - path = self._strip_protocol(path) - out = [] - if path not in self.dircache: - try: - try: - out = [ - (fn, details) - for (fn, details) in self.ftp.mlsd(path) - if fn not in [".", ".."] - and details["type"] not in ["pdir", "cdir"] - ] - except error_perm: - out = _mlsd2(self.ftp, path) # Not platform independent - for fn, details in out: - if path == "/": - path = "" # just for forming the names, below - details["name"] = "/".join([path, fn.lstrip("/")]) - if details["type"] == "file": - details["size"] = int(details["size"]) - else: - details["size"] = 0 - if details["type"] == "dir": - details["type"] = "directory" - self.dircache[path] = out - except Error: - try: - info = self.info(path) - if info["type"] == "file": - out = [(path, info)] - except (Error, IndexError): - raise FileNotFoundError(path) - files = self.dircache.get(path, out) - if not detail: - return sorted([fn for fn, details in files]) - return [details for fn, details in files] - - def info(self, path, **kwargs): - # implement with direct method - path = self._strip_protocol(path) - if path == "/": - # special case, since this dir has no real entry - return {"name": "/", "size": 0, "type": "directory"} - files = self.ls(self._parent(path).lstrip("/"), True) - try: - out = [f for f in files if f["name"] == path][0] - except IndexError: - raise FileNotFoundError(path) - return out - - def get_file(self, rpath, lpath, **kwargs): - if self.isdir(rpath): - if not os.path.exists(lpath): - os.mkdir(lpath) - return - if isfilelike(lpath): - outfile = lpath - else: - outfile = open(lpath, "wb") - - def cb(x): - outfile.write(x) - - self.ftp.retrbinary( - "RETR %s" % rpath, - blocksize=self.blocksize, - callback=cb, - ) - if not isfilelike(lpath): - outfile.close() - - def cat_file(self, path, start=None, end=None, **kwargs): - if end is not None: - return super().cat_file(path, start, end, **kwargs) - out = [] - - def cb(x): - out.append(x) - - self.ftp.retrbinary( - "RETR %s" % path, - blocksize=self.blocksize, - rest=start, - callback=cb, - ) - return b"".join(out) - - def _open( - self, - path, - mode="rb", - block_size=None, - cache_options=None, - autocommit=True, - **kwargs, - ): - path = self._strip_protocol(path) - block_size = block_size or self.blocksize - return FTPFile( - self, - path, - 
mode=mode, - block_size=block_size, - tempdir=self.tempdir, - autocommit=autocommit, - cache_options=cache_options, - ) - - def _rm(self, path): - path = self._strip_protocol(path) - self.ftp.delete(path) - self.invalidate_cache(self._parent(path)) - - def rm(self, path, recursive=False, maxdepth=None): - paths = self.expand_path(path, recursive=recursive, maxdepth=maxdepth) - for p in reversed(paths): - if self.isfile(p): - self.rm_file(p) - else: - self.rmdir(p) - - def mkdir(self, path: str, create_parents: bool = True, **kwargs: Any) -> None: - path = self._strip_protocol(path) - parent = self._parent(path) - if parent != self.root_marker and not self.exists(parent) and create_parents: - self.mkdir(parent, create_parents=create_parents) - - self.ftp.mkd(path) - self.invalidate_cache(self._parent(path)) - - def makedirs(self, path: str, exist_ok: bool = False) -> None: - path = self._strip_protocol(path) - if self.exists(path): - # NB: "/" does not "exist" as it has no directory entry - if not exist_ok: - raise FileExistsError(f"{path} exists without `exist_ok`") - # exists_ok=True -> no-op - else: - self.mkdir(path, create_parents=True) - - def rmdir(self, path): - path = self._strip_protocol(path) - self.ftp.rmd(path) - self.invalidate_cache(self._parent(path)) - - def mv(self, path1, path2, **kwargs): - path1 = self._strip_protocol(path1) - path2 = self._strip_protocol(path2) - self.ftp.rename(path1, path2) - self.invalidate_cache(self._parent(path1)) - self.invalidate_cache(self._parent(path2)) - - def __del__(self): - self.ftp.close() - - def invalidate_cache(self, path=None): - if path is None: - self.dircache.clear() - else: - self.dircache.pop(path, None) - super(FTPFileSystem, self).invalidate_cache(path) - - -class TransferDone(Exception): - """Internal exception to break out of transfer""" - - pass - - -class FTPFile(AbstractBufferedFile): - """Interact with a remote FTP file with read/write buffering""" - - def __init__( - self, - fs, - path, - mode="rb", - block_size="default", - autocommit=True, - cache_type="readahead", - cache_options=None, - **kwargs, - ): - super().__init__( - fs, - path, - mode=mode, - block_size=block_size, - autocommit=autocommit, - cache_type=cache_type, - cache_options=cache_options, - **kwargs, - ) - if not autocommit: - self.target = self.path - self.path = "/".join([kwargs["tempdir"], str(uuid.uuid4())]) - - def commit(self): - self.fs.mv(self.path, self.target) - - def discard(self): - self.fs.rm(self.path) - - def _fetch_range(self, start, end): - """Get bytes between given byte limits - - Implemented by raising an exception in the fetch callback when the - number of bytes received reaches the requested amount. - - Will fail if the server does not respect the REST command on - retrieve requests. 
- """ - out = [] - total = [0] - - def callback(x): - total[0] += len(x) - if total[0] > end - start: - out.append(x[: (end - start) - total[0]]) - if end < self.size: - raise TransferDone - else: - out.append(x) - - if total[0] == end - start and end < self.size: - raise TransferDone - - try: - self.fs.ftp.retrbinary( - "RETR %s" % self.path, - blocksize=self.blocksize, - rest=start, - callback=callback, - ) - except TransferDone: - try: - # stop transfer, we got enough bytes for this block - self.fs.ftp.abort() - self.fs.ftp.getmultiline() - except Error: - self.fs._connect() - - return b"".join(out) - - def _upload_chunk(self, final=False): - self.buffer.seek(0) - self.fs.ftp.storbinary( - "STOR " + self.path, self.buffer, blocksize=self.blocksize, rest=self.offset - ) - return True - - -def _mlsd2(ftp, path="."): - """ - Fall back to using `dir` instead of `mlsd` if not supported. - - This parses a Linux style `ls -l` response to `dir`, but the response may - be platform dependent. - - Parameters - ---------- - ftp: ftplib.FTP - path: str - Expects to be given path, but defaults to ".". - """ - lines = [] - minfo = [] - ftp.dir(path, lines.append) - for line in lines: - line = line.split() - this = ( - line[-1], - { - "modify": " ".join(line[5:8]), - "unix.owner": line[2], - "unix.group": line[3], - "unix.mode": line[0], - "size": line[4], - }, - ) - if "d" == this[1]["unix.mode"][0]: - this[1]["type"] = "dir" - else: - this[1]["type"] = "file" - minfo.append(this) - return minfo diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/csv-b0b7514a.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/csv-b0b7514a.js deleted file mode 100644 index 511b34b2aed1552447a6605d45d0760eccb992ab..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/csv-b0b7514a.js +++ /dev/null @@ -1,2 +0,0 @@ -import{d as a}from"./dsv-576afacd.js";var s=a(","),v=s.parse,o=s.parseRows;export{v as a,o as c}; -//# sourceMappingURL=csv-b0b7514a.js.map diff --git a/spaces/cihyFjudo/fairness-paper-search/Automagical C Download Free Softwareinstmank Hymnen Tabellarische !!EXCLUSIVE!!.md b/spaces/cihyFjudo/fairness-paper-search/Automagical C Download Free Softwareinstmank Hymnen Tabellarische !!EXCLUSIVE!!.md deleted file mode 100644 index ae907206490f3366e7d13879930d9613f968aa3a..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Automagical C Download Free Softwareinstmank Hymnen Tabellarische !!EXCLUSIVE!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

Automagical C Download Free Softwareinstmank hymnen tabellarische


Download https://tinurli.com/2uwkza



- - aaccfb2cb3
-
-
-
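Returning briefly to the `fsspec` FTP backend deleted further up: `FTPFileSystem`/`FTPFile` register the classic-FTP protocol, and their `ls` listing (with the `_mlsd2` fallback) and REST-offset reads in `_fetch_range` are exercised through the ordinary fsspec entry points. A minimal, hypothetical usage sketch, with the host, credentials and paths as placeholders:

```python
# Sketch only: host, credentials and paths are placeholders, not taken from this diff.
import fsspec

fs = fsspec.filesystem("ftp", host="ftp.example.com", username="demo", password="demo")

print(fs.ls("/", detail=False))              # served by ls() / the _mlsd2 fallback

with fs.open("/pub/readme.txt", "rb", block_size=2**16) as f:
    head = f.read(4096)                      # read-ahead via FTPFile._fetch_range (REST offset)
print(len(head))
```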

diff --git a/spaces/cihyFjudo/fairness-paper-search/Mojo 2 Mia on Steam[1].md b/spaces/cihyFjudo/fairness-paper-search/Mojo 2 Mia on Steam[1].md deleted file mode 100644 index bb6ca45ed704afc6a84ea85933b7629c2a847700..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Mojo 2 Mia on Steam[1].md +++ /dev/null @@ -1,10 +0,0 @@ - -

Mamma Mia! Here We Go Again was premiered at the Hammersmith Apollo in London on July 16, 2018, and was released in the United Kingdom and the United States on July 20, 2018, ten years to the week after its predecessor's release, in both standard and IMAX formats.[5] The film was a box office success, grossing $402 million worldwide and received generally positive reviews, as an improvement over its predecessor with critics praising the performances and musical numbers.[6][7]

-

Mojo 2: Mia [serial Number]


DOWNLOAD ———>>> https://tinurli.com/2uwird



-

Principal photography on the film began on August 12, 2017, in Croatia, including the island of Vis.[21][22][23][24] In October 2017, the cast gathered at Shepperton Studios in Surrey, England, to film song and dance numbers with Cher.[25] Filming wrapped on December 2, 2017.[26]

-

Peter Travers of Rolling Stone awarded the film two and a half stars out of five, noting the absence of Streep for the majority of the film hindered his enjoyment, and saying, "her absence is deeply felt since the three-time Oscar winner sang and danced her heart out as Donna Sheridan".[50] Lindsay Bahr of Associated Press awarded the film three out of four stars, calling it "wholly ridiculous", but complimenting its self-awareness. She also praised James' performance and singing talent.[51] Richard Roeper of the Chicago Sun-Times gave the sequel a mixed review, awarding it two stars out of four, criticizing the reprises of "Dancing Queen" and "Super Trouper" as uninspired, and feeling that some of the musical numbers dragged the pacing. He considered the younger counterparts to the main characters "energetic" and "likeable."[52] Stephanie Zacharek of Time gave the film a mixed review, writing "Mamma Mia! Here We Go Again is atrocious. And wonderful. It's all the reasons you should never go to the movies. And all the reasons you should race to get a ticket."[53]

-

Mia and the heroes discovered that the Monitor had placed the towers in some Earths, and that they would have defended her from the shadow demons. She and Oliver talked privately, where she received a suit courtesy of her father. She and her allies set out to defend the tower so that the planet could be evacuated, where they faced an army of shadow demons, receiving praise from their father. Due to being outnumbered, Oliver asked everyone to protect themselves. Mia noticed that they were trying to enter the tower, also complaining that Ray Palmer talked too much and saving his father from one of the demons who was going to attack him from behind.

-

-

Hunting is allowed on several WMAs and NWRs only during special permit hunts to manage deer populations and prevent crowding. Only permit holders are allowed to engage in any hunting-related activity during permit hunts. Hunter education requirements have been removed for youth hunters 6-15 years of age participating in WMA permit hunts, but the hunter must obtain a free customer ID number by going to www.agfc.com, click on "Buy Licenses" located at the top of the page.

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Trixie Model Sets 132 -.md b/spaces/cihyFjudo/fairness-paper-search/Trixie Model Sets 132 -.md deleted file mode 100644 index eee4c3269e0850aa9cb06ae62844b76f38ad4bf9..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Trixie Model Sets 132 -.md +++ /dev/null @@ -1,8 +0,0 @@ - -

By adding the percentages together a total 'book' of 100% is achieved (representing a fair book). The bookmaker, in his wish to avail himself of a profit, will invariably reduce these odds. Consider the simplest model of reducing, which uses a proportional decreasing of odds. For the above example, the following odds are in the same proportion with regard to their implied probabilities (3:2:1):

-

I received my license in 2018, from the School of Medical and Botanical Esthetics, in Denver. I fell in love with skincare and beauty at a very young age and followed my heart to esthetician school. I love being able to make others feel comfortable in their own skin and helping any skin concerns they have. I currently specialize in Volume and mega volume lash extensions. Before starting my career as a lash artist, I assisted one of the top makeup artists in Colorado. Being around models, brides, and fashion shows fueled my love even more. I cannot wait to meet you and help you achieve your dream lashes!

-

Trixie Model Sets 132 -


Download Zip 🗸🗸🗸 https://tinurli.com/2uwhQC



-

Available to stream beginning Friday, June 3, the eight-episode series will follow Trixie, her partner and property co-owner, David Silver, and her team as they tackle the massive overhaul of a ramshackle mid-century motel in Palm Springs, California. After spending nearly two million dollars to buy the dilapidated property, Trixie and David will recruit a slew of spectacular helpers to complete the project in time to kick off Pride Month with a grand opening extravaganza. To help out with the epic undertaking, Trixie secures "free labor" from friends, including hospitality mogul Lisa Vanderpump, comedian Nicole Byer, actor & musician Zooey Deschanel, Property Brother star Jonathan Scott, and drag queen/partner in crime Katya. More famous friends lend a hand during the season, including award-winning actor Leslie Jordan, musician and model Iggy Azalea, actor and television host Jonathan Bennett, and musician Belinda Carlisle.

-

To get the motel in drag, Trixie and David also will enlist fashion and interior designer Dani Dazey, who will bring her maximalist, elevated kitschy style to each space, alongside no-nonsense Palm Springs project manager David Rios. Throughout the season, even more of Trixie's most fabulous friends will pop in to participate in the process, including social media star and comedian Brittany Broski; actor and model Gigi Gorgeous; actor Emily Hampshire; and recording artist Orville Peck. Social media superstars The Old Gays; Nic Scheppard and Jenson Titus of Very Gay Paint; and renowned drag queens Mo Heart, Jaida Essence Hall, and alien drag queen Juno Birch also will make an appearance. With even more legendary guests at the grand opening party, the season will be flush with star power to ring in the fiercest spot Palm Springs has to offer.

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/data.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/data.py deleted file mode 100644 index e6ba9a976c2aa4cabbf0a6031400f0d910b59ac3..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/data.py +++ /dev/null @@ -1,78 +0,0 @@ -from __future__ import annotations - -from typing import TYPE_CHECKING, Any - -import numpy as np - -if TYPE_CHECKING: - from contourpy._contourpy import CoordinateArray - - -def simple( - shape: tuple[int, int], want_mask: bool = False, -) -> tuple[CoordinateArray, CoordinateArray, CoordinateArray | np.ma.MaskedArray[Any, Any]]: - """Return simple test data consisting of the sum of two gaussians. - - Args: - shape (tuple(int, int)): 2D shape of data to return. - want_mask (bool, optional): Whether test data should be masked or not, default ``False``. - - Return: - Tuple of 3 arrays: ``x``, ``y``, ``z`` test data, ``z`` will be masked if - ``want_mask=True``. - """ - ny, nx = shape - x = np.arange(nx, dtype=np.float64) - y = np.arange(ny, dtype=np.float64) - x, y = np.meshgrid(x, y) - - xscale = nx - 1.0 - yscale = ny - 1.0 - - # z is sum of 2D gaussians. - amp = np.asarray([1.0, -1.0, 0.8, -0.9, 0.7]) - mid = np.asarray([[0.4, 0.2], [0.3, 0.8], [0.9, 0.75], [0.7, 0.3], [0.05, 0.7]]) - width = np.asarray([0.4, 0.2, 0.2, 0.2, 0.1]) - - z = np.zeros_like(x) - for i in range(len(amp)): - z += amp[i]*np.exp(-((x/xscale - mid[i, 0])**2 + (y/yscale - mid[i, 1])**2) / width[i]**2) - - if want_mask: - mask = np.logical_or( - ((x/xscale - 1.0)**2 / 0.2 + (y/yscale - 0.0)**2 / 0.1) < 1.0, - ((x/xscale - 0.2)**2 / 0.02 + (y/yscale - 0.45)**2 / 0.08) < 1.0, - ) - z = np.ma.array(z, mask=mask) # type: ignore[no-untyped-call] - - return x, y, z - - -def random( - shape: tuple[int, int], seed: int = 2187, mask_fraction: float = 0.0, -) -> tuple[CoordinateArray, CoordinateArray, CoordinateArray | np.ma.MaskedArray[Any, Any]]: - """Return random test data.. - - Args: - shape (tuple(int, int)): 2D shape of data to return. - seed (int, optional): Seed for random number generator, default 2187. - mask_fraction (float, optional): Fraction of elements to mask, default 0. - - Return: - Tuple of 3 arrays: ``x``, ``y``, ``z`` test data, ``z`` will be masked if - ``mask_fraction`` is greater than zero. - """ - ny, nx = shape - x = np.arange(nx, dtype=np.float64) - y = np.arange(ny, dtype=np.float64) - x, y = np.meshgrid(x, y) - - rng = np.random.default_rng(seed) - z = rng.uniform(size=shape) - - if mask_fraction > 0.0: - mask_fraction = min(mask_fraction, 0.99) - mask = rng.uniform(size=shape) < mask_fraction - z = np.ma.array(z, mask=mask) # type: ignore[no-untyped-call] - - return x, y, z diff --git a/spaces/colakin/video-generater/public/assets/css/fontawesome-all.min.css b/spaces/colakin/video-generater/public/assets/css/fontawesome-all.min.css deleted file mode 100644 index 03c42e37feba4ccd3507696d312f71be11de8b43..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/assets/css/fontawesome-all.min.css +++ /dev/null @@ -1,101 +0,0 @@ -/*! 
- * Font Awesome Free 5.15.4 by @fontawesome - https://fontawesome.com - * License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) - */ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -.fa,.fab,.fad,.fal,.far,.fas{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;display:inline-block;font-style:normal;font-variant:normal;text-rendering:auto;line-height:1}.fa-lg{font-size:1.33333em;line-height:.75em;vertical-align:-.0667em}.fa-xs{font-size:.75em}.fa-sm{font-size:.875em}.fa-1x{font-size:1em}.fa-2x{font-size:2em}.fa-3x{font-size:3em}.fa-4x{font-size:4em}.fa-5x{font-size:5em}.fa-6x{font-size:6em}.fa-7x{font-size:7em}.fa-8x{font-size:8em}.fa-9x{font-size:9em}.fa-10x{font-size:10em}.fa-fw{text-align:center;width:1.25em}.fa-ul{list-style-type:none;margin-left:2.5em;padding-left:0}.fa-ul>li{position:relative}.fa-li{left:-2em;position:absolute;text-align:center;width:2em;line-height:inherit}.fa-border{border:.08em solid #eee;border-radius:.1em;padding:.2em .25em .15em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa.fa-pull-left,.fab.fa-pull-left,.fal.fa-pull-left,.far.fa-pull-left,.fas.fa-pull-left{margin-right:.3em}.fa.fa-pull-right,.fab.fa-pull-right,.fal.fa-pull-right,.far.fa-pull-right,.fas.fa-pull-right{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s linear infinite;animation:fa-spin 2s linear infinite}.fa-pulse{-webkit-animation:fa-spin 1s steps(8) infinite;animation:fa-spin 1s steps(8) infinite}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(1turn);transform:rotate(1turn)}}@keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(1turn);transform:rotate(1turn)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scaleX(-1);transform:scaleX(-1)}.fa-flip-vertical{-webkit-transform:scaleY(-1);transform:scaleY(-1)}.fa-flip-both,.fa-flip-horizontal.fa-flip-vertical,.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)"}.fa-flip-both,.fa-flip-horizontal.fa-flip-vertical{-webkit-transform:scale(-1);transform:scale(-1)}:root .fa-flip-both,:root .fa-flip-horizontal,:root .fa-flip-vertical,:root .fa-rotate-90,:root .fa-rotate-180,:root 
.fa-rotate-270{-webkit-filter:none;filter:none}.fa-stack{display:inline-block;height:2em;line-height:2em;position:relative;vertical-align:middle;width:2.5em}.fa-stack-1x,.fa-stack-2x{left:0;position:absolute;text-align:center;width:100%}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-500px:before{content:"\f26e"}.fa-accessible-icon:before{content:"\f368"}.fa-accusoft:before{content:"\f369"}.fa-acquisitions-incorporated:before{content:"\f6af"}.fa-ad:before{content:"\f641"}.fa-address-book:before{content:"\f2b9"}.fa-address-card:before{content:"\f2bb"}.fa-adjust:before{content:"\f042"}.fa-adn:before{content:"\f170"}.fa-adversal:before{content:"\f36a"}.fa-affiliatetheme:before{content:"\f36b"}.fa-air-freshener:before{content:"\f5d0"}.fa-airbnb:before{content:"\f834"}.fa-algolia:before{content:"\f36c"}.fa-align-center:before{content:"\f037"}.fa-align-justify:before{content:"\f039"}.fa-align-left:before{content:"\f036"}.fa-align-right:before{content:"\f038"}.fa-alipay:before{content:"\f642"}.fa-allergies:before{content:"\f461"}.fa-amazon:before{content:"\f270"}.fa-amazon-pay:before{content:"\f42c"}.fa-ambulance:before{content:"\f0f9"}.fa-american-sign-language-interpreting:before{content:"\f2a3"}.fa-amilia:before{content:"\f36d"}.fa-anchor:before{content:"\f13d"}.fa-android:before{content:"\f17b"}.fa-angellist:before{content:"\f209"}.fa-angle-double-down:before{content:"\f103"}.fa-angle-double-left:before{content:"\f100"}.fa-angle-double-right:before{content:"\f101"}.fa-angle-double-up:before{content:"\f102"}.fa-angle-down:before{content:"\f107"}.fa-angle-left:before{content:"\f104"}.fa-angle-right:before{content:"\f105"}.fa-angle-up:before{content:"\f106"}.fa-angry:before{content:"\f556"}.fa-angrycreative:before{content:"\f36e"}.fa-angular:before{content:"\f420"}.fa-ankh:before{content:"\f644"}.fa-app-store:before{content:"\f36f"}.fa-app-store-ios:before{content:"\f370"}.fa-apper:before{content:"\f371"}.fa-apple:before{content:"\f179"}.fa-apple-alt:before{content:"\f5d1"}.fa-apple-pay:before{content:"\f415"}.fa-archive:before{content:"\f187"}.fa-archway:before{content:"\f557"}.fa-arrow-alt-circle-down:before{content:"\f358"}.fa-arrow-alt-circle-left:before{content:"\f359"}.fa-arrow-alt-circle-right:before{content:"\f35a"}.fa-arrow-alt-circle-up:before{content:"\f35b"}.fa-arrow-circle-down:before{content:"\f0ab"}.fa-arrow-circle-left:before{content:"\f0a8"}.fa-arrow-circle-right:before{content:"\f0a9"}.fa-arrow-circle-up:before{content:"\f0aa"}.fa-arrow-down:before{content:"\f063"}.fa-arrow-left:before{content:"\f060"}.fa-arrow-right:before{content:"\f061"}.fa-arrow-up:before{content:"\f062"}.fa-arrows-alt:before{content:"\f0b2"}.fa-arrows-alt-h:before{content:"\f337"}.fa-arrows-alt-v:before{content:"\f338"}.fa-artstation:before{content:"\f77a"}.fa-assistive-listening-systems:before{content:"\f2a2"}.fa-asterisk:before{content:"\f069"}.fa-asymmetrik:before{content:"\f372"}.fa-at:before{content:"\f1fa"}.fa-atlas:before{content:"\f558"}.fa-atlassian:before{content:"\f77b"}.fa-atom:before{content:"\f5d2"}.fa-audible:before{content:"\f373"}.fa-audio-description:before{content:"\f29e"}.fa-autoprefixer:before{content:"\f41c"}.fa-avianex:before{content:"\f374"}.fa-aviato:before{content:"\f421"}.fa-award:before{content:"\f559"}.fa-aws:before{content:"\f375"}.fa-baby:before{content:"\f77c"}.fa-baby-carriage:before{content:"\f77d"}.fa-backspace:before{content:"\f55a"}.fa-backward:before{content:"\f04a"}.fa-bacon:before{content:"\f7e5"}.fa-bacteria:before{content:"\e05
9"}.fa-bacterium:before{content:"\e05a"}.fa-bahai:before{content:"\f666"}.fa-balance-scale:before{content:"\f24e"}.fa-balance-scale-left:before{content:"\f515"}.fa-balance-scale-right:before{content:"\f516"}.fa-ban:before{content:"\f05e"}.fa-band-aid:before{content:"\f462"}.fa-bandcamp:before{content:"\f2d5"}.fa-barcode:before{content:"\f02a"}.fa-bars:before{content:"\f0c9"}.fa-baseball-ball:before{content:"\f433"}.fa-basketball-ball:before{content:"\f434"}.fa-bath:before{content:"\f2cd"}.fa-battery-empty:before{content:"\f244"}.fa-battery-full:before{content:"\f240"}.fa-battery-half:before{content:"\f242"}.fa-battery-quarter:before{content:"\f243"}.fa-battery-three-quarters:before{content:"\f241"}.fa-battle-net:before{content:"\f835"}.fa-bed:before{content:"\f236"}.fa-beer:before{content:"\f0fc"}.fa-behance:before{content:"\f1b4"}.fa-behance-square:before{content:"\f1b5"}.fa-bell:before{content:"\f0f3"}.fa-bell-slash:before{content:"\f1f6"}.fa-bezier-curve:before{content:"\f55b"}.fa-bible:before{content:"\f647"}.fa-bicycle:before{content:"\f206"}.fa-biking:before{content:"\f84a"}.fa-bimobject:before{content:"\f378"}.fa-binoculars:before{content:"\f1e5"}.fa-biohazard:before{content:"\f780"}.fa-birthday-cake:before{content:"\f1fd"}.fa-bitbucket:before{content:"\f171"}.fa-bitcoin:before{content:"\f379"}.fa-bity:before{content:"\f37a"}.fa-black-tie:before{content:"\f27e"}.fa-blackberry:before{content:"\f37b"}.fa-blender:before{content:"\f517"}.fa-blender-phone:before{content:"\f6b6"}.fa-blind:before{content:"\f29d"}.fa-blog:before{content:"\f781"}.fa-blogger:before{content:"\f37c"}.fa-blogger-b:before{content:"\f37d"}.fa-bluetooth:before{content:"\f293"}.fa-bluetooth-b:before{content:"\f294"}.fa-bold:before{content:"\f032"}.fa-bolt:before{content:"\f0e7"}.fa-bomb:before{content:"\f1e2"}.fa-bone:before{content:"\f5d7"}.fa-bong:before{content:"\f55c"}.fa-book:before{content:"\f02d"}.fa-book-dead:before{content:"\f6b7"}.fa-book-medical:before{content:"\f7e6"}.fa-book-open:before{content:"\f518"}.fa-book-reader:before{content:"\f5da"}.fa-bookmark:before{content:"\f02e"}.fa-bootstrap:before{content:"\f836"}.fa-border-all:before{content:"\f84c"}.fa-border-none:before{content:"\f850"}.fa-border-style:before{content:"\f853"}.fa-bowling-ball:before{content:"\f436"}.fa-box:before{content:"\f466"}.fa-box-open:before{content:"\f49e"}.fa-box-tissue:before{content:"\e05b"}.fa-boxes:before{content:"\f468"}.fa-braille:before{content:"\f2a1"}.fa-brain:before{content:"\f5dc"}.fa-bread-slice:before{content:"\f7ec"}.fa-briefcase:before{content:"\f0b1"}.fa-briefcase-medical:before{content:"\f469"}.fa-broadcast-tower:before{content:"\f519"}.fa-broom:before{content:"\f51a"}.fa-brush:before{content:"\f55d"}.fa-btc:before{content:"\f15a"}.fa-buffer:before{content:"\f837"}.fa-bug:before{content:"\f188"}.fa-building:before{content:"\f1ad"}.fa-bullhorn:before{content:"\f0a1"}.fa-bullseye:before{content:"\f140"}.fa-burn:before{content:"\f46a"}.fa-buromobelexperte:before{content:"\f37f"}.fa-bus:before{content:"\f207"}.fa-bus-alt:before{content:"\f55e"}.fa-business-time:before{content:"\f64a"}.fa-buy-n-large:before{content:"\f8a6"}.fa-buysellads:before{content:"\f20d"}.fa-calculator:before{content:"\f1ec"}.fa-calendar:before{content:"\f133"}.fa-calendar-alt:before{content:"\f073"}.fa-calendar-check:before{content:"\f274"}.fa-calendar-day:before{content:"\f783"}.fa-calendar-minus:before{content:"\f272"}.fa-calendar-plus:before{content:"\f271"}.fa-calendar-times:before{content:"\f273"}.fa-calendar-week:before{content:"\f784"}
.fa-camera:before{content:"\f030"}.fa-camera-retro:before{content:"\f083"}.fa-campground:before{content:"\f6bb"}.fa-canadian-maple-leaf:before{content:"\f785"}.fa-candy-cane:before{content:"\f786"}.fa-cannabis:before{content:"\f55f"}.fa-capsules:before{content:"\f46b"}.fa-car:before{content:"\f1b9"}.fa-car-alt:before{content:"\f5de"}.fa-car-battery:before{content:"\f5df"}.fa-car-crash:before{content:"\f5e1"}.fa-car-side:before{content:"\f5e4"}.fa-caravan:before{content:"\f8ff"}.fa-caret-down:before{content:"\f0d7"}.fa-caret-left:before{content:"\f0d9"}.fa-caret-right:before{content:"\f0da"}.fa-caret-square-down:before{content:"\f150"}.fa-caret-square-left:before{content:"\f191"}.fa-caret-square-right:before{content:"\f152"}.fa-caret-square-up:before{content:"\f151"}.fa-caret-up:before{content:"\f0d8"}.fa-carrot:before{content:"\f787"}.fa-cart-arrow-down:before{content:"\f218"}.fa-cart-plus:before{content:"\f217"}.fa-cash-register:before{content:"\f788"}.fa-cat:before{content:"\f6be"}.fa-cc-amazon-pay:before{content:"\f42d"}.fa-cc-amex:before{content:"\f1f3"}.fa-cc-apple-pay:before{content:"\f416"}.fa-cc-diners-club:before{content:"\f24c"}.fa-cc-discover:before{content:"\f1f2"}.fa-cc-jcb:before{content:"\f24b"}.fa-cc-mastercard:before{content:"\f1f1"}.fa-cc-paypal:before{content:"\f1f4"}.fa-cc-stripe:before{content:"\f1f5"}.fa-cc-visa:before{content:"\f1f0"}.fa-centercode:before{content:"\f380"}.fa-centos:before{content:"\f789"}.fa-certificate:before{content:"\f0a3"}.fa-chair:before{content:"\f6c0"}.fa-chalkboard:before{content:"\f51b"}.fa-chalkboard-teacher:before{content:"\f51c"}.fa-charging-station:before{content:"\f5e7"}.fa-chart-area:before{content:"\f1fe"}.fa-chart-bar:before{content:"\f080"}.fa-chart-line:before{content:"\f201"}.fa-chart-pie:before{content:"\f200"}.fa-check:before{content:"\f00c"}.fa-check-circle:before{content:"\f058"}.fa-check-double:before{content:"\f560"}.fa-check-square:before{content:"\f14a"}.fa-cheese:before{content:"\f7ef"}.fa-chess:before{content:"\f439"}.fa-chess-bishop:before{content:"\f43a"}.fa-chess-board:before{content:"\f43c"}.fa-chess-king:before{content:"\f43f"}.fa-chess-knight:before{content:"\f441"}.fa-chess-pawn:before{content:"\f443"}.fa-chess-queen:before{content:"\f445"}.fa-chess-rook:before{content:"\f447"}.fa-chevron-circle-down:before{content:"\f13a"}.fa-chevron-circle-left:before{content:"\f137"}.fa-chevron-circle-right:before{content:"\f138"}.fa-chevron-circle-up:before{content:"\f139"}.fa-chevron-down:before{content:"\f078"}.fa-chevron-left:before{content:"\f053"}.fa-chevron-right:before{content:"\f054"}.fa-chevron-up:before{content:"\f077"}.fa-child:before{content:"\f1ae"}.fa-chrome:before{content:"\f268"}.fa-chromecast:before{content:"\f838"}.fa-church:before{content:"\f51d"}.fa-circle:before{content:"\f111"}.fa-circle-notch:before{content:"\f1ce"}.fa-city:before{content:"\f64f"}.fa-clinic-medical:before{content:"\f7f2"}.fa-clipboard:before{content:"\f328"}.fa-clipboard-check:before{content:"\f46c"}.fa-clipboard-list:before{content:"\f46d"}.fa-clock:before{content:"\f017"}.fa-clone:before{content:"\f24d"}.fa-closed-captioning:before{content:"\f20a"}.fa-cloud:before{content:"\f0c2"}.fa-cloud-download-alt:before{content:"\f381"}.fa-cloud-meatball:before{content:"\f73b"}.fa-cloud-moon:before{content:"\f6c3"}.fa-cloud-moon-rain:before{content:"\f73c"}.fa-cloud-rain:before{content:"\f73d"}.fa-cloud-showers-heavy:before{content:"\f740"}.fa-cloud-sun:before{content:"\f6c4"}.fa-cloud-sun-rain:before{content:"\f743"}.fa-cloud-upload-alt:before{co
ntent:"\f382"}.fa-cloudflare:before{content:"\e07d"}.fa-cloudscale:before{content:"\f383"}.fa-cloudsmith:before{content:"\f384"}.fa-cloudversify:before{content:"\f385"}.fa-cocktail:before{content:"\f561"}.fa-code:before{content:"\f121"}.fa-code-branch:before{content:"\f126"}.fa-codepen:before{content:"\f1cb"}.fa-codiepie:before{content:"\f284"}.fa-coffee:before{content:"\f0f4"}.fa-cog:before{content:"\f013"}.fa-cogs:before{content:"\f085"}.fa-coins:before{content:"\f51e"}.fa-columns:before{content:"\f0db"}.fa-comment:before{content:"\f075"}.fa-comment-alt:before{content:"\f27a"}.fa-comment-dollar:before{content:"\f651"}.fa-comment-dots:before{content:"\f4ad"}.fa-comment-medical:before{content:"\f7f5"}.fa-comment-slash:before{content:"\f4b3"}.fa-comments:before{content:"\f086"}.fa-comments-dollar:before{content:"\f653"}.fa-compact-disc:before{content:"\f51f"}.fa-compass:before{content:"\f14e"}.fa-compress:before{content:"\f066"}.fa-compress-alt:before{content:"\f422"}.fa-compress-arrows-alt:before{content:"\f78c"}.fa-concierge-bell:before{content:"\f562"}.fa-confluence:before{content:"\f78d"}.fa-connectdevelop:before{content:"\f20e"}.fa-contao:before{content:"\f26d"}.fa-cookie:before{content:"\f563"}.fa-cookie-bite:before{content:"\f564"}.fa-copy:before{content:"\f0c5"}.fa-copyright:before{content:"\f1f9"}.fa-cotton-bureau:before{content:"\f89e"}.fa-couch:before{content:"\f4b8"}.fa-cpanel:before{content:"\f388"}.fa-creative-commons:before{content:"\f25e"}.fa-creative-commons-by:before{content:"\f4e7"}.fa-creative-commons-nc:before{content:"\f4e8"}.fa-creative-commons-nc-eu:before{content:"\f4e9"}.fa-creative-commons-nc-jp:before{content:"\f4ea"}.fa-creative-commons-nd:before{content:"\f4eb"}.fa-creative-commons-pd:before{content:"\f4ec"}.fa-creative-commons-pd-alt:before{content:"\f4ed"}.fa-creative-commons-remix:before{content:"\f4ee"}.fa-creative-commons-sa:before{content:"\f4ef"}.fa-creative-commons-sampling:before{content:"\f4f0"}.fa-creative-commons-sampling-plus:before{content:"\f4f1"}.fa-creative-commons-share:before{content:"\f4f2"}.fa-creative-commons-zero:before{content:"\f4f3"}.fa-credit-card:before{content:"\f09d"}.fa-critical-role:before{content:"\f6c9"}.fa-crop:before{content:"\f125"}.fa-crop-alt:before{content:"\f565"}.fa-cross:before{content:"\f654"}.fa-crosshairs:before{content:"\f05b"}.fa-crow:before{content:"\f520"}.fa-crown:before{content:"\f521"}.fa-crutch:before{content:"\f7f7"}.fa-css3:before{content:"\f13c"}.fa-css3-alt:before{content:"\f38b"}.fa-cube:before{content:"\f1b2"}.fa-cubes:before{content:"\f1b3"}.fa-cut:before{content:"\f0c4"}.fa-cuttlefish:before{content:"\f38c"}.fa-d-and-d:before{content:"\f38d"}.fa-d-and-d-beyond:before{content:"\f6ca"}.fa-dailymotion:before{content:"\e052"}.fa-dashcube:before{content:"\f210"}.fa-database:before{content:"\f1c0"}.fa-deaf:before{content:"\f2a4"}.fa-deezer:before{content:"\e077"}.fa-delicious:before{content:"\f1a5"}.fa-democrat:before{content:"\f747"}.fa-deploydog:before{content:"\f38e"}.fa-deskpro:before{content:"\f38f"}.fa-desktop:before{content:"\f108"}.fa-dev:before{content:"\f6cc"}.fa-deviantart:before{content:"\f1bd"}.fa-dharmachakra:before{content:"\f655"}.fa-dhl:before{content:"\f790"}.fa-diagnoses:before{content:"\f470"}.fa-diaspora:before{content:"\f791"}.fa-dice:before{content:"\f522"}.fa-dice-d20:before{content:"\f6cf"}.fa-dice-d6:before{content:"\f6d1"}.fa-dice-five:before{content:"\f523"}.fa-dice-four:before{content:"\f524"}.fa-dice-one:before{content:"\f525"}.fa-dice-six:before{content:"\f526"}.fa-dice-three:
before{content:"\f527"}.fa-dice-two:before{content:"\f528"}.fa-digg:before{content:"\f1a6"}.fa-digital-ocean:before{content:"\f391"}.fa-digital-tachograph:before{content:"\f566"}.fa-directions:before{content:"\f5eb"}.fa-discord:before{content:"\f392"}.fa-discourse:before{content:"\f393"}.fa-disease:before{content:"\f7fa"}.fa-divide:before{content:"\f529"}.fa-dizzy:before{content:"\f567"}.fa-dna:before{content:"\f471"}.fa-dochub:before{content:"\f394"}.fa-docker:before{content:"\f395"}.fa-dog:before{content:"\f6d3"}.fa-dollar-sign:before{content:"\f155"}.fa-dolly:before{content:"\f472"}.fa-dolly-flatbed:before{content:"\f474"}.fa-donate:before{content:"\f4b9"}.fa-door-closed:before{content:"\f52a"}.fa-door-open:before{content:"\f52b"}.fa-dot-circle:before{content:"\f192"}.fa-dove:before{content:"\f4ba"}.fa-download:before{content:"\f019"}.fa-draft2digital:before{content:"\f396"}.fa-drafting-compass:before{content:"\f568"}.fa-dragon:before{content:"\f6d5"}.fa-draw-polygon:before{content:"\f5ee"}.fa-dribbble:before{content:"\f17d"}.fa-dribbble-square:before{content:"\f397"}.fa-dropbox:before{content:"\f16b"}.fa-drum:before{content:"\f569"}.fa-drum-steelpan:before{content:"\f56a"}.fa-drumstick-bite:before{content:"\f6d7"}.fa-drupal:before{content:"\f1a9"}.fa-dumbbell:before{content:"\f44b"}.fa-dumpster:before{content:"\f793"}.fa-dumpster-fire:before{content:"\f794"}.fa-dungeon:before{content:"\f6d9"}.fa-dyalog:before{content:"\f399"}.fa-earlybirds:before{content:"\f39a"}.fa-ebay:before{content:"\f4f4"}.fa-edge:before{content:"\f282"}.fa-edge-legacy:before{content:"\e078"}.fa-edit:before{content:"\f044"}.fa-egg:before{content:"\f7fb"}.fa-eject:before{content:"\f052"}.fa-elementor:before{content:"\f430"}.fa-ellipsis-h:before{content:"\f141"}.fa-ellipsis-v:before{content:"\f142"}.fa-ello:before{content:"\f5f1"}.fa-ember:before{content:"\f423"}.fa-empire:before{content:"\f1d1"}.fa-envelope:before{content:"\f0e0"}.fa-envelope-open:before{content:"\f2b6"}.fa-envelope-open-text:before{content:"\f658"}.fa-envelope-square:before{content:"\f199"}.fa-envira:before{content:"\f299"}.fa-equals:before{content:"\f52c"}.fa-eraser:before{content:"\f12d"}.fa-erlang:before{content:"\f39d"}.fa-ethereum:before{content:"\f42e"}.fa-ethernet:before{content:"\f796"}.fa-etsy:before{content:"\f2d7"}.fa-euro-sign:before{content:"\f153"}.fa-evernote:before{content:"\f839"}.fa-exchange-alt:before{content:"\f362"}.fa-exclamation:before{content:"\f12a"}.fa-exclamation-circle:before{content:"\f06a"}.fa-exclamation-triangle:before{content:"\f071"}.fa-expand:before{content:"\f065"}.fa-expand-alt:before{content:"\f424"}.fa-expand-arrows-alt:before{content:"\f31e"}.fa-expeditedssl:before{content:"\f23e"}.fa-external-link-alt:before{content:"\f35d"}.fa-external-link-square-alt:before{content:"\f360"}.fa-eye:before{content:"\f06e"}.fa-eye-dropper:before{content:"\f1fb"}.fa-eye-slash:before{content:"\f070"}.fa-facebook:before{content:"\f09a"}.fa-facebook-f:before{content:"\f39e"}.fa-facebook-messenger:before{content:"\f39f"}.fa-facebook-square:before{content:"\f082"}.fa-fan:before{content:"\f863"}.fa-fantasy-flight-games:before{content:"\f6dc"}.fa-fast-backward:before{content:"\f049"}.fa-fast-forward:before{content:"\f050"}.fa-faucet:before{content:"\e005"}.fa-fax:before{content:"\f1ac"}.fa-feather:before{content:"\f52d"}.fa-feather-alt:before{content:"\f56b"}.fa-fedex:before{content:"\f797"}.fa-fedora:before{content:"\f798"}.fa-female:before{content:"\f182"}.fa-fighter-jet:before{content:"\f0fb"}.fa-figma:before{content:"\f799"}.fa-f
ile:before{content:"\f15b"}.fa-file-alt:before{content:"\f15c"}.fa-file-archive:before{content:"\f1c6"}.fa-file-audio:before{content:"\f1c7"}.fa-file-code:before{content:"\f1c9"}.fa-file-contract:before{content:"\f56c"}.fa-file-csv:before{content:"\f6dd"}.fa-file-download:before{content:"\f56d"}.fa-file-excel:before{content:"\f1c3"}.fa-file-export:before{content:"\f56e"}.fa-file-image:before{content:"\f1c5"}.fa-file-import:before{content:"\f56f"}.fa-file-invoice:before{content:"\f570"}.fa-file-invoice-dollar:before{content:"\f571"}.fa-file-medical:before{content:"\f477"}.fa-file-medical-alt:before{content:"\f478"}.fa-file-pdf:before{content:"\f1c1"}.fa-file-powerpoint:before{content:"\f1c4"}.fa-file-prescription:before{content:"\f572"}.fa-file-signature:before{content:"\f573"}.fa-file-upload:before{content:"\f574"}.fa-file-video:before{content:"\f1c8"}.fa-file-word:before{content:"\f1c2"}.fa-fill:before{content:"\f575"}.fa-fill-drip:before{content:"\f576"}.fa-film:before{content:"\f008"}.fa-filter:before{content:"\f0b0"}.fa-fingerprint:before{content:"\f577"}.fa-fire:before{content:"\f06d"}.fa-fire-alt:before{content:"\f7e4"}.fa-fire-extinguisher:before{content:"\f134"}.fa-firefox:before{content:"\f269"}.fa-firefox-browser:before{content:"\e007"}.fa-first-aid:before{content:"\f479"}.fa-first-order:before{content:"\f2b0"}.fa-first-order-alt:before{content:"\f50a"}.fa-firstdraft:before{content:"\f3a1"}.fa-fish:before{content:"\f578"}.fa-fist-raised:before{content:"\f6de"}.fa-flag:before{content:"\f024"}.fa-flag-checkered:before{content:"\f11e"}.fa-flag-usa:before{content:"\f74d"}.fa-flask:before{content:"\f0c3"}.fa-flickr:before{content:"\f16e"}.fa-flipboard:before{content:"\f44d"}.fa-flushed:before{content:"\f579"}.fa-fly:before{content:"\f417"}.fa-folder:before{content:"\f07b"}.fa-folder-minus:before{content:"\f65d"}.fa-folder-open:before{content:"\f07c"}.fa-folder-plus:before{content:"\f65e"}.fa-font:before{content:"\f031"}.fa-font-awesome:before{content:"\f2b4"}.fa-font-awesome-alt:before{content:"\f35c"}.fa-font-awesome-flag:before{content:"\f425"}.fa-font-awesome-logo-full:before{content:"\f4e6"}.fa-fonticons:before{content:"\f280"}.fa-fonticons-fi:before{content:"\f3a2"}.fa-football-ball:before{content:"\f44e"}.fa-fort-awesome:before{content:"\f286"}.fa-fort-awesome-alt:before{content:"\f3a3"}.fa-forumbee:before{content:"\f211"}.fa-forward:before{content:"\f04e"}.fa-foursquare:before{content:"\f180"}.fa-free-code-camp:before{content:"\f2c5"}.fa-freebsd:before{content:"\f3a4"}.fa-frog:before{content:"\f52e"}.fa-frown:before{content:"\f119"}.fa-frown-open:before{content:"\f57a"}.fa-fulcrum:before{content:"\f50b"}.fa-funnel-dollar:before{content:"\f662"}.fa-futbol:before{content:"\f1e3"}.fa-galactic-republic:before{content:"\f50c"}.fa-galactic-senate:before{content:"\f50d"}.fa-gamepad:before{content:"\f11b"}.fa-gas-pump:before{content:"\f52f"}.fa-gavel:before{content:"\f0e3"}.fa-gem:before{content:"\f3a5"}.fa-genderless:before{content:"\f22d"}.fa-get-pocket:before{content:"\f265"}.fa-gg:before{content:"\f260"}.fa-gg-circle:before{content:"\f261"}.fa-ghost:before{content:"\f6e2"}.fa-gift:before{content:"\f06b"}.fa-gifts:before{content:"\f79c"}.fa-git:before{content:"\f1d3"}.fa-git-alt:before{content:"\f841"}.fa-git-square:before{content:"\f1d2"}.fa-github:before{content:"\f09b"}.fa-github-alt:before{content:"\f113"}.fa-github-square:before{content:"\f092"}.fa-gitkraken:before{content:"\f3a6"}.fa-gitlab:before{content:"\f296"}.fa-gitter:before{content:"\f426"}.fa-glass-cheers:before{content
:"\f79f"}.fa-glass-martini:before{content:"\f000"}.fa-glass-martini-alt:before{content:"\f57b"}.fa-glass-whiskey:before{content:"\f7a0"}.fa-glasses:before{content:"\f530"}.fa-glide:before{content:"\f2a5"}.fa-glide-g:before{content:"\f2a6"}.fa-globe:before{content:"\f0ac"}.fa-globe-africa:before{content:"\f57c"}.fa-globe-americas:before{content:"\f57d"}.fa-globe-asia:before{content:"\f57e"}.fa-globe-europe:before{content:"\f7a2"}.fa-gofore:before{content:"\f3a7"}.fa-golf-ball:before{content:"\f450"}.fa-goodreads:before{content:"\f3a8"}.fa-goodreads-g:before{content:"\f3a9"}.fa-google:before{content:"\f1a0"}.fa-google-drive:before{content:"\f3aa"}.fa-google-pay:before{content:"\e079"}.fa-google-play:before{content:"\f3ab"}.fa-google-plus:before{content:"\f2b3"}.fa-google-plus-g:before{content:"\f0d5"}.fa-google-plus-square:before{content:"\f0d4"}.fa-google-wallet:before{content:"\f1ee"}.fa-gopuram:before{content:"\f664"}.fa-graduation-cap:before{content:"\f19d"}.fa-gratipay:before{content:"\f184"}.fa-grav:before{content:"\f2d6"}.fa-greater-than:before{content:"\f531"}.fa-greater-than-equal:before{content:"\f532"}.fa-grimace:before{content:"\f57f"}.fa-grin:before{content:"\f580"}.fa-grin-alt:before{content:"\f581"}.fa-grin-beam:before{content:"\f582"}.fa-grin-beam-sweat:before{content:"\f583"}.fa-grin-hearts:before{content:"\f584"}.fa-grin-squint:before{content:"\f585"}.fa-grin-squint-tears:before{content:"\f586"}.fa-grin-stars:before{content:"\f587"}.fa-grin-tears:before{content:"\f588"}.fa-grin-tongue:before{content:"\f589"}.fa-grin-tongue-squint:before{content:"\f58a"}.fa-grin-tongue-wink:before{content:"\f58b"}.fa-grin-wink:before{content:"\f58c"}.fa-grip-horizontal:before{content:"\f58d"}.fa-grip-lines:before{content:"\f7a4"}.fa-grip-lines-vertical:before{content:"\f7a5"}.fa-grip-vertical:before{content:"\f58e"}.fa-gripfire:before{content:"\f3ac"}.fa-grunt:before{content:"\f3ad"}.fa-guilded:before{content:"\e07e"}.fa-guitar:before{content:"\f7a6"}.fa-gulp:before{content:"\f3ae"}.fa-h-square:before{content:"\f0fd"}.fa-hacker-news:before{content:"\f1d4"}.fa-hacker-news-square:before{content:"\f3af"}.fa-hackerrank:before{content:"\f5f7"}.fa-hamburger:before{content:"\f805"}.fa-hammer:before{content:"\f6e3"}.fa-hamsa:before{content:"\f665"}.fa-hand-holding:before{content:"\f4bd"}.fa-hand-holding-heart:before{content:"\f4be"}.fa-hand-holding-medical:before{content:"\e05c"}.fa-hand-holding-usd:before{content:"\f4c0"}.fa-hand-holding-water:before{content:"\f4c1"}.fa-hand-lizard:before{content:"\f258"}.fa-hand-middle-finger:before{content:"\f806"}.fa-hand-paper:before{content:"\f256"}.fa-hand-peace:before{content:"\f25b"}.fa-hand-point-down:before{content:"\f0a7"}.fa-hand-point-left:before{content:"\f0a5"}.fa-hand-point-right:before{content:"\f0a4"}.fa-hand-point-up:before{content:"\f0a6"}.fa-hand-pointer:before{content:"\f25a"}.fa-hand-rock:before{content:"\f255"}.fa-hand-scissors:before{content:"\f257"}.fa-hand-sparkles:before{content:"\e05d"}.fa-hand-spock:before{content:"\f259"}.fa-hands:before{content:"\f4c2"}.fa-hands-helping:before{content:"\f4c4"}.fa-hands-wash:before{content:"\e05e"}.fa-handshake:before{content:"\f2b5"}.fa-handshake-alt-slash:before{content:"\e05f"}.fa-handshake-slash:before{content:"\e060"}.fa-hanukiah:before{content:"\f6e6"}.fa-hard-hat:before{content:"\f807"}.fa-hashtag:before{content:"\f292"}.fa-hat-cowboy:before{content:"\f8c0"}.fa-hat-cowboy-side:before{content:"\f8c1"}.fa-hat-wizard:before{content:"\f6e8"}.fa-hdd:before{content:"\f0a0"}.fa-head-side-cough:before{c
ontent:"\e061"}.fa-head-side-cough-slash:before{content:"\e062"}.fa-head-side-mask:before{content:"\e063"}.fa-head-side-virus:before{content:"\e064"}.fa-heading:before{content:"\f1dc"}.fa-headphones:before{content:"\f025"}.fa-headphones-alt:before{content:"\f58f"}.fa-headset:before{content:"\f590"}.fa-heart:before{content:"\f004"}.fa-heart-broken:before{content:"\f7a9"}.fa-heartbeat:before{content:"\f21e"}.fa-helicopter:before{content:"\f533"}.fa-highlighter:before{content:"\f591"}.fa-hiking:before{content:"\f6ec"}.fa-hippo:before{content:"\f6ed"}.fa-hips:before{content:"\f452"}.fa-hire-a-helper:before{content:"\f3b0"}.fa-history:before{content:"\f1da"}.fa-hive:before{content:"\e07f"}.fa-hockey-puck:before{content:"\f453"}.fa-holly-berry:before{content:"\f7aa"}.fa-home:before{content:"\f015"}.fa-hooli:before{content:"\f427"}.fa-hornbill:before{content:"\f592"}.fa-horse:before{content:"\f6f0"}.fa-horse-head:before{content:"\f7ab"}.fa-hospital:before{content:"\f0f8"}.fa-hospital-alt:before{content:"\f47d"}.fa-hospital-symbol:before{content:"\f47e"}.fa-hospital-user:before{content:"\f80d"}.fa-hot-tub:before{content:"\f593"}.fa-hotdog:before{content:"\f80f"}.fa-hotel:before{content:"\f594"}.fa-hotjar:before{content:"\f3b1"}.fa-hourglass:before{content:"\f254"}.fa-hourglass-end:before{content:"\f253"}.fa-hourglass-half:before{content:"\f252"}.fa-hourglass-start:before{content:"\f251"}.fa-house-damage:before{content:"\f6f1"}.fa-house-user:before{content:"\e065"}.fa-houzz:before{content:"\f27c"}.fa-hryvnia:before{content:"\f6f2"}.fa-html5:before{content:"\f13b"}.fa-hubspot:before{content:"\f3b2"}.fa-i-cursor:before{content:"\f246"}.fa-ice-cream:before{content:"\f810"}.fa-icicles:before{content:"\f7ad"}.fa-icons:before{content:"\f86d"}.fa-id-badge:before{content:"\f2c1"}.fa-id-card:before{content:"\f2c2"}.fa-id-card-alt:before{content:"\f47f"}.fa-ideal:before{content:"\e013"}.fa-igloo:before{content:"\f7ae"}.fa-image:before{content:"\f03e"}.fa-images:before{content:"\f302"}.fa-imdb:before{content:"\f2d8"}.fa-inbox:before{content:"\f01c"}.fa-indent:before{content:"\f03c"}.fa-industry:before{content:"\f275"}.fa-infinity:before{content:"\f534"}.fa-info:before{content:"\f129"}.fa-info-circle:before{content:"\f05a"}.fa-innosoft:before{content:"\e080"}.fa-instagram:before{content:"\f16d"}.fa-instagram-square:before{content:"\e055"}.fa-instalod:before{content:"\e081"}.fa-intercom:before{content:"\f7af"}.fa-internet-explorer:before{content:"\f26b"}.fa-invision:before{content:"\f7b0"}.fa-ioxhost:before{content:"\f208"}.fa-italic:before{content:"\f033"}.fa-itch-io:before{content:"\f83a"}.fa-itunes:before{content:"\f3b4"}.fa-itunes-note:before{content:"\f3b5"}.fa-java:before{content:"\f4e4"}.fa-jedi:before{content:"\f669"}.fa-jedi-order:before{content:"\f50e"}.fa-jenkins:before{content:"\f3b6"}.fa-jira:before{content:"\f7b1"}.fa-joget:before{content:"\f3b7"}.fa-joint:before{content:"\f595"}.fa-joomla:before{content:"\f1aa"}.fa-journal-whills:before{content:"\f66a"}.fa-js:before{content:"\f3b8"}.fa-js-square:before{content:"\f3b9"}.fa-jsfiddle:before{content:"\f1cc"}.fa-kaaba:before{content:"\f66b"}.fa-kaggle:before{content:"\f5fa"}.fa-key:before{content:"\f084"}.fa-keybase:before{content:"\f4f5"}.fa-keyboard:before{content:"\f11c"}.fa-keycdn:before{content:"\f3ba"}.fa-khanda:before{content:"\f66d"}.fa-kickstarter:before{content:"\f3bb"}.fa-kickstarter-k:before{content:"\f3bc"}.fa-kiss:before{content:"\f596"}.fa-kiss-beam:before{content:"\f597"}.fa-kiss-wink-heart:before{content:"\f598"}.fa-kiwi-bird:before{co
ntent:"\f535"}.fa-korvue:before{content:"\f42f"}.fa-landmark:before{content:"\f66f"}.fa-language:before{content:"\f1ab"}.fa-laptop:before{content:"\f109"}.fa-laptop-code:before{content:"\f5fc"}.fa-laptop-house:before{content:"\e066"}.fa-laptop-medical:before{content:"\f812"}.fa-laravel:before{content:"\f3bd"}.fa-lastfm:before{content:"\f202"}.fa-lastfm-square:before{content:"\f203"}.fa-laugh:before{content:"\f599"}.fa-laugh-beam:before{content:"\f59a"}.fa-laugh-squint:before{content:"\f59b"}.fa-laugh-wink:before{content:"\f59c"}.fa-layer-group:before{content:"\f5fd"}.fa-leaf:before{content:"\f06c"}.fa-leanpub:before{content:"\f212"}.fa-lemon:before{content:"\f094"}.fa-less:before{content:"\f41d"}.fa-less-than:before{content:"\f536"}.fa-less-than-equal:before{content:"\f537"}.fa-level-down-alt:before{content:"\f3be"}.fa-level-up-alt:before{content:"\f3bf"}.fa-life-ring:before{content:"\f1cd"}.fa-lightbulb:before{content:"\f0eb"}.fa-line:before{content:"\f3c0"}.fa-link:before{content:"\f0c1"}.fa-linkedin:before{content:"\f08c"}.fa-linkedin-in:before{content:"\f0e1"}.fa-linode:before{content:"\f2b8"}.fa-linux:before{content:"\f17c"}.fa-lira-sign:before{content:"\f195"}.fa-list:before{content:"\f03a"}.fa-list-alt:before{content:"\f022"}.fa-list-ol:before{content:"\f0cb"}.fa-list-ul:before{content:"\f0ca"}.fa-location-arrow:before{content:"\f124"}.fa-lock:before{content:"\f023"}.fa-lock-open:before{content:"\f3c1"}.fa-long-arrow-alt-down:before{content:"\f309"}.fa-long-arrow-alt-left:before{content:"\f30a"}.fa-long-arrow-alt-right:before{content:"\f30b"}.fa-long-arrow-alt-up:before{content:"\f30c"}.fa-low-vision:before{content:"\f2a8"}.fa-luggage-cart:before{content:"\f59d"}.fa-lungs:before{content:"\f604"}.fa-lungs-virus:before{content:"\e067"}.fa-lyft:before{content:"\f3c3"}.fa-magento:before{content:"\f3c4"}.fa-magic:before{content:"\f0d0"}.fa-magnet:before{content:"\f076"}.fa-mail-bulk:before{content:"\f674"}.fa-mailchimp:before{content:"\f59e"}.fa-male:before{content:"\f183"}.fa-mandalorian:before{content:"\f50f"}.fa-map:before{content:"\f279"}.fa-map-marked:before{content:"\f59f"}.fa-map-marked-alt:before{content:"\f5a0"}.fa-map-marker:before{content:"\f041"}.fa-map-marker-alt:before{content:"\f3c5"}.fa-map-pin:before{content:"\f276"}.fa-map-signs:before{content:"\f277"}.fa-markdown:before{content:"\f60f"}.fa-marker:before{content:"\f5a1"}.fa-mars:before{content:"\f222"}.fa-mars-double:before{content:"\f227"}.fa-mars-stroke:before{content:"\f229"}.fa-mars-stroke-h:before{content:"\f22b"}.fa-mars-stroke-v:before{content:"\f22a"}.fa-mask:before{content:"\f6fa"}.fa-mastodon:before{content:"\f4f6"}.fa-maxcdn:before{content:"\f136"}.fa-mdb:before{content:"\f8ca"}.fa-medal:before{content:"\f5a2"}.fa-medapps:before{content:"\f3c6"}.fa-medium:before{content:"\f23a"}.fa-medium-m:before{content:"\f3c7"}.fa-medkit:before{content:"\f0fa"}.fa-medrt:before{content:"\f3c8"}.fa-meetup:before{content:"\f2e0"}.fa-megaport:before{content:"\f5a3"}.fa-meh:before{content:"\f11a"}.fa-meh-blank:before{content:"\f5a4"}.fa-meh-rolling-eyes:before{content:"\f5a5"}.fa-memory:before{content:"\f538"}.fa-mendeley:before{content:"\f7b3"}.fa-menorah:before{content:"\f676"}.fa-mercury:before{content:"\f223"}.fa-meteor:before{content:"\f753"}.fa-microblog:before{content:"\e01a"}.fa-microchip:before{content:"\f2db"}.fa-microphone:before{content:"\f130"}.fa-microphone-alt:before{content:"\f3c9"}.fa-microphone-alt-slash:before{content:"\f539"}.fa-microphone-slash:before{content:"\f131"}.fa-microscope:before{content:"\f610"}.fa
-microsoft:before{content:"\f3ca"}.fa-minus:before{content:"\f068"}.fa-minus-circle:before{content:"\f056"}.fa-minus-square:before{content:"\f146"}.fa-mitten:before{content:"\f7b5"}.fa-mix:before{content:"\f3cb"}.fa-mixcloud:before{content:"\f289"}.fa-mixer:before{content:"\e056"}.fa-mizuni:before{content:"\f3cc"}.fa-mobile:before{content:"\f10b"}.fa-mobile-alt:before{content:"\f3cd"}.fa-modx:before{content:"\f285"}.fa-monero:before{content:"\f3d0"}.fa-money-bill:before{content:"\f0d6"}.fa-money-bill-alt:before{content:"\f3d1"}.fa-money-bill-wave:before{content:"\f53a"}.fa-money-bill-wave-alt:before{content:"\f53b"}.fa-money-check:before{content:"\f53c"}.fa-money-check-alt:before{content:"\f53d"}.fa-monument:before{content:"\f5a6"}.fa-moon:before{content:"\f186"}.fa-mortar-pestle:before{content:"\f5a7"}.fa-mosque:before{content:"\f678"}.fa-motorcycle:before{content:"\f21c"}.fa-mountain:before{content:"\f6fc"}.fa-mouse:before{content:"\f8cc"}.fa-mouse-pointer:before{content:"\f245"}.fa-mug-hot:before{content:"\f7b6"}.fa-music:before{content:"\f001"}.fa-napster:before{content:"\f3d2"}.fa-neos:before{content:"\f612"}.fa-network-wired:before{content:"\f6ff"}.fa-neuter:before{content:"\f22c"}.fa-newspaper:before{content:"\f1ea"}.fa-nimblr:before{content:"\f5a8"}.fa-node:before{content:"\f419"}.fa-node-js:before{content:"\f3d3"}.fa-not-equal:before{content:"\f53e"}.fa-notes-medical:before{content:"\f481"}.fa-npm:before{content:"\f3d4"}.fa-ns8:before{content:"\f3d5"}.fa-nutritionix:before{content:"\f3d6"}.fa-object-group:before{content:"\f247"}.fa-object-ungroup:before{content:"\f248"}.fa-octopus-deploy:before{content:"\e082"}.fa-odnoklassniki:before{content:"\f263"}.fa-odnoklassniki-square:before{content:"\f264"}.fa-oil-can:before{content:"\f613"}.fa-old-republic:before{content:"\f510"}.fa-om:before{content:"\f679"}.fa-opencart:before{content:"\f23d"}.fa-openid:before{content:"\f19b"}.fa-opera:before{content:"\f26a"}.fa-optin-monster:before{content:"\f23c"}.fa-orcid:before{content:"\f8d2"}.fa-osi:before{content:"\f41a"}.fa-otter:before{content:"\f700"}.fa-outdent:before{content:"\f03b"}.fa-page4:before{content:"\f3d7"}.fa-pagelines:before{content:"\f18c"}.fa-pager:before{content:"\f815"}.fa-paint-brush:before{content:"\f1fc"}.fa-paint-roller:before{content:"\f5aa"}.fa-palette:before{content:"\f53f"}.fa-palfed:before{content:"\f3d8"}.fa-pallet:before{content:"\f482"}.fa-paper-plane:before{content:"\f1d8"}.fa-paperclip:before{content:"\f0c6"}.fa-parachute-box:before{content:"\f4cd"}.fa-paragraph:before{content:"\f1dd"}.fa-parking:before{content:"\f540"}.fa-passport:before{content:"\f5ab"}.fa-pastafarianism:before{content:"\f67b"}.fa-paste:before{content:"\f0ea"}.fa-patreon:before{content:"\f3d9"}.fa-pause:before{content:"\f04c"}.fa-pause-circle:before{content:"\f28b"}.fa-paw:before{content:"\f1b0"}.fa-paypal:before{content:"\f1ed"}.fa-peace:before{content:"\f67c"}.fa-pen:before{content:"\f304"}.fa-pen-alt:before{content:"\f305"}.fa-pen-fancy:before{content:"\f5ac"}.fa-pen-nib:before{content:"\f5ad"}.fa-pen-square:before{content:"\f14b"}.fa-pencil-alt:before{content:"\f303"}.fa-pencil-ruler:before{content:"\f5ae"}.fa-penny-arcade:before{content:"\f704"}.fa-people-arrows:before{content:"\e068"}.fa-people-carry:before{content:"\f4ce"}.fa-pepper-hot:before{content:"\f816"}.fa-perbyte:before{content:"\e083"}.fa-percent:before{content:"\f295"}.fa-percentage:before{content:"\f541"}.fa-periscope:before{content:"\f3da"}.fa-person-booth:before{content:"\f756"}.fa-phabricator:before{content:"\f3db"}.fa-phoeni
x-framework:before{content:"\f3dc"}.fa-phoenix-squadron:before{content:"\f511"}.fa-phone:before{content:"\f095"}.fa-phone-alt:before{content:"\f879"}.fa-phone-slash:before{content:"\f3dd"}.fa-phone-square:before{content:"\f098"}.fa-phone-square-alt:before{content:"\f87b"}.fa-phone-volume:before{content:"\f2a0"}.fa-photo-video:before{content:"\f87c"}.fa-php:before{content:"\f457"}.fa-pied-piper:before{content:"\f2ae"}.fa-pied-piper-alt:before{content:"\f1a8"}.fa-pied-piper-hat:before{content:"\f4e5"}.fa-pied-piper-pp:before{content:"\f1a7"}.fa-pied-piper-square:before{content:"\e01e"}.fa-piggy-bank:before{content:"\f4d3"}.fa-pills:before{content:"\f484"}.fa-pinterest:before{content:"\f0d2"}.fa-pinterest-p:before{content:"\f231"}.fa-pinterest-square:before{content:"\f0d3"}.fa-pizza-slice:before{content:"\f818"}.fa-place-of-worship:before{content:"\f67f"}.fa-plane:before{content:"\f072"}.fa-plane-arrival:before{content:"\f5af"}.fa-plane-departure:before{content:"\f5b0"}.fa-plane-slash:before{content:"\e069"}.fa-play:before{content:"\f04b"}.fa-play-circle:before{content:"\f144"}.fa-playstation:before{content:"\f3df"}.fa-plug:before{content:"\f1e6"}.fa-plus:before{content:"\f067"}.fa-plus-circle:before{content:"\f055"}.fa-plus-square:before{content:"\f0fe"}.fa-podcast:before{content:"\f2ce"}.fa-poll:before{content:"\f681"}.fa-poll-h:before{content:"\f682"}.fa-poo:before{content:"\f2fe"}.fa-poo-storm:before{content:"\f75a"}.fa-poop:before{content:"\f619"}.fa-portrait:before{content:"\f3e0"}.fa-pound-sign:before{content:"\f154"}.fa-power-off:before{content:"\f011"}.fa-pray:before{content:"\f683"}.fa-praying-hands:before{content:"\f684"}.fa-prescription:before{content:"\f5b1"}.fa-prescription-bottle:before{content:"\f485"}.fa-prescription-bottle-alt:before{content:"\f486"}.fa-print:before{content:"\f02f"}.fa-procedures:before{content:"\f487"}.fa-product-hunt:before{content:"\f288"}.fa-project-diagram:before{content:"\f542"}.fa-pump-medical:before{content:"\e06a"}.fa-pump-soap:before{content:"\e06b"}.fa-pushed:before{content:"\f3e1"}.fa-puzzle-piece:before{content:"\f12e"}.fa-python:before{content:"\f3e2"}.fa-qq:before{content:"\f1d6"}.fa-qrcode:before{content:"\f029"}.fa-question:before{content:"\f128"}.fa-question-circle:before{content:"\f059"}.fa-quidditch:before{content:"\f458"}.fa-quinscape:before{content:"\f459"}.fa-quora:before{content:"\f2c4"}.fa-quote-left:before{content:"\f10d"}.fa-quote-right:before{content:"\f10e"}.fa-quran:before{content:"\f687"}.fa-r-project:before{content:"\f4f7"}.fa-radiation:before{content:"\f7b9"}.fa-radiation-alt:before{content:"\f7ba"}.fa-rainbow:before{content:"\f75b"}.fa-random:before{content:"\f074"}.fa-raspberry-pi:before{content:"\f7bb"}.fa-ravelry:before{content:"\f2d9"}.fa-react:before{content:"\f41b"}.fa-reacteurope:before{content:"\f75d"}.fa-readme:before{content:"\f4d5"}.fa-rebel:before{content:"\f1d0"}.fa-receipt:before{content:"\f543"}.fa-record-vinyl:before{content:"\f8d9"}.fa-recycle:before{content:"\f1b8"}.fa-red-river:before{content:"\f3e3"}.fa-reddit:before{content:"\f1a1"}.fa-reddit-alien:before{content:"\f281"}.fa-reddit-square:before{content:"\f1a2"}.fa-redhat:before{content:"\f7bc"}.fa-redo:before{content:"\f01e"}.fa-redo-alt:before{content:"\f2f9"}.fa-registered:before{content:"\f25d"}.fa-remove-format:before{content:"\f87d"}.fa-renren:before{content:"\f18b"}.fa-reply:before{content:"\f3e5"}.fa-reply-all:before{content:"\f122"}.fa-replyd:before{content:"\f3e6"}.fa-republican:before{content:"\f75e"}.fa-researchgate:before{content:"\f4f8"}.fa-
resolving:before{content:"\f3e7"}.fa-restroom:before{content:"\f7bd"}.fa-retweet:before{content:"\f079"}.fa-rev:before{content:"\f5b2"}.fa-ribbon:before{content:"\f4d6"}.fa-ring:before{content:"\f70b"}.fa-road:before{content:"\f018"}.fa-robot:before{content:"\f544"}.fa-rocket:before{content:"\f135"}.fa-rocketchat:before{content:"\f3e8"}.fa-rockrms:before{content:"\f3e9"}.fa-route:before{content:"\f4d7"}.fa-rss:before{content:"\f09e"}.fa-rss-square:before{content:"\f143"}.fa-ruble-sign:before{content:"\f158"}.fa-ruler:before{content:"\f545"}.fa-ruler-combined:before{content:"\f546"}.fa-ruler-horizontal:before{content:"\f547"}.fa-ruler-vertical:before{content:"\f548"}.fa-running:before{content:"\f70c"}.fa-rupee-sign:before{content:"\f156"}.fa-rust:before{content:"\e07a"}.fa-sad-cry:before{content:"\f5b3"}.fa-sad-tear:before{content:"\f5b4"}.fa-safari:before{content:"\f267"}.fa-salesforce:before{content:"\f83b"}.fa-sass:before{content:"\f41e"}.fa-satellite:before{content:"\f7bf"}.fa-satellite-dish:before{content:"\f7c0"}.fa-save:before{content:"\f0c7"}.fa-schlix:before{content:"\f3ea"}.fa-school:before{content:"\f549"}.fa-screwdriver:before{content:"\f54a"}.fa-scribd:before{content:"\f28a"}.fa-scroll:before{content:"\f70e"}.fa-sd-card:before{content:"\f7c2"}.fa-search:before{content:"\f002"}.fa-search-dollar:before{content:"\f688"}.fa-search-location:before{content:"\f689"}.fa-search-minus:before{content:"\f010"}.fa-search-plus:before{content:"\f00e"}.fa-searchengin:before{content:"\f3eb"}.fa-seedling:before{content:"\f4d8"}.fa-sellcast:before{content:"\f2da"}.fa-sellsy:before{content:"\f213"}.fa-server:before{content:"\f233"}.fa-servicestack:before{content:"\f3ec"}.fa-shapes:before{content:"\f61f"}.fa-share:before{content:"\f064"}.fa-share-alt:before{content:"\f1e0"}.fa-share-alt-square:before{content:"\f1e1"}.fa-share-square:before{content:"\f14d"}.fa-shekel-sign:before{content:"\f20b"}.fa-shield-alt:before{content:"\f3ed"}.fa-shield-virus:before{content:"\e06c"}.fa-ship:before{content:"\f21a"}.fa-shipping-fast:before{content:"\f48b"}.fa-shirtsinbulk:before{content:"\f214"}.fa-shoe-prints:before{content:"\f54b"}.fa-shopify:before{content:"\e057"}.fa-shopping-bag:before{content:"\f290"}.fa-shopping-basket:before{content:"\f291"}.fa-shopping-cart:before{content:"\f07a"}.fa-shopware:before{content:"\f5b5"}.fa-shower:before{content:"\f2cc"}.fa-shuttle-van:before{content:"\f5b6"}.fa-sign:before{content:"\f4d9"}.fa-sign-in-alt:before{content:"\f2f6"}.fa-sign-language:before{content:"\f2a7"}.fa-sign-out-alt:before{content:"\f2f5"}.fa-signal:before{content:"\f012"}.fa-signature:before{content:"\f5b7"}.fa-sim-card:before{content:"\f7c4"}.fa-simplybuilt:before{content:"\f215"}.fa-sink:before{content:"\e06d"}.fa-sistrix:before{content:"\f3ee"}.fa-sitemap:before{content:"\f0e8"}.fa-sith:before{content:"\f512"}.fa-skating:before{content:"\f7c5"}.fa-sketch:before{content:"\f7c6"}.fa-skiing:before{content:"\f7c9"}.fa-skiing-nordic:before{content:"\f7ca"}.fa-skull:before{content:"\f54c"}.fa-skull-crossbones:before{content:"\f714"}.fa-skyatlas:before{content:"\f216"}.fa-skype:before{content:"\f17e"}.fa-slack:before{content:"\f198"}.fa-slack-hash:before{content:"\f3ef"}.fa-slash:before{content:"\f715"}.fa-sleigh:before{content:"\f7cc"}.fa-sliders-h:before{content:"\f1de"}.fa-slideshare:before{content:"\f1e7"}.fa-smile:before{content:"\f118"}.fa-smile-beam:before{content:"\f5b8"}.fa-smile-wink:before{content:"\f4da"}.fa-smog:before{content:"\f75f"}.fa-smoking:before{content:"\f48d"}.fa-smoking-ban:before{conte
nt:"\f54d"}.fa-sms:before{content:"\f7cd"}.fa-snapchat:before{content:"\f2ab"}.fa-snapchat-ghost:before{content:"\f2ac"}.fa-snapchat-square:before{content:"\f2ad"}.fa-snowboarding:before{content:"\f7ce"}.fa-snowflake:before{content:"\f2dc"}.fa-snowman:before{content:"\f7d0"}.fa-snowplow:before{content:"\f7d2"}.fa-soap:before{content:"\e06e"}.fa-socks:before{content:"\f696"}.fa-solar-panel:before{content:"\f5ba"}.fa-sort:before{content:"\f0dc"}.fa-sort-alpha-down:before{content:"\f15d"}.fa-sort-alpha-down-alt:before{content:"\f881"}.fa-sort-alpha-up:before{content:"\f15e"}.fa-sort-alpha-up-alt:before{content:"\f882"}.fa-sort-amount-down:before{content:"\f160"}.fa-sort-amount-down-alt:before{content:"\f884"}.fa-sort-amount-up:before{content:"\f161"}.fa-sort-amount-up-alt:before{content:"\f885"}.fa-sort-down:before{content:"\f0dd"}.fa-sort-numeric-down:before{content:"\f162"}.fa-sort-numeric-down-alt:before{content:"\f886"}.fa-sort-numeric-up:before{content:"\f163"}.fa-sort-numeric-up-alt:before{content:"\f887"}.fa-sort-up:before{content:"\f0de"}.fa-soundcloud:before{content:"\f1be"}.fa-sourcetree:before{content:"\f7d3"}.fa-spa:before{content:"\f5bb"}.fa-space-shuttle:before{content:"\f197"}.fa-speakap:before{content:"\f3f3"}.fa-speaker-deck:before{content:"\f83c"}.fa-spell-check:before{content:"\f891"}.fa-spider:before{content:"\f717"}.fa-spinner:before{content:"\f110"}.fa-splotch:before{content:"\f5bc"}.fa-spotify:before{content:"\f1bc"}.fa-spray-can:before{content:"\f5bd"}.fa-square:before{content:"\f0c8"}.fa-square-full:before{content:"\f45c"}.fa-square-root-alt:before{content:"\f698"}.fa-squarespace:before{content:"\f5be"}.fa-stack-exchange:before{content:"\f18d"}.fa-stack-overflow:before{content:"\f16c"}.fa-stackpath:before{content:"\f842"}.fa-stamp:before{content:"\f5bf"}.fa-star:before{content:"\f005"}.fa-star-and-crescent:before{content:"\f699"}.fa-star-half:before{content:"\f089"}.fa-star-half-alt:before{content:"\f5c0"}.fa-star-of-david:before{content:"\f69a"}.fa-star-of-life:before{content:"\f621"}.fa-staylinked:before{content:"\f3f5"}.fa-steam:before{content:"\f1b6"}.fa-steam-square:before{content:"\f1b7"}.fa-steam-symbol:before{content:"\f3f6"}.fa-step-backward:before{content:"\f048"}.fa-step-forward:before{content:"\f051"}.fa-stethoscope:before{content:"\f0f1"}.fa-sticker-mule:before{content:"\f3f7"}.fa-sticky-note:before{content:"\f249"}.fa-stop:before{content:"\f04d"}.fa-stop-circle:before{content:"\f28d"}.fa-stopwatch:before{content:"\f2f2"}.fa-stopwatch-20:before{content:"\e06f"}.fa-store:before{content:"\f54e"}.fa-store-alt:before{content:"\f54f"}.fa-store-alt-slash:before{content:"\e070"}.fa-store-slash:before{content:"\e071"}.fa-strava:before{content:"\f428"}.fa-stream:before{content:"\f550"}.fa-street-view:before{content:"\f21d"}.fa-strikethrough:before{content:"\f0cc"}.fa-stripe:before{content:"\f429"}.fa-stripe-s:before{content:"\f42a"}.fa-stroopwafel:before{content:"\f551"}.fa-studiovinari:before{content:"\f3f8"}.fa-stumbleupon:before{content:"\f1a4"}.fa-stumbleupon-circle:before{content:"\f1a3"}.fa-subscript:before{content:"\f12c"}.fa-subway:before{content:"\f239"}.fa-suitcase:before{content:"\f0f2"}.fa-suitcase-rolling:before{content:"\f5c1"}.fa-sun:before{content:"\f185"}.fa-superpowers:before{content:"\f2dd"}.fa-superscript:before{content:"\f12b"}.fa-supple:before{content:"\f3f9"}.fa-surprise:before{content:"\f5c2"}.fa-suse:before{content:"\f7d6"}.fa-swatchbook:before{content:"\f5c3"}.fa-swift:before{content:"\f8e1"}.fa-swimmer:before{content:"\f5c4"}.fa-swimming-
pool:before{content:"\f5c5"}.fa-symfony:before{content:"\f83d"}.fa-synagogue:before{content:"\f69b"}.fa-sync:before{content:"\f021"}.fa-sync-alt:before{content:"\f2f1"}.fa-syringe:before{content:"\f48e"}.fa-table:before{content:"\f0ce"}.fa-table-tennis:before{content:"\f45d"}.fa-tablet:before{content:"\f10a"}.fa-tablet-alt:before{content:"\f3fa"}.fa-tablets:before{content:"\f490"}.fa-tachometer-alt:before{content:"\f3fd"}.fa-tag:before{content:"\f02b"}.fa-tags:before{content:"\f02c"}.fa-tape:before{content:"\f4db"}.fa-tasks:before{content:"\f0ae"}.fa-taxi:before{content:"\f1ba"}.fa-teamspeak:before{content:"\f4f9"}.fa-teeth:before{content:"\f62e"}.fa-teeth-open:before{content:"\f62f"}.fa-telegram:before{content:"\f2c6"}.fa-telegram-plane:before{content:"\f3fe"}.fa-temperature-high:before{content:"\f769"}.fa-temperature-low:before{content:"\f76b"}.fa-tencent-weibo:before{content:"\f1d5"}.fa-tenge:before{content:"\f7d7"}.fa-terminal:before{content:"\f120"}.fa-text-height:before{content:"\f034"}.fa-text-width:before{content:"\f035"}.fa-th:before{content:"\f00a"}.fa-th-large:before{content:"\f009"}.fa-th-list:before{content:"\f00b"}.fa-the-red-yeti:before{content:"\f69d"}.fa-theater-masks:before{content:"\f630"}.fa-themeco:before{content:"\f5c6"}.fa-themeisle:before{content:"\f2b2"}.fa-thermometer:before{content:"\f491"}.fa-thermometer-empty:before{content:"\f2cb"}.fa-thermometer-full:before{content:"\f2c7"}.fa-thermometer-half:before{content:"\f2c9"}.fa-thermometer-quarter:before{content:"\f2ca"}.fa-thermometer-three-quarters:before{content:"\f2c8"}.fa-think-peaks:before{content:"\f731"}.fa-thumbs-down:before{content:"\f165"}.fa-thumbs-up:before{content:"\f164"}.fa-thumbtack:before{content:"\f08d"}.fa-ticket-alt:before{content:"\f3ff"}.fa-tiktok:before{content:"\e07b"}.fa-times:before{content:"\f00d"}.fa-times-circle:before{content:"\f057"}.fa-tint:before{content:"\f043"}.fa-tint-slash:before{content:"\f5c7"}.fa-tired:before{content:"\f5c8"}.fa-toggle-off:before{content:"\f204"}.fa-toggle-on:before{content:"\f205"}.fa-toilet:before{content:"\f7d8"}.fa-toilet-paper:before{content:"\f71e"}.fa-toilet-paper-slash:before{content:"\e072"}.fa-toolbox:before{content:"\f552"}.fa-tools:before{content:"\f7d9"}.fa-tooth:before{content:"\f5c9"}.fa-torah:before{content:"\f6a0"}.fa-torii-gate:before{content:"\f6a1"}.fa-tractor:before{content:"\f722"}.fa-trade-federation:before{content:"\f513"}.fa-trademark:before{content:"\f25c"}.fa-traffic-light:before{content:"\f637"}.fa-trailer:before{content:"\e041"}.fa-train:before{content:"\f238"}.fa-tram:before{content:"\f7da"}.fa-transgender:before{content:"\f224"}.fa-transgender-alt:before{content:"\f225"}.fa-trash:before{content:"\f1f8"}.fa-trash-alt:before{content:"\f2ed"}.fa-trash-restore:before{content:"\f829"}.fa-trash-restore-alt:before{content:"\f82a"}.fa-tree:before{content:"\f1bb"}.fa-trello:before{content:"\f181"}.fa-trophy:before{content:"\f091"}.fa-truck:before{content:"\f0d1"}.fa-truck-loading:before{content:"\f4de"}.fa-truck-monster:before{content:"\f63b"}.fa-truck-moving:before{content:"\f4df"}.fa-truck-pickup:before{content:"\f63c"}.fa-tshirt:before{content:"\f553"}.fa-tty:before{content:"\f1e4"}.fa-tumblr:before{content:"\f173"}.fa-tumblr-square:before{content:"\f174"}.fa-tv:before{content:"\f26c"}.fa-twitch:before{content:"\f1e8"}.fa-twitter:before{content:"\f099"}.fa-twitter-square:before{content:"\f081"}.fa-typo3:before{content:"\f42b"}.fa-uber:before{content:"\f402"}.fa-ubuntu:before{content:"\f7df"}.fa-uikit:before{content:"\f403"}.fa-umbraco:b
efore{content:"\f8e8"}.fa-umbrella:before{content:"\f0e9"}.fa-umbrella-beach:before{content:"\f5ca"}.fa-uncharted:before{content:"\e084"}.fa-underline:before{content:"\f0cd"}.fa-undo:before{content:"\f0e2"}.fa-undo-alt:before{content:"\f2ea"}.fa-uniregistry:before{content:"\f404"}.fa-unity:before{content:"\e049"}.fa-universal-access:before{content:"\f29a"}.fa-university:before{content:"\f19c"}.fa-unlink:before{content:"\f127"}.fa-unlock:before{content:"\f09c"}.fa-unlock-alt:before{content:"\f13e"}.fa-unsplash:before{content:"\e07c"}.fa-untappd:before{content:"\f405"}.fa-upload:before{content:"\f093"}.fa-ups:before{content:"\f7e0"}.fa-usb:before{content:"\f287"}.fa-user:before{content:"\f007"}.fa-user-alt:before{content:"\f406"}.fa-user-alt-slash:before{content:"\f4fa"}.fa-user-astronaut:before{content:"\f4fb"}.fa-user-check:before{content:"\f4fc"}.fa-user-circle:before{content:"\f2bd"}.fa-user-clock:before{content:"\f4fd"}.fa-user-cog:before{content:"\f4fe"}.fa-user-edit:before{content:"\f4ff"}.fa-user-friends:before{content:"\f500"}.fa-user-graduate:before{content:"\f501"}.fa-user-injured:before{content:"\f728"}.fa-user-lock:before{content:"\f502"}.fa-user-md:before{content:"\f0f0"}.fa-user-minus:before{content:"\f503"}.fa-user-ninja:before{content:"\f504"}.fa-user-nurse:before{content:"\f82f"}.fa-user-plus:before{content:"\f234"}.fa-user-secret:before{content:"\f21b"}.fa-user-shield:before{content:"\f505"}.fa-user-slash:before{content:"\f506"}.fa-user-tag:before{content:"\f507"}.fa-user-tie:before{content:"\f508"}.fa-user-times:before{content:"\f235"}.fa-users:before{content:"\f0c0"}.fa-users-cog:before{content:"\f509"}.fa-users-slash:before{content:"\e073"}.fa-usps:before{content:"\f7e1"}.fa-ussunnah:before{content:"\f407"}.fa-utensil-spoon:before{content:"\f2e5"}.fa-utensils:before{content:"\f2e7"}.fa-vaadin:before{content:"\f408"}.fa-vector-square:before{content:"\f5cb"}.fa-venus:before{content:"\f221"}.fa-venus-double:before{content:"\f226"}.fa-venus-mars:before{content:"\f228"}.fa-vest:before{content:"\e085"}.fa-vest-patches:before{content:"\e086"}.fa-viacoin:before{content:"\f237"}.fa-viadeo:before{content:"\f2a9"}.fa-viadeo-square:before{content:"\f2aa"}.fa-vial:before{content:"\f492"}.fa-vials:before{content:"\f493"}.fa-viber:before{content:"\f409"}.fa-video:before{content:"\f03d"}.fa-video-slash:before{content:"\f4e2"}.fa-vihara:before{content:"\f6a7"}.fa-vimeo:before{content:"\f40a"}.fa-vimeo-square:before{content:"\f194"}.fa-vimeo-v:before{content:"\f27d"}.fa-vine:before{content:"\f1ca"}.fa-virus:before{content:"\e074"}.fa-virus-slash:before{content:"\e075"}.fa-viruses:before{content:"\e076"}.fa-vk:before{content:"\f189"}.fa-vnv:before{content:"\f40b"}.fa-voicemail:before{content:"\f897"}.fa-volleyball-ball:before{content:"\f45f"}.fa-volume-down:before{content:"\f027"}.fa-volume-mute:before{content:"\f6a9"}.fa-volume-off:before{content:"\f026"}.fa-volume-up:before{content:"\f028"}.fa-vote-yea:before{content:"\f772"}.fa-vr-cardboard:before{content:"\f729"}.fa-vuejs:before{content:"\f41f"}.fa-walking:before{content:"\f554"}.fa-wallet:before{content:"\f555"}.fa-warehouse:before{content:"\f494"}.fa-watchman-monitoring:before{content:"\e087"}.fa-water:before{content:"\f773"}.fa-wave-square:before{content:"\f83e"}.fa-waze:before{content:"\f83f"}.fa-weebly:before{content:"\f5cc"}.fa-weibo:before{content:"\f18a"}.fa-weight:before{content:"\f496"}.fa-weight-hanging:before{content:"\f5cd"}.fa-weixin:before{content:"\f1d7"}.fa-whatsapp:before{content:"\f232"}.fa-whatsapp-square:before{con
tent:"\f40c"}.fa-wheelchair:before{content:"\f193"}.fa-whmcs:before{content:"\f40d"}.fa-wifi:before{content:"\f1eb"}.fa-wikipedia-w:before{content:"\f266"}.fa-wind:before{content:"\f72e"}.fa-window-close:before{content:"\f410"}.fa-window-maximize:before{content:"\f2d0"}.fa-window-minimize:before{content:"\f2d1"}.fa-window-restore:before{content:"\f2d2"}.fa-windows:before{content:"\f17a"}.fa-wine-bottle:before{content:"\f72f"}.fa-wine-glass:before{content:"\f4e3"}.fa-wine-glass-alt:before{content:"\f5ce"}.fa-wix:before{content:"\f5cf"}.fa-wizards-of-the-coast:before{content:"\f730"}.fa-wodu:before{content:"\e088"}.fa-wolf-pack-battalion:before{content:"\f514"}.fa-won-sign:before{content:"\f159"}.fa-wordpress:before{content:"\f19a"}.fa-wordpress-simple:before{content:"\f411"}.fa-wpbeginner:before{content:"\f297"}.fa-wpexplorer:before{content:"\f2de"}.fa-wpforms:before{content:"\f298"}.fa-wpressr:before{content:"\f3e4"}.fa-wrench:before{content:"\f0ad"}.fa-x-ray:before{content:"\f497"}.fa-xbox:before{content:"\f412"}.fa-xing:before{content:"\f168"}.fa-xing-square:before{content:"\f169"}.fa-y-combinator:before{content:"\f23b"}.fa-yahoo:before{content:"\f19e"}.fa-yammer:before{content:"\f840"}.fa-yandex:before{content:"\f413"}.fa-yandex-international:before{content:"\f414"}.fa-yarn:before{content:"\f7e3"}.fa-yelp:before{content:"\f1e9"}.fa-yen-sign:before{content:"\f157"}.fa-yin-yang:before{content:"\f6ad"}.fa-yoast:before{content:"\f2b1"}.fa-youtube:before{content:"\f167"}.fa-youtube-square:before{content:"\f431"}.fa-zhihu:before{content:"\f63f"}.sr-only{border:0;clip:rect(0,0,0,0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.sr-only-focusable:active,.sr-only-focusable:focus{clip:auto;height:auto;margin:0;overflow:visible;position:static;width:auto}@font-face{font-family:"Font Awesome 5 Brands";font-style:normal;font-weight:400;font-display:block;src:url(../webfonts/fa-brands-400.eot);src:url(../webfonts/fa-brands-400.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-brands-400.woff2) format("woff2"),url(../webfonts/fa-brands-400.woff) format("woff"),url(../webfonts/fa-brands-400.ttf) format("truetype"),url(../webfonts/fa-brands-400.svg#fontawesome) format("svg")}.fab{font-family:"Font Awesome 5 Brands"}@font-face{font-family:"Font Awesome 5 Free";font-style:normal;font-weight:400;font-display:block;src:url(../webfonts/fa-regular-400.eot);src:url(../webfonts/fa-regular-400.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-regular-400.woff2) format("woff2"),url(../webfonts/fa-regular-400.woff) format("woff"),url(../webfonts/fa-regular-400.ttf) format("truetype"),url(../webfonts/fa-regular-400.svg#fontawesome) format("svg")}.fab,.far{font-weight:400}@font-face{font-family:"Font Awesome 5 Free";font-style:normal;font-weight:900;font-display:block;src:url(../webfonts/fa-solid-900.eot);src:url(../webfonts/fa-solid-900.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-solid-900.woff2) format("woff2"),url(../webfonts/fa-solid-900.woff) format("woff"),url(../webfonts/fa-solid-900.ttf) format("truetype"),url(../webfonts/fa-solid-900.svg#fontawesome) format("svg")}.fa,.far,.fas{font-family:"Font Awesome 5 Free"}.fa,.fas{font-weight:900} \ No newline at end of file diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/vp9dsp_init.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/vp9dsp_init.h deleted file mode 100644 index 0dc1c2dc20940f715f6180b63b435c87920777c2..0000000000000000000000000000000000000000 --- 
a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/vp9dsp_init.h +++ /dev/null @@ -1,29 +0,0 @@ -/* - * Copyright (c) 2017 Google Inc. - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_ARM_VP9DSP_INIT_H -#define AVCODEC_ARM_VP9DSP_INIT_H - -#include "libavcodec/vp9dsp.h" - -void ff_vp9dsp_init_10bpp_arm(VP9DSPContext *dsp); -void ff_vp9dsp_init_12bpp_arm(VP9DSPContext *dsp); - -#endif /* AVCODEC_ARM_VP9DSP_INIT_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_h264.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_h264.h deleted file mode 100644 index ca9b688c057a728642f508491e4681744e537138..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_h264.h +++ /dev/null @@ -1,427 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_CBS_H264_H -#define AVCODEC_CBS_H264_H - -#include <stddef.h> -#include <stdint.h> - -#include "cbs.h" -#include "cbs_h2645.h" -#include "cbs_sei.h" -#include "h264.h" - - -typedef struct H264RawNALUnitHeader { - uint8_t nal_ref_idc; - uint8_t nal_unit_type; - - uint8_t svc_extension_flag; - uint8_t avc_3d_extension_flag; -} H264RawNALUnitHeader; - -typedef struct H264RawScalingList { - int8_t delta_scale[64]; -} H264RawScalingList; - -typedef struct H264RawHRD { - uint8_t cpb_cnt_minus1; - uint8_t bit_rate_scale; - uint8_t cpb_size_scale; - - uint32_t bit_rate_value_minus1[H264_MAX_CPB_CNT]; - uint32_t cpb_size_value_minus1[H264_MAX_CPB_CNT]; - uint8_t cbr_flag[H264_MAX_CPB_CNT]; - - uint8_t initial_cpb_removal_delay_length_minus1; - uint8_t cpb_removal_delay_length_minus1; - uint8_t dpb_output_delay_length_minus1; - uint8_t time_offset_length; -} H264RawHRD; - -typedef struct H264RawVUI { - uint8_t aspect_ratio_info_present_flag; - uint8_t aspect_ratio_idc; - uint16_t sar_width; - uint16_t sar_height; - - uint8_t overscan_info_present_flag; - uint8_t overscan_appropriate_flag; - - uint8_t video_signal_type_present_flag; - uint8_t video_format; - uint8_t video_full_range_flag; - uint8_t colour_description_present_flag; - uint8_t colour_primaries; - uint8_t transfer_characteristics; - uint8_t matrix_coefficients; - - uint8_t chroma_loc_info_present_flag; - uint8_t chroma_sample_loc_type_top_field; - uint8_t chroma_sample_loc_type_bottom_field; - - uint8_t timing_info_present_flag; - uint32_t num_units_in_tick; - uint32_t time_scale; - uint8_t fixed_frame_rate_flag; - - uint8_t nal_hrd_parameters_present_flag; - H264RawHRD nal_hrd_parameters; - uint8_t vcl_hrd_parameters_present_flag; - H264RawHRD vcl_hrd_parameters; - uint8_t low_delay_hrd_flag; - - uint8_t pic_struct_present_flag; - - uint8_t bitstream_restriction_flag; - uint8_t motion_vectors_over_pic_boundaries_flag; - uint8_t max_bytes_per_pic_denom; - uint8_t max_bits_per_mb_denom; - uint8_t log2_max_mv_length_horizontal; - uint8_t log2_max_mv_length_vertical; - uint8_t max_num_reorder_frames; - uint8_t max_dec_frame_buffering; -} H264RawVUI; - -typedef struct H264RawSPS { - H264RawNALUnitHeader nal_unit_header; - - uint8_t profile_idc; - uint8_t constraint_set0_flag; - uint8_t constraint_set1_flag; - uint8_t constraint_set2_flag; - uint8_t constraint_set3_flag; - uint8_t constraint_set4_flag; - uint8_t constraint_set5_flag; - uint8_t reserved_zero_2bits; - uint8_t level_idc; - - uint8_t seq_parameter_set_id; - - uint8_t chroma_format_idc; - uint8_t separate_colour_plane_flag; - uint8_t bit_depth_luma_minus8; - uint8_t bit_depth_chroma_minus8; - uint8_t qpprime_y_zero_transform_bypass_flag; - - uint8_t seq_scaling_matrix_present_flag; - uint8_t seq_scaling_list_present_flag[12]; - H264RawScalingList scaling_list_4x4[6]; - H264RawScalingList scaling_list_8x8[6]; - - uint8_t log2_max_frame_num_minus4; - uint8_t pic_order_cnt_type; - uint8_t log2_max_pic_order_cnt_lsb_minus4; - uint8_t delta_pic_order_always_zero_flag; - int32_t offset_for_non_ref_pic; - int32_t offset_for_top_to_bottom_field; - uint8_t num_ref_frames_in_pic_order_cnt_cycle; - int32_t offset_for_ref_frame[256]; - - uint8_t max_num_ref_frames; - uint8_t gaps_in_frame_num_allowed_flag; - - uint16_t pic_width_in_mbs_minus1; - uint16_t 
pic_height_in_map_units_minus1; - - uint8_t frame_mbs_only_flag; - uint8_t mb_adaptive_frame_field_flag; - uint8_t direct_8x8_inference_flag; - - uint8_t frame_cropping_flag; - uint16_t frame_crop_left_offset; - uint16_t frame_crop_right_offset; - uint16_t frame_crop_top_offset; - uint16_t frame_crop_bottom_offset; - - uint8_t vui_parameters_present_flag; - H264RawVUI vui; -} H264RawSPS; - -typedef struct H264RawSPSExtension { - H264RawNALUnitHeader nal_unit_header; - - uint8_t seq_parameter_set_id; - - uint8_t aux_format_idc; - uint8_t bit_depth_aux_minus8; - uint8_t alpha_incr_flag; - uint16_t alpha_opaque_value; - uint16_t alpha_transparent_value; - - uint8_t additional_extension_flag; -} H264RawSPSExtension; - -typedef struct H264RawPPS { - H264RawNALUnitHeader nal_unit_header; - - uint8_t pic_parameter_set_id; - uint8_t seq_parameter_set_id; - - uint8_t entropy_coding_mode_flag; - uint8_t bottom_field_pic_order_in_frame_present_flag; - - uint8_t num_slice_groups_minus1; - uint8_t slice_group_map_type; - uint16_t run_length_minus1[H264_MAX_SLICE_GROUPS]; - uint16_t top_left[H264_MAX_SLICE_GROUPS]; - uint16_t bottom_right[H264_MAX_SLICE_GROUPS]; - uint8_t slice_group_change_direction_flag; - uint16_t slice_group_change_rate_minus1; - uint16_t pic_size_in_map_units_minus1; - - uint8_t *slice_group_id; - AVBufferRef *slice_group_id_ref; - - uint8_t num_ref_idx_l0_default_active_minus1; - uint8_t num_ref_idx_l1_default_active_minus1; - - uint8_t weighted_pred_flag; - uint8_t weighted_bipred_idc; - - int8_t pic_init_qp_minus26; - int8_t pic_init_qs_minus26; - int8_t chroma_qp_index_offset; - - uint8_t deblocking_filter_control_present_flag; - uint8_t constrained_intra_pred_flag; - - uint8_t more_rbsp_data; - - uint8_t redundant_pic_cnt_present_flag; - uint8_t transform_8x8_mode_flag; - - uint8_t pic_scaling_matrix_present_flag; - uint8_t pic_scaling_list_present_flag[12]; - H264RawScalingList scaling_list_4x4[6]; - H264RawScalingList scaling_list_8x8[6]; - - int8_t second_chroma_qp_index_offset; -} H264RawPPS; - -typedef struct H264RawAUD { - H264RawNALUnitHeader nal_unit_header; - - uint8_t primary_pic_type; -} H264RawAUD; - -typedef struct H264RawSEIBufferingPeriod { - uint8_t seq_parameter_set_id; - struct { - uint32_t initial_cpb_removal_delay[H264_MAX_CPB_CNT]; - uint32_t initial_cpb_removal_delay_offset[H264_MAX_CPB_CNT]; - } nal, vcl; -} H264RawSEIBufferingPeriod; - -typedef struct H264RawSEIPicTimestamp { - uint8_t ct_type; - uint8_t nuit_field_based_flag; - uint8_t counting_type; - uint8_t full_timestamp_flag; - uint8_t discontinuity_flag; - uint8_t cnt_dropped_flag; - uint8_t n_frames; - uint8_t seconds_flag; - uint8_t seconds_value; - uint8_t minutes_flag; - uint8_t minutes_value; - uint8_t hours_flag; - uint8_t hours_value; - int32_t time_offset; -} H264RawSEIPicTimestamp; - -typedef struct H264RawSEIPicTiming { - uint32_t cpb_removal_delay; - uint32_t dpb_output_delay; - uint8_t pic_struct; - uint8_t clock_timestamp_flag[3]; - H264RawSEIPicTimestamp timestamp[3]; -} H264RawSEIPicTiming; - -typedef struct H264RawSEIPanScanRect { - uint32_t pan_scan_rect_id; - uint8_t pan_scan_rect_cancel_flag; - uint8_t pan_scan_cnt_minus1; - int32_t pan_scan_rect_left_offset[3]; - int32_t pan_scan_rect_right_offset[3]; - int32_t pan_scan_rect_top_offset[3]; - int32_t pan_scan_rect_bottom_offset[3]; - uint16_t pan_scan_rect_repetition_period; -} H264RawSEIPanScanRect; - -typedef struct H264RawSEIRecoveryPoint { - uint16_t recovery_frame_cnt; - uint8_t exact_match_flag; - uint8_t 
broken_link_flag; - uint8_t changing_slice_group_idc; -} H264RawSEIRecoveryPoint; - -typedef struct H264RawFilmGrainCharacteristics { - uint8_t film_grain_characteristics_cancel_flag; - uint8_t film_grain_model_id; - uint8_t separate_colour_description_present_flag; - uint8_t film_grain_bit_depth_luma_minus8; - uint8_t film_grain_bit_depth_chroma_minus8; - uint8_t film_grain_full_range_flag; - uint8_t film_grain_colour_primaries; - uint8_t film_grain_transfer_characteristics; - uint8_t film_grain_matrix_coefficients; - uint8_t blending_mode_id; - uint8_t log2_scale_factor; - uint8_t comp_model_present_flag[3]; - uint8_t num_intensity_intervals_minus1[3]; - uint8_t num_model_values_minus1[3]; - uint8_t intensity_interval_lower_bound[3][256]; - uint8_t intensity_interval_upper_bound[3][256]; - int16_t comp_model_value[3][256][6]; - uint8_t film_grain_characteristics_repetition_period; -} H264RawFilmGrainCharacteristics; - -typedef struct H264RawSEIDisplayOrientation { - uint8_t display_orientation_cancel_flag; - uint8_t hor_flip; - uint8_t ver_flip; - uint16_t anticlockwise_rotation; - uint16_t display_orientation_repetition_period; - uint8_t display_orientation_extension_flag; -} H264RawSEIDisplayOrientation; - -typedef struct H264RawSEI { - H264RawNALUnitHeader nal_unit_header; - SEIRawMessageList message_list; -} H264RawSEI; - -typedef struct H264RawSliceHeader { - H264RawNALUnitHeader nal_unit_header; - - uint32_t first_mb_in_slice; - uint8_t slice_type; - - uint8_t pic_parameter_set_id; - - uint8_t colour_plane_id; - - uint16_t frame_num; - uint8_t field_pic_flag; - uint8_t bottom_field_flag; - - uint16_t idr_pic_id; - - uint16_t pic_order_cnt_lsb; - int32_t delta_pic_order_cnt_bottom; - int32_t delta_pic_order_cnt[2]; - - uint8_t redundant_pic_cnt; - uint8_t direct_spatial_mv_pred_flag; - - uint8_t num_ref_idx_active_override_flag; - uint8_t num_ref_idx_l0_active_minus1; - uint8_t num_ref_idx_l1_active_minus1; - - uint8_t ref_pic_list_modification_flag_l0; - uint8_t ref_pic_list_modification_flag_l1; - struct { - uint8_t modification_of_pic_nums_idc; - int32_t abs_diff_pic_num_minus1; - uint8_t long_term_pic_num; - } rplm_l0[H264_MAX_RPLM_COUNT], rplm_l1[H264_MAX_RPLM_COUNT]; - - uint8_t luma_log2_weight_denom; - uint8_t chroma_log2_weight_denom; - - uint8_t luma_weight_l0_flag[H264_MAX_REFS]; - int8_t luma_weight_l0[H264_MAX_REFS]; - int8_t luma_offset_l0[H264_MAX_REFS]; - uint8_t chroma_weight_l0_flag[H264_MAX_REFS]; - int8_t chroma_weight_l0[H264_MAX_REFS][2]; - int8_t chroma_offset_l0[H264_MAX_REFS][2]; - - uint8_t luma_weight_l1_flag[H264_MAX_REFS]; - int8_t luma_weight_l1[H264_MAX_REFS]; - int8_t luma_offset_l1[H264_MAX_REFS]; - uint8_t chroma_weight_l1_flag[H264_MAX_REFS]; - int8_t chroma_weight_l1[H264_MAX_REFS][2]; - int8_t chroma_offset_l1[H264_MAX_REFS][2]; - - uint8_t no_output_of_prior_pics_flag; - uint8_t long_term_reference_flag; - - uint8_t adaptive_ref_pic_marking_mode_flag; - struct { - uint8_t memory_management_control_operation; - int32_t difference_of_pic_nums_minus1; - uint8_t long_term_pic_num; - uint8_t long_term_frame_idx; - uint8_t max_long_term_frame_idx_plus1; - } mmco[H264_MAX_MMCO_COUNT]; - - uint8_t cabac_init_idc; - - int8_t slice_qp_delta; - - uint8_t sp_for_switch_flag; - int8_t slice_qs_delta; - - uint8_t disable_deblocking_filter_idc; - int8_t slice_alpha_c0_offset_div2; - int8_t slice_beta_offset_div2; - - uint16_t slice_group_change_cycle; -} H264RawSliceHeader; - -typedef struct H264RawSlice { - H264RawSliceHeader header; - - uint8_t *data; - 
AVBufferRef *data_ref; - size_t data_size; - int data_bit_start; -} H264RawSlice; - -typedef struct H264RawFiller { - H264RawNALUnitHeader nal_unit_header; - - uint32_t filler_size; -} H264RawFiller; - - -typedef struct CodedBitstreamH264Context { - // Reader/writer context in common with the H.265 implementation. - CodedBitstreamH2645Context common; - - // All currently available parameter sets. These are updated when - // any parameter set NAL unit is read/written with this context. - AVBufferRef *sps_ref[H264_MAX_SPS_COUNT]; - AVBufferRef *pps_ref[H264_MAX_PPS_COUNT]; - H264RawSPS *sps[H264_MAX_SPS_COUNT]; - H264RawPPS *pps[H264_MAX_PPS_COUNT]; - - // The currently active parameter sets. These are updated when any - // NAL unit refers to the relevant parameter set. These pointers - // must also be present in the arrays above. - const H264RawSPS *active_sps; - const H264RawPPS *active_pps; - - // The NAL unit type of the most recent normal slice. This is required - // to be able to read/write auxiliary slices, because IdrPicFlag is - // otherwise unknown. - uint8_t last_slice_nal_unit_type; -} CodedBitstreamH264Context; - -#endif /* AVCODEC_CBS_H264_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dv.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dv.c deleted file mode 100644 index eb49978ad8f4d4695f172ac7403bd17b1f401077..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dv.c +++ /dev/null @@ -1,190 +0,0 @@ -/* - * DV decoder - * Copyright (c) 2002 Fabrice Bellard - * Copyright (c) 2004 Roman Shaposhnik - * - * DV encoder - * Copyright (c) 2003 Roman Shaposhnik - * - * 50 Mbps (DVCPRO50) support - * Copyright (c) 2006 Daniel Maas - * - * 100 Mbps (DVCPRO HD) support - * Initial code by Daniel Maas (funded by BBC R&D) - * Final code by Roman Shaposhnik - * - * Many thanks to Dan Dennedy for providing wealth - * of DV technical info. - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * DV codec. 
- */ - -#include - -#include "libavutil/pixfmt.h" - -#include "dv_internal.h" -#include "dv_profile.h" - -static inline void dv_calc_mb_coordinates(const AVDVProfile *d, int chan, - int seq, int slot, uint16_t *tbl) -{ - static const uint8_t off[] = { 2, 6, 8, 0, 4 }; - static const uint8_t shuf1[] = { 36, 18, 54, 0, 72 }; - static const uint8_t shuf2[] = { 24, 12, 36, 0, 48 }; - static const uint8_t shuf3[] = { 18, 9, 27, 0, 36 }; - - static const uint8_t l_start[] = { 0, 4, 9, 13, 18, 22, 27, 31, 36, 40 }; - static const uint8_t l_start_shuffled[] = { 9, 4, 13, 0, 18 }; - - static const uint8_t serpent1[] = { - 0, 1, 2, 2, 1, 0, - 0, 1, 2, 2, 1, 0, - 0, 1, 2, 2, 1, 0, - 0, 1, 2, 2, 1, 0, - 0, 1, 2 - }; - static const uint8_t serpent2[] = { - 0, 1, 2, 3, 4, 5, 5, 4, 3, 2, 1, 0, - 0, 1, 2, 3, 4, 5, 5, 4, 3, 2, 1, 0, - 0, 1, 2, 3, 4, 5 - }; - - static const uint8_t remap[][2] = { - { 0, 0 }, { 0, 0 }, { 0, 0 }, { 0, 0 }, /* dummy */ - { 0, 0 }, { 0, 1 }, { 0, 2 }, { 0, 3 }, { 10, 0 }, - { 10, 1 }, { 10, 2 }, { 10, 3 }, { 20, 0 }, { 20, 1 }, - { 20, 2 }, { 20, 3 }, { 30, 0 }, { 30, 1 }, { 30, 2 }, - { 30, 3 }, { 40, 0 }, { 40, 1 }, { 40, 2 }, { 40, 3 }, - { 50, 0 }, { 50, 1 }, { 50, 2 }, { 50, 3 }, { 60, 0 }, - { 60, 1 }, { 60, 2 }, { 60, 3 }, { 70, 0 }, { 70, 1 }, - { 70, 2 }, { 70, 3 }, { 0, 64 }, { 0, 65 }, { 0, 66 }, - { 10, 64 }, { 10, 65 }, { 10, 66 }, { 20, 64 }, { 20, 65 }, - { 20, 66 }, { 30, 64 }, { 30, 65 }, { 30, 66 }, { 40, 64 }, - { 40, 65 }, { 40, 66 }, { 50, 64 }, { 50, 65 }, { 50, 66 }, - { 60, 64 }, { 60, 65 }, { 60, 66 }, { 70, 64 }, { 70, 65 }, - { 70, 66 }, { 0, 67 }, { 20, 67 }, { 40, 67 }, { 60, 67 } - }; - - int i, k, m; - int x, y, blk; - - for (m = 0; m < 5; m++) { - switch (d->width) { - case 1440: - blk = (chan * 11 + seq) * 27 + slot; - - if (chan == 0 && seq == 11) { - x = m * 27 + slot; - if (x < 90) { - y = 0; - } else { - x = (x - 90) * 2; - y = 67; - } - } else { - i = (4 * chan + blk + off[m]) % 11; - k = (blk / 11) % 27; - - x = shuf1[m] + (chan & 1) * 9 + k % 9; - y = (i * 3 + k / 9) * 2 + (chan >> 1) + 1; - } - tbl[m] = (x << 1) | (y << 9); - break; - case 1280: - blk = (chan * 10 + seq) * 27 + slot; - - i = (4 * chan + (seq / 5) + 2 * blk + off[m]) % 10; - k = (blk / 5) % 27; - - x = shuf1[m] + (chan & 1) * 9 + k % 9; - y = (i * 3 + k / 9) * 2 + (chan >> 1) + 4; - - if (x >= 80) { - x = remap[y][0] + ((x - 80) << (y > 59)); - y = remap[y][1]; - } - tbl[m] = (x << 1) | (y << 9); - break; - case 960: - blk = (chan * 10 + seq) * 27 + slot; - - i = (4 * chan + (seq / 5) + 2 * blk + off[m]) % 10; - k = (blk / 5) % 27 + (i & 1) * 3; - - x = shuf2[m] + k % 6 + 6 * (chan & 1); - y = l_start[i] + k / 6 + 45 * (chan >> 1); - tbl[m] = (x << 1) | (y << 9); - break; - case 720: - switch (d->pix_fmt) { - case AV_PIX_FMT_YUV422P: - x = shuf3[m] + slot / 3; - y = serpent1[slot] + - ((((seq + off[m]) % d->difseg_size) << 1) + chan) * 3; - tbl[m] = (x << 1) | (y << 8); - break; - case AV_PIX_FMT_YUV420P: - x = shuf3[m] + slot / 3; - y = serpent1[slot] + - ((seq + off[m]) % d->difseg_size) * 3; - tbl[m] = (x << 1) | (y << 9); - break; - case AV_PIX_FMT_YUV411P: - i = (seq + off[m]) % d->difseg_size; - k = slot + ((m == 1 || m == 2) ? 
3 : 0); - - x = l_start_shuffled[m] + k / 6; - y = serpent2[k] + i * 6; - if (x > 21) - y = y * 2 - i * 6; - tbl[m] = (x << 2) | (y << 8); - break; - } - default: - break; - } - } -} - -int ff_dv_init_dynamic_tables(DVwork_chunk *work_chunks, const AVDVProfile *d) -{ - int j, i, c, s, p; - - p = i = 0; - for (c = 0; c < d->n_difchan; c++) { - for (s = 0; s < d->difseg_size; s++) { - p += 6; - for (j = 0; j < 27; j++) { - p += !(j % 3); - if (!(DV_PROFILE_IS_1080i50(d) && c != 0 && s == 11) && - !(DV_PROFILE_IS_720p50(d) && s > 9)) { - dv_calc_mb_coordinates(d, c, s, j, &work_chunks[i].mb_coordinates[0]); - work_chunks[i++].buf_offset = p; - } - p += 5; - } - } - } - - return 0; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hap.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hap.c deleted file mode 100644 index 1a330c9c9b3cc7e4faa5c45e8bc9ef2a98e38e50..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hap.c +++ /dev/null @@ -1,77 +0,0 @@ -/* - * Vidvox Hap utility functions - * Copyright (C) 2015 Tom Butterworth - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Hap utilities - */ -#include "hap.h" - -int ff_hap_set_chunk_count(HapContext *ctx, int count, int first_in_frame) -{ - int ret = 0; - if (first_in_frame == 1 && ctx->chunk_count != count) { - int ret = av_reallocp_array(&ctx->chunks, count, sizeof(HapChunk)); - if (ret == 0) - ret = av_reallocp_array(&ctx->chunk_results, count, sizeof(int)); - if (ret < 0) { - ctx->chunk_count = 0; - } else { - ctx->chunk_count = count; - } - } else if (ctx->chunk_count != count) { - /* If this is not the first chunk count calculated for a frame and a - * different count has already been encountered, then reject the frame: - * each table in the Decode Instructions Container must describe the - * same number of chunks. */ - ret = AVERROR_INVALIDDATA; - } - return ret; -} - -av_cold void ff_hap_free_context(HapContext *ctx) -{ - av_freep(&ctx->tex_buf); - av_freep(&ctx->chunks); - av_freep(&ctx->chunk_results); -} - -int ff_hap_parse_section_header(GetByteContext *gbc, int *section_size, - enum HapSectionType *section_type) -{ - if (bytestream2_get_bytes_left(gbc) < 4) - return AVERROR_INVALIDDATA; - - *section_size = bytestream2_get_le24(gbc); - *section_type = bytestream2_get_byte(gbc); - - if (*section_size == 0) { - if (bytestream2_get_bytes_left(gbc) < 4) - return AVERROR_INVALIDDATA; - - *section_size = bytestream2_get_le32(gbc); - } - - if (*section_size > bytestream2_get_bytes_left(gbc) || *section_size < 0) - return AVERROR_INVALIDDATA; - else - return 0; -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Build manage and expand your own zoo in Zoo 2 Animal Park. 
Download today!.md b/spaces/congsaPfin/Manga-OCR/logs/Build manage and expand your own zoo in Zoo 2 Animal Park. Download today!.md deleted file mode 100644 index b68961e25f9e8c477acb88f31af4892d3014ae86..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Build manage and expand your own zoo in Zoo 2 Animal Park. Download today!.md +++ /dev/null @@ -1,91 +0,0 @@ - -

Download Zoo 2 Animal Park: A Fun and Engaging Zoo Game for All Ages

-

Do you love animals and dream of running your own zoo? Do you enjoy playing simulation games with tycoon elements? Do you want to experience a charming game setting with a captivating story and plenty of entertaining quests? If you answered yes to any of these questions, then you should download Zoo 2 Animal Park, an amazing zoo game that will keep you hooked for hours.

-

What is Zoo 2 Animal Park?

-

Zoo 2 Animal Park is a game developed by upjers GmbH, a German company that specializes in creating browser and mobile games. It was released in 2018 and has since gained millions of fans worldwide. Zoo 2 Animal Park is a game that combines three genres: animal game, zookeeper simulation, and tycoon game. Here are some of the features that make this game so unique and appealing:

-

download zoo 2 animal park


Download Zip » https://urlca.com/2uO7Lt



-

A captivating animal game with a riveting story

-

In Zoo 2 Animal Park, you slip into the role of a zoo director who inherits a small and rundown zoo from your aunt. Your task is to save the zoo from being closed down by the mayor, who wants to build a supermarket on the land. To do this, you have to restore the zoo to its former glory, attract more visitors, and uncover the secrets of your family's past. Along the way, you will meet a quirky cast of characters who will help you or hinder you in your quest. You will also discover an ancient treasure that will change your life forever.

-

A zookeeper simulation with tycoon features

-

Zoo 2 Animal Park is not just about watching cute animals. It's also about managing your own business and making strategic decisions. As a zoo director, you have to build enclosures, clean paths, buy new animals, feed them, breed them, decorate your zoo, hire staff, and more. You also have to balance your income and expenses, set prices, and deal with customer feedback. You can unlock new items and features with every level up, and expand your zoo as you progress in the game.

-

A colorful and detailed game world with adorable animals

-

One of the most attractive aspects of Zoo 2 Animal Park is its graphics and animations. The game boasts a vibrant and realistic game world that will immerse you in the atmosphere of a zoo. You can zoom in and out, rotate the camera, and explore every corner of your park. You can also interact with your animals, pet them, play with them, and watch them behave in different ways. The game features over 200 animal species, from domestic animals like goats and rabbits, to exotic animals like pandas and lions. Each animal has its own personality, needs, and preferences. You can also breed rare and special animals with different coat colors and patterns.

-

How to play Zoo 2 Animal Park?

-

Zoo 2 Animal Park is easy to play and suitable for all ages. You can play it on your browser or download it as an app on your mobile device or tablet. The game has a tutorial that will guide you through the basics of the game. You can also access the help menu anytime if you need more information or tips. Here are some of the main activities that you can do in the game:

-

Build and customize your own zoo

-

Your zoo is your canvas, and you can design it however you want. You can choose from a variety of enclosures, shops, restrooms, benches, flowers, bushes, trees, decorations, and more. You can also change the layout, the theme, and the name of your zoo. You can also unlock new areas and biomes as you level up, such as the savanna, the jungle, and the arctic. You can also visit other players' zoos and rate them.

-

Take care of your animals and breed cute babies

-

Your animals are the heart of your zoo, and you have to make sure they are happy and healthy. You can feed them, clean them, cure them, and watch them grow. You can also breed them to create new generations of animals. Breeding can result in adorable baby animals that will attract more visitors to your zoo. You can also collect animal cards and use them to unlock new species or upgrade your existing ones.

-

Complete quests and events for rewards and achievements

-

Zoo 2 Animal Park is not just a sandbox game. It also has a lot of goals and challenges that will keep you motivated and entertained. You can follow the main story line and complete quests given by different characters. You can also participate in daily and weekly tasks, special events, and competitions. Completing these activities will reward you with coins, diamonds, experience points, animal cards, decorations, and more. You can also earn achievements and trophies for reaching certain milestones or performing specific actions in the game.

-

Why download Zoo 2 Animal Park?

-

Zoo 2 Animal Park is a game that has something for everyone. Whether you are a casual gamer or a hardcore fan of zoo games, you will find plenty of reasons to download this game and enjoy it. Here are some of the benefits of playing Zoo 2 Animal Park:

-

download zoo 2 animal park app
-download zoo 2 animal park game for pc
-download zoo 2 animal park mod apk
-download zoo 2 animal park hack
-download zoo 2 animal park cheats
-download zoo 2 animal park online
-download zoo 2 animal park free
-download zoo 2 animal park for android
-download zoo 2 animal park for ios
-download zoo 2 animal park for windows
-download zoo 2 animal park steam
-download zoo 2 animal park browser game
-download zoo 2 animal park simulation game
-download zoo 2 animal park tycoon game
-download zoo 2 animal park upjers game
-download zoo 2 animal park latest version
-download zoo 2 animal park update
-download zoo 2 animal park new animals
-download zoo 2 animal park tips and tricks
-download zoo 2 animal park guide
-download zoo 2 animal park walkthrough
-download zoo 2 animal park review
-download zoo 2 animal park gameplay
-download zoo 2 animal park trailer
-download zoo 2 animal park video
-download zoo 2 animal park wiki
-download zoo 2 animal park reddit
-download zoo 2 animal park forum
-download zoo 2 animal park facebook
-download zoo 2 animal park instagram
-download zoo 2 animal park twitter
-download zoo 2 animal park youtube
-download zoo 2 animal park google play store
-download zoo 2 animal park app store
-download zoo 2 animal park amazon appstore
-download zoo 2 animal park microsoft store
-download zoo 2 animal park steam store
-download zoo 2 animal park official website
-download zoo 2 animal park support page
-download zoo 2 animal park contact page
-download zoo 2 animal park faq page
-download zoo 2 animal park privacy policy page
-download zoo 2 animal park terms of service page
-download zoo 2 animal park how to play page
-download zoo 2 animal park how to breed animals page
-download zoo 2 animal park how to decorate your zoo page
-download zoo 2 animal park how to earn diamonds page

-

It's free to play and available on multiple platforms

-

Zoo 2 Animal Park is a game that you can play for free without any hidden costs or fees. You can download it as an app on your Android or iOS device, or play it on your browser on your PC or laptop. You can also sync your progress across different devices and platforms with your upjers account. You can also play offline without an internet connection, as long as you save your game before exiting.

-

It's fun and educational for kids and adults alike

-

Zoo 2 Animal Park is a game that is suitable for all ages and backgrounds. It's fun and easy to play, but also challenging and rewarding. It's also educational and informative, as it teaches you about different animals, their habitats, their behaviors, their diets, their conservation status, and more. You can also learn about zoo management, business skills, customer service, and environmental issues. You can also chat with other players, join clubs, make friends, and share tips and tricks.

-

It's constantly updated with new content and features

-

Zoo 2 Animal Park is a game that never gets boring or stale. The developers are always working on adding new content and features to the game, such as new animals, new enclosures, new decorations, new quests, new events, new competitions, new biomes, new stories, and more. You can always expect something new and exciting every time you log in to the game.

-

Conclusion

-

Zoo 2 Animal Park is a game that will make you fall in love with animals and zoos. It's a game that will let you create your own zoo paradise and take care of hundreds of adorable animals. It's a game that will challenge you with quests and events and reward you with coins and diamonds. It's a game that will entertain you with its graphics and animations and immerse you in its story and characters. It's a game that you should download today if you want to have fun and learn something new.

-

FAQs

-

Here are some of the frequently asked questions about Zoo 2 Animal Park:

-

How do I get more coins and diamonds?

-

Coins and diamonds are the two main currencies in Zoo 2 Animal Park. You can earn coins by collecting entrance fees from visitors, completing quests and tasks, selling items or animals, participating in events or competitions, or watching ads. You can earn diamonds by leveling up, completing achievements or trophies, opening chests or card packs, participating in events or competitions, or buying them with real money.

-

How do I get more animal cards?

-

Animal cards are used to unlock new animal species or upgrade your existing ones. You can get animal cards by opening chests or card packs that you get from completing quests or tasks, participating in events or competitions, or buying them with real money. You can also trade animal cards with other players or your club members.

-

How do I breed animals?

-

Breeding animals is one of the most fun and rewarding aspects of Zoo 2 Animal Park. To breed animals, you need to have a male and a female of the same species in the same enclosure. You also need to make sure they are happy, healthy, and well-fed. You can then tap on the heart icon above their heads and start the breeding process. The breeding time will vary depending on the animal species. Once the breeding is done, you will get a baby animal that you can name and place in your zoo.

-

How do I join a club?

-

A club is a group of players who share a common interest in Zoo 2 Animal Park. Joining a club can give you many benefits, such as chatting with other players, trading animal cards, participating in club competitions, and getting club rewards. To join a club, you need to be at least level 10 in the game. You can then search for a club that suits your preferences, or create your own club if you have enough diamonds. You can also invite your friends to join your club or accept invitations from other clubs.

-

How do I contact the support team?

-

If you have any questions, problems, or feedback about Zoo 2 Animal Park, you can contact the support team by tapping on the settings icon in the game and then tapping on the help button. You can then fill out a form with your details and your message, and send it to the support team. You can also check the FAQ section or the forum for more information and tips.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download AI Photo Enhancer Mod APK Latest Version and Transform Your Photos with Artificial Intelligence.md b/spaces/congsaPfin/Manga-OCR/logs/Download AI Photo Enhancer Mod APK Latest Version and Transform Your Photos with Artificial Intelligence.md deleted file mode 100644 index 66b32c933f8b9f1eb47069d353ab3b44b37cab8c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download AI Photo Enhancer Mod APK Latest Version and Transform Your Photos with Artificial Intelligence.md +++ /dev/null @@ -1,123 +0,0 @@ -
-

AI Photo Enhancer Mod APK Latest Version: How to Download and Use It

-

Do you want to make your photos look stunning with just a few taps? Do you want to access all the premium features of a powerful photo editing app for free? If yes, then you should try AI Photo Enhancer Mod APK latest version. In this article, we will tell you what AI Photo Enhancer is, what Mod APK is, how to download and install it, and how to use it. Read on to find out more.

-

ai photo enhancer mod apk latest version


Download ===> https://urlca.com/2uO8jZ



-

What is AI Photo Enhancer?

-

AI Photo Enhancer is a photo editing app that uses artificial intelligence (AI) to automatically enhance your photos. It can improve the brightness, contrast, color, sharpness, and details of your photos in seconds. It can also remove noise, blur, and artifacts from your photos and make them look clearer and smoother. You can also apply various filters, effects, stickers, frames, and text to your photos to make them more creative and fun.

-

The benefits of using AI Photo Enhancer

-

Some of the benefits of using AI Photo Enhancer are:

-
    -
  • It saves you time and effort. You don't have to manually adjust the settings or use multiple apps to edit your photos. Just let the AI do the work for you.
  • -
  • It gives you professional results. You can get high-quality photos that look like they were taken by a professional camera or edited by a professional editor.
  • -
  • It works on any photo. You can enhance any photo, whether it's taken in low light, high contrast, or poor quality.
  • -
  • It's easy to use. You don't need any technical skills or experience to use this app. Just choose a photo and tap the enhance button.
  • -
-

The features of AI Photo Enhancer

-

Some of the features of AI Photo Enhancer are:

-
    -
  • AI enhancement. It automatically analyzes your photo and applies the best enhancement for it.
  • -
  • Manual adjustment. You can also adjust the brightness, contrast, saturation, temperature, tint, exposure, highlights, shadows, clarity, sharpness, noise reduction, and vignette of your photo manually.
  • -
  • Filters and effects. You can choose from over 100 filters and effects to give your photo a different mood or style.
  • -
  • Stickers and frames. You can add various stickers and frames to your photo to make it more fun and cute.
  • -
  • Text and fonts. You can add text to your photo and choose from over 50 fonts to customize it.
  • -
  • Crop and rotate. You can crop and rotate your photo to fit any size or angle.
  • -
  • Share and save. You can share your enhanced photo with your friends on social media or save it to your device in high resolution.
  • -
-

What is Mod APK?

-

Mod APK is a modified version of an original APK (Android Package Kit) file. It is created by third-party developers who modify the original app to unlock some features that are normally paid or restricted. For example, a Mod APK may allow you to access all the premium features of an app for free, remove ads from an app, or add some extra functions or content to an app.

The advantages of using Mod APK -

Some of the advantages of using Mod APK are:

-
    -
  • You can access all the premium features of an app for free. For example, you can use AI Photo Enhancer Mod APK to unlock all the filters, effects, stickers, frames, and fonts that are normally paid or limited.
  • -
  • You can enjoy an ad-free experience. Mod APKs usually remove the annoying ads that interrupt your app usage or consume your data.
  • -
  • You can customize your app according to your preferences. Mod APKs may offer some extra options or settings that are not available in the original app.
  • -
-

The risks of using Mod APK

-

However, using Mod APK also comes with some risks that you should be aware of:

-

ai photo enhance mod apk v1.138 (unlocked)
-ai photo enhancer pro mod apk free download
-ai photo enhance premium mod apk latest
-ai photo enhancer mod apk full version
-ai photo enhance cracked mod apk 2023
-ai photo enhancer hack mod apk unlimited
-ai photo enhance modded apk no watermark
-ai photo enhancer vip mod apk android
-ai photo enhance ad-free mod apk online
-ai photo enhance mod apk with all features
-download ai photo enhancer mod apk for pc
-install ai photo enhancer mod apk on mac
-how to use ai photo enhancer mod apk offline
-best ai photo enhancer mod apk review
-ai photo enhancer mod apk tutorial video
-ai photo enhance app mod apk update
-ai photo enhance software mod apk new
-ai photo enhance tool mod apk 2023
-ai photo enhance editor mod apk latest version
-ai photo enhance filter mod apk download
-ai photo enhance effect mod apk free
-ai photo enhance style mod apk premium
-ai photo enhance theme mod apk unlocked
-ai photo enhance sticker mod apk full
-ai photo enhance frame mod apk cracked
-ai photo enhance collage mod apk hack
-ai photo enhance layout mod apk modded
-ai photo enhance grid mod apk no ads
-ai photo enhance background mod apk vip
-ai photo enhance blur mod apk pro
-ai photo enhance crop mod apk with crack
-ai photo enhance rotate mod apk without watermark
-ai photo enhance resize mod apk android 11
-ai photo enhance flip mod apk online free
-ai photo enhance mirror mod apk latest update
-ai photo enhance adjust mod apk for windows 10
-ai photo enhance brightness mod apk on macbook pro
-ai photo enhance contrast mod apk offline mode
-ai photo enhance saturation mod apk review 2023
-ai photo enhance sharpness mod apk video guide
-ai photo enhance noise reduction mod apk app store
-ai photo enhance color correction mod apk software download
-ai photo enhance hdr mod apk tool 2023
-ai photo enhance panorama mod apk editor latest version
-ai photo enhance portrait mode mod apk filter free download
-ai photo enhance selfie mode mod apk effect premium version
-ai photo enhance beauty mode mod apk style unlocked version
-ai photo enhance cartoon mode mod apk theme full version
-ai photo enhance sketch mode mod apk sticker cracked version

-
    -
  • You may violate the terms and conditions of the original app developer. Mod APKs are not authorized or endorsed by the original app developer, so you may face legal issues or penalties if you use them.
  • -
  • You may compromise your device's security and privacy. Mod APKs may contain malware, spyware, or viruses that can harm your device or steal your personal information. You should always scan the Mod APK files before installing them to make sure they are safe.
  • -
  • You may not receive updates or support from the original app developer. Mod APKs are not compatible with the official updates or patches of the original app, so you may miss out on some bug fixes or new features. You may also not get any help or assistance from the original app developer if you encounter any problems with the Mod APK.
  • -
-

How to download and install AI Photo Enhancer Mod APK latest version

-

If you want to download and install AI Photo Enhancer Mod APK latest version, you need to follow these steps:

-

Step 1: Find a reliable source

-

The first step is to find a reliable source that offers the AI Photo Enhancer Mod APK latest version file. You can search online for some reputable sites that specialize in providing Mod APK files, such as APKMirror or APKPure. You can also check the reviews and ratings of other users to see if they have had a good experience with the site.

-

Step 2: Enable unknown sources on your device

-

The next step is to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may see a warning message that tells you about the risks of installing apps from unknown sources. Tap OK to proceed.

-

Step 3: Download and install the Mod APK file

-

The final step is to download and install the Mod APK file. Go to the site where you found the AI Photo Enhancer Mod APK latest version file and tap on the download button. Wait for the file to be downloaded to your device. Then, open the file manager app on your device and locate the downloaded file. Tap on it and follow the instructions on the screen to install it. You may see a prompt that asks you to grant permissions to the app. Tap Allow or Accept to continue.

How to use AI Photo Enhancer Mod APK latest version

-

Now that you have downloaded and installed AI Photo Enhancer Mod APK latest version, you can start using it to enhance your photos. Here are the steps to use it:

-

Step 1: Launch the app and grant permissions

-

The first step is to launch the app and grant permissions. Tap on the app icon on your device's home screen or app drawer to open it. You may see a welcome screen that introduces you to the app and its features. Tap Next to continue. You may also see some pop-ups that ask you to grant permissions to the app, such as access to your camera, storage, and location. Tap Allow or Accept to grant them.

-

Step 2: Choose a photo from your gallery or take a new one

-

The next step is to choose a photo from your gallery or take a new one. You can see two options at the bottom of the screen: Gallery and Camera. Tap on Gallery to select a photo from your device's storage. You can browse through your albums and folders and tap on the photo you want to edit. Alternatively, you can tap on Camera to take a new photo with your device's camera. You can use the flash, zoom, and timer options to adjust your shot. Tap on the shutter button to capture the photo.

-

Step 3: Apply the AI enhancement and adjust the settings

-

The third step is to apply the AI enhancement and adjust the settings. You can see a slider at the bottom of the screen that says AI Enhancement. Drag it to the right to apply the AI enhancement to your photo. You can see how your photo changes in real time as you move the slider. You can also tap on the Auto button to let the app choose the best enhancement level for your photo. You can also adjust other settings manually by tapping on the icons at the top of the screen. You can change the brightness, contrast, saturation, temperature, tint, exposure, highlights, shadows, clarity, sharpness, noise reduction, and vignette of your photo by moving the sliders or tapping on the presets.
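If you are curious what sliders such as brightness and contrast actually do to the pixels, the following is a minimal, generic C sketch of that kind of adjustment. It is only an illustration of the idea, not code from AI Photo Enhancer, and the function and parameter names are invented for the example.

```c
#include <stddef.h>
#include <stdint.h>

/* Clamp an intermediate value back into the valid 8-bit range. */
static uint8_t clamp_u8(int v)
{
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return (uint8_t)v;
}

/*
 * Apply a simple brightness/contrast pass in place.
 *   pixels     - interleaved 8-bit RGB data, 3 bytes per pixel
 *   count      - number of pixels
 *   brightness - value added to every channel, e.g. -64 .. +64
 *   contrast   - multiplier around mid-grey (128), e.g. 0.5 .. 2.0
 */
void adjust_brightness_contrast(uint8_t *pixels, size_t count,
                                int brightness, float contrast)
{
    for (size_t i = 0; i < count * 3; i++) {
        int v = (int)((pixels[i] - 128) * contrast + 128.0f) + brightness;
        pixels[i] = clamp_u8(v);
    }
}
```

Real editors chain many such passes (saturation, sharpening, noise reduction), and the app's "AI enhancement" presumably replaces fixed formulas like this with a learned model that chooses the adjustments for you.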

-

Step 4: Save and share your enhanced photo

-

The final step is to save and share your enhanced photo. You can see a Save button at the top right corner of the screen. Tap on it to save your photo to your device's storage. You can choose the quality and format of your photo before saving it. You can also see a Share button next to the Save button. Tap on it to share your photo with your friends on social media or other apps. You can choose from various platforms such as Facebook, Instagram, WhatsApp, Twitter, and more.

-

Conclusion

-

In this article, we have shown you how to download and use AI Photo Enhancer Mod APK latest version. This is a powerful photo editing app that uses artificial intelligence to automatically enhance your photos. It also offers various filters, effects, stickers, frames, and text options to make your photos more creative and fun. However, you should also be careful about using Mod APKs as they may violate the terms and conditions of the original app developer and compromise your device's security and privacy. We hope you found this article helpful and informative.

-

FAQs

-
    -
  • Q: Is AI Photo Enhancer Mod APK safe to use?
  • -
  • A: AI Photo Enhancer Mod APK is not an official app from the original app developer, so it may not be safe to use. It may contain malware, spyware, or viruses that can harm your device or steal your personal information. You should always scan the Mod APK files before installing them to make sure they are safe.
  • -
  • Q: Is AI Photo Enhancer Mod APK legal to use?
  • -
  • A: AI Photo Enhancer Mod APK is not authorized or endorsed by the original app developer, so it may not be legal to use. It may violate the terms and conditions of the original app developer and cause legal issues or penalties if you use it.
  • -
  • Q: How can I update AI Photo Enhancer Mod APK?
  • -
  • A: AI Photo Enhancer Mod APK is not compatible with the official updates or patches of the original app, so you may not be able to update it through the Google Play Store or other sources. You may have to find a newer version of the Mod APK file from another site and install it manually.
  • -
  • Q: How can I uninstall AI Photo Enhancer Mod APK?
  • -
  • A: You can uninstall AI Photo Enhancer Mod APK like any other app on your device. Go to Settings > Apps > AI Photo Enhancer > Uninstall and tap OK to confirm.
  • -
  • Q: How can I contact AI Photo Enhancer developer for support or feedback?
  • -
  • A: You can't contact the original app developer for support or feedback as they are not responsible for the Mod APK. You may have to contact the Mod APK developer or the site where you downloaded the Mod APK file for any issues or suggestions.
  • -

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Swapper Tools APK and Boost Your Phones Performance.md b/spaces/congsaPfin/Manga-OCR/logs/Download Swapper Tools APK and Boost Your Phones Performance.md deleted file mode 100644 index 0f9c0a44c28a3a0a45047ed1bdb520f52894c228..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Swapper Tools APK and Boost Your Phones Performance.md +++ /dev/null @@ -1,138 +0,0 @@ -
-

Swapper & Tools APK Indir: How to Optimize Your Phone's Storage and Memory

-

Do you want to improve your phone's performance and speed up your apps? Do you want to free up some space on your internal storage or SD card? Do you want to avoid low memory errors and crashes? If you answered yes to any of these questions, then you might want to try Swapper & Tools APK, a free application that lets you create, add and manage swap memory file without swap partition on SD. In this article, we will explain what Swapper & Tools APK is, how to download and install it, how to use it, and what are its pros and cons. We will also suggest some alternatives to Swapper & Tools APK in case you are looking for more options.

-

What is Swapper & Tools APK?

-

Swapper & Tools APK is an Android application that allows you to create a swap file on your phone's storage and use it as virtual memory. Swap file is a file that acts as an extension of your RAM (Random Access Memory), which is the memory that your phone uses to run apps and processes. When your RAM is full, your phone will start swapping some data from RAM to swap file, and vice versa, to free up some space. This way, you can increase your RAM capacity and avoid low memory errors and crashes.
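To make the relationship between RAM and swap concrete, here is a small C sketch that prints the totals the kernel reports on Linux-based systems such as Android through the sysinfo(2) call. This is only an illustration of where such figures come from; it is not part of Swapper & Tools APK.

```c
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;

    if (sysinfo(&si) != 0) {
        perror("sysinfo");
        return 1;
    }

    /* mem_unit gives the size in bytes of the units used by the fields. */
    unsigned long long unit = si.mem_unit;

    printf("RAM  total: %llu MiB, free: %llu MiB\n",
           si.totalram  * unit / (1024 * 1024),
           si.freeram   * unit / (1024 * 1024));
    printf("Swap total: %llu MiB, free: %llu MiB\n",
           si.totalswap * unit / (1024 * 1024),
           si.freeswap  * unit / (1024 * 1024));

    /* When free RAM runs low, the kernel moves idle pages to swap;
     * with no swap configured, allocations simply start to fail. */
    return 0;
}
```

On most phones the swap total is simply zero. In effect, what a tool like this does is raise that figure by formatting a regular file as swap space and registering it with the kernel, the job normally done by mkswap and the swapon call on desktop Linux; how exactly the app achieves the equivalent without root is not documented here.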

-

swapper tools apk indir


Download https://urlca.com/2uO8MB



-

Features and benefits of Swapper & Tools APK

-

Some of the features and benefits of Swapper & Tools APK are:

-
    -
  • It does not require root access or swap partition on SD card.
  • -
  • It supports Android 1.6 and higher versions.
  • -
  • It allows you to create swap file up to 4 GB in size.
  • -
  • It allows you to adjust swap file size and priority according to your needs.
  • -
  • It provides a widget that shows swap usage and performance.
  • -
  • It improves your phone's performance and speed by increasing RAM capacity.
  • -
  • It frees up some space on your internal storage or SD card by moving some data to swap file.
  • -
  • It reduces lags, freezes, crashes, and out of memory errors.
  • -
-

How to download and install Swapper & Tools APK

-

To download and install Swapper & Tools APK, you can follow these steps:

-
    -
  1. Go to [Swapper & Tools APK](^1^) website and click on the download button.
  2. -
  3. Wait for the download to finish and then open the downloaded file.
  4. -
  5. If prompted, enable unknown sources in your phone's settings.
  6. -
  7. Follow the instructions on the screen to install the app.
  8. -
  9. Launch the app and grant the necessary permissions.
  10. -
-

How to use Swapper & Tools APK

-

How to create and manage swap memory file

-

To create and manage swap memory file using Swapper & Tools APK, you can follow these steps:

-
    -
  1. Open the app and tap on the "Create Swap File" button.
  2. -
  3. Select the location where you want to create the swap file (internal storage or SD card).
  4. -
  5. Select the size of the swap file (from 32 MB to 4 GB).
  6. -
  7. Tap on the "Create" button and wait for the process to finish.
  8. -
  9. Tap on the "Activate Swap File" button to enable swap memory.
  10. -
  11. To deactivate swap memory, tap on the "Deactivate Swap File" button.
  12. -
  13. To delete swap file, tap on the "Delete Swap File" button and confirm your choice.
  14. -
-

How to adjust swap file size and priority

-

To adjust swap file size and priority using Swapper & Tools APK, you can follow these steps:

-
    -
  1. Open the app and tap on the "Settings" button.
  2. -
  3. Tap on the "Swap File Size" option and select the desired size (from 32 MB to 4 GB).
  4. -
  5. Tap on the "Swap File Priority" option and select the desired priority (from -20 to 19).
  6. -
  7. Tap on the "Apply" button to save the changes.
  8. -
-

How to monitor swap usage and performance

-

To monitor swap usage and performance using Swapper & Tools APK, you can follow these steps:

-
    -
  1. Open the app and tap on the "Widget" button.
  2. -
  3. Select the widget size and style that you prefer.
  4. -
  5. Add the widget to your home screen by dragging and dropping it.
  6. -
  7. The widget will show you the swap usage, free memory, total memory, and swap speed.
  8. -
  9. You can tap on the widget to open the app or refresh the data.
  10. -
-

Pros and cons of Swapper & Tools APK

-

Pros of Swapper & Tools APK

-

Some of the pros of Swapper & Tools APK are:

-

swapper tools apk download free
-swapper tools apk latest version
-swapper tools apk for android
-swapper tools apk filehippo
-swapper tools apk combo
-swapper tools apk mod
-swapper tools apk pro
-swapper tools apk premium
-swapper tools apk full
-swapper tools apk cracked
-swapper tools apk indir gezginler
-swapper tools apk indir uptodown
-swapper tools apk indir apkpure
-swapper tools apk indir android oyun club
-swapper tools apk indir son sürüm
-swapper tools apk indir hileli
-swapper tools apk indir ücretsiz
-swapper tools apk indir güncel
-swapper tools apk indir türkçe
-swapper tools apk indir yandex disk
-swapper tools app download
-swapper tools app review
-swapper tools app for pc
-swapper tools app alternative
-swapper tools app tutorial
-swapper tools app not working
-swapper tools app settings
-swapper tools app features
-swapper tools app benefits
-swapper tools app problems
-how to use swapper tools apk
-how to install swapper tools apk
-how to uninstall swapper tools apk
-how to update swapper tools apk
-how to download swapper tools apk on pc
-how to download swapper tools apk on ios
-how to download swapper tools apk on mac
-how to download swapper tools apk on windows 10
-how to download swapper tools apk on chromebook
-how to download swapper tools apk on firestick
-what is swapper tools apk
-what does swapper tools apk do
-what is the best alternative to swapper tools apk
-what are the advantages of using swapper tools apk
-what are the disadvantages of using swapper tools apk
-what are the requirements for using swapper tools apk
-what are the permissions needed for using swapper tools apk
-what are the risks of using swapper tools apk
-what are the reviews of using swapper tools apk

-
    -
  • It is free and easy to use.
  • -
  • It does not require root access or swap partition on SD card.
  • -
  • It supports Android 1.6 and higher versions.
  • -
  • It allows you to create swap file up to 4 GB in size.
  • -
  • It allows you to adjust swap file size and priority according to your needs.
  • -
  • It provides a widget that shows swap usage and performance.
  • -
  • It improves your phone's performance and speed by increasing RAM capacity.
  • -
  • It frees up some space on your internal storage or SD card by moving some data to swap file.
  • -
  • It reduces lags, freezes, crashes, and out of memory errors.
  • -
-

Cons of Swapper & Tools APK

-

Some of the cons of Swapper & Tools APK are:

-
    -
  • It may not work on some devices or ROMs.
  • -
  • It may cause some compatibility issues with some apps or games.
  • -
  • It may increase battery consumption and wear out your storage device faster.
  • -
  • It may not provide significant improvement if your phone already has enough RAM or storage space.
  • -
-

Alternatives to Swapper & Tools APK

-

ROEHSOFT RAM Expander (SWAP)

-

If you are looking for an alternative to Swapper & Tools APK, you can try ROEHSOFT RAM Expander (SWAP), a paid app that lets you use your SD card as RAM. It claims to be the most powerful tool for increasing RAM capacity. It requires root access and supports Android 1.6 and higher versions. It allows you to create swap file up to 4 GB in size. It also provides a widget that shows RAM usage and performance. It costs $9.99 on Google Play Store.

-

Memory Booster - RAM Optimizer

-

Another alternative to Swapper & Tools APK is Memory Booster - RAM Optimizer, a free app that lets you optimize your phone's memory by cleaning up background processes, cache, and junk files. It does not require root access or swap partition on SD card. It supports Android 4.1 and higher versions. It allows you to boost your phone's speed and performance by freeing up RAM space. It also provides a widget that shows memory usage and performance. It is available for free on Google Play Store.

-

RAM Manager Pro

-

A third alternative to Swapper & Tools APK is RAM Manager Pro, a paid app that lets you manage your phone's memory by adjusting various settings and parameters. It requires root access and supports Android 4.0.3 and higher versions. It allows you to optimize your phone's performance and battery life by balancing RAM usage. It also provides a widget that shows memory usage and performance. It costs $3.99 on Google Play Store.

-

Conclusion and FAQs

-

In conclusion, Swapper & Tools APK is a free application that lets you create, activate, and manage a swap file on your Android device without root access or a dedicated swap partition. By using part of your storage as virtual memory, it can increase your effective RAM, speed up your apps, and reduce low-memory errors and crashes. It is not without downsides: it may not work on every device or ROM, it can increase battery drain and storage wear, and it brings little benefit if your phone already has plenty of RAM. If it does not suit you, alternatives such as ROEHSOFT RAM Expander (SWAP), Memory Booster - RAM Optimizer, and RAM Manager Pro offer similar functionality. We hope this guide has helped you decide whether Swapper & Tools APK is the right tool for optimizing your phone's storage and memory.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enter the 2nd Dimension with Stick RPG 2 Directors Cut.md b/spaces/congsaPfin/Manga-OCR/logs/Enter the 2nd Dimension with Stick RPG 2 Directors Cut.md deleted file mode 100644 index bbdc189464f68c8bc231cc3e165fcd1238aa5829..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Enter the 2nd Dimension with Stick RPG 2 Directors Cut.md +++ /dev/null @@ -1,94 +0,0 @@ - -

Stick RPG Download: How to Play the Classic Game on Your PC

-

Do you remember playing Stick RPG, the hilarious and addictive game where you control a stickman in a 2D world? If you do, you might be wondering how you can relive those memories on your PC. If you don't, you might be curious about what this game is all about. In this article, we will tell you everything you need to know about Stick RPG, why you should download it, how to download it, how to install and run it, and how to enjoy it to the fullest. So, let's get started!

-

stick rpg download


Download Zip » https://urlca.com/2uOdrh



-

What is Stick RPG?

-

A brief introduction to the game and its features

-

Stick RPG is a role-playing game developed by XGen Studios in 2003. It is one of the most popular stickman games ever created, with millions of fans around the world. The game is simple but fun, with a lot of humor and satire. You can do almost anything you want in the game, from working, studying, fighting, gambling, sleeping, eating, drinking, to collecting items, investing money, completing quests, and more. You can also interact with other stickmen, who have different personalities and reactions. The game has a lot of content and variety, with different careers, weapons, locations, events, and endings.

-

The plot and the gameplay of Stick RPG

-

The game starts with you falling into a strange city called Paper Thin City. You don't know who you are or where you came from. You only have $100 in your pocket and a small apartment. Your goal is to find your way back home, or make a new life for yourself in this city. You can choose your own path in life, whether you want to be a good citizen or a bad one, a rich tycoon or a poor bum, a smart scholar or a dumb brute, etc. You can also influence the fate of the city and its inhabitants by your actions.

-

The gameplay of Stick RPG is based on three main stats: Strength, Intelligence, and Charm. These stats affect your performance in various aspects of the game, such as fighting, working, studying, etc. You can increase your stats by training at the gym, reading books at the library, taking classes at the university, etc. You also have other attributes that affect your gameplay, such as Health, Hunger, Karma, Money, etc. You need to manage these attributes well by eating food, sleeping regularly, doing good deeds or bad deeds, earning money or spending money, etc.
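Purely as an illustration of how a stat-and-attribute system like this might be organized in code (this is not Stick RPG's actual code; every name below is made up for the example):

```c
/* Toy model of a Stick RPG-style character, for illustration only. */
typedef struct {
    int strength;      /* raised at the gym, used when fighting     */
    int intelligence;  /* raised by reading and taking classes      */
    int charm;         /* helps with jobs and social interactions   */

    int health;        /* drops when hurt or starving               */
    int hunger;        /* rises over time, lowered by eating        */
    int karma;         /* good deeds raise it, bad deeds lower it   */
    long money;        /* earned from work, spent on food and items */
} Character;

/* One end-of-day tick: hunger builds up and, if ignored, costs health. */
void end_of_day(Character *c)
{
    c->hunger += 10;
    if (c->hunger > 100)
        c->health -= c->hunger - 100;
}
```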

-

Why Should You Download Stick RPG?

-

The benefits of playing Stick RPG on your PC

-

There are many reasons why you should download Stick RPG and play it on your PC. Here are some of them:

- You can play offline, without needing a stable internet connection.
- You can play without ads and pop-ups interrupting the game or slowing down your device.
- You can play in a full-size window, which makes the game easier to see and more comfortable.
- You don't have to register an account or log in every time you want to play.
- You can save your progress locally and load it whenever you want.

The drawbacks of playing Stick RPG online or on mobile devices -

On the other hand, there are some disadvantages of playing Stick RPG online or on mobile devices. Here are some of them:

- You need a stable internet connection to play the game online, which might not be available everywhere or anytime.
- You have to deal with annoying ads and pop-ups that might distract you from the game or slow down your device.
- You have to play the game in a small window or screen, which might affect your visibility and comfort.
- You have to register an account and log in every time you want to play the game online, which might be inconvenient or risky.
- You have limited options to save your progress and load it later, which might cause you to lose your data or have to start over.

How to Download Stick RPG for Free?

-

The steps to download Stick RPG from the Internet Archive

-

One of the easiest ways to download Stick RPG for free is from the Internet Archive, a website that preserves and provides access to digital content. Here are the steps to do so:

-

stick rpg complete download
-stick rpg 2 director's cut download
-stick rpg 2 free download
-stick rpg complete online
-stick rpg 2 online
-stick rpg complete cheats
-stick rpg 2 cheats
-stick rpg complete unblocked
-stick rpg 2 unblocked
-stick rpg complete walkthrough
-stick rpg 2 walkthrough
-stick rpg complete hacked
-stick rpg 2 hacked
-stick rpg complete endings
-stick rpg 2 endings
-stick rpg complete steam
-stick rpg 2 steam
-stick rpg complete archive.org
-stick rpg 2 crazy games
-stick rpg complete wiki
-stick rpg 2 wiki
-stick rpg complete mods
-stick rpg 2 mods
-stick rpg complete android
-stick rpg 2 android
-stick rpg complete apk
-stick rpg 2 apk
-stick rpg complete mac
-stick rpg 2 mac
-stick rpg complete windows 10
-stick rpg 2 windows 10
-stick rpg complete review
-stick rpg 2 review
-stick rpg complete gameplay
-stick rpg 2 gameplay
-stick rpg complete soundtrack
-stick rpg 2 soundtrack
-stick rpg complete achievements
-stick rpg 2 achievements
-stick rpg complete guide
-stick rpg 2 guide
-stick rpg complete tips and tricks
-stick rpg 2 tips and tricks
-stick rpg complete best job
-stick rpg 2 best job
-stick rpg complete best weapon
-stick rpg 2 best weapon
-stick rpg complete best car

      - Go to [the Internet Archive website] and search for "Stick RPG".
      - Click on the result that says "Stick RPG (XGen Studios)".
      - Click on the "Download Options" section and choose the format that you prefer. For example, you can choose "Windows Executable" if you have a Windows PC, or "ZIP" if you want to extract the files yourself.
      - Click on the download link and wait for the file to be downloaded to your PC. (If you prefer, you can also script this step, as shown in the sketch below.)
      - Locate the file on your PC and open it. You should be able to play Stick RPG right away.
      
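      For readers who like to automate things, here is a minimal Python sketch of the download step. The URL and file name below are placeholders, not the real links: copy the direct download link from the "Download Options" section of the Internet Archive page before running it.

      ```python
      import urllib.request
      from pathlib import Path

      # Placeholder URL: replace it with the direct link copied from the
      # "Download Options" section of the Internet Archive item page.
      url = "https://archive.org/download/stick-rpg/StickRPG.zip"
      target = Path.home() / "Downloads" / "StickRPG.zip"

      # Make sure the Downloads folder exists, then fetch the archive.
      target.parent.mkdir(parents=True, exist_ok=True)
      urllib.request.urlretrieve(url, str(target))
      print(f"Saved the game archive to {target}")
      ```
      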

The steps to download Stick RPG from Steam

-

Another way to download Stick RPG for free is from Steam, a digital distribution platform that offers various games and software. Here are the steps to do so:

      - Go to [the Steam website] and create an account if you don't have one already. You will also need to download and install the Steam client on your PC.
      - Log in to your Steam account and go to [the Stick RPG page].
      - Click on the "Play Game" button and wait for the game to be added to your library.
      - Go to your library and click on Stick RPG. The game will start downloading automatically.
      - Once the download is complete, click on Stick RPG again and enjoy playing.
      

The steps to download Stick RPG from CrazyGames

-

A third way to download Stick RPG for free is from CrazyGames, a website that hosts various online games. Here are the steps to do so:

      - Go to [the CrazyGames website] and search for "Stick RPG".
      - Click on the result that says "Stick RPG".
      - Click on the "Download" button at the top right corner of the screen.
      - Choose the option that says "Download for Windows" or "Download for Mac", depending on your operating system.
      - Click on the download link and wait for the file to be downloaded to your PC.
      - Locate the file on your PC and open it. You should be able to play Stick RPG right away.
      

How to Install and Run Stick RPG on Your PC?

-

The requirements and the precautions for installing Stick RPG

-

      Before you install and run Stick RPG on your PC, make sure that your PC meets the minimum requirements for the game. Stick RPG is a lightweight, Flash-era title, so virtually any reasonably recent Windows, macOS, or Linux machine with a small amount of free disk space should run it without trouble.
      

Also, you need to be careful when downloading Stick RPG from unknown or untrusted sources, as they might contain viruses or malware that could harm your PC. You should always scan the files before opening them and use a reliable antivirus software. You should also avoid clicking on suspicious links or pop-ups that might redirect you to malicious websites or download unwanted programs.

-

The instructions to run Stick RPG on Windows, macOS, and Linux

-

After you download Stick RPG from one of the sources mentioned above, you can install and run it on your PC by following these instructions:

      - For Windows users:
        - Locate the file that you downloaded and double-click on it. It should be an .exe file or a .zip file.
        - If it is an .exe file, follow the installation wizard and choose the destination folder for the game. If it is a .zip file, extract the files to a folder of your choice.
        - Go to the folder where you installed or extracted the game and double-click on the Stick RPG icon. The game should launch automatically.
      - For macOS users:
        - Locate the file that you downloaded and double-click on it. It should be a .dmg file or a .zip file.
        - If it is a .dmg file, drag and drop the Stick RPG icon to your Applications folder. If it is a .zip file, extract the files to a folder of your choice.
        - Go to your Applications folder or the folder where you extracted the game and double-click on the Stick RPG icon. The game should launch automatically.
      - For Linux users:
        - Locate the file that you downloaded and right-click on it. It should be a .tar.gz file or a .zip file.
        - If it is a .tar.gz file, choose the option to extract here or to a folder of your choice. If it is a .zip file, choose the option to open with Archive Manager and extract the files to a folder of your choice.
        - Go to the folder where you extracted the game and right-click on the Stick RPG icon. Choose the option to run or execute. The game should launch automatically. (A scripted version of this extraction step is sketched below.)
      
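      If you would rather script the extraction step than click through a file manager, the following Python sketch handles both .zip and .tar.gz archives. The file and folder names are assumptions, adjust them to match what you actually downloaded.

      ```python
      import tarfile
      import zipfile
      from pathlib import Path

      # Assumed locations; change these to match your own download.
      archive = Path.home() / "Downloads" / "StickRPG.zip"
      dest = Path.home() / "Games" / "StickRPG"
      dest.mkdir(parents=True, exist_ok=True)

      # Pick the right extractor based on the archive type.
      if archive.suffix == ".zip":
          with zipfile.ZipFile(archive) as zf:
              zf.extractall(dest)
      elif archive.name.endswith((".tar.gz", ".tgz")):
          with tarfile.open(archive, "r:gz") as tf:
              tf.extractall(dest)
      else:
          raise ValueError(f"Unexpected archive type: {archive.name}")

      print(f"Extracted the game to {dest}")
      ```
      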

How to Enjoy Stick RPG to the Fullest?

-

Some tips and tricks to improve your skills and stats in Stick RPG

-

Now that you have installed and run Stick RPG on your PC, you might want to know how to play it well and have more fun. Here are some tips and tricks that can help you improve your skills and stats in Stick RPG:

      - Work hard and study hard. The more you work and study, the more money and intelligence you will earn. You can use money to buy items, invest in stocks, gamble at the casino, etc. You can use intelligence to get better jobs, learn new skills, solve puzzles, etc.
      - Train regularly at the gym. The more you train, the more strength you will gain. You can use strength to fight other stickmen, win boxing matches, lift weights, etc.
      - Be charming and social. The more you charm and socialize with other stickmen, the more charm you will gain. You can use charm to get discounts at shops, impress girls, join clubs, etc.
      - Balance your attributes. Don't neglect any of your attributes, such as health, hunger, karma, etc. You need to keep them at optimal levels by eating food, sleeping regularly, doing good deeds or bad deeds, etc. Otherwise, you might face negative consequences such as losing money, getting sick, getting arrested, etc.
      - Explore the city. Don't stay in one place all the time. There are many places to visit and things to do in Paper Thin City. You can find new items, meet new people, discover new events, etc.
      

Some secrets and easter eggs to discover in Stick RPG

-

Besides the tips and tricks mentioned above, there are also some secrets and easter eggs that you can discover in Stick RPG. These are hidden features or references that can make your gameplay more interesting and fun. Here are some of them:

      - Find the hidden island. There is a small island in the middle of the ocean that you can reach by using a boat or a jetpack. On this island, you can find a castle with a king who will give you a sword if you answer his riddle correctly.
      - Find the hidden lab. There is a secret lab under the university that you can access by using a keycard that you can get from a professor or a scientist. In this lab, you can find a time machine that will take you to different eras, such as the medieval times, the future, etc.
      - Find the hidden bar. There is a hidden bar behind the convenience store that you can enter by using a fake ID that you can buy from a guy at the bus stop. In this bar, you can find a bartender who will give you a drink that will boost your stats temporarily.
      - Find the hidden references. There are many references to other games, movies, TV shows, etc. in Stick RPG. For example, you can find a poster of The Matrix in your apartment, a sign of South Park in the city, a character named Link in the castle, etc.
      

Conclusion

-

Stick RPG is a classic game that you can download and play on your PC for free. It is a fun and addictive game that lets you control a stickman in a 2D world. You can choose your own path in life, whether you want to be good or evil, rich or poor, smart or dumb, etc. You can also explore the city and find many places, people, items, events, and secrets. Stick RPG is a game that will keep you entertained for hours and make you laugh out loud. So, what are you waiting for? Download Stick RPG now and enjoy!

-

FAQs

-

What is the difference between Stick RPG and Stick RPG 2?

-

Stick RPG 2 is the sequel to Stick RPG, released in 2010 by XGen Studios. It is an improved and expanded version of the original game, with more features, content, and graphics. Some of the differences between Stick RPG and Stick RPG 2 are:

      - Stick RPG 2 has a 2.5D perspective, while Stick RPG has a 2D perspective.
      - Stick RPG 2 has three dimensions to explore, while Stick RPG has one dimension.
      - Stick RPG 2 has more stats, attributes, skills, careers, weapons, items, locations, events, quests, endings, etc. than Stick RPG.
      - Stick RPG 2 has more characters to interact with, including NPCs and other players online.
      - Stick RPG 2 has more humor and satire than Stick RPG.
      

How can I save my progress in Stick RPG?

-

You can save your progress in Stick RPG by using the save slots in the game menu. You can access the game menu by pressing the Esc key on your keyboard or clicking on the menu icon on the top right corner of the screen. You can choose one of the four save slots to save your game data. You can also load your game data from the same menu by clicking on the load button.

-

How can I get rich in Stick RPG?

-

There are many ways to get rich in Stick RPG. Here are some of them:

      - Work hard and get promoted. The higher your job level, the more money you will earn per day.
      - Invest in stocks. You can buy and sell stocks at the bank or at your computer. The prices of stocks change every day, so you need to be smart and lucky to make profits.
      - Gamble at the casino. You can play blackjack or slots at the casino and win money if you are lucky. However, you can also lose money if you are unlucky or greedy.
      - Rob the bank or the convenience store. You can use a gun or a knife to rob these places and get money quickly. However, you will also lose karma and risk getting caught by the police.
      - Sell drugs or organs. You can buy drugs or organs from shady dealers and sell them for higher prices to other stickmen. However, you will also lose karma and risk getting addicted or sick.
      

How can I get a girlfriend in Stick RPG?

-

You can get a girlfriend in Stick RPG by charming one of the girls in the city. There are four girls that you can date in the game: Tiffany, Kate, Devin, and Alexis. Each girl has different preferences and requirements for dating. You need to have enough charm and money to impress them and take them out to different places. You also need to give them gifts and compliments to make them happy.

-

How can I get a car in Stick RPG?

-

You can get a car in Stick RPG by buying one from the car shop or winning one from the casino. There are three cars that you can get in the game: a skateboard, a convertible, and a sports car. Each car has different prices and speeds. You need to have enough money and driving skills to buy or win a car.

      
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Sims Mobile A Stylized Life Simulation Game with Unique Personalities.md b/spaces/congsaPfin/Manga-OCR/logs/The Sims Mobile A Stylized Life Simulation Game with Unique Personalities.md deleted file mode 100644 index 97c0300c585e210735674ce7364ae95bdb282c44..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/The Sims Mobile A Stylized Life Simulation Game with Unique Personalities.md +++ /dev/null @@ -1,126 +0,0 @@ -
-

The Sims Mobile: A Guide to the Ultimate Life Simulation Game

-

Have you ever wondered what it would be like to live a different life? To create your own characters, design your own home, and shape your own stories? If so, then you might want to try The Sims Mobile, a popular life simulation game that lets you experience all that and more on your mobile device. In this article, we will give you an overview of what The Sims Mobile is, how to create amazing Sims, how to build a fantastic home, and how to play together with other players. By the end of this article, you will have a better understanding of how to enjoy this fun and immersive game.

-

What is The Sims Mobile?

-

The Sims Mobile is a mobile version of the famous The Sims franchise, which has been around since 2000. The Sims is a game that allows you to create and control virtual people called Sims, who live in a simulated world. You can customize their appearance, personality, outfits, and more. You can also design their home, choose their career, hobbies, and relationships, and guide their stories. The Sims Mobile is similar to the PC and console versions of the game, but with some unique features and updates.

-

the sims mobile


DOWNLOADhttps://urlca.com/2uO4Ml



-

The history and features of the game

-

The Sims Mobile was released in 2018 by Electronic Arts (EA), the same company that developed the original The Sims games. The game is available for free on Android and iOS devices, but it also contains in-app purchases and advertisements. The game requires a persistent internet connection and an EA account to play. The game has received positive reviews from critics and players alike, who praised its graphics, gameplay, customization options, and social aspects. The game has also received several updates and expansions since its launch, adding new content and features such as balconies, lifestyle quests, stickers, heirlooms, parties, careers, hobbies, relationships, and more.

-

The benefits and challenges of playing the game

-

Playing The Sims Mobile can be very enjoyable and rewarding for many reasons. Some of the benefits of playing the game are:

-
    -
  • You can express your creativity and imagination by creating your own Sims and their world.
  • -
  • You can experience different aspects of life that you might not be able to in reality, such as becoming a celebrity, a doctor, a chef, or a spy.
  • -
  • You can learn new skills and knowledge by exploring different careers and hobbies.
  • -
  • You can make new friends and socialize with other players from around the world.
  • -
  • You can have fun and relax by escaping from your daily stress and problems.
  • -
-

However, playing The Sims Mobile also comes with some challenges that you should be aware of. Some of the challenges of playing the game are:

-
    -
  • You might spend too much time or money on the game, which can affect your real-life responsibilities and budget.
  • -
  • You might encounter technical issues or bugs that can disrupt your gameplay or progress.
  • -
  • You might face competition or conflict with other players who have different goals or preferences than you.
  • -
  • You might lose interest or motivation in the game if it becomes too repetitive or boring.
  • -
  • You might become addicted or obsessed with the game if you neglect your physical, mental, or emotional health.
  • -
-

Therefore, it is important to play The Sims Mobile responsibly and moderately. You should balance your time between the game and your real life. You should also set realistic expectations and goals for yourself. And most importantly, you should have fun and enjoy the game as a form of entertainment and not as a substitute for reality.

-

the sims mobile cheats and tips
-the sims mobile mod apk download
-the sims mobile best careers and hobbies
-the sims mobile how to get married
-the sims mobile online multiplayer
-the sims mobile free simcash and simoleons
-the sims mobile latest update and news
-the sims mobile review and rating
-the sims mobile custom content and mods
-the sims mobile how to have a baby
-the sims mobile best outfits and hairstyles
-the sims mobile how to level up fast
-the sims mobile how to earn money and xp
-the sims mobile how to build and decorate your house
-the sims mobile how to unlock new locations and events
-the sims mobile how to complete quests and stories
-the sims mobile how to make friends and relationships
-the sims mobile how to join a club and chat with other players
-the sims mobile how to create and customize your sim
-the sims mobile how to play on pc and mac
-the sims mobile comparison with other sims games
-the sims mobile guide and walkthrough
-the sims mobile hack and generator tool
-the sims mobile support and feedback
-the sims mobile system requirements and compatibility
-the sims mobile best traits and personalities
-the sims mobile how to change your name and appearance
-the sims mobile how to get more energy and cupcakes
-the sims mobile how to retire and heirloom your sim
-the sims mobile how to get more lifestyle points and tickets
-the sims mobile best furniture and decorations
-the sims mobile how to get more land and rooms
-the sims mobile how to start a party and invite guests
-the sims mobile how to get more clothes and accessories
-the sims mobile how to get more hobbies and careers
-the sims mobile tips for beginners and advanced players
-the sims mobile gameplay and features
-the sims mobile download and install for android and ios
-the sims mobile promo codes and coupons
-the sims mobile faq and troubleshooting

-

How to create amazing Sims

-

One of the main attractions of The Sims Mobile is the ability to create your own Sims. You can make them look like yourself, your friends, your family, your favorite celebrities, or anyone you want. You can also give them unique personalities, outfits, and lifestyles. Here are some tips on how to create amazing Sims in The Sims Mobile.

-

Customize appearances, personalities, and outfits

-

When you start the game, you will be asked to create your first Sim. You can choose their gender, age, skin tone, hair style, eye color, facial features, and body shape. You can also use the randomize button to generate a random Sim. You can edit your Sim's appearance anytime by tapping on the mirror icon in the game menu. You can also change their name by tapping on the pencil icon next to their name.

-

After creating your Sim's appearance, you will be asked to choose their personality trait. There are six traits to choose from: ambitious, creative, generous, lucky, outgoing, and sweet. Each trait has its own advantages and disadvantages in the game. For example, ambitious Sims earn more career and hobby rewards, but they also get stressed more easily. Creative Sims have more fun and inspiration, but they also get bored more quickly. You can only choose one trait for each Sim, so choose wisely.

-

Next, you will be asked to choose your Sim's outfit. There are many categories and styles of clothing and accessories to choose from. You can mix and match different items to create your own look. You can also unlock more outfits by completing quests, events, or purchasing them with in-game currency. You can change your Sim's outfit anytime by tapping on the wardrobe icon in the game menu.

-

Choose careers, hobbies, and relationships

-

Another way to customize your Sims is to choose their careers, hobbies, and relationships. These are the main activities that your Sims will do in the game. They will also affect your Sims' stories, rewards, and interactions with other Sims.

-

Careers are the jobs that your Sims can have in the game. There are 10 careers to choose from: barista, chef, doctor, fashion designer, lawyer, photographer, space explorer, teacher, wellness guru, and writer. Each career has its own workplace, tasks, levels, and rewards. You can choose one career for each Sim at a time. You can also switch careers anytime by tapping on the briefcase icon in the game menu.

-

      Hobbies are the passions that your Sims can pursue in the game. There are several hobbies to choose from, including cooking, guitar playing, writing, yoga, piano playing, dancing, painting, and pottery. Each hobby has its own hobby room, tasks, levels, and rewards. You can choose one hobby for each Sim at a time. You can also switch hobbies anytime by tapping on the hobby icon in the game menu.
      

-

Relationships are the connections that your Sims can have with other Sims in the game. There are 4 types of relationships to choose from: friendly, romantic, rival, and family. Each relationship has its own story, events, levels, and rewards. You can have multiple relationships with different Sims at the same time. You can also change the type of relationship anytime by tapping on the heart icon in the game menu.

-

Use heirlooms and stickers to enhance your Sims' lives

-

Heirlooms and stickers are two special features that can enhance your Sims' lives in The Sims Mobile. Heirlooms are items that you can collect or inherit from your retired Sims. They can unlock new traits, careers, hobbies, or stories for your Sims. You can store your heirlooms in the heirloom display case in your home. You can also sell or buy heirlooms with heirloom tickets, which you can earn by retiring your Sims or completing quests.

-

Stickers are labels that you can give or receive from other players' Sims. They can express your opinion or impression of their Sims' appearance, personality, or outfit. You can give one sticker per Sim per day. You can also receive stickers from other players who visit your home or party. You can earn fashion gems by collecting stickers, which you can use to buy exclusive outfits in the Izzy's Fashion Shop.

-

How to build a fantastic home

-

Besides creating your own Sims, another fun aspect of The Sims Mobile is building your own home. You can design your home according to your taste, style, and budget. You can also decorate your home with various items and collections. Here are some tips on how to build a fantastic home in The Sims Mobile.

-

Personalize layouts and designs

-

When you start the game, you will have a basic home with a living room, a kitchen, a bathroom, and a bedroom. You can expand your home by adding more rooms or floors. You can also customize the layout and design of your home by changing the shape, size, color, or texture of the walls, floors, doors, windows, roofs, and fences. You can access the build mode by tapping on the hammer icon in the game menu.

-

Select from themed collections and furniture

-

To furnish and decorate your home, you can choose from a variety of items and collections in the game. There are different categories of items such as sofas, tables, beds, lamps, plants, paintings, and more. You can also select from themed collections that match a certain style or mood, such as modern, rustic, cozy, or glamorous. You can buy items and collections with in-game currency such as simoleons, simcash, or home tickets. You can also unlock more items and collections by completing quests, events, or purchasing them with real money. You can place and move items in your home by tapping on the furniture icon in the build mode.

-

Add balconies and other features to your home

-

One of the latest features that The Sims Mobile added to the game is the ability to add balconies to your home. Balconies are outdoor spaces that you can attach to your rooms on the second floor or higher. You can customize the size, shape, and railing of your balconies. You can also decorate your balconies with outdoor furniture and plants. Balconies can make your home look more spacious and beautiful.

-

Other features that you can add to your home are hobby rooms, career rooms, and special rooms. Hobby rooms are rooms that are dedicated to a specific hobby, such as cooking, guitar playing, or yoga. They contain items and equipment that are related to that hobby. Career rooms are rooms that are related to a specific career, such as a doctor's office, a fashion studio, or a space station. They contain items and tasks that are related to that career. Special rooms are rooms that are unique and rare, such as a haunted room, a winter wonderland room, or a tropical island room. They contain items and effects that are themed to that room. You can unlock these rooms by completing certain quests or events in the game.

-

How to play together with other players

-

The Sims Mobile is not only a single-player game, but also a multiplayer game. You can interact and socialize with other players from around the world who also play the game. You can also visit their homes, attend their parties, chat with them, and even move in with them. Here are some tips on how to play together with other players in The Sims Mobile.

-

Host and attend parties with other Sims

-

Parties are events that you can host or attend with other Sims in the game. Parties are a great way to meet new people, make friends, earn rewards, and have fun. You can host one party per week at your home. You can choose from different themes for your party, such as birthday, wedding, barbecue, or dance party. You can also invite up to 20 guests to your party, including your friends and other players' Sims. You can decorate your home with party items and prepare food and drinks for your guests.

-

You can also attend other players' parties by tapping on the party icon in the game menu. You can see a list of parties that are available to join. You can choose a party that matches your interest or mood. You can also see the rating and the number of guests of each party. When you attend a party, you can interact with other Sims by doing activities, giving compliments, telling jokes, dancing, or flirting. You can also chat with other players by tapping on the chat icon in the party screen. You can earn party points by doing these actions, which can increase the rating of the party and unlock more rewards for you and the host.

-

Move in with other people's Sims

-

Another way to play together with other players is to move in with their Sims. Moving in is a feature that allows you to add other players' Sims to your household. You can have up to four Sims in your household at a time. You can control and customize these Sims as if they were your own. You can also use them to complete events, quests, or stories.

-

To move in with another player's Sim, you need to have a high enough relationship level with them. You can either have a friendly, romantic, or family relationship with them. You also need to have an empty slot in your household. You can free up a slot by retiring or moving out one of your existing Sims. To retire a Sim, you need to complete their life goals and reach level 16 or higher. To move out a Sim, you need to tap on the family portrait icon in the game menu and select the Sim you want to move out.

-

Once you meet these requirements, you can ask another player's Sim to move in with you by tapping on the move in icon in the relationship screen. The other player will receive a notification and they can either accept or decline your request. If they accept, their Sim will join your household and they will lose control of them. If they decline, their Sim will stay in their household and you can try again later.

-

Chat and socialize with other players

-

The last tip on how to play together with other players is to chat and socialize with them. Chatting and socializing are ways to communicate and interact with other players who play The Sims Mobile. You can chat and socialize with other players by using the following features:

-
    -
  • The friends list: This is where you can see your friends who play the game. You can add up to 50 friends by sending or accepting friend requests. You can also remove friends by tapping on the unfriend icon next to their name. You can chat with your friends by tapping on the chat icon next to their name. You can also visit their homes, attend their parties, or give them stickers.
  • -
  • The social hub: This is where you can see other players who are online in the game. You can access the social hub by tapping on the globe icon in the game menu. You can see a list of players who are near your location or who have similar interests as you. You can chat with these players by tapping on the chat icon next to their name. You can also visit their homes, attend their parties, or give them stickers.
  • -
  • The forums: This is where you can see posts and comments from other players who play the game. You can access the forums by tapping on the forum icon in the game menu. You can see different topics and categories that are related to the game. You can post your own questions, opinions, tips, or feedback about the game. You can also comment on other players' posts or reply to their comments.
  • -
-

Chatting and socializing with other players can help you make new friends, learn new things, share your experiences, and have more fun in The Sims Mobile.

-

Conclusion and FAQs

-

The Sims Mobile is a life simulation game that allows you to create and control your own Sims and their world. You can customize their appearance, personality, outfits, careers, hobbies, relationships, and more. You can also design their home, host or attend parties, move in with other players' Sims, chat and socialize with other players, and enjoy various stories and events in the game.

-

The Sims Mobile is a game that can offer you endless possibilities and entertainment. Whether you want to live out your fantasies, express your creativity, or escape from reality, The Sims Mobile can provide you with that and more.

-

If you are interested in playing The Sims Mobile, you can download it for free from Google Play Store or Apple App Store. You can also visit the official website or follow the social media accounts of The Sims Mobile for more information and updates.

-

Here are some frequently asked questions (FAQs) about The Sims Mobile:

-
    -
  1. How do I earn more simoleons, simcash, or home tickets?
  2. -

      You can earn more simoleons by completing events, quests, stories, or parties. You can earn more simcash by leveling up your player, completing certain quests, watching ads, or buying it with real money. You can earn more home tickets by completing home quests or buying them with real money.
      

    -
  3. How do I level up my Sims, player, or relationships?
  4. -

    You can level up your Sims by completing events, quests, stories, or parties related to their careers, hobbies, or relationships. You can level up your player by completing any events, quests, stories, or parties in the game. You can level up your relationships by completing events or stories with other Sims. -

  5. How do I unlock more items, collections, or rooms?
  6. -

    You can unlock more items by leveling up your Sims, player, or relationships. You can also unlock more items by completing certain quests or events, or buying them with in-game currency or real money. You can unlock more collections by completing lifestyle quests or buying them with in-game currency or real money. You can unlock more rooms by expanding your home or completing certain quests or events. -

  7. How do I retire or move out my Sims?
  8. -

    You can retire your Sims by completing their life goals and reaching level 16 or higher. You can also retire your Sims by tapping on the family portrait icon in the game menu and selecting the retire option. You can move out your Sims by tapping on the family portrait icon in the game menu and selecting the move out option. You can only move out adult Sims who are not married or have children. -

  9. How do I contact the customer support or report a problem?
  10. -

    You can contact the customer support or report a problem by tapping on the settings icon in the game menu and selecting the help & about option. You can also visit the help center website or email the customer support team for more assistance. -

-

I hope this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy simming!

      
-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Anokha Anubhav Hindi Movies Full Hd 1080p Enjoy the High-Quality and High-Drama Films.md b/spaces/contluForse/HuggingGPT/assets/Anokha Anubhav Hindi Movies Full Hd 1080p Enjoy the High-Quality and High-Drama Films.md deleted file mode 100644 index 5fb5928405155a3b0af23e99e826eace7c14b4d6..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Anokha Anubhav Hindi Movies Full Hd 1080p Enjoy the High-Quality and High-Drama Films.md +++ /dev/null @@ -1,6 +0,0 @@ -

Anokha Anubhav Hindi Movies Full Hd 1080p


DOWNLOADhttps://ssurll.com/2uzyoI



-
      
-
-
-

diff --git a/spaces/contluForse/HuggingGPT/assets/Dolby Audio Driver 7.2.7000 Download !LINK! 50.md b/spaces/contluForse/HuggingGPT/assets/Dolby Audio Driver 7.2.7000 Download !LINK! 50.md deleted file mode 100644 index f9386a762dd7edd030d6783d94adf21b23b99a9c..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Dolby Audio Driver 7.2.7000 Download !LINK! 50.md +++ /dev/null @@ -1,183 +0,0 @@ -
-

Dolby Audio Driver 7.2.7000 Download 50: How to Enhance Your PC Sound Quality

- -

If you are looking for a way to improve your PC sound quality, you may want to consider downloading the Dolby Audio Driver 7.2.7000. This driver is designed to work with Dolby Home Theater v4 and Dolby Advanced Audio v2 technologies, which are built into many PCs and tablets.

- -

Dolby Audio Driver 7.2.7000 is compatible with Windows 8 and Windows 10 operating systems, and it can help you enjoy a more immersive and realistic sound experience on your device. Whether you are watching movies, playing games, listening to music, or making video calls, Dolby Audio Driver 7.2.7000 can enhance the clarity, richness, and depth of the sound.

-

dolby audio driver 7.2.7000 download 50


Download ✶✶✶ https://ssurll.com/2uzvvI



- -

What are the benefits of Dolby Audio Driver 7.2.7000?

- -

Dolby Audio Driver 7.2.7000 offers several benefits for your PC sound quality, such as:

- -
    -
  • It automatically optimizes the audio settings for your device and the content you are playing.
  • -
  • It allows you to customize the sound preferences according to your personal taste and listening environment.
  • -
  • It supports surround sound formats such as Dolby Digital Plus, Dolby TrueHD, and Dolby Atmos.
  • -
  • It reduces distortion, noise, and interference caused by low-quality speakers or headphones.
  • -
  • It boosts the volume and bass of the sound without compromising the quality.
  • -
- -

How to download and install Dolby Audio Driver 7.2.7000?

- -

To download and install Dolby Audio Driver 7.2.7000, you need to follow these steps:

- -
    -
  1. Visit the support section of your PC or tablet manufacturer's website and look for the audio driver for your specific model.
  2. -
  3. Download the Dolby Audio Driver 7.2.7000 file from the website and save it on your device.
  4. -
  5. Run the file and follow the instructions on the screen to complete the installation process.
  6. -
  7. Restart your device and enjoy the improved sound quality.
  8. -
- -

If you have any problems with the installation or operation of Dolby Audio Driver 7.2.7000, you can contact your device manufacturer or visit the official Dolby website for more information and support.
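      Before reaching out to support, you can quickly confirm from the command line whether a Dolby driver is actually registered with Windows. Here is a minimal Python sketch; it is Windows-only and assumes the built-in driverquery tool with English column names, so treat it as a starting point rather than a guaranteed check.

      ```python
      import csv
      import io
      import subprocess

      # driverquery is a standard Windows command; /v adds detail, /fo csv makes the output parseable.
      result = subprocess.run(
          ["driverquery", "/v", "/fo", "csv"],
          capture_output=True, text=True, check=True,
      )

      for row in csv.DictReader(io.StringIO(result.stdout)):
          # Column names vary with the system language; the keys below are the English ones.
          if any("dolby" in str(value).lower() for value in row.values()):
              print(row.get("Module Name"), "-", row.get("Display Name"))
      ```
      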

- -

      

-

How to use Dolby Audio Driver 7.2.7000?

- -

Once you have downloaded and installed Dolby Audio Driver 7.2.7000, you can use it to enhance your PC sound quality in various ways. Here are some tips on how to use Dolby Audio Driver 7.2.7000:

- -
    -
  • To access the Dolby Audio settings, you can right-click on the speaker icon on the taskbar and select Dolby Audio from the menu.
  • -
  • To enable or disable Dolby Audio, you can toggle the switch on the top of the settings window.
  • -
  • To choose a preset sound profile, you can click on one of the icons on the bottom of the settings window. There are four presets available: Movie, Music, Game, and Voice.
  • -
  • To customize your own sound profile, you can click on the Personalize icon and adjust the equalizer sliders according to your preference.
  • -
  • To test the sound quality, you can click on the Demo icon and listen to a sample audio clip with or without Dolby Audio.
  • -
- -

Dolby Audio Driver 7.2.7000 also supports some advanced features that you can access from the Windows Store app. For example, you can enable Dolby Atmos for headphones or speakers, which creates a more immersive and realistic sound experience with spatial audio. You can also adjust the surround virtualizer, dialogue enhancer, and volume leveler settings for more fine-tuned control over the sound quality.

- -

FAQs about Dolby Audio Driver 7.2.7000

- -

Here are some frequently asked questions about Dolby Audio Driver 7.2.7000:

- -
-
Is Dolby Audio Driver 7.2.7000 free?
-
Dolby Audio Driver 7.2.7000 is free to download and install from your device manufacturer's website. However, some advanced features such as Dolby Atmos may require a paid subscription or a one-time purchase from the Windows Store app.
-
Is Dolby Audio Driver 7.2.7000 compatible with my device?
-
Dolby Audio Driver 7.2.7000 is compatible with most PCs and tablets that have Dolby Home Theater v4 or Dolby Advanced Audio v2 technologies built-in. You can check if your device supports Dolby Audio by visiting the support section of your device manufacturer's website or by contacting them directly.
-
What is the difference between Dolby Home Theater v4 and Dolby Advanced Audio v2?
-
Dolby Home Theater v4 and Dolby Advanced Audio v2 are two versions of Dolby Audio technologies that are designed to enhance PC sound quality. The main difference between them is that Dolby Home Theater v4 supports surround sound formats such as Dolby Digital Plus, Dolby TrueHD, and Dolby Atmos, while Dolby Advanced Audio v2 does not.
-
- -

      

-

How to fix Dolby Audio Driver 7.2.7000 issues?

- -

Sometimes, you may encounter some issues with Dolby Audio Driver 7.2.7000, such as:

- -
    -
  • The driver is not compatible with your device or operating system.
  • -
  • The driver is corrupted or outdated.
  • -
  • The driver is missing or not installed properly.
  • -
  • The driver is conflicting with other drivers or software.
  • -
  • The driver is causing errors or crashes on your device.
  • -
- -

If you face any of these issues, you can try some of these solutions:

- -
    -
  1. Uninstall and reinstall the driver from your device manufacturer's website.
  2. -
  3. Update the driver to the latest version from your device manufacturer's website.
  4. -
  5. Run the Windows troubleshooter for audio devices and follow the instructions.
  6. -
  7. Disable or uninstall any other audio drivers or software that may interfere with Dolby Audio Driver 7.2.7000.
  8. -
  9. Restore your device to a previous point when Dolby Audio Driver 7.2.7000 was working fine.
  10. -
- -

If none of these solutions work, you can contact your device manufacturer or visit the official Dolby website for more help and support.
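      The Windows audio troubleshooter mentioned in the list above can also be launched directly instead of hunting for it in Settings. The sketch below uses the msdt diagnostic pack that is commonly documented for audio playback problems; msdt is deprecated on very recent Windows builds, so take this as an assumption rather than a guarantee.

      ```python
      import subprocess

      # Opens the built-in "Playing Audio" troubleshooter on most Windows 8/10 systems.
      # If msdt has been removed on your build, use Settings > Troubleshoot instead.
      subprocess.run(["msdt.exe", "/id", "AudioPlaybackDiagnostic"])
      ```
      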

- -

How to uninstall Dolby Audio Driver 7.2.7000?

- -

If you want to uninstall Dolby Audio Driver 7.2.7000 from your device, you can follow these steps:

- -
    -
  1. Open the Control Panel and click on Programs and Features.
  2. -
  3. Find and select Dolby Audio Driver 7.2.7000 from the list of installed programs and click on Uninstall.
  4. -
  5. Follow the instructions on the screen to complete the uninstallation process.
  6. -
  7. Restart your device and check if the driver is removed.
  8. -
- -

If you want to reinstall Dolby Audio Driver 7.2.7000, you can download it from your device manufacturer's website and follow the installation steps mentioned above.
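      To double-check that the uninstall really removed the software, you can scan the standard Windows uninstall registry key for leftover Dolby entries. A small Python sketch follows; it is Windows-only and only looks at the per-machine view, so an empty result is a good sign but not absolute proof.

      ```python
      import winreg

      UNINSTALL_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

      def find_dolby_entries():
          """Return the display names of installed programs that mention Dolby."""
          found = []
          with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL_KEY) as root:
              subkey_count = winreg.QueryInfoKey(root)[0]
              for index in range(subkey_count):
                  name = winreg.EnumKey(root, index)
                  with winreg.OpenKey(root, name) as subkey:
                      try:
                          display_name, _ = winreg.QueryValueEx(subkey, "DisplayName")
                      except FileNotFoundError:
                          continue  # entry has no display name
                      if "dolby" in display_name.lower():
                          found.append(display_name)
          return found

      print(find_dolby_entries() or "No Dolby entries found")
      ```
      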

- -

      

-

How to download Dolby Audio Driver 7.2.7000 from other sources?

- -

If you cannot find Dolby Audio Driver 7.2.7000 on your device manufacturer's website, or if you want to download it from other sources, you can try some of these alternatives:

- -
    -
  • You can visit the Station Drivers website, which is a reliable source of drivers for various devices and operating systems. You can find Dolby Audio Driver 7.2.7000 on this link: http://www.station-drivers.com/index.php?option=com_remository&Itemid=353&func=fileinfo&id=1591
  • -
  • You can use a third-party software such as Driver Booster, which can scan your device and automatically download and install the latest drivers for your hardware and software components. You can download Driver Booster from this link: https://www.iobit.com/en/driver-booster.php
  • -
    • You can use a torrent client such as BitTorrent, which can download files from peer-to-peer networks. You can download the BitTorrent client itself from this link: https://www.bittorrent.com/downloads/win (note that this is the client, not the driver, so you would still need to find a trustworthy torrent of the driver separately).
    • 
  • -
- -

However, before you download Dolby Audio Driver 7.2.7000 from other sources, you should be careful and check the following:

- -
    -
  • The file size and format of the driver. The file should be around 30 MB and in .exe format.
  • -
  • The reputation and reviews of the source. The source should be trustworthy and have positive feedback from other users.
  • -
  • The security and compatibility of the driver. The driver should be free of viruses and malware, and compatible with your device and operating system.
  • -
- -

If you are not sure about any of these aspects, you should avoid downloading Dolby Audio Driver 7.2.7000 from other sources and stick to your device manufacturer's website.
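      If you do end up downloading the file from a third-party mirror, you can at least verify the basics before running it: the extension, the approximate size, and a checksum that you can compare against one published by a source you trust. Here is a short Python sketch; the file name is an assumption.

      ```python
      import hashlib
      from pathlib import Path

      # Assumed download location; adjust to wherever you saved the installer.
      installer = Path.home() / "Downloads" / "DolbyAudioDriver_7.2.7000.exe"

      size_mb = installer.stat().st_size / (1024 * 1024)
      sha256 = hashlib.sha256(installer.read_bytes()).hexdigest()

      print(f"Extension: {installer.suffix}")   # expect .exe
      print(f"Size:      {size_mb:.1f} MB")     # expect roughly 30 MB
      print(f"SHA-256:   {sha256}")             # compare with a checksum from a trusted source
      ```
      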

- -

How to update Dolby Audio Driver 7.2.7000?

- -

If you already have Dolby Audio Driver 7.2.7000 installed on your device, you may want to update it to the latest version to enjoy the best performance and features. To update Dolby Audio Driver 7.2.7000, you can follow these steps:

- -
    -
  1. Open the Dolby Audio settings window by right-clicking on the speaker icon on the taskbar and selecting Dolby Audio.
  2. -
  3. Click on the About icon on the top right corner of the window.
  4. -
  5. Check the version number of your driver and compare it with the latest version available on your device manufacturer's website or other sources.
  6. -
  7. If there is a newer version available, click on the Update button and follow the instructions on the screen to download and install the update.
  8. -
  9. Restart your device and enjoy the improved sound quality.
  10. -
- -

If you do not see an Update button on the About window, it means that your driver is already up to date or that there is no update available for your device.
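      When you compare the version shown in the About window with the one listed online, compare the dotted numbers numerically rather than alphabetically (so that 7.2.10000 is correctly treated as newer than 7.2.7000). A tiny Python sketch with hypothetical version strings:

      ```python
      def parse_version(version: str) -> tuple:
          """Turn a dotted version string like '7.2.7000.50' into a comparable tuple of ints."""
          return tuple(int(part) for part in version.split("."))

      installed = "7.2.7000.0"   # hypothetical value read from the About window
      latest = "7.2.7000.50"     # hypothetical value listed on the manufacturer's site

      if parse_version(latest) > parse_version(installed):
          print("A newer driver is available - download it from your manufacturer's website.")
      else:
          print("Your Dolby Audio Driver is already up to date.")
      ```
      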

- -

Conclusion

- -

Dolby Audio Driver 7.2.7000 is a great way to enhance your PC sound quality and enjoy a more immersive and realistic sound experience on your device. It is compatible with Windows 8 and Windows 10 operating systems, and it works with Dolby Home Theater v4 and Dolby Advanced Audio v2 technologies.

- -

To download and install Dolby Audio Driver 7.2.7000, you need to visit the support section of your PC or tablet manufacturer's website and look for the audio driver for your specific model. Then, you need to run the file and follow the instructions on the screen to complete the installation process.

- -

To use Dolby Audio Driver 7.2.7000, you need to access the Dolby Audio settings from the speaker icon on the taskbar and choose a preset or customize your own sound profile. You can also access some advanced features such as Dolby Atmos from the Windows Store app.

- -

If you have any questions or problems with Dolby Audio Driver 7.2.7000, you can contact your device manufacturer or visit the official Dolby website for more information and support.

- -

If you want to improve your PC sound quality, don't hesitate to download Dolby Audio Driver 7.2.7000 today!

-

      

      
-
-
\ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/fileio/handlers/base.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/fileio/handlers/base.py deleted file mode 100644 index 288878bc57282fbb2f12b32290152ca8e9d3cab0..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/fileio/handlers/base.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - - -class BaseFileHandler(metaclass=ABCMeta): - # `str_like` is a flag to indicate whether the type of file object is - # str-like object or bytes-like object. Pickle only processes bytes-like - # objects but json only processes str-like object. If it is str-like - # object, `StringIO` will be used to process the buffer. - str_like = True - - @abstractmethod - def load_from_fileobj(self, file, **kwargs): - pass - - @abstractmethod - def dump_to_fileobj(self, obj, file, **kwargs): - pass - - @abstractmethod - def dump_to_str(self, obj, **kwargs): - pass - - def load_from_path(self, filepath, mode='r', **kwargs): - with open(filepath, mode) as f: - return self.load_from_fileobj(f, **kwargs) - - def dump_to_path(self, obj, filepath, mode='w', **kwargs): - with open(filepath, mode) as f: - self.dump_to_fileobj(obj, f, **kwargs) diff --git a/spaces/crashedice/signify/signify/gan/models/networks.py b/spaces/crashedice/signify/signify/gan/models/networks.py deleted file mode 100644 index b3a10c99c20eea0aa6ddd7797e47f16f5f92e5ff..0000000000000000000000000000000000000000 --- a/spaces/crashedice/signify/signify/gan/models/networks.py +++ /dev/null @@ -1,615 +0,0 @@ -import torch -import torch.nn as nn -from torch.nn import init -import functools -from torch.optim import lr_scheduler - - -############################################################################### -# Helper Functions -############################################################################### - - -class Identity(nn.Module): - def forward(self, x): - return x - - -def get_norm_layer(norm_type='instance'): - """Return a normalization layer - - Parameters: - norm_type (str) -- the name of the normalization layer: batch | instance | none - - For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev). - For InstanceNorm, we do not use learnable affine parameters. We do not track running statistics. - """ - if norm_type == 'batch': - norm_layer = functools.partial(nn.BatchNorm2d, affine=True, track_running_stats=True) - elif norm_type == 'instance': - norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False) - elif norm_type == 'none': - def norm_layer(x): return Identity() - else: - raise NotImplementedError('normalization layer [%s] is not found' % norm_type) - return norm_layer - - -def get_scheduler(optimizer, opt): - """Return a learning rate scheduler - - Parameters: - optimizer -- the optimizer of the network - opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.  - opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine - - For 'linear', we keep the same learning rate for the first epochs - and linearly decay the rate to zero over the next epochs. - For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers. - See https://pytorch.org/docs/stable/optim.html for more details. 
- """ - if opt.lr_policy == 'linear': - def lambda_rule(epoch): - lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs_decay + 1) - return lr_l - scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule) - elif opt.lr_policy == 'step': - scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_iters, gamma=0.1) - elif opt.lr_policy == 'plateau': - scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5) - elif opt.lr_policy == 'cosine': - scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0) - else: - return NotImplementedError('learning rate policy [%s] is not implemented', opt.lr_policy) - return scheduler - - -def init_weights(net, init_type='normal', init_gain=0.02): - """Initialize network weights. - - Parameters: - net (network) -- network to be initialized - init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal - init_gain (float) -- scaling factor for normal, xavier and orthogonal. - - We use 'normal' in the original pix2pix and CycleGAN paper. But xavier and kaiming might - work better for some applications. Feel free to try yourself. - """ - def init_func(m): # define the initialization function - classname = m.__class__.__name__ - if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1): - if init_type == 'normal': - init.normal_(m.weight.data, 0.0, init_gain) - elif init_type == 'xavier': - init.xavier_normal_(m.weight.data, gain=init_gain) - elif init_type == 'kaiming': - init.kaiming_normal_(m.weight.data, a=0, mode='fan_in') - elif init_type == 'orthogonal': - init.orthogonal_(m.weight.data, gain=init_gain) - else: - raise NotImplementedError('initialization method [%s] is not implemented' % init_type) - if hasattr(m, 'bias') and m.bias is not None: - init.constant_(m.bias.data, 0.0) - elif classname.find('BatchNorm2d') != -1: # BatchNorm Layer's weight is not a matrix; only normal distribution applies. - init.normal_(m.weight.data, 1.0, init_gain) - init.constant_(m.bias.data, 0.0) - - print('initialize network with %s' % init_type) - net.apply(init_func) # apply the initialization function - - -def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[]): - """Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights - Parameters: - net (network) -- the network to be initialized - init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal - gain (float) -- scaling factor for normal, xavier and orthogonal. - gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2 - - Return an initialized network. 
- """ - if len(gpu_ids) > 0: - assert(torch.cuda.is_available()) - net.to(gpu_ids[0]) - net = torch.nn.DataParallel(net, gpu_ids) # multi-GPUs - init_weights(net, init_type, init_gain=init_gain) - return net - - -def define_G(input_nc, output_nc, ngf, netG, norm='batch', use_dropout=False, init_type='normal', init_gain=0.02, gpu_ids=[]): - """Create a generator - - Parameters: - input_nc (int) -- the number of channels in input images - output_nc (int) -- the number of channels in output images - ngf (int) -- the number of filters in the last conv layer - netG (str) -- the architecture's name: resnet_9blocks | resnet_6blocks | unet_256 | unet_128 - norm (str) -- the name of normalization layers used in the network: batch | instance | none - use_dropout (bool) -- if use dropout layers. - init_type (str) -- the name of our initialization method. - init_gain (float) -- scaling factor for normal, xavier and orthogonal. - gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2 - - Returns a generator - - Our current implementation provides two types of generators: - U-Net: [unet_128] (for 128x128 input images) and [unet_256] (for 256x256 input images) - The original U-Net paper: https://arxiv.org/abs/1505.04597 - - Resnet-based generator: [resnet_6blocks] (with 6 Resnet blocks) and [resnet_9blocks] (with 9 Resnet blocks) - Resnet-based generator consists of several Resnet blocks between a few downsampling/upsampling operations. - We adapt Torch code from Justin Johnson's neural style transfer project (https://github.com/jcjohnson/fast-neural-style). - - - The generator has been initialized by . It uses RELU for non-linearity. - """ - net = None - norm_layer = get_norm_layer(norm_type=norm) - - if netG == 'resnet_9blocks': - net = ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=9) - elif netG == 'resnet_6blocks': - net = ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=6) - elif netG == 'unet_128': - net = UnetGenerator(input_nc, output_nc, 7, ngf, norm_layer=norm_layer, use_dropout=use_dropout) - elif netG == 'unet_256': - net = UnetGenerator(input_nc, output_nc, 8, ngf, norm_layer=norm_layer, use_dropout=use_dropout) - else: - raise NotImplementedError('Generator model name [%s] is not recognized' % netG) - return init_net(net, init_type, init_gain, gpu_ids) - - -def define_D(input_nc, ndf, netD, n_layers_D=3, norm='batch', init_type='normal', init_gain=0.02, gpu_ids=[]): - """Create a discriminator - - Parameters: - input_nc (int) -- the number of channels in input images - ndf (int) -- the number of filters in the first conv layer - netD (str) -- the architecture's name: basic | n_layers | pixel - n_layers_D (int) -- the number of conv layers in the discriminator; effective when netD=='n_layers' - norm (str) -- the type of normalization layers used in the network. - init_type (str) -- the name of the initialization method. - init_gain (float) -- scaling factor for normal, xavier and orthogonal. - gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2 - - Returns a discriminator - - Our current implementation provides three types of discriminators: - [basic]: 'PatchGAN' classifier described in the original pix2pix paper. - It can classify whether 70×70 overlapping patches are real or fake. - Such a patch-level discriminator architecture has fewer parameters - than a full-image discriminator and can work on arbitrarily-sized images - in a fully convolutional fashion. 
- - [n_layers]: With this mode, you can specify the number of conv layers in the discriminator - with the parameter (default=3 as used in [basic] (PatchGAN).) - - [pixel]: 1x1 PixelGAN discriminator can classify whether a pixel is real or not. - It encourages greater color diversity but has no effect on spatial statistics. - - The discriminator has been initialized by . It uses Leakly RELU for non-linearity. - """ - net = None - norm_layer = get_norm_layer(norm_type=norm) - - if netD == 'basic': # default PatchGAN classifier - net = NLayerDiscriminator(input_nc, ndf, n_layers=3, norm_layer=norm_layer) - elif netD == 'n_layers': # more options - net = NLayerDiscriminator(input_nc, ndf, n_layers_D, norm_layer=norm_layer) - elif netD == 'pixel': # classify if each pixel is real or fake - net = PixelDiscriminator(input_nc, ndf, norm_layer=norm_layer) - else: - raise NotImplementedError('Discriminator model name [%s] is not recognized' % netD) - return init_net(net, init_type, init_gain, gpu_ids) - - -############################################################################## -# Classes -############################################################################## -class GANLoss(nn.Module): - """Define different GAN objectives. - - The GANLoss class abstracts away the need to create the target label tensor - that has the same size as the input. - """ - - def __init__(self, gan_mode, target_real_label=1.0, target_fake_label=0.0): - """ Initialize the GANLoss class. - - Parameters: - gan_mode (str) - - the type of GAN objective. It currently supports vanilla, lsgan, and wgangp. - target_real_label (bool) - - label for a real image - target_fake_label (bool) - - label of a fake image - - Note: Do not use sigmoid as the last layer of Discriminator. - LSGAN needs no sigmoid. vanilla GANs will handle it with BCEWithLogitsLoss. - """ - super(GANLoss, self).__init__() - self.register_buffer('real_label', torch.tensor(target_real_label)) - self.register_buffer('fake_label', torch.tensor(target_fake_label)) - self.gan_mode = gan_mode - if gan_mode == 'lsgan': - self.loss = nn.MSELoss() - elif gan_mode == 'vanilla': - self.loss = nn.BCEWithLogitsLoss() - elif gan_mode in ['wgangp']: - self.loss = None - else: - raise NotImplementedError('gan mode %s not implemented' % gan_mode) - - def get_target_tensor(self, prediction, target_is_real): - """Create label tensors with the same size as the input. - - Parameters: - prediction (tensor) - - tpyically the prediction from a discriminator - target_is_real (bool) - - if the ground truth label is for real images or fake images - - Returns: - A label tensor filled with ground truth label, and with the size of the input - """ - - if target_is_real: - target_tensor = self.real_label - else: - target_tensor = self.fake_label - return target_tensor.expand_as(prediction) - - def __call__(self, prediction, target_is_real): - """Calculate loss given Discriminator's output and grount truth labels. - - Parameters: - prediction (tensor) - - tpyically the prediction output from a discriminator - target_is_real (bool) - - if the ground truth label is for real images or fake images - - Returns: - the calculated loss. 
- """ - if self.gan_mode in ['lsgan', 'vanilla']: - target_tensor = self.get_target_tensor(prediction, target_is_real) - loss = self.loss(prediction, target_tensor) - elif self.gan_mode == 'wgangp': - if target_is_real: - loss = -prediction.mean() - else: - loss = prediction.mean() - return loss - - -def cal_gradient_penalty(netD, real_data, fake_data, device, type='mixed', constant=1.0, lambda_gp=10.0): - """Calculate the gradient penalty loss, used in WGAN-GP paper https://arxiv.org/abs/1704.00028 - - Arguments: - netD (network) -- discriminator network - real_data (tensor array) -- real images - fake_data (tensor array) -- generated images from the generator - device (str) -- GPU / CPU: from torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu') - type (str) -- if we mix real and fake data or not [real | fake | mixed]. - constant (float) -- the constant used in formula ( ||gradient||_2 - constant)^2 - lambda_gp (float) -- weight for this loss - - Returns the gradient penalty loss - """ - if lambda_gp > 0.0: - if type == 'real': # either use real images, fake images, or a linear interpolation of two. - interpolatesv = real_data - elif type == 'fake': - interpolatesv = fake_data - elif type == 'mixed': - alpha = torch.rand(real_data.shape[0], 1, device=device) - alpha = alpha.expand(real_data.shape[0], real_data.nelement() // real_data.shape[0]).contiguous().view(*real_data.shape) - interpolatesv = alpha * real_data + ((1 - alpha) * fake_data) - else: - raise NotImplementedError('{} not implemented'.format(type)) - interpolatesv.requires_grad_(True) - disc_interpolates = netD(interpolatesv) - gradients = torch.autograd.grad(outputs=disc_interpolates, inputs=interpolatesv, - grad_outputs=torch.ones(disc_interpolates.size()).to(device), - create_graph=True, retain_graph=True, only_inputs=True) - gradients = gradients[0].view(real_data.size(0), -1) # flat the data - gradient_penalty = (((gradients + 1e-16).norm(2, dim=1) - constant) ** 2).mean() * lambda_gp # added eps - return gradient_penalty, gradients - else: - return 0.0, None - - -class ResnetGenerator(nn.Module): - """Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations. 
- - We adapt Torch code and idea from Justin Johnson's neural style transfer project(https://github.com/jcjohnson/fast-neural-style) - """ - - def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect'): - """Construct a Resnet-based generator - - Parameters: - input_nc (int) -- the number of channels in input images - output_nc (int) -- the number of channels in output images - ngf (int) -- the number of filters in the last conv layer - norm_layer -- normalization layer - use_dropout (bool) -- if use dropout layers - n_blocks (int) -- the number of ResNet blocks - padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero - """ - assert(n_blocks >= 0) - super(ResnetGenerator, self).__init__() - if type(norm_layer) == functools.partial: - use_bias = norm_layer.func == nn.InstanceNorm2d - else: - use_bias = norm_layer == nn.InstanceNorm2d - - model = [nn.ReflectionPad2d(3), - nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias), - norm_layer(ngf), - nn.ReLU(True)] - - n_downsampling = 2 - for i in range(n_downsampling): # add downsampling layers - mult = 2 ** i - model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias), - norm_layer(ngf * mult * 2), - nn.ReLU(True)] - - mult = 2 ** n_downsampling - for i in range(n_blocks): # add ResNet blocks - - model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)] - - for i in range(n_downsampling): # add upsampling layers - mult = 2 ** (n_downsampling - i) - model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), - kernel_size=3, stride=2, - padding=1, output_padding=1, - bias=use_bias), - norm_layer(int(ngf * mult / 2)), - nn.ReLU(True)] - model += [nn.ReflectionPad2d(3)] - model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] - model += [nn.Tanh()] - - self.model = nn.Sequential(*model) - - def forward(self, input): - """Standard forward""" - return self.model(input) - - -class ResnetBlock(nn.Module): - """Define a Resnet block""" - - def __init__(self, dim, padding_type, norm_layer, use_dropout, use_bias): - """Initialize the Resnet block - - A resnet block is a conv block with skip connections - We construct a conv block with build_conv_block function, - and implement skip connections in function. - Original Resnet paper: https://arxiv.org/pdf/1512.03385.pdf - """ - super(ResnetBlock, self).__init__() - self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, use_dropout, use_bias) - - def build_conv_block(self, dim, padding_type, norm_layer, use_dropout, use_bias): - """Construct a convolutional block. - - Parameters: - dim (int) -- the number of channels in the conv layer. - padding_type (str) -- the name of padding layer: reflect | replicate | zero - norm_layer -- normalization layer - use_dropout (bool) -- if use dropout layers. 
- use_bias (bool) -- if the conv layer uses bias or not - - Returns a conv block (with a conv layer, a normalization layer, and a non-linearity layer (ReLU)) - """ - conv_block = [] - p = 0 - if padding_type == 'reflect': - conv_block += [nn.ReflectionPad2d(1)] - elif padding_type == 'replicate': - conv_block += [nn.ReplicationPad2d(1)] - elif padding_type == 'zero': - p = 1 - else: - raise NotImplementedError('padding [%s] is not implemented' % padding_type) - - conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias), norm_layer(dim), nn.ReLU(True)] - if use_dropout: - conv_block += [nn.Dropout(0.5)] - - p = 0 - if padding_type == 'reflect': - conv_block += [nn.ReflectionPad2d(1)] - elif padding_type == 'replicate': - conv_block += [nn.ReplicationPad2d(1)] - elif padding_type == 'zero': - p = 1 - else: - raise NotImplementedError('padding [%s] is not implemented' % padding_type) - conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias), norm_layer(dim)] - - return nn.Sequential(*conv_block) - - def forward(self, x): - """Forward function (with skip connections)""" - out = x + self.conv_block(x) # add skip connections - return out - - -class UnetGenerator(nn.Module): - """Create a Unet-based generator""" - - def __init__(self, input_nc, output_nc, num_downs, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False): - """Construct a Unet generator - Parameters: - input_nc (int) -- the number of channels in input images - output_nc (int) -- the number of channels in output images - num_downs (int) -- the number of downsamplings in UNet. For example, # if |num_downs| == 7, - image of size 128x128 will become of size 1x1 # at the bottleneck - ngf (int) -- the number of filters in the last conv layer - norm_layer -- normalization layer - - We construct the U-Net from the innermost layer to the outermost layer. - It is a recursive process. - """ - super(UnetGenerator, self).__init__() - # construct unet structure - unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=None, norm_layer=norm_layer, innermost=True) # add the innermost layer - for i in range(num_downs - 5): # add intermediate layers with ngf * 8 filters - unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer, use_dropout=use_dropout) - # gradually reduce the number of filters from ngf * 8 to ngf - unet_block = UnetSkipConnectionBlock(ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer) - unet_block = UnetSkipConnectionBlock(ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, norm_layer=norm_layer) - unet_block = UnetSkipConnectionBlock(ngf, ngf * 2, input_nc=None, submodule=unet_block, norm_layer=norm_layer) - self.model = UnetSkipConnectionBlock(output_nc, ngf, input_nc=input_nc, submodule=unet_block, outermost=True, norm_layer=norm_layer) # add the outermost layer - - def forward(self, input): - """Standard forward""" - return self.model(input) - - -class UnetSkipConnectionBlock(nn.Module): - """Defines the Unet submodule with skip connection. - X -------------------identity---------------------- - |-- downsampling -- |submodule| -- upsampling --| - """ - - def __init__(self, outer_nc, inner_nc, input_nc=None, - submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False): - """Construct a Unet submodule with skip connections. 
- - Parameters: - outer_nc (int) -- the number of filters in the outer conv layer - inner_nc (int) -- the number of filters in the inner conv layer - input_nc (int) -- the number of channels in input images/features - submodule (UnetSkipConnectionBlock) -- previously defined submodules - outermost (bool) -- if this module is the outermost module - innermost (bool) -- if this module is the innermost module - norm_layer -- normalization layer - use_dropout (bool) -- if use dropout layers. - """ - super(UnetSkipConnectionBlock, self).__init__() - self.outermost = outermost - if type(norm_layer) == functools.partial: - use_bias = norm_layer.func == nn.InstanceNorm2d - else: - use_bias = norm_layer == nn.InstanceNorm2d - if input_nc is None: - input_nc = outer_nc - downconv = nn.Conv2d(input_nc, inner_nc, kernel_size=4, - stride=2, padding=1, bias=use_bias) - downrelu = nn.LeakyReLU(0.2, True) - downnorm = norm_layer(inner_nc) - uprelu = nn.ReLU(True) - upnorm = norm_layer(outer_nc) - - if outermost: - upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc, - kernel_size=4, stride=2, - padding=1) - down = [downconv] - up = [uprelu, upconv, nn.Tanh()] - model = down + [submodule] + up - elif innermost: - upconv = nn.ConvTranspose2d(inner_nc, outer_nc, - kernel_size=4, stride=2, - padding=1, bias=use_bias) - down = [downrelu, downconv] - up = [uprelu, upconv, upnorm] - model = down + up - else: - upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc, - kernel_size=4, stride=2, - padding=1, bias=use_bias) - down = [downrelu, downconv, downnorm] - up = [uprelu, upconv, upnorm] - - if use_dropout: - model = down + [submodule] + up + [nn.Dropout(0.5)] - else: - model = down + [submodule] + up - - self.model = nn.Sequential(*model) - - def forward(self, x): - if self.outermost: - return self.model(x) - else: # add skip connections - return torch.cat([x, self.model(x)], 1) - - -class NLayerDiscriminator(nn.Module): - """Defines a PatchGAN discriminator""" - - def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d): - """Construct a PatchGAN discriminator - - Parameters: - input_nc (int) -- the number of channels in input images - ndf (int) -- the number of filters in the last conv layer - n_layers (int) -- the number of conv layers in the discriminator - norm_layer -- normalization layer - """ - super(NLayerDiscriminator, self).__init__() - if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters - use_bias = norm_layer.func == nn.InstanceNorm2d - else: - use_bias = norm_layer == nn.InstanceNorm2d - - kw = 4 - padw = 1 - sequence = [nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True)] - nf_mult = 1 - nf_mult_prev = 1 - for n in range(1, n_layers): # gradually increase the number of filters - nf_mult_prev = nf_mult - nf_mult = min(2 ** n, 8) - sequence += [ - nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias), - norm_layer(ndf * nf_mult), - nn.LeakyReLU(0.2, True) - ] - - nf_mult_prev = nf_mult - nf_mult = min(2 ** n_layers, 8) - sequence += [ - nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias), - norm_layer(ndf * nf_mult), - nn.LeakyReLU(0.2, True) - ] - - sequence += [nn.Conv2d(ndf * nf_mult, 1, kernel_size=kw, stride=1, padding=padw)] # output 1 channel prediction map - self.model = nn.Sequential(*sequence) - - def forward(self, input): - """Standard forward.""" - return self.model(input) - - -class 
PixelDiscriminator(nn.Module): - """Defines a 1x1 PatchGAN discriminator (pixelGAN)""" - - def __init__(self, input_nc, ndf=64, norm_layer=nn.BatchNorm2d): - """Construct a 1x1 PatchGAN discriminator - - Parameters: - input_nc (int) -- the number of channels in input images - ndf (int) -- the number of filters in the last conv layer - norm_layer -- normalization layer - """ - super(PixelDiscriminator, self).__init__() - if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters - use_bias = norm_layer.func == nn.InstanceNorm2d - else: - use_bias = norm_layer == nn.InstanceNorm2d - - self.net = [ - nn.Conv2d(input_nc, ndf, kernel_size=1, stride=1, padding=0), - nn.LeakyReLU(0.2, True), - nn.Conv2d(ndf, ndf * 2, kernel_size=1, stride=1, padding=0, bias=use_bias), - norm_layer(ndf * 2), - nn.LeakyReLU(0.2, True), - nn.Conv2d(ndf * 2, 1, kernel_size=1, stride=1, padding=0, bias=use_bias)] - - self.net = nn.Sequential(*self.net) - - def forward(self, input): - """Standard forward.""" - return self.net(input) diff --git a/spaces/cscan/CodeFormer/app.py b/spaces/cscan/CodeFormer/app.py deleted file mode 100644 index 7da0fc9479dcc28d3b3183980c80154885137571..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/app.py +++ /dev/null @@ -1,280 +0,0 @@ -""" -This file is used for deploying hugging face demo: -https://huggingface.co/spaces/sczhou/CodeFormer -""" - -import sys -sys.path.append('CodeFormer') -import os -import cv2 -import torch -import torch.nn.functional as F -import gradio as gr - -from torchvision.transforms.functional import normalize - -from basicsr.utils import imwrite, img2tensor, tensor2img -from basicsr.utils.download_util import load_file_from_url -from facelib.utils.face_restoration_helper import FaceRestoreHelper -from facelib.utils.misc import is_gray -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.utils.realesrgan_utils import RealESRGANer - -from basicsr.utils.registry import ARCH_REGISTRY - - -os.system("pip freeze") - -pretrain_model_url = { - 'codeformer': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth', - 'detection': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/detection_Resnet50_Final.pth', - 'parsing': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_parsenet.pth', - 'realesrgan': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/RealESRGAN_x2plus.pth' -} -# download weights -if not os.path.exists('CodeFormer/weights/CodeFormer/codeformer.pth'): - load_file_from_url(url=pretrain_model_url['codeformer'], model_dir='CodeFormer/weights/CodeFormer', progress=True, file_name=None) -if not os.path.exists('CodeFormer/weights/facelib/detection_Resnet50_Final.pth'): - load_file_from_url(url=pretrain_model_url['detection'], model_dir='CodeFormer/weights/facelib', progress=True, file_name=None) -if not os.path.exists('CodeFormer/weights/facelib/parsing_parsenet.pth'): - load_file_from_url(url=pretrain_model_url['parsing'], model_dir='CodeFormer/weights/facelib', progress=True, file_name=None) -if not os.path.exists('CodeFormer/weights/realesrgan/RealESRGAN_x2plus.pth'): - load_file_from_url(url=pretrain_model_url['realesrgan'], model_dir='CodeFormer/weights/realesrgan', progress=True, file_name=None) - -# download images -torch.hub.download_url_to_file( - 'https://replicate.com/api/models/sczhou/codeformer/files/fa3fe3d1-76b0-4ca8-ac0d-0a925cb0ff54/06.png', - '01.png') -torch.hub.download_url_to_file( - 
'https://replicate.com/api/models/sczhou/codeformer/files/a1daba8e-af14-4b00-86a4-69cec9619b53/04.jpg', - '02.jpg') -torch.hub.download_url_to_file( - 'https://replicate.com/api/models/sczhou/codeformer/files/542d64f9-1712-4de7-85f7-3863009a7c3d/03.jpg', - '03.jpg') -torch.hub.download_url_to_file( - 'https://replicate.com/api/models/sczhou/codeformer/files/a11098b0-a18a-4c02-a19a-9a7045d68426/010.jpg', - '04.jpg') -torch.hub.download_url_to_file( - 'https://replicate.com/api/models/sczhou/codeformer/files/7cf19c2c-e0cf-4712-9af8-cf5bdbb8d0ee/012.jpg', - '05.jpg') - -def imread(img_path): - img = cv2.imread(img_path) - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - return img - -# set enhancer with RealESRGAN -def set_realesrgan(): - half = True if torch.cuda.is_available() else False - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=2, - ) - upsampler = RealESRGANer( - scale=2, - model_path="CodeFormer/weights/realesrgan/RealESRGAN_x2plus.pth", - model=model, - tile=400, - tile_pad=40, - pre_pad=0, - half=half, - ) - return upsampler - -upsampler = set_realesrgan() -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -codeformer_net = ARCH_REGISTRY.get("CodeFormer")( - dim_embd=512, - codebook_size=1024, - n_head=8, - n_layers=9, - connect_list=["32", "64", "128", "256"], -).to(device) -ckpt_path = "CodeFormer/weights/CodeFormer/codeformer.pth" -checkpoint = torch.load(ckpt_path)["params_ema"] -codeformer_net.load_state_dict(checkpoint) -codeformer_net.eval() - -os.makedirs('output', exist_ok=True) - -def inference(image, background_enhance, face_upsample, upscale, codeformer_fidelity): - """Run a single prediction on the model""" - try: # global try - # take the default setting for the demo - has_aligned = False - only_center_face = False - draw_box = False - detection_model = "retinaface_resnet50" - print('Inp:', image, background_enhance, face_upsample, upscale, codeformer_fidelity) - - img = cv2.imread(str(image), cv2.IMREAD_COLOR) - print('\timage size:', img.shape) - - upscale = int(upscale) # convert type to int - if upscale > 4: # avoid memory exceeded due to too large upscale - upscale = 4 - if upscale > 2 and max(img.shape[:2])>1000: # avoid memory exceeded due to too large img resolution - upscale = 2 - if max(img.shape[:2]) > 1500: # avoid memory exceeded due to too large img resolution - upscale = 1 - background_enhance = False - face_upsample = False - - face_helper = FaceRestoreHelper( - upscale, - face_size=512, - crop_ratio=(1, 1), - det_model=detection_model, - save_ext="png", - use_parse=True, - device=device, - ) - bg_upsampler = upsampler if background_enhance else None - face_upsampler = upsampler if face_upsample else None - - if has_aligned: - # the input faces are already cropped and aligned - img = cv2.resize(img, (512, 512), interpolation=cv2.INTER_LINEAR) - face_helper.is_gray = is_gray(img, threshold=5) - if face_helper.is_gray: - print('\tgrayscale input: True') - face_helper.cropped_faces = [img] - else: - face_helper.read_image(img) - # get face landmarks for each face - num_det_faces = face_helper.get_face_landmarks_5( - only_center_face=only_center_face, resize=640, eye_dist_threshold=5 - ) - print(f'\tdetect {num_det_faces} faces') - # align and warp each face - face_helper.align_warp_face() - - # face restoration for each cropped face - for idx, cropped_face in enumerate(face_helper.cropped_faces): - # prepare data - cropped_face_t = img2tensor( - cropped_face / 255.0, 
bgr2rgb=True, float32=True - ) - normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True) - cropped_face_t = cropped_face_t.unsqueeze(0).to(device) - - try: - with torch.no_grad(): - output = codeformer_net( - cropped_face_t, w=codeformer_fidelity, adain=True - )[0] - restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1)) - del output - torch.cuda.empty_cache() - except RuntimeError as error: - print(f"Failed inference for CodeFormer: {error}") - restored_face = tensor2img( - cropped_face_t, rgb2bgr=True, min_max=(-1, 1) - ) - - restored_face = restored_face.astype("uint8") - face_helper.add_restored_face(restored_face) - - # paste_back - if not has_aligned: - # upsample the background - if bg_upsampler is not None: - # Now only support RealESRGAN for upsampling background - bg_img = bg_upsampler.enhance(img, outscale=upscale)[0] - else: - bg_img = None - face_helper.get_inverse_affine(None) - # paste each restored face to the input image - if face_upsample and face_upsampler is not None: - restored_img = face_helper.paste_faces_to_input_image( - upsample_img=bg_img, - draw_box=draw_box, - face_upsampler=face_upsampler, - ) - else: - restored_img = face_helper.paste_faces_to_input_image( - upsample_img=bg_img, draw_box=draw_box - ) - - # save restored img - save_path = f'output/out.png' - imwrite(restored_img, str(save_path)) - - restored_img = cv2.cvtColor(restored_img, cv2.COLOR_BGR2RGB) - return restored_img, save_path - except Exception as error: - print('Global exception', error) - return None, None - - -title = "CodeFormer: Robust Face Restoration and Enhancement Network" -description = r"""
CodeFormer logo
-Official Gradio demo for Towards Robust Blind Face Restoration with Codebook Lookup Transformer (NeurIPS 2022).
-🔥 CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.
-🤗 Try CodeFormer for improved stable-diffusion generation!
-""" -article = r""" -If CodeFormer is helpful, please help to ⭐ the Github Repo. Thanks! -[![GitHub Stars](https://img.shields.io/github/stars/sczhou/CodeFormer?style=social)](https://github.com/sczhou/CodeFormer) - ---- - -📝 **Citation** - -If our work is useful for your research, please consider citing: -```bibtex -@inproceedings{zhou2022codeformer, - author = {Zhou, Shangchen and Chan, Kelvin C.K. and Li, Chongyi and Loy, Chen Change}, - title = {Towards Robust Blind Face Restoration with Codebook Lookup TransFormer}, - booktitle = {NeurIPS}, - year = {2022} -} -``` - -📋 **License** - -This project is licensed under S-Lab License 1.0. -Redistribution and use for non-commercial purposes should follow this license. - -📧 **Contact** - -If you have any questions, please feel free to reach me out at shangchenzhou@gmail.com. - -
- 🤗 Find Me: - Twitter Follow - Github Follow -
- -
-""" - -demo = gr.Interface( - inference, [ - gr.inputs.Image(type="filepath", label="Input"), - gr.inputs.Checkbox(default=True, label="Background_Enhance"), - gr.inputs.Checkbox(default=True, label="Face_Upsample"), - gr.inputs.Number(default=2, label="Rescaling_Factor (up to 4)"), - gr.Slider(0, 1, value=0.5, step=0.01, label='Codeformer_Fidelity (0 for better quality, 1 for better identity)') - ], [ - gr.outputs.Image(type="numpy", label="Output"), - gr.outputs.File(label="Download the output") - ], - title=title, - description=description, - article=article, - examples=[ - ['01.png', True, True, 2, 0.7], - ['02.jpg', True, True, 2, 0.7], - ['03.jpg', True, True, 2, 0.7], - ['04.jpg', True, True, 2, 0.1], - ['05.jpg', True, True, 2, 0.1] - ] - ) - -demo.queue(concurrency_count=2) -demo.launch() \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/qtPen.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/qtPen.py deleted file mode 100644 index eb13d03d2f611de4ce0b29ce3995f85e8f9e491a..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/qtPen.py +++ /dev/null @@ -1,29 +0,0 @@ -from fontTools.pens.basePen import BasePen - - -__all__ = ["QtPen"] - - -class QtPen(BasePen): - def __init__(self, glyphSet, path=None): - BasePen.__init__(self, glyphSet) - if path is None: - from PyQt5.QtGui import QPainterPath - - path = QPainterPath() - self.path = path - - def _moveTo(self, p): - self.path.moveTo(*p) - - def _lineTo(self, p): - self.path.lineTo(*p) - - def _curveToOne(self, p1, p2, p3): - self.path.cubicTo(*p1, *p2, *p3) - - def _qCurveToOne(self, p1, p2): - self.path.quadTo(*p1, *p2) - - def _closePath(self): - self.path.closeSubpath() diff --git a/spaces/dcq/freegpt-webui/client/css/sidebar.css b/spaces/dcq/freegpt-webui/client/css/sidebar.css deleted file mode 100644 index a274970e2c354ef58344e9168fdd8cd5b28ac76d..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/client/css/sidebar.css +++ /dev/null @@ -1,202 +0,0 @@ -.sidebar { - max-width: 260px; - padding: var(--section-gap); - flex-shrink: 0; - display: flex; - flex-direction: column; - justify-content: space-between; -} - -.sidebar .title { - font-size: 14px; - font-weight: 500; -} - -.sidebar .conversation-sidebar { - padding: 8px 12px; - display: flex; - gap: 18px; - align-items: center; - user-select: none; - justify-content: space-between; -} - -.sidebar .conversation-sidebar .left { - cursor: pointer; - display: flex; - align-items: center; - gap: 10px; -} - -.sidebar i { - color: var(--conversations); - cursor: pointer; -} - -.sidebar .top { - display: flex; - flex-direction: column; - overflow: hidden; - gap: 16px; - padding-right: 8px; -} - -.sidebar .top:hover { - overflow: auto; -} - -.sidebar .info { - padding: 8px 12px 0px 12px; - display: flex; - align-items: center; - justify-content: center; - user-select: none; - background: transparent; - width: 100%; - border: none; - text-decoration: none; -} - -.sidebar .info span { - color: var(--conversations); - line-height: 1.5; - font-size: 0.75rem; -} - -.sidebar .info i::before { - margin-right: 8px; -} - -.sidebar-footer { - width: 100%; - margin-top: 16px; - display: flex; - flex-direction: column; -} - -.sidebar-footer button { - cursor: pointer; - user-select: none; - background: transparent; -} - -.sidebar.shown { - position: fixed; - top: 0; 
- left: 0; - width: 100%; - height: 100%; - z-index: 1000; -} - -.sidebar.shown .box { - background-color: #16171a; - width: 80%; - height: 100%; - overflow-y: auto; -} - -@keyframes spinner { - to { - transform: rotate(360deg); - } -} - -/* scrollbar */ -.sidebar .top::-webkit-scrollbar { - width: 4px; - padding: 8px 0px; -} - -.sidebar .top::-webkit-scrollbar-track { - background-color: #ffffff00; -} - -.sidebar .top::-webkit-scrollbar-thumb { - background-color: #555555; - border-radius: 10px; -} - -.spinner:before { - content: ""; - box-sizing: border-box; - position: absolute; - top: 50%; - left: 45%; - width: 20px; - height: 20px; - border-radius: 50%; - border: 1px solid var(--conversations); - border-top-color: white; - animation: spinner 0.6s linear infinite; -} - -.mobile-sidebar { - display: none !important; - position: absolute; - z-index: 100000; - top: 0; - left: 0; - margin: 10px; - font-size: 20px; - cursor: pointer; - backdrop-filter: blur(20px); - -webkit-backdrop-filter: blur(20px); - background-color: var(--blur-bg); - border-radius: 10px; - border: 1px solid var(--blur-border); - width: 40px; - height: 40px; - justify-content: center; - align-items: center; - transition: 0.33s; -} - -.mobile-sidebar i { - transition: 0.33s; -} - -.rotated { - transform: rotate(360deg); -} - -.mobile-sidebar.rotated { - position: fixed; - top: 10px; - left: 10px; - z-index: 1001; -} - -@media screen and (max-width: 990px) { - .sidebar { - display: none; - width: 100%; - max-width: none; - } - - .mobile-sidebar { - display: flex !important; - } -} - -@media (max-width: 990px) { - .sidebar .top { - padding-top: 48px; - } -} - -@media (min-width: 768px) { - .sidebar.shown { - position: static; - width: auto; - height: auto; - background-color: transparent; - } - - .sidebar.shown .box { - background-color: #16171a; - width: auto; - height: auto; - overflow-y: auto; - } -} diff --git a/spaces/declare-lab/tango/diffusers/scripts/convert_ldm_original_checkpoint_to_diffusers.py b/spaces/declare-lab/tango/diffusers/scripts/convert_ldm_original_checkpoint_to_diffusers.py deleted file mode 100644 index 0624ac66dd7ea8f0bd867db606562daacb878247..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/scripts/convert_ldm_original_checkpoint_to_diffusers.py +++ /dev/null @@ -1,359 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Conversion script for the LDM checkpoints. """ - -import argparse -import json - -import torch - -from diffusers import DDPMScheduler, LDMPipeline, UNet2DModel, VQModel - - -def shave_segments(path, n_shave_prefix_segments=1): - """ - Removes segments. Positive values shave the first segments, negative shave the last segments. 
- """ - if n_shave_prefix_segments >= 0: - return ".".join(path.split(".")[n_shave_prefix_segments:]) - else: - return ".".join(path.split(".")[:n_shave_prefix_segments]) - - -def renew_resnet_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside resnets to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item.replace("in_layers.0", "norm1") - new_item = new_item.replace("in_layers.2", "conv1") - - new_item = new_item.replace("out_layers.0", "norm2") - new_item = new_item.replace("out_layers.3", "conv2") - - new_item = new_item.replace("emb_layers.1", "time_emb_proj") - new_item = new_item.replace("skip_connection", "conv_shortcut") - - new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -def renew_attention_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside attentions to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - new_item = new_item.replace("norm.weight", "group_norm.weight") - new_item = new_item.replace("norm.bias", "group_norm.bias") - - new_item = new_item.replace("proj_out.weight", "proj_attn.weight") - new_item = new_item.replace("proj_out.bias", "proj_attn.bias") - - new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -def assign_to_checkpoint( - paths, checkpoint, old_checkpoint, attention_paths_to_split=None, additional_replacements=None, config=None -): - """ - This does the final conversion step: take locally converted weights and apply a global renaming - to them. It splits attention layers, and takes into account additional replacements - that may arise. - - Assigns the weights to the new checkpoint. - """ - assert isinstance(paths, list), "Paths should be a list of dicts containing 'old' and 'new' keys." - - # Splits the attention layers into three variables. 
- if attention_paths_to_split is not None: - for path, path_map in attention_paths_to_split.items(): - old_tensor = old_checkpoint[path] - channels = old_tensor.shape[0] // 3 - - target_shape = (-1, channels) if len(old_tensor.shape) == 3 else (-1) - - num_heads = old_tensor.shape[0] // config["num_head_channels"] // 3 - - old_tensor = old_tensor.reshape((num_heads, 3 * channels // num_heads) + old_tensor.shape[1:]) - query, key, value = old_tensor.split(channels // num_heads, dim=1) - - checkpoint[path_map["query"]] = query.reshape(target_shape) - checkpoint[path_map["key"]] = key.reshape(target_shape) - checkpoint[path_map["value"]] = value.reshape(target_shape) - - for path in paths: - new_path = path["new"] - - # These have already been assigned - if attention_paths_to_split is not None and new_path in attention_paths_to_split: - continue - - # Global renaming happens here - new_path = new_path.replace("middle_block.0", "mid_block.resnets.0") - new_path = new_path.replace("middle_block.1", "mid_block.attentions.0") - new_path = new_path.replace("middle_block.2", "mid_block.resnets.1") - - if additional_replacements is not None: - for replacement in additional_replacements: - new_path = new_path.replace(replacement["old"], replacement["new"]) - - # proj_attn.weight has to be converted from conv 1D to linear - if "proj_attn.weight" in new_path: - checkpoint[new_path] = old_checkpoint[path["old"]][:, :, 0] - else: - checkpoint[new_path] = old_checkpoint[path["old"]] - - -def convert_ldm_checkpoint(checkpoint, config): - """ - Takes a state dict and a config, and returns a converted checkpoint. - """ - new_checkpoint = {} - - new_checkpoint["time_embedding.linear_1.weight"] = checkpoint["time_embed.0.weight"] - new_checkpoint["time_embedding.linear_1.bias"] = checkpoint["time_embed.0.bias"] - new_checkpoint["time_embedding.linear_2.weight"] = checkpoint["time_embed.2.weight"] - new_checkpoint["time_embedding.linear_2.bias"] = checkpoint["time_embed.2.bias"] - - new_checkpoint["conv_in.weight"] = checkpoint["input_blocks.0.0.weight"] - new_checkpoint["conv_in.bias"] = checkpoint["input_blocks.0.0.bias"] - - new_checkpoint["conv_norm_out.weight"] = checkpoint["out.0.weight"] - new_checkpoint["conv_norm_out.bias"] = checkpoint["out.0.bias"] - new_checkpoint["conv_out.weight"] = checkpoint["out.2.weight"] - new_checkpoint["conv_out.bias"] = checkpoint["out.2.bias"] - - # Retrieves the keys for the input blocks only - num_input_blocks = len({".".join(layer.split(".")[:2]) for layer in checkpoint if "input_blocks" in layer}) - input_blocks = { - layer_id: [key for key in checkpoint if f"input_blocks.{layer_id}" in key] - for layer_id in range(num_input_blocks) - } - - # Retrieves the keys for the middle blocks only - num_middle_blocks = len({".".join(layer.split(".")[:2]) for layer in checkpoint if "middle_block" in layer}) - middle_blocks = { - layer_id: [key for key in checkpoint if f"middle_block.{layer_id}" in key] - for layer_id in range(num_middle_blocks) - } - - # Retrieves the keys for the output blocks only - num_output_blocks = len({".".join(layer.split(".")[:2]) for layer in checkpoint if "output_blocks" in layer}) - output_blocks = { - layer_id: [key for key in checkpoint if f"output_blocks.{layer_id}" in key] - for layer_id in range(num_output_blocks) - } - - for i in range(1, num_input_blocks): - block_id = (i - 1) // (config["num_res_blocks"] + 1) - layer_in_block_id = (i - 1) % (config["num_res_blocks"] + 1) - - resnets = [key for key in input_blocks[i] if f"input_blocks.{i}.0" 
in key] - attentions = [key for key in input_blocks[i] if f"input_blocks.{i}.1" in key] - - if f"input_blocks.{i}.0.op.weight" in checkpoint: - new_checkpoint[f"down_blocks.{block_id}.downsamplers.0.conv.weight"] = checkpoint[ - f"input_blocks.{i}.0.op.weight" - ] - new_checkpoint[f"down_blocks.{block_id}.downsamplers.0.conv.bias"] = checkpoint[ - f"input_blocks.{i}.0.op.bias" - ] - continue - - paths = renew_resnet_paths(resnets) - meta_path = {"old": f"input_blocks.{i}.0", "new": f"down_blocks.{block_id}.resnets.{layer_in_block_id}"} - resnet_op = {"old": "resnets.2.op", "new": "downsamplers.0.op"} - assign_to_checkpoint( - paths, new_checkpoint, checkpoint, additional_replacements=[meta_path, resnet_op], config=config - ) - - if len(attentions): - paths = renew_attention_paths(attentions) - meta_path = { - "old": f"input_blocks.{i}.1", - "new": f"down_blocks.{block_id}.attentions.{layer_in_block_id}", - } - to_split = { - f"input_blocks.{i}.1.qkv.bias": { - "key": f"down_blocks.{block_id}.attentions.{layer_in_block_id}.key.bias", - "query": f"down_blocks.{block_id}.attentions.{layer_in_block_id}.query.bias", - "value": f"down_blocks.{block_id}.attentions.{layer_in_block_id}.value.bias", - }, - f"input_blocks.{i}.1.qkv.weight": { - "key": f"down_blocks.{block_id}.attentions.{layer_in_block_id}.key.weight", - "query": f"down_blocks.{block_id}.attentions.{layer_in_block_id}.query.weight", - "value": f"down_blocks.{block_id}.attentions.{layer_in_block_id}.value.weight", - }, - } - assign_to_checkpoint( - paths, - new_checkpoint, - checkpoint, - additional_replacements=[meta_path], - attention_paths_to_split=to_split, - config=config, - ) - - resnet_0 = middle_blocks[0] - attentions = middle_blocks[1] - resnet_1 = middle_blocks[2] - - resnet_0_paths = renew_resnet_paths(resnet_0) - assign_to_checkpoint(resnet_0_paths, new_checkpoint, checkpoint, config=config) - - resnet_1_paths = renew_resnet_paths(resnet_1) - assign_to_checkpoint(resnet_1_paths, new_checkpoint, checkpoint, config=config) - - attentions_paths = renew_attention_paths(attentions) - to_split = { - "middle_block.1.qkv.bias": { - "key": "mid_block.attentions.0.key.bias", - "query": "mid_block.attentions.0.query.bias", - "value": "mid_block.attentions.0.value.bias", - }, - "middle_block.1.qkv.weight": { - "key": "mid_block.attentions.0.key.weight", - "query": "mid_block.attentions.0.query.weight", - "value": "mid_block.attentions.0.value.weight", - }, - } - assign_to_checkpoint( - attentions_paths, new_checkpoint, checkpoint, attention_paths_to_split=to_split, config=config - ) - - for i in range(num_output_blocks): - block_id = i // (config["num_res_blocks"] + 1) - layer_in_block_id = i % (config["num_res_blocks"] + 1) - output_block_layers = [shave_segments(name, 2) for name in output_blocks[i]] - output_block_list = {} - - for layer in output_block_layers: - layer_id, layer_name = layer.split(".")[0], shave_segments(layer, 1) - if layer_id in output_block_list: - output_block_list[layer_id].append(layer_name) - else: - output_block_list[layer_id] = [layer_name] - - if len(output_block_list) > 1: - resnets = [key for key in output_blocks[i] if f"output_blocks.{i}.0" in key] - attentions = [key for key in output_blocks[i] if f"output_blocks.{i}.1" in key] - - resnet_0_paths = renew_resnet_paths(resnets) - paths = renew_resnet_paths(resnets) - - meta_path = {"old": f"output_blocks.{i}.0", "new": f"up_blocks.{block_id}.resnets.{layer_in_block_id}"} - assign_to_checkpoint(paths, new_checkpoint, checkpoint, 
additional_replacements=[meta_path], config=config) - - if ["conv.weight", "conv.bias"] in output_block_list.values(): - index = list(output_block_list.values()).index(["conv.weight", "conv.bias"]) - new_checkpoint[f"up_blocks.{block_id}.upsamplers.0.conv.weight"] = checkpoint[ - f"output_blocks.{i}.{index}.conv.weight" - ] - new_checkpoint[f"up_blocks.{block_id}.upsamplers.0.conv.bias"] = checkpoint[ - f"output_blocks.{i}.{index}.conv.bias" - ] - - # Clear attentions as they have been attributed above. - if len(attentions) == 2: - attentions = [] - - if len(attentions): - paths = renew_attention_paths(attentions) - meta_path = { - "old": f"output_blocks.{i}.1", - "new": f"up_blocks.{block_id}.attentions.{layer_in_block_id}", - } - to_split = { - f"output_blocks.{i}.1.qkv.bias": { - "key": f"up_blocks.{block_id}.attentions.{layer_in_block_id}.key.bias", - "query": f"up_blocks.{block_id}.attentions.{layer_in_block_id}.query.bias", - "value": f"up_blocks.{block_id}.attentions.{layer_in_block_id}.value.bias", - }, - f"output_blocks.{i}.1.qkv.weight": { - "key": f"up_blocks.{block_id}.attentions.{layer_in_block_id}.key.weight", - "query": f"up_blocks.{block_id}.attentions.{layer_in_block_id}.query.weight", - "value": f"up_blocks.{block_id}.attentions.{layer_in_block_id}.value.weight", - }, - } - assign_to_checkpoint( - paths, - new_checkpoint, - checkpoint, - additional_replacements=[meta_path], - attention_paths_to_split=to_split if any("qkv" in key for key in attentions) else None, - config=config, - ) - else: - resnet_0_paths = renew_resnet_paths(output_block_layers, n_shave_prefix_segments=1) - for path in resnet_0_paths: - old_path = ".".join(["output_blocks", str(i), path["old"]]) - new_path = ".".join(["up_blocks", str(block_id), "resnets", str(layer_in_block_id), path["new"]]) - - new_checkpoint[new_path] = checkpoint[old_path] - - return new_checkpoint - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--checkpoint_path", default=None, type=str, required=True, help="Path to the checkpoint to convert." - ) - - parser.add_argument( - "--config_file", - default=None, - type=str, - required=True, - help="The config json file corresponding to the architecture.", - ) - - parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.") - - args = parser.parse_args() - - checkpoint = torch.load(args.checkpoint_path) - - with open(args.config_file) as f: - config = json.loads(f.read()) - - converted_checkpoint = convert_ldm_checkpoint(checkpoint, config) - - if "ldm" in config: - del config["ldm"] - - model = UNet2DModel(**config) - model.load_state_dict(converted_checkpoint) - - try: - scheduler = DDPMScheduler.from_config("/".join(args.checkpoint_path.split("/")[:-1])) - vqvae = VQModel.from_pretrained("/".join(args.checkpoint_path.split("/")[:-1])) - - pipe = LDMPipeline(unet=model, scheduler=scheduler, vae=vqvae) - pipe.save_pretrained(args.dump_path) - except: # noqa: E722 - model.save_pretrained(args.dump_path) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/pipeline_flax_utils.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/pipeline_flax_utils.py deleted file mode 100644 index 9d91ff757799e56942f31dcd7830d96f20e168dc..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/pipeline_flax_utils.py +++ /dev/null @@ -1,562 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. 
-# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import importlib -import inspect -import os -from typing import Any, Dict, List, Optional, Union - -import flax -import numpy as np -import PIL -from flax.core.frozen_dict import FrozenDict -from huggingface_hub import snapshot_download -from PIL import Image -from tqdm.auto import tqdm - -from ..configuration_utils import ConfigMixin -from ..models.modeling_flax_utils import FLAX_WEIGHTS_NAME, FlaxModelMixin -from ..schedulers.scheduling_utils_flax import SCHEDULER_CONFIG_NAME, FlaxSchedulerMixin -from ..utils import CONFIG_NAME, DIFFUSERS_CACHE, BaseOutput, http_user_agent, is_transformers_available, logging - - -if is_transformers_available(): - from transformers import FlaxPreTrainedModel - -INDEX_FILE = "diffusion_flax_model.bin" - - -logger = logging.get_logger(__name__) - - -LOADABLE_CLASSES = { - "diffusers": { - "FlaxModelMixin": ["save_pretrained", "from_pretrained"], - "FlaxSchedulerMixin": ["save_pretrained", "from_pretrained"], - "FlaxDiffusionPipeline": ["save_pretrained", "from_pretrained"], - }, - "transformers": { - "PreTrainedTokenizer": ["save_pretrained", "from_pretrained"], - "PreTrainedTokenizerFast": ["save_pretrained", "from_pretrained"], - "FlaxPreTrainedModel": ["save_pretrained", "from_pretrained"], - "FeatureExtractionMixin": ["save_pretrained", "from_pretrained"], - "ProcessorMixin": ["save_pretrained", "from_pretrained"], - "ImageProcessingMixin": ["save_pretrained", "from_pretrained"], - }, -} - -ALL_IMPORTABLE_CLASSES = {} -for library in LOADABLE_CLASSES: - ALL_IMPORTABLE_CLASSES.update(LOADABLE_CLASSES[library]) - - -def import_flax_or_no_model(module, class_name): - try: - # 1. First make sure that if a Flax object is present, import this one - class_obj = getattr(module, "Flax" + class_name) - except AttributeError: - # 2. If this doesn't work, it's not a model and we don't append "Flax" - class_obj = getattr(module, class_name) - except AttributeError: - raise ValueError(f"Neither Flax{class_name} nor {class_name} exist in {module}") - - return class_obj - - -@flax.struct.dataclass -class FlaxImagePipelineOutput(BaseOutput): - """ - Output class for image pipelines. - - Args: - images (`List[PIL.Image.Image]` or `np.ndarray`) - List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width, - num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline. - """ - - images: Union[List[PIL.Image.Image], np.ndarray] - - -class FlaxDiffusionPipeline(ConfigMixin): - r""" - Base class for all models. 
- - [`FlaxDiffusionPipeline`] takes care of storing all components (models, schedulers, processors) for diffusion - pipelines and handles methods for loading, downloading and saving models as well as a few methods common to all - pipelines to: - - - enabling/disabling the progress bar for the denoising iteration - - Class attributes: - - - **config_name** ([`str`]) -- name of the config file that will store the class and module names of all - components of the diffusion pipeline. - """ - config_name = "model_index.json" - - def register_modules(self, **kwargs): - # import it here to avoid circular import - from diffusers import pipelines - - for name, module in kwargs.items(): - if module is None: - register_dict = {name: (None, None)} - else: - # retrieve library - library = module.__module__.split(".")[0] - - # check if the module is a pipeline module - pipeline_dir = module.__module__.split(".")[-2] - path = module.__module__.split(".") - is_pipeline_module = pipeline_dir in path and hasattr(pipelines, pipeline_dir) - - # if library is not in LOADABLE_CLASSES, then it is a custom module. - # Or if it's a pipeline module, then the module is inside the pipeline - # folder so we set the library to module name. - if library not in LOADABLE_CLASSES or is_pipeline_module: - library = pipeline_dir - - # retrieve class_name - class_name = module.__class__.__name__ - - register_dict = {name: (library, class_name)} - - # save model index config - self.register_to_config(**register_dict) - - # set models - setattr(self, name, module) - - def save_pretrained(self, save_directory: Union[str, os.PathLike], params: Union[Dict, FrozenDict]): - # TODO: handle inference_state - """ - Save all variables of the pipeline that can be saved and loaded as well as the pipelines configuration file to - a directory. A pipeline variable can be saved and loaded if its class implements both a save and loading - method. The pipeline can easily be re-loaded using the `[`~FlaxDiffusionPipeline.from_pretrained`]` class - method. - - Arguments: - save_directory (`str` or `os.PathLike`): - Directory to which to save. Will be created if it doesn't exist. 
- """ - self.save_config(save_directory) - - model_index_dict = dict(self.config) - model_index_dict.pop("_class_name") - model_index_dict.pop("_diffusers_version") - model_index_dict.pop("_module", None) - - for pipeline_component_name in model_index_dict.keys(): - sub_model = getattr(self, pipeline_component_name) - if sub_model is None: - # edge case for saving a pipeline with safety_checker=None - continue - - model_cls = sub_model.__class__ - - save_method_name = None - # search for the model's base class in LOADABLE_CLASSES - for library_name, library_classes in LOADABLE_CLASSES.items(): - library = importlib.import_module(library_name) - for base_class, save_load_methods in library_classes.items(): - class_candidate = getattr(library, base_class, None) - if class_candidate is not None and issubclass(model_cls, class_candidate): - # if we found a suitable base class in LOADABLE_CLASSES then grab its save method - save_method_name = save_load_methods[0] - break - if save_method_name is not None: - break - - save_method = getattr(sub_model, save_method_name) - expects_params = "params" in set(inspect.signature(save_method).parameters.keys()) - - if expects_params: - save_method( - os.path.join(save_directory, pipeline_component_name), params=params[pipeline_component_name] - ) - else: - save_method(os.path.join(save_directory, pipeline_component_name)) - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs): - r""" - Instantiate a Flax diffusion pipeline from pre-trained pipeline weights. - - The pipeline is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). - - The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come - pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning - task. - - The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those - weights are discarded. - - Parameters: - pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*): - Can be either: - - - A string, the *repo id* of a pretrained pipeline hosted inside a model repo on - https://huggingface.co/ Valid repo ids have to be located under a user or organization name, like - `CompVis/ldm-text2im-large-256`. - - A path to a *directory* containing pipeline weights saved using - [`~FlaxDiffusionPipeline.save_pretrained`], e.g., `./my_pipeline_directory/`. - dtype (`str` or `jnp.dtype`, *optional*): - Override the default `jnp.dtype` and load the model under this dtype. If `"auto"` is passed the dtype - will be automatically derived from the model's weights. - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received files. Will attempt to resume the download if such a - file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - output_loading_info(`bool`, *optional*, defaults to `False`): - Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. 
- local_files_only(`bool`, *optional*, defaults to `False`): - Whether or not to only look at local files (i.e., do not try to download the model). - use_auth_token (`str` or *bool*, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated - when running `huggingface-cli login` (stored in `~/.huggingface`). - revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. - mirror (`str`, *optional*): - Mirror source to accelerate downloads in China. If you are from China and have an accessibility - problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. - Please refer to the mirror site for more information. specify the folder name here. - - kwargs (remaining dictionary of keyword arguments, *optional*): - Can be used to overwrite load - and saveable variables - *i.e.* the pipeline components - of the - specific pipeline class. The overwritten components are then directly passed to the pipelines - `__init__` method. See example below for more information. - - - - It is required to be logged in (`huggingface-cli login`) when you want to use private or [gated - models](https://huggingface.co/docs/hub/models-gated#gated-models), *e.g.* `"runwayml/stable-diffusion-v1-5"` - - - - - - Activate the special ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use - this method in a firewalled environment. - - - - Examples: - - ```py - >>> from diffusers import FlaxDiffusionPipeline - - >>> # Download pipeline from huggingface.co and cache. - >>> # Requires to be logged in to Hugging Face hub, - >>> # see more in [the documentation](https://huggingface.co/docs/hub/security-tokens) - >>> pipeline, params = FlaxDiffusionPipeline.from_pretrained( - ... "runwayml/stable-diffusion-v1-5", - ... revision="bf16", - ... dtype=jnp.bfloat16, - ... ) - - >>> # Download pipeline, but use a different scheduler - >>> from diffusers import FlaxDPMSolverMultistepScheduler - - >>> model_id = "runwayml/stable-diffusion-v1-5" - >>> dpmpp, dpmpp_state = FlaxDPMSolverMultistepScheduler.from_pretrained( - ... model_id, - ... subfolder="scheduler", - ... ) - - >>> dpm_pipe, dpm_params = FlaxStableDiffusionPipeline.from_pretrained( - ... model_id, revision="bf16", dtype=jnp.bfloat16, scheduler=dpmpp - ... ) - >>> dpm_params["scheduler"] = dpmpp_state - ``` - """ - cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE) - resume_download = kwargs.pop("resume_download", False) - proxies = kwargs.pop("proxies", None) - local_files_only = kwargs.pop("local_files_only", False) - use_auth_token = kwargs.pop("use_auth_token", None) - revision = kwargs.pop("revision", None) - from_pt = kwargs.pop("from_pt", False) - dtype = kwargs.pop("dtype", None) - - # 1. 
Download the checkpoints and configs - # use snapshot download here to get it working from from_pretrained - if not os.path.isdir(pretrained_model_name_or_path): - config_dict = cls.load_config( - pretrained_model_name_or_path, - cache_dir=cache_dir, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - ) - # make sure we only download sub-folders and `diffusers` filenames - folder_names = [k for k in config_dict.keys() if not k.startswith("_")] - allow_patterns = [os.path.join(k, "*") for k in folder_names] - allow_patterns += [FLAX_WEIGHTS_NAME, SCHEDULER_CONFIG_NAME, CONFIG_NAME, cls.config_name] - - # make sure we don't download PyTorch weights, unless when using from_pt - ignore_patterns = "*.bin" if not from_pt else [] - - if cls != FlaxDiffusionPipeline: - requested_pipeline_class = cls.__name__ - else: - requested_pipeline_class = config_dict.get("_class_name", cls.__name__) - requested_pipeline_class = ( - requested_pipeline_class - if requested_pipeline_class.startswith("Flax") - else "Flax" + requested_pipeline_class - ) - - user_agent = {"pipeline_class": requested_pipeline_class} - user_agent = http_user_agent(user_agent) - - # download all allow_patterns - cached_folder = snapshot_download( - pretrained_model_name_or_path, - cache_dir=cache_dir, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - allow_patterns=allow_patterns, - ignore_patterns=ignore_patterns, - user_agent=user_agent, - ) - else: - cached_folder = pretrained_model_name_or_path - - config_dict = cls.load_config(cached_folder) - - # 2. Load the pipeline class, if using custom module then load it from the hub - # if we load from explicit class, let's use it - if cls != FlaxDiffusionPipeline: - pipeline_class = cls - else: - diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) - class_name = ( - config_dict["_class_name"] - if config_dict["_class_name"].startswith("Flax") - else "Flax" + config_dict["_class_name"] - ) - pipeline_class = getattr(diffusers_module, class_name) - - # some modules can be passed directly to the init - # in this case they are already instantiated in `kwargs` - # extract them here - expected_modules, optional_kwargs = cls._get_signature_keys(pipeline_class) - passed_class_obj = {k: kwargs.pop(k) for k in expected_modules if k in kwargs} - - init_dict, _, _ = pipeline_class.extract_init_dict(config_dict, **kwargs) - - init_kwargs = {} - - # inference_params - params = {} - - # import it here to avoid circular import - from diffusers import pipelines - - # 3. Load each module in the pipeline - for name, (library_name, class_name) in init_dict.items(): - if class_name is None: - # edge case for when the pipeline was saved with safety_checker=None - init_kwargs[name] = None - continue - - is_pipeline_module = hasattr(pipelines, library_name) - loaded_sub_model = None - sub_model_should_be_defined = True - - # if the model is in a pipeline module, then we load it from the pipeline - if name in passed_class_obj: - # 1. 
check that passed_class_obj has correct parent class - if not is_pipeline_module: - library = importlib.import_module(library_name) - class_obj = getattr(library, class_name) - importable_classes = LOADABLE_CLASSES[library_name] - class_candidates = {c: getattr(library, c, None) for c in importable_classes.keys()} - - expected_class_obj = None - for class_name, class_candidate in class_candidates.items(): - if class_candidate is not None and issubclass(class_obj, class_candidate): - expected_class_obj = class_candidate - - if not issubclass(passed_class_obj[name].__class__, expected_class_obj): - raise ValueError( - f"{passed_class_obj[name]} is of type: {type(passed_class_obj[name])}, but should be" - f" {expected_class_obj}" - ) - elif passed_class_obj[name] is None: - logger.warning( - f"You have passed `None` for {name} to disable its functionality in {pipeline_class}. Note" - f" that this might lead to problems when using {pipeline_class} and is not recommended." - ) - sub_model_should_be_defined = False - else: - logger.warning( - f"You have passed a non-standard module {passed_class_obj[name]}. We cannot verify whether it" - " has the correct type" - ) - - # set passed class object - loaded_sub_model = passed_class_obj[name] - elif is_pipeline_module: - pipeline_module = getattr(pipelines, library_name) - class_obj = import_flax_or_no_model(pipeline_module, class_name) - - importable_classes = ALL_IMPORTABLE_CLASSES - class_candidates = {c: class_obj for c in importable_classes.keys()} - else: - # else we just import it from the library. - library = importlib.import_module(library_name) - class_obj = import_flax_or_no_model(library, class_name) - - importable_classes = LOADABLE_CLASSES[library_name] - class_candidates = {c: getattr(library, c, None) for c in importable_classes.keys()} - - if loaded_sub_model is None and sub_model_should_be_defined: - load_method_name = None - for class_name, class_candidate in class_candidates.items(): - if class_candidate is not None and issubclass(class_obj, class_candidate): - load_method_name = importable_classes[class_name][1] - - load_method = getattr(class_obj, load_method_name) - - # check if the module is in a subdirectory - if os.path.isdir(os.path.join(cached_folder, name)): - loadable_folder = os.path.join(cached_folder, name) - else: - loaded_sub_model = cached_folder - - if issubclass(class_obj, FlaxModelMixin): - loaded_sub_model, loaded_params = load_method(loadable_folder, from_pt=from_pt, dtype=dtype) - params[name] = loaded_params - elif is_transformers_available() and issubclass(class_obj, FlaxPreTrainedModel): - if from_pt: - # TODO(Suraj): Fix this in Transformers. We should be able to use `_do_init=False` here - loaded_sub_model = load_method(loadable_folder, from_pt=from_pt) - loaded_params = loaded_sub_model.params - del loaded_sub_model._params - else: - loaded_sub_model, loaded_params = load_method(loadable_folder, _do_init=False) - params[name] = loaded_params - elif issubclass(class_obj, FlaxSchedulerMixin): - loaded_sub_model, scheduler_state = load_method(loadable_folder) - params[name] = scheduler_state - else: - loaded_sub_model = load_method(loadable_folder) - - init_kwargs[name] = loaded_sub_model # UNet(...), # DiffusionSchedule(...) - - # 4. 
Potentially add passed objects if expected - missing_modules = set(expected_modules) - set(init_kwargs.keys()) - passed_modules = list(passed_class_obj.keys()) - - if len(missing_modules) > 0 and missing_modules <= set(passed_modules): - for module in missing_modules: - init_kwargs[module] = passed_class_obj.get(module, None) - elif len(missing_modules) > 0: - passed_modules = set(list(init_kwargs.keys()) + list(passed_class_obj.keys())) - optional_kwargs - raise ValueError( - f"Pipeline {pipeline_class} expected {expected_modules}, but only {passed_modules} were passed." - ) - - model = pipeline_class(**init_kwargs, dtype=dtype) - return model, params - - @staticmethod - def _get_signature_keys(obj): - parameters = inspect.signature(obj.__init__).parameters - required_parameters = {k: v for k, v in parameters.items() if v.default == inspect._empty} - optional_parameters = set({k for k, v in parameters.items() if v.default != inspect._empty}) - expected_modules = set(required_parameters.keys()) - {"self"} - return expected_modules, optional_parameters - - @property - def components(self) -> Dict[str, Any]: - r""" - - The `self.components` property can be useful to run different pipelines with the same weights and - configurations to not have to re-allocate memory. - - Examples: - - ```py - >>> from diffusers import ( - ... FlaxStableDiffusionPipeline, - ... FlaxStableDiffusionImg2ImgPipeline, - ... ) - - >>> text2img = FlaxStableDiffusionPipeline.from_pretrained( - ... "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jnp.bfloat16 - ... ) - >>> img2img = FlaxStableDiffusionImg2ImgPipeline(**text2img.components) - ``` - - Returns: - A dictionary containing all the modules needed to initialize the pipeline. - """ - expected_modules, optional_parameters = self._get_signature_keys(self) - components = { - k: getattr(self, k) for k in self.config.keys() if not k.startswith("_") and k not in optional_parameters - } - - if set(components.keys()) != expected_modules: - raise ValueError( - f"{self} has been incorrectly initialized or {self.__class__} is incorrectly implemented. Expected" - f" {expected_modules} to be defined, but {components} are defined." - ) - - return components - - @staticmethod - def numpy_to_pil(images): - """ - Convert a numpy image or a batch of images to a PIL image. - """ - if images.ndim == 3: - images = images[None, ...] - images = (images * 255).round().astype("uint8") - if images.shape[-1] == 1: - # special case for grayscale (single channel) images - pil_images = [Image.fromarray(image.squeeze(), mode="L") for image in images] - else: - pil_images = [Image.fromarray(image) for image in images] - - return pil_images - - # TODO: make it compatible with jax.lax - def progress_bar(self, iterable): - if not hasattr(self, "_progress_bar_config"): - self._progress_bar_config = {} - elif not isinstance(self._progress_bar_config, dict): - raise ValueError( - f"`self._progress_bar_config` should be of type `dict`, but is {type(self._progress_bar_config)}." 
- ) - - return tqdm(iterable, **self._progress_bar_config) - - def set_progress_bar_config(self, **kwargs): - self._progress_bar_config = kwargs diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/__init__.py b/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/devthedeveloper/Bark-with-Voice-Cloning/bark/hubert/hubert_manager.py b/spaces/devthedeveloper/Bark-with-Voice-Cloning/bark/hubert/hubert_manager.py deleted file mode 100644 index 1a6c2fb1a878e5e54d78d9d50826a508fedff88c..0000000000000000000000000000000000000000 --- a/spaces/devthedeveloper/Bark-with-Voice-Cloning/bark/hubert/hubert_manager.py +++ /dev/null @@ -1,48 +0,0 @@ -import os.path -import shutil -import urllib.request - -import huggingface_hub - - -class HuBERTManager: - - - @staticmethod - def make_sure_hubert_installed(download_url: str = 'https://dl.fbaipublicfiles.com/hubert/hubert_base_ls960.pt', file_name: str = 'hubert.pt'): - install_dir = os.path.join('models', 'hubert') - if not os.path.isdir(install_dir): - os.makedirs(install_dir, exist_ok=True) - install_file = os.path.join(install_dir, file_name) - if not os.path.isfile(install_file): - print(f'Downloading HuBERT base model from {download_url}') - urllib.request.urlretrieve(download_url, install_file) - print('Downloaded HuBERT') - return install_file - - - @staticmethod - def make_sure_tokenizer_installed(model: str = 'quantifier_hubert_base_ls960_14.pth', repo: str = 'GitMylo/bark-voice-cloning', tokenizer_lang: str = 'en'): - local_file = tokenizer_lang + '_tokenizer.pth' - install_dir = os.path.join('models', 'hubert') - if not os.path.isdir(install_dir): - os.makedirs(install_dir, exist_ok=True) - install_file = os.path.join(install_dir, local_file) - if not os.path.isfile(install_file): - # refactor to use lists - if tokenizer_lang == 'en': - repo = 'GitMylo/bark-voice-cloning' - model = 'quantifier_hubert_base_ls960_14.pth' - elif tokenizer_lang == 'de': - repo = 'CountFloyd/bark-voice-cloning-german-HuBERT-quantizer' - model = 'german-HuBERT-quantizer_14_epoch.pth' - elif tokenizer_lang == 'pl': - repo = 'Hobis/bark-voice-cloning-polish-HuBERT-quantizer' - model = 'polish-HuBERT-quantizer_8_epoch.pth' - else: - raise 'Unknown Tokenizer Language!' - print(f'{local_file} not found. Downloading HuBERT custom tokenizer') - huggingface_hub.hf_hub_download(repo, model, local_dir=install_dir, local_dir_use_symlinks=False) - shutil.move(os.path.join(install_dir, model), install_file) - print('Downloaded tokenizer') - return install_file diff --git a/spaces/diacanFperku/AutoGPT/Cool Edit Pro Expired Registration Program.md b/spaces/diacanFperku/AutoGPT/Cool Edit Pro Expired Registration Program.md deleted file mode 100644 index 0530714194ce578624a3dd3b365a7be1b48ecf3b..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Cool Edit Pro Expired Registration Program.md +++ /dev/null @@ -1,26 +0,0 @@ -

Cool Edit Pro Expired Registration Program


Download 🗹 https://gohhs.com/2uFU9X



-
-code - -Call us at the official Xilinx support phone number +1-800-983-8321. All these configurations are configured to match the cache in your case, some of them may require an enhanced Linux kernel. If so, it is not and its contents will be eerily similar to the generic data on the other drives. - -This will create a new file called install. You must connect this to your desktop computer. For information on how to determine the type of computer you have, refer to the sidebar section below for an overview of several methods that can help determine this. I will say this is a good way to learn a bit about Linux and Linux internals, and it's well worth the price of admission. - -After the install finishes and the new files have been copied to your desktop, double-click on the mydumper tool to run it. To find a file is normally done by calling the file function, which is documented in the built-in help. The combination of RAID-1 and mirrored RAID-0 pairs in some cases results in a better performance, like when the redundant data must travel faster than the user-requested speed of either storage medium. - -If the above step is successful, you will see an area designated as net-pf-5, set to 0, containing a fat32 partition. The performance of the server is also not great. Enable SMART monitoring on the drive you are using by running hdparm -N /dev/sdX or alternatively, sudo hdparm -N /dev/sdX where X is the drive number. - -This can be done manually using the DiskMover program. You can also edit this via any text editor (e. Create a file called net-pf-4 and set it to a size of 1MB and a value of 1MB. The Xilinx Support number is +1-800-983-8321. - -You can make your life easier by just specifying a Fs= on the kernel command line. - -Read more about the security risks and how you can secure your network and computer. Click OK and you are done. - -You can also use Novell Netware client to update the new version of the OS. - -Click on the [Reg] button at the bottom of the page to open the System registry. - -You can find a great article on the steps to install GParted here. The other two types 4fefd39f24
-
-
-

diff --git a/spaces/ds520/bingo/src/app/loading.css b/spaces/ds520/bingo/src/app/loading.css deleted file mode 100644 index eaaab6a86a228334c4eca3c5368ae6f0f593d405..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/app/loading.css +++ /dev/null @@ -1,68 +0,0 @@ -::-webkit-scrollbar { - width: 10px; - height: 10px; - display: none; -} - -::-webkit-scrollbar-button:start:decrement, -::-webkit-scrollbar-button:end:increment { - height: 30px; - background-color: transparent; -} - -::-webkit-scrollbar-track-piece { - background-color: #3b3b3b; - -webkit-border-radius: 16px; -} - -::-webkit-scrollbar-thumb:vertical { - height: 50px; - background-color: #666; - border: 1px solid #eee; - -webkit-border-radius: 6px; -} - -/* loading start */ -.loading-spinner { - display: flex; - justify-content: center; - align-items: center; - height: 100vh; - opacity: 1; - transition: opacity .8s ease-out; -} - -.loading-spinner.hidden { - opacity: 0; -} - -.loading-spinner>div { - width: 30px; - height: 30px; - background: linear-gradient(90deg, #2870EA 10.79%, #1B4AEF 87.08%); - - border-radius: 100%; - display: inline-block; - animation: sk-bouncedelay 1.4s infinite ease-in-out both; -} - -.loading-spinner .bounce1 { - animation-delay: -0.32s; -} - -.loading-spinner .bounce2 { - animation-delay: -0.16s; -} - -@keyframes sk-bouncedelay { - - 0%, - 80%, - 100% { - transform: scale(0); - } - - 40% { - transform: scale(1.0); - } -} diff --git a/spaces/dusanstanis/TheBloke-guanaco-65B-HF/app.py b/spaces/dusanstanis/TheBloke-guanaco-65B-HF/app.py deleted file mode 100644 index 74264a7b5866b872e0c1386cc21d4168d2e68c33..0000000000000000000000000000000000000000 --- a/spaces/dusanstanis/TheBloke-guanaco-65B-HF/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/TheBloke/guanaco-65B-HF").launch() \ No newline at end of file diff --git a/spaces/edenehuyh/Demo_RealESRGAN/README.md b/spaces/edenehuyh/Demo_RealESRGAN/README.md deleted file mode 100644 index 09bed30a6e37df1c5a60e06ce54a0971e4f7e572..0000000000000000000000000000000000000000 --- a/spaces/edenehuyh/Demo_RealESRGAN/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Real-ESRGAN Demo for Image Restoration and Upscaling -emoji: 🖼️ -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: true ---- \ No newline at end of file diff --git a/spaces/elpsycongroo19/simple_chatbot/simple_app.py b/spaces/elpsycongroo19/simple_chatbot/simple_app.py deleted file mode 100644 index 66587582141736f525e1a6cc68de2ce8187e25eb..0000000000000000000000000000000000000000 --- a/spaces/elpsycongroo19/simple_chatbot/simple_app.py +++ /dev/null @@ -1,58 +0,0 @@ -import openai -import os -import gradio as gr - -openai.api_key = os.environ.get("OPENAI_API_KEY") - -class Conversation: - def __init__(self, prompt, num_of_round): - self.prompt = prompt - self.num_of_round = num_of_round - self.messages = [] - self.messages.append({"role": "system", "content": self.prompt}) - - def ask(self, question): - try: - self.messages.append( {"role": "user", "content": question}) - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=self.messages, - temperature=0.5, - max_tokens=2048, - top_p=1, - ) - except Exception as e: - print(e) - return e - - message = response["choices"][0]["message"]["content"] - self.messages.append({"role": "assistant", "content": message}) - - if len(self.messages) > self.num_of_round*2 + 1: - del self.messages[1:3] - return message - - -prompt = 
"""你是GPT4。你的回答需要满足以下要求: -1. 你的回答必须是中文 -2. 回答限制在500个字以内""" - -conv = Conversation(prompt, 10) - -def predict(input, history=[]): - history.append(input) - response = conv.ask(input) - history.append(response) - responses = [(u,b) for u,b in zip(history[::2], history[1::2])] - return responses, history - -with gr.Blocks(css="#chatbot{height:350px} .overflow-y-auto{height:500px}") as demo: - chatbot = gr.Chatbot(elem_id="chatbot") - state = gr.State([]) - - with gr.Row(): - txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter").style(container=False) - - txt.submit(predict, [txt, state], [chatbot, state]) - -demo.launch() \ No newline at end of file diff --git a/spaces/emilylearning/spurious_correlation_evaluation/README.md b/spaces/emilylearning/spurious_correlation_evaluation/README.md deleted file mode 100644 index cef6e2e01ae463470bece9d5c97c7dbb7dadf2d8..0000000000000000000000000000000000000000 --- a/spaces/emilylearning/spurious_correlation_evaluation/README.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: Specification-induced correlations -emoji: 💻 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.0.10 -app_file: app.py -pinned: false ---- - -This is a demo is a simplified version of Method 1 described in the paper, ["Underspecification in Language Modeling Tasks: A Causality-Informed Study of Gendered Pronoun Resolution -"](https://arxiv.org/abs/2210.00131) - -``` -@misc{mcmilin2023underspecification, - title={Underspecification in Language Modeling Tasks: A Causality-Informed Study of Gendered Pronoun Resolution}, - author={Emily McMilin}, - year={2023}, - eprint={2210.00131}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} diff --git a/spaces/enzostvs/stable-diffusion-tpu/app/api/collections/route.ts b/spaces/enzostvs/stable-diffusion-tpu/app/api/collections/route.ts deleted file mode 100644 index 8a9361c1a9da80e8157173e58efb162cd58bd712..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/app/api/collections/route.ts +++ /dev/null @@ -1,64 +0,0 @@ -import { PrismaClient } from '@prisma/client' - -import { isAdmin } from '@/utils/checker/is_admin' - -const prisma = new PrismaClient() - -export async function GET(request: Request) { - const { headers } = request - const { searchParams } = new URL(request.url) - const userId = searchParams.get('userId') ?? undefined - const page = searchParams.get('page') ? 
parseInt(searchParams.get('page') as string) : 0 - - let is_admin = false - if (headers.get('Authorization') ) { - is_admin = await isAdmin(headers) as boolean - } - - const query: any = { - orderBy: { - id: 'desc' - }, - where: { - is_visible: { - equals: true - } - }, - take: 15, - skip: page * 15 - } - - if (userId) { - query.where = { - userId: { - equals: userId - }, - is_visible: { - equals: true - } - } - } else if (is_admin) { - query.where = { - is_visible: { - equals: undefined - } - } - } - - const collections = await prisma.collection.findMany(query) - - const total = await prisma.collection.count() - - return Response.json( - { - collections, - pagination: { - total, - page, - total_pages: Math.ceil(total / 15) - }, - status: 200, - ok: true - } - ) -} \ No newline at end of file diff --git a/spaces/eson/tokenizer-arena/vocab/kplug/langconv.py b/spaces/eson/tokenizer-arena/vocab/kplug/langconv.py deleted file mode 100644 index 7cf914af231c0eca8399ab6f65fa7563e4c7fbae..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/kplug/langconv.py +++ /dev/null @@ -1,276 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -from copy import deepcopy -import re - -try: - import psyco - psyco.full() -except: - pass - -try: - from zh_wiki import zh2Hant, zh2Hans -except ImportError: - from zhtools.zh_wiki import zh2Hant, zh2Hans - -import sys -py3k = sys.version_info >= (3, 0, 0) - -if py3k: - UEMPTY = '' -else: - _zh2Hant, _zh2Hans = {}, {} - for old, new in ((zh2Hant, _zh2Hant), (zh2Hans, _zh2Hans)): - for k, v in old.items(): - new[k.decode('utf8')] = v.decode('utf8') - zh2Hant = _zh2Hant - zh2Hans = _zh2Hans - UEMPTY = ''.decode('utf8') - -# states -(START, END, FAIL, WAIT_TAIL) = list(range(4)) -# conditions -(TAIL, ERROR, MATCHED_SWITCH, UNMATCHED_SWITCH, CONNECTOR) = list(range(5)) - -MAPS = {} - -class Node(object): - def __init__(self, from_word, to_word=None, is_tail=True, - have_child=False): - self.from_word = from_word - if to_word is None: - self.to_word = from_word - self.data = (is_tail, have_child, from_word) - self.is_original = True - else: - self.to_word = to_word or from_word - self.data = (is_tail, have_child, to_word) - self.is_original = False - self.is_tail = is_tail - self.have_child = have_child - - def is_original_long_word(self): - return self.is_original and len(self.from_word)>1 - - def is_follow(self, chars): - return chars != self.from_word[:-1] - - def __str__(self): - return '' % (repr(self.from_word), - repr(self.to_word), self.is_tail, self.have_child) - - __repr__ = __str__ - -class ConvertMap(object): - def __init__(self, name, mapping=None): - self.name = name - self._map = {} - if mapping: - self.set_convert_map(mapping) - - def set_convert_map(self, mapping): - convert_map = {} - have_child = {} - max_key_length = 0 - for key in sorted(mapping.keys()): - if len(key)>1: - for i in range(1, len(key)): - parent_key = key[:i] - have_child[parent_key] = True - have_child[key] = False - max_key_length = max(max_key_length, len(key)) - for key in sorted(have_child.keys()): - convert_map[key] = (key in mapping, have_child[key], - mapping.get(key, UEMPTY)) - self._map = convert_map - self.max_key_length = max_key_length - - def __getitem__(self, k): - try: - is_tail, have_child, to_word = self._map[k] - return Node(k, to_word, is_tail, have_child) - except: - return Node(k) - - def __contains__(self, k): - return k in self._map - - def __len__(self): - return len(self._map) - -class StatesMachineException(Exception): pass - 
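# A minimal usage sketch of ConvertMap and Node (the two-entry mapping below is a toy
# assumption for illustration only; the real tables come from zh_wiki and are wired up by
# registery() further down in this file):
#
#     demo_map = ConvertMap('demo', {'汉': '漢', '汉字': '漢字'})
#     node = demo_map['汉']      # Node(from_word='汉', to_word='漢', is_tail=True, have_child=True)
#     '汉字' in demo_map         # True: the full key was registered in the internal dict
#     demo_map.max_key_length    # 2: length of the longest key in the mapping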
-class StatesMachine(object): - def __init__(self): - self.state = START - self.final = UEMPTY - self.len = 0 - self.pool = UEMPTY - - def clone(self, pool): - new = deepcopy(self) - new.state = WAIT_TAIL - new.pool = pool - return new - - def feed(self, char, map): - node = map[self.pool+char] - - if node.have_child: - if node.is_tail: - if node.is_original: - cond = UNMATCHED_SWITCH - else: - cond = MATCHED_SWITCH - else: - cond = CONNECTOR - else: - if node.is_tail: - cond = TAIL - else: - cond = ERROR - - new = None - if cond == ERROR: - self.state = FAIL - elif cond == TAIL: - if self.state == WAIT_TAIL and node.is_original_long_word(): - self.state = FAIL - else: - self.final += node.to_word - self.len += 1 - self.pool = UEMPTY - self.state = END - elif self.state == START or self.state == WAIT_TAIL: - if cond == MATCHED_SWITCH: - new = self.clone(node.from_word) - self.final += node.to_word - self.len += 1 - self.state = END - self.pool = UEMPTY - elif cond == UNMATCHED_SWITCH or cond == CONNECTOR: - if self.state == START: - new = self.clone(node.from_word) - self.final += node.to_word - self.len += 1 - self.state = END - else: - if node.is_follow(self.pool): - self.state = FAIL - else: - self.pool = node.from_word - elif self.state == END: - # END is a new START - self.state = START - new = self.feed(char, map) - elif self.state == FAIL: - raise StatesMachineException('Translate States Machine ' - 'have error with input data %s' % node) - return new - - def __len__(self): - return self.len + 1 - - def __str__(self): - return '' % ( - id(self), self.pool, self.state, self.final) - __repr__ = __str__ - -class Converter(object): - def __init__(self, to_encoding): - self.to_encoding = to_encoding - self.map = MAPS[to_encoding] - self.start() - - def feed(self, char): - branches = [] - for fsm in self.machines: - new = fsm.feed(char, self.map) - if new: - branches.append(new) - if branches: - self.machines.extend(branches) - self.machines = [fsm for fsm in self.machines if fsm.state != FAIL] - all_ok = True - for fsm in self.machines: - if fsm.state != END: - all_ok = False - if all_ok: - self._clean() - return self.get_result() - - def _clean(self): - if len(self.machines): - self.machines.sort(key=lambda x: len(x)) - # self.machines.sort(cmp=lambda x,y: cmp(len(x), len(y))) - self.final += self.machines[0].final - self.machines = [StatesMachine()] - - def start(self): - self.machines = [StatesMachine()] - self.final = UEMPTY - - def end(self): - self.machines = [fsm for fsm in self.machines - if fsm.state == FAIL or fsm.state == END] - self._clean() - - def convert(self, string): - self.start() - for char in string: - self.feed(char) - self.end() - return self.get_result() - - def get_result(self): - return self.final - - -def registery(name, mapping): - global MAPS - MAPS[name] = ConvertMap(name, mapping) - -registery('zh-hant', zh2Hant) -registery('zh-hans', zh2Hans) -del zh2Hant, zh2Hans - - -def run(): - import sys - from optparse import OptionParser - parser = OptionParser() - parser.add_option('-e', type='string', dest='encoding', - help='encoding') - parser.add_option('-f', type='string', dest='file_in', - help='input file (- for stdin)') - parser.add_option('-t', type='string', dest='file_out', - help='output file') - (options, args) = parser.parse_args() - if not options.encoding: - parser.error('encoding must be set') - if options.file_in: - if options.file_in == '-': - file_in = sys.stdin - else: - file_in = open(options.file_in) - else: - file_in = sys.stdin - if 
options.file_out: - if options.file_out == '-': - file_out = sys.stdout - else: - file_out = open(options.file_out, 'wb') - else: - file_out = sys.stdout - - c = Converter(options.encoding) - for line in file_in: - # print >> file_out, c.convert(line.rstrip('\n').decode( - file_out.write(c.convert(line.rstrip('\n').decode( - 'utf8')).encode('utf8')) - - -if __name__ == '__main__': - run() \ No newline at end of file diff --git a/spaces/evaluate-metric/cer/cer.py b/spaces/evaluate-metric/cer/cer.py deleted file mode 100644 index c5f4a907243b5755b97dca18276ded5b1473828c..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/cer/cer.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright 2021 The HuggingFace Evaluate Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Character Error Ratio (CER) metric. """ - -from typing import List - -import datasets -import jiwer -import jiwer.transforms as tr -from datasets.config import PY_VERSION -from packaging import version - -import evaluate - - -if PY_VERSION < version.parse("3.8"): - import importlib_metadata -else: - import importlib.metadata as importlib_metadata - - -SENTENCE_DELIMITER = "" - - -if version.parse(importlib_metadata.version("jiwer")) < version.parse("2.3.0"): - - class SentencesToListOfCharacters(tr.AbstractTransform): - def __init__(self, sentence_delimiter: str = " "): - self.sentence_delimiter = sentence_delimiter - - def process_string(self, s: str): - return list(s) - - def process_list(self, inp: List[str]): - chars = [] - for sent_idx, sentence in enumerate(inp): - chars.extend(self.process_string(sentence)) - if self.sentence_delimiter is not None and self.sentence_delimiter != "" and sent_idx < len(inp) - 1: - chars.append(self.sentence_delimiter) - return chars - - cer_transform = tr.Compose( - [tr.RemoveMultipleSpaces(), tr.Strip(), SentencesToListOfCharacters(SENTENCE_DELIMITER)] - ) -else: - cer_transform = tr.Compose( - [ - tr.RemoveMultipleSpaces(), - tr.Strip(), - tr.ReduceToSingleSentence(SENTENCE_DELIMITER), - tr.ReduceToListOfListOfChars(), - ] - ) - - -_CITATION = """\ -@inproceedings{inproceedings, - author = {Morris, Andrew and Maier, Viktoria and Green, Phil}, - year = {2004}, - month = {01}, - pages = {}, - title = {From WER and RIL to MER and WIL: improved evaluation measures for connected speech recognition.} -} -""" - -_DESCRIPTION = """\ -Character error rate (CER) is a common metric of the performance of an automatic speech recognition system. - -CER is similar to Word Error Rate (WER), but operates on character instead of word. Please refer to docs of WER for further information. - -Character error rate can be computed as: - -CER = (S + D + I) / N = (S + D + I) / (S + D + C) - -where - -S is the number of substitutions, -D is the number of deletions, -I is the number of insertions, -C is the number of correct characters, -N is the number of characters in the reference (N=S+D+C). 
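For illustration (hypothetical counts, not from a real evaluation): if the reference is 4 characters long and the prediction makes 1 substitution, 0 deletions and 1 insertion (S=1, D=0, I=1, C=3, so N=S+D+C=4), then CER = (1+0+1)/4 = 0.5.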
- -CER's output is not always a number between 0 and 1, in particular when there is a high number of insertions. This value is often associated to the percentage of characters that were incorrectly predicted. The lower the value, the better the -performance of the ASR system with a CER of 0 being a perfect score. -""" - -_KWARGS_DESCRIPTION = """ -Computes CER score of transcribed segments against references. -Args: - references: list of references for each speech input. - predictions: list of transcribtions to score. - concatenate_texts: Whether or not to concatenate sentences before evaluation, set to True for more accurate result. -Returns: - (float): the character error rate - -Examples: - - >>> predictions = ["this is the prediction", "there is an other sample"] - >>> references = ["this is the reference", "there is another one"] - >>> cer = evaluate.load("cer") - >>> cer_score = cer.compute(predictions=predictions, references=references) - >>> print(cer_score) - 0.34146341463414637 -""" - - -@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) -class CER(evaluate.Metric): - def _info(self): - return evaluate.MetricInfo( - description=_DESCRIPTION, - citation=_CITATION, - inputs_description=_KWARGS_DESCRIPTION, - features=datasets.Features( - { - "predictions": datasets.Value("string", id="sequence"), - "references": datasets.Value("string", id="sequence"), - } - ), - codebase_urls=["https://github.com/jitsi/jiwer/"], - reference_urls=[ - "https://en.wikipedia.org/wiki/Word_error_rate", - "https://sites.google.com/site/textdigitisation/qualitymeasures/computingerrorrates", - ], - ) - - def _compute(self, predictions, references, concatenate_texts=False): - if concatenate_texts: - return jiwer.compute_measures( - references, - predictions, - truth_transform=cer_transform, - hypothesis_transform=cer_transform, - )["wer"] - - incorrect = 0 - total = 0 - for prediction, reference in zip(predictions, references): - measures = jiwer.compute_measures( - reference, - prediction, - truth_transform=cer_transform, - hypothesis_transform=cer_transform, - ) - incorrect += measures["substitutions"] + measures["deletions"] + measures["insertions"] - total += measures["substitutions"] + measures["deletions"] + measures["hits"] - - return incorrect / total diff --git a/spaces/falterWliame/Face_Mask_Detection/Dont Chat With Strangers Download PATCHED Blackbox.md b/spaces/falterWliame/Face_Mask_Detection/Dont Chat With Strangers Download PATCHED Blackbox.md deleted file mode 100644 index 7af5d8561dab4fcfbf49464d1380fb76a7800e40..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Dont Chat With Strangers Download PATCHED Blackbox.md +++ /dev/null @@ -1,10 +0,0 @@ -
-

Protecting company information is the company's responsibility. Whether or not the IT department is in charge of that protection, there's no excuse for individuals putting corporate info out for the internet to grab without proper authorization.

-

A very popular and free way to secure the data on a computer is via a virtual machine. This is done by accessing a virtual disk that allows you to run the OS as if you were booting from a hard drive. This way, if a virus is found, the boot files are stored on the virtual disk and the virus can be deleted.

-

Don't Chat With Strangers download blackbox


DOWNLOAD ☆☆☆☆☆ https://urlca.com/2uDbVK



-

2. They show you how to use the technology. By demonstrating to the user how to use the software or technology, the attacker gains an opportunity to infect the machine or to gather or leak information. To the extent the attacker is successful, the victim is left vulnerable because the user has become infected.

-

7. They own you. The attacker can learn how to affect your computer on an ongoing basis by communicating with it over the network. They can install keystroke loggers and turn your computer into a zombie (a computer that can be controlled without human interaction). They may even build and deploy a malicious application.

-

When you are using Facebook's chat app, Facebook Messenger, or some other chat app, some people might want to make calls on your behalf. This is usually called phishing. If someone sends you a URL to a website that looks like Facebook and asks you to "open with Facebook Messenger", you have just been phished.

-

If you want to chat with someone on a social media app such as Snapchat, Instagram, Twitter, or Facebook, be careful when you look at their profile. These apps may show you the profile picture of a random person, or of a person you're not following, so you could accidentally end up chatting with a stranger.

-
-
\ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Download Xtools Pro 9.1 Crackl.md b/spaces/falterWliame/Face_Mask_Detection/Download Xtools Pro 9.1 Crackl.md deleted file mode 100644 index 6324cac05bd922c206ba0e0a3f5b261335adc51a..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Download Xtools Pro 9.1 Crackl.md +++ /dev/null @@ -1,118 +0,0 @@ - -

Download Xtools Pro 9.1 Crackl: A Solution for iCloud Unlocking

-

If you have an iPhone that is locked by iCloud and you don't remember the password, you might be looking for a way to unlock it. There are many tools and services that claim to be able to unlock iCloud accounts, but not all of them are reliable and safe. Some of them may scam you, damage your device or steal your personal information. One of the tools that has gained some popularity among iCloud users is Xtools Pro 9.1 Crackl. This tool promises to remove the iCloud password and let you access all the features of your iPhone. But what is Xtools Pro 9.1 Crackl and how does it work? Is it safe and effective? In this article, we will answer these questions and provide you with some alternatives to Xtools Pro 9.1 Crackl.

-

What is Xtools Pro 9.1 Crackl?

-

Xtools Pro 9.1 Crackl is a software that is designed to unlock iCloud accounts without requiring the original password. It is a cracked version of XTools Pro, which is a professional GIS software for ArcGIS users. The cracked version has been modified by hackers to bypass the iCloud activation lock and grant full access to the iPhone. Xtools Pro 9.1 Crackl claims to be compatible with all iOS devices and versions, including the latest iOS 15.

-

Download Xtools Pro 9.1 Crackl


Download File >>>>> https://urlca.com/2uDe0G



-

How to download and use Xtools Pro 9.1 Crackl?

-

Xtools Pro 9.1 Crackl is not available on any official website or app store. It can only be downloaded from some third-party websites that offer free software downloads. However, these websites may contain malicious software that can harm your computer or mobile device. Therefore, you should be careful and do some research before downloading any software from the internet.

-

One of the websites that we recommend for downloading Xtools Pro 9.1 Crackl is LexCliq. LexCliq is a website that provides free and legal downloads of various software for different purposes. You can download Xtools Pro 9.1 Crackl from this link: https://lexcliq.com/download-xtools-pro-9-1-crack-top/

-

After downloading Xtools Pro 9.1 Crackl from LexCliq, you need to install it on your computer and connect your iPhone to it with a USB cable. Then, you need to run the software and follow the instructions on the screen. The software will scan your device and remove the iCloud password within a few minutes.

-

Is Xtools Pro 9.1 Crackl safe and effective?

-

Xtools Pro 9.1 Crackl may sound like a tempting solution for iCloud unlocking, but it is not without risks and drawbacks. Here are some of the main disadvantages of using Xtools Pro 9.1 Crackl:

-
    -
  • Xtools Pro 9.1 Crackl is an illegal software that violates the terms and conditions of Apple and XTools Pro. Using it may result in legal consequences or penalties.
  • -
  • Xtools Pro 9.1 Crackl is not guaranteed to work on every device and iOS version. It may fail to unlock your device or cause some errors or malfunctions.
  • -
  • Xtools Pro 9.1 Crackl may contain viruses, malware or spyware that can infect your computer or mobile device and compromise your security and privacy.
  • -
  • Xtools Pro 9.1 Crackl may not be able to unlock your device permanently. It may only bypass the iCloud activation lock temporarily and lock your device again after a reboot or update.
  • -
-

Therefore, we do not recommend using Xtools Pro 9.1 Crackl for iCloud unlocking. It is better to look for some other alternatives that are safer and more reliable.

-

What are some alternatives to Xtools Pro 9.1 Crackl?

-

If you want to unlock your iCloud account without using Xtools Pro 9.1 Crackl, there are some other options that you can try:

-

-
    -
  • The first option is to contact the original owner of the device and ask them to remove the iCloud account from their Apple ID settings.
  • -
  • The second option is to contact Apple support and provide them with proof of purchase and ownership of the device.
  • -
  • The third option is to use a professional iCloud unlocking service that can unlock your device remotely and legally.
  • -
-

One of the best iCloud unlocking services that we recommend is Fone Tips. Fone Tips is a website that offers various solutions for iOS users, including iCloud unlocking, data recovery, data transfer, data backup and more.

-

Fone Tips can unlock any iCloud account within 24 hours with a simple online process:

-
    -
  1. You need to visit their website and choose the iCloud unlocking service.
  2. -
  3. You need to enter your device model and IMEI number.
  4. -
  5. You need to make a payment with a secure method.
  6. -
  7. You need to wait for an email confirmation with instructions on how to complete the unlocking process.
  8. -
-

Fone Tips has many advantages over Xtools Pro 9.1 Crackl:

-
    -
  • Fone Tips is a legal service that complies with Apple's policies and regulations.
  • -
  • Fone Tips is a reliable service that has a high success rate and positive customer reviews.
  • -
  • Fone Tips is a safe service that does not require any software installation or device connection.
  • -
  • Fone Tips is a permanent service that unlocks your device for good and allows you to use any Apple ID and iCloud features.
  • -
-

Conclusion

-

Xtools Pro 9.1 Crackl is a software that claims to unlock iCloud accounts without requiring the original password. However, it is not a safe or effective solution for iCloud unlocking. It has many risks and drawbacks that may cause more problems than solutions.

- -

If you want to unlock your iCloud account without using Xtools Pro 9.1 Crackl, you should look for some other alternatives that are safer and more reliable.

- -

One of the best alternatives that we recommend is Fone Tips, which is a professional iCloud unlocking service that can unlock your device remotely and legally within 24 hours.

- -

If you want to download Xtools Pro 9.1 Crackl or Fone Tips, you can use the links below:

- -

Xtools Pro 9.1 Crackl: https://lexcliq.com/download-xtools-pro-9-1-crack-top/

- -

Fone Tips: https://fone.tips/xtools-ultimate/

-

What are the benefits of using Xtools Pro 9.1 Crackl?

-

Xtools Pro 9.1 Crackl is a software that has some benefits for iCloud users who want to unlock their devices. Here are some of the main benefits of using Xtools Pro 9.1 Crackl:

-
    -
  • Xtools Pro 9.1 Crackl is a free software that does not require any payment or subscription to use.
  • -
  • Xtools Pro 9.1 Crackl is a simple software that does not require any technical skills or knowledge to use.
  • -
  • Xtools Pro 9.1 Crackl is a fast software that can unlock your device within a few minutes.
  • -
  • Xtools Pro 9.1 Crackl is a versatile software that can work with any iOS device and version.
  • -
-

What are the drawbacks of using Xtools Pro 9.1 Crackl?

-

Xtools Pro 9.1 Crackl is a software that has some drawbacks for iCloud users who want to unlock their devices. Here are some of the main drawbacks of using Xtools Pro 9.1 Crackl:

-
    -
  • Xtools Pro 9.1 Crackl is an illegal software that violates the terms and conditions of Apple and XTools Pro.
  • -
  • Xtools Pro 9.1 Crackl is an unreliable software that may not work on every device and iOS version.
  • -
  • Xtools Pro 9.1 Crackl is an unsafe software that may contain viruses, malware or spyware that can harm your computer or mobile device.
  • -
  • Xtools Pro 9.1 Crackl is a temporary software that may not unlock your device permanently and may lock it again after a reboot or update.
  • -
-

How to avoid using Xtools Pro 9.1 Crackl?

-

If you want to avoid using Xtools Pro 9.1 Crackl for iCloud unlocking, you should follow some tips and precautions:

-
    -
  • Do not buy or accept any used iPhone that is locked by iCloud without verifying its original owner and status.
  • -
  • Do not forget or lose your iCloud password and keep it safe and secure.
  • -
  • Do not download or install any software from unknown or untrusted sources on your computer or mobile device.
  • -
  • Do not trust any tool or service that claims to unlock iCloud accounts without requiring the original password.
  • -
-

-

How to troubleshoot Xtools Pro 9.1 Crackl?

-

Xtools Pro 9.1 Crackl is a software that may not work on every device and iOS version. It may also cause some errors or malfunctions on your device or computer. Therefore, you may need to troubleshoot Xtools Pro 9.1 Crackl if you encounter any problems while using it.

-

Here are some of the common problems and solutions that you may face with Xtools Pro 9.1 Crackl:

-
    -
  • The software does not detect your device: This may happen if your device is not connected properly to the computer or if the USB driver is not installed correctly. To fix this, you should check the USB cable and port and make sure they are working. You should also update the USB driver on your computer and restart it.
  • -
  • The software fails to unlock your device: This may happen if your device or iOS version is not compatible with the software or if the software is corrupted or outdated. To fix this, you should check the compatibility of your device and iOS version with the software and make sure they match. You should also download the latest version of the software from LexCliq and reinstall it.
  • -
  • The software damages your device or computer: This may happen if the software contains viruses, malware or spyware that can infect your device or computer and compromise your security and privacy. To fix this, you should scan your device and computer with a reliable antivirus program and remove any threats. You should also backup your data and restore your device or computer to a previous state.
  • -
  • The software locks your device again after a reboot or update: This may happen if the software only bypasses the iCloud activation lock temporarily and does not remove it permanently. To fix this, you should avoid rebooting or updating your device until you have added a new iCloud account and password. You should also use a permanent iCloud unlocking service like Fone Tips instead of Xtools Pro 9.1 Crackl.
  • -
-

If you still have any problems with Xtools Pro 9.1 Crackl, you can contact their support team through their website or email.

-

-

Conclusion

-

In this article, we have discussed everything you need to know about Xtools Pro 9.1 Crackl, a software that claims to unlock iCloud accounts without requiring the original password. We have explained what it is, how to download it, how to use it, how to compare it with other iCloud unlocking tools, how to troubleshoot it and how to avoid using it.

-

We have also recommended a better alternative to Xtools Pro 9.1 Crackl, which is Fone Tips, a professional iCloud unlocking service that can unlock your device remotely and legally within 24 hours.

-

We hope this article was helpful and informative for you. If you have any questions or comments, please feel free to leave them below.

-
-
\ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Easycafe Crack Serial 36.md b/spaces/falterWliame/Face_Mask_Detection/Easycafe Crack Serial 36.md deleted file mode 100644 index 91fb2137b5af6adc622416e662ef81d3399b8aa6..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Easycafe Crack Serial 36.md +++ /dev/null @@ -1,6 +0,0 @@ -

Easycafe Crack Serial 36


DOWNLOADhttps://urlca.com/2uDdA9



-
-Paypal Money Adder Hack patch; Tinasoft Easycafe 2. turn the seatbelt chime off 2. cases of vehicle repair ... 1 try to exclude using words such as: serial, code, keygen, hacked, patch, warez, etc. ... Vag com software vcds cracked software download: Vcds vag 12. exe! You can ... 36/5 rating by 3396 users. 1fdad05405
-
-
-

diff --git a/spaces/falterWliame/Face_Mask_Detection/Gta 3 Please Insert Disk 2 Crack __LINK__.md b/spaces/falterWliame/Face_Mask_Detection/Gta 3 Please Insert Disk 2 Crack __LINK__.md deleted file mode 100644 index b8e0458d31fac34e1566ac52b71fb55044f6008a..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Gta 3 Please Insert Disk 2 Crack __LINK__.md +++ /dev/null @@ -1,6 +0,0 @@ -

gta 3 please insert disk 2 crack


Download Zip →→→ https://urlca.com/2uDcGT



-
-3. 1 registry backup 3 Nero 6 Reloaded 6. 89 SparkBooth 4. e windows xp or internet download manager and press search button then, please, don't add ... 2. 2 Crack + License Key [2021]. 01. Start the YouTube to mp3 conversion process by ... in one ofHow to insert your license key into MP3Studio YouTube Downloader. 1fdad05405
-
-
-

diff --git a/spaces/falterWliame/Face_Mask_Detection/Hiren Boot Cd 16 Iso Free EXCLUSIVE Download.md b/spaces/falterWliame/Face_Mask_Detection/Hiren Boot Cd 16 Iso Free EXCLUSIVE Download.md deleted file mode 100644 index ec2b7e03df8df3484e4278bf265a756edadd8c16..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Hiren Boot Cd 16 Iso Free EXCLUSIVE Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

hiren boot cd 16 iso free download


Download Zip ☆☆☆ https://urlca.com/2uDdtJ



- -Released in 2004, Hiren's Boot CD boasts a huge array of repair utilities and diagnostic ... When you download the tool it is in ISO format, so you'll need burning software such as ... Useful Emergency Bootable CD/USB Wondershare LiveBoot 2012 for Free ...
-
-
-

diff --git a/spaces/fanzhuyu/Code-Interpreter/bot_backend.py b/spaces/fanzhuyu/Code-Interpreter/bot_backend.py deleted file mode 100644 index 7c97f486f62fb9028a6fb3605f0cefb0c8c71290..0000000000000000000000000000000000000000 --- a/spaces/fanzhuyu/Code-Interpreter/bot_backend.py +++ /dev/null @@ -1,232 +0,0 @@ -import json -import openai -import os -import copy -import shutil -from jupyter_backend import * -from typing import * - -functions = [ - { - "name": "execute_code", - "description": "This function allows you to execute Python code and retrieve the terminal output. If the code " - "generates image output, the function will return the text '[image]'. The code is sent to a " - "Jupyter kernel for execution. The kernel will remain active after execution, retaining all " - "variables in memory.", - "parameters": { - "type": "object", - "properties": { - "code": { - "type": "string", - "description": "The code text" - } - }, - "required": ["code"], - } - } -] - -system_msg = '''You are an AI code interpreter. -Your goal is to help users do a variety of jobs by executing Python code. - -You should: -1. Comprehend the user's requirements carefully & to the letter. -2. Give a brief description for what you plan to do & call the execute_code function to run code -3. Provide results analysis based on the execution output. -4. If error occurred, try to fix it. - -Note: If the user uploads a file, you will receive a system message "User uploaded a file: filename". Use the filename as the path in the code. ''' - -with open('config.json') as f: - config = json.load(f) - -if not config['API_KEY']: - config['API_KEY'] = os.getenv('OPENAI_API_KEY') - os.unsetenv('OPENAI_API_KEY') - - -def get_config(): - return config - - -def config_openai_api(api_type, api_base, api_version, api_key): - openai.api_type = api_type - openai.api_base = api_base - openai.api_version = api_version - openai.api_key = api_key - - -class GPTResponseLog: - def __init__(self): - self.assistant_role_name = '' - self.content = '' - self.function_name = None - self.function_args_str = '' - self.display_code_block = '' - self.finish_reason = 'stop' - self.bot_history = None - - def reset_gpt_response_log_values(self, exclude=None): - if exclude is None: - exclude = [] - - attributes = {'assistant_role_name': '', - 'content': '', - 'function_name': None, - 'function_args_str': '', - 'display_code_block': '', - 'finish_reason': 'stop', - 'bot_history': None} - - for attr_name in exclude: - del attributes[attr_name] - for attr_name, value in attributes.items(): - setattr(self, attr_name, value) - - def set_assistant_role_name(self, assistant_role_name: str): - self.assistant_role_name = assistant_role_name - - def add_content(self, content: str): - self.content += content - - def set_function_name(self, function_name: str): - self.function_name = function_name - - def copy_current_bot_history(self, bot_history: List): - self.bot_history = copy.deepcopy(bot_history) - - def add_function_args_str(self, function_args_str: str): - self.function_args_str += function_args_str - - def update_display_code_block(self, display_code_block): - self.display_code_block = display_code_block - - def update_finish_reason(self, finish_reason: str): - self.finish_reason = finish_reason - - -class BotBackend(GPTResponseLog): - def __init__(self): - super().__init__() - self.unique_id = hash(id(self)) - self.jupyter_work_dir = f'cache/work_dir_{self.unique_id}' - self.jupyter_kernel = JupyterKernel(work_dir=self.jupyter_work_dir) - 
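# The remaining lines of __init__ below pick the default model choice, track files that can be
# revoked, seed the conversation with the system prompt, apply the API settings loaded from
# config.json, and build the kwargs later passed to openai.ChatCompletion.create
# (stream=True, functions=functions, function_call='auto').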
self.gpt_model_choice = "GPT-3.5" - self.revocable_files = [] - self._init_conversation() - self._init_api_config() - self._init_kwargs_for_chat_completion() - - def _init_conversation(self): - first_system_msg = {'role': 'system', 'content': system_msg} - if hasattr(self, 'conversation'): - self.conversation.clear() - self.conversation.append(first_system_msg) - else: - self.conversation: List[Dict] = [first_system_msg] - - def _init_api_config(self): - self.config = get_config() - api_type = self.config['API_TYPE'] - api_base = self.config['API_base'] - api_version = self.config['API_VERSION'] - api_key = config['API_KEY'] - config_openai_api(api_type, api_base, api_version, api_key) - - def _init_kwargs_for_chat_completion(self): - self.kwargs_for_chat_completion = { - 'stream': True, - 'messages': self.conversation, - 'functions': functions, - 'function_call': 'auto' - } - - model_name = self.config['model'][self.gpt_model_choice]['model_name'] - - if self.config['API_TYPE'] == 'azure': - self.kwargs_for_chat_completion['engine'] = model_name - else: - self.kwargs_for_chat_completion['model'] = model_name - - def _clear_all_files_in_work_dir(self): - for filename in os.listdir(self.jupyter_work_dir): - os.remove( - os.path.join(self.jupyter_work_dir, filename) - ) - - def add_gpt_response_content_message(self): - self.conversation.append( - {'role': self.assistant_role_name, 'content': self.content} - ) - - def add_text_message(self, user_text): - self.conversation.append( - {'role': 'user', 'content': user_text} - ) - self.revocable_files.clear() - self.update_finish_reason(finish_reason='new_input') - - def add_file_message(self, path, bot_msg): - filename = os.path.basename(path) - work_dir = self.jupyter_work_dir - - shutil.copy(path, work_dir) - - gpt_msg = {'role': 'system', 'content': f'User uploaded a file: {filename}'} - self.conversation.append(gpt_msg) - self.revocable_files.append( - { - 'bot_msg': bot_msg, - 'gpt_msg': gpt_msg, - 'path': os.path.join(work_dir, filename) - } - ) - - def add_function_call_response_message(self, function_response: str, save_tokens=True): - self.conversation.append( - { - "role": self.assistant_role_name, - "name": self.function_name, - "content": self.function_args_str - } - ) - - if save_tokens and len(function_response) > 500: - function_response = f'{function_response[:200]}\n[Output too much, the middle part output is omitted]\n ' \ - f'End part of output:\n{function_response[-200:]}' - self.conversation.append( - { - "role": "function", - "name": self.function_name, - "content": function_response, - } - ) - - def revoke_file(self): - if self.revocable_files: - file = self.revocable_files[-1] - bot_msg = file['bot_msg'] - gpt_msg = file['gpt_msg'] - path = file['path'] - - assert self.conversation[-1] is gpt_msg - del self.conversation[-1] - - os.remove(path) - - del self.revocable_files[-1] - - return bot_msg - else: - return None - - def update_gpt_model_choice(self, model_choice): - self.gpt_model_choice = model_choice - self._init_kwargs_for_chat_completion() - - def restart(self): - self._clear_all_files_in_work_dir() - self.revocable_files.clear() - self._init_conversation() - self.reset_gpt_response_log_values() - self.jupyter_kernel.restart_jupyter_kernel() diff --git a/spaces/fatiXbelha/sd/A Game of Thrones The Board Game Mod APK - Download and Play Now.md b/spaces/fatiXbelha/sd/A Game of Thrones The Board Game Mod APK - Download and Play Now.md deleted file mode 100644 index 
1e1e3480fc35611175e7c87021107f0188f345eb..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/A Game of Thrones The Board Game Mod APK - Download and Play Now.md +++ /dev/null @@ -1,119 +0,0 @@ -
-

A Game of Thrones Board Game Mod APK: How to Download and Play

-

If you are a fan of A Game of Thrones, the epic fantasy series by George R.R. Martin, or its TV adaptation by HBO, you might be interested in trying out A Game of Thrones: The Board Game, a digital edition of the popular tabletop game by Fantasy Flight Games. In this game, you can take control of one of the six Houses of Westeros and compete with other players for the Iron Throne. You will have to use your military, political, and diplomatic skills to outwit and outmaneuver your rivals, while also dealing with the threats from beyond the Wall and across the Narrow Sea.

-

a game of thrones board game mod apk


DOWNLOAD ⚙⚙⚙ https://urllie.com/2uNBoc



-

But what if you want to enhance your gaming experience with some extra features and options? That's where A Game of Thrones Board Game Mod APK comes in handy. A mod APK is a modified version of an original application that allows you to access some features that are not available in the official version. For example, you can unlock all the Houses, maps, scenarios, and expansions in A Game of Thrones Board Game Mod APK, as well as customize your game settings, graphics, sound, and more. You can also play online with other players who have installed the mod APK, or offline with AI opponents.

-

In this article, we will show you how to download and play A Game of Thrones Board Game Mod APK on your Android device. We will also explain the rules and objectives of the game, as well as some tips and tricks for playing it. So, if you are ready to enter the world of Westeros and claim your rightful place on the Iron Throne, read on!

-

How to Download A Game of Thrones Board Game Mod APK

-

Before you can download and play A Game of Thrones Board Game Mod APK, you need to make sure that your device meets some requirements and that you follow some precautions. Here are some things you need to know:

-
    -
  • Your device must have Android 4.4 or higher operating system.
  • -
  • Your device must have at least 1 GB of RAM and 500 MB of free storage space.
  • -
  • You need to enable unknown sources in your device settings. This will allow you to install applications from sources other than Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
  • -
  • You need to disable any antivirus or security software on your device. This will prevent them from interfering with the installation process or deleting the mod APK file.
  • -
  • You need to have a stable internet connection for downloading and playing the game.
  • -
-

Once you have checked these requirements and precautions, you can proceed to download A Game of Thrones Board Game Mod APK from a reliable source. There are many websites that offer mod APK files for various games and applications, but not all of them are safe and trustworthy. Some may contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you need to be careful when choosing where to download your mod APK file from.

-

One of the best sources for downloading A Game of Thrones Board Game Mod APK is APKMODY, a website that provides mod APK files for various games and applications. APKMODY is safe, secure, and easy to use. You can download A Game of Thrones Board Game Mod APK v0.9.4 from this link: A Game of Thrones Board Game Mod APK. The file size is 34.34 MB and the publisher is Twin Sails Interactive.

-

a game of thrones the board game apk download
-a game of thrones the board game android mod
-a game of thrones the board game modded apk free
-a game of thrones the board game apk mod unlimited
-a game of thrones the board game hack apk
-a game of thrones the board game premium apk
-a game of thrones the board game unlocked apk
-a game of thrones the board game full version apk
-a game of thrones the board game latest apk mod
-a game of thrones the board game cracked apk
-a game of thrones the board game apk mod offline
-a game of thrones the board game cheat apk
-a game of thrones the board game pro apk
-a game of thrones the board game apk mod online
-a game of thrones the board game mod apk no root
-a game of thrones the board game apk mod money
-a game of thrones the board game mega mod apk
-a game of thrones the board game vip mod apk
-a game of thrones the board game mod apk obb
-a game of thrones the board game mod apk data
-a game of thrones the board game mod apk revdl
-a game of thrones the board game mod apk rexdl
-a game of thrones the board game mod apk android 1
-a game of thrones the board game mod apk android oyun club
-a game of thrones the board game mod apk happymod
-a game of thrones the board game mod apk an1
-a game of thrones the board game mod apk apkpure
-a game of thrones the board game mod apk apkmody
-a game of thrones the board game mod apk apknite
-a game of thrones the board game mod apk apkmirror
-a game of thrones the board game mod apk apksfree
-a game of thrones the board game mod apk uptodown
-a game of thrones the board game mod apk mob.org
-a game of thrones the boar

-

After you have downloaded the mod APK file, you need to install and run it on your device. Here are the steps you need to follow:

-
    -
  1. Locate the mod APK file on your device storage and tap on it.
  2. -
  3. A pop-up window will appear asking you to confirm the installation. Tap on Install and wait for the process to complete.
  4. -
  5. Once the installation is done, tap on Open to launch the game.
  6. -
  7. You may need to grant some permissions to the game, such as access to your device storage, location, microphone, etc. Tap on Allow when prompted.
  8. -
  9. You will see the game's main menu, where you can choose your preferred language, adjust your game settings, and start playing.
  10. -
-

How to Play A Game of Thrones Board Game

-

Now that you have installed A Game of Thrones Board Game Mod APK on your device, you are ready to play the game. But before you dive into the action, you need to understand the rules and objectives of the game, as well as how to use the interface and controls of the game. Here are some basic guidelines for playing A Game of Thrones Board Game:

-

What are the rules and objectives of the game?

-

A Game of Thrones Board Game is a strategy game that simulates the political and military conflicts in Westeros, the fictional continent where A Game of Thrones takes place. The game can be played by 3 to 6 players, each controlling one of the six Houses of Westeros: Stark, Lannister, Baratheon, Greyjoy, Tyrell, or Martell. Each House has its own strengths and weaknesses, as well as its own starting position and resources on the map.

-

The objective of the game is to be the first player to control 7 castles or strongholds on the map by the end of 10 rounds. To do this, you need to manage your troops, resources, and influence tokens wisely, as well as plan your moves carefully and anticipate your enemies' actions. You also need to deal with random events that can affect the game, such as wildling attacks, supply shortages, or bidding wars for influence tracks.

-

The game is divided into phases: Westeros Phase, Planning Phase, Action Phase, and Cleanup Phase. In each phase, players perform different actions according to a specific order and rules. For example, in the Westeros Phase, players draw cards from three decks that represent different aspects of Westeros: events, supply, and influence. These cards can have positive or negative effects on the game state, such as changing the turn order, granting bonuses or penalties, or triggering special actions. In the Planning Phase, players secretly assign orders to their units on the map using order tokens. These orders determine what actions their units can perform in the Action Phase, such as moving, attacking, defending, raiding, supporting, or consolidating power. In the Action Phase, players reveal their orders and execute them one by one in a clockwise order. This is where most of the conflicts and negotiations take place between players. In the Cleanup Phase, players remove their order tokens from the map and collect power tokens from their controlled areas.

-

How to set up and start a game session?

-

To set up and start a game session in A Game of Thrones Board Game Mod APK, you need to follow these steps:

-
    -
  1. From the main menu, tap on Play.
  2. -
  3. Choose whether you want to play online or offline. If you choose online, you need to sign in with your Google Play account and join or create a lobby with other players who have installed the mod APK. If you choose offline, you can play solo or with friends using hotseat mode.
  4. -
  5. Select your preferred map from four options: A Feast for Crows (4 players), A Dance with Dragons (6 players), A Clash of Kings (6 players), or A Storm of Swords (6 players). Each map has its own scenario and setup that reflect different stages of A Game of Thrones story.
  6. -
  7. Select your preferred House from six options: Stark, Lannister, Baratheon, Greyjoy, Tyrell, or Martell. Each House has its own leader card that grants a special ability once per game.
  8. -
  9. Tap on Start Game to begin the game session.
  10. -

How to use the interface and controls of the game?

-

The interface and controls of A Game of Thrones Board Game Mod APK are designed to be intuitive and user-friendly. You can easily navigate the game screen and perform various actions using touch gestures or buttons. Here are some of the main elements of the interface and controls of the game:

-
    -
  • The map shows the different regions and areas of Westeros, as well as the units and tokens of each House. You can zoom in and out of the map by pinching or spreading your fingers on the screen. You can also drag the map to move it around.
  • -
  • The order tokens are the circular icons that you use to assign orders to your units on the map. You can drag and drop them from the bottom of the screen to the desired area. You can also tap on them to see their description and effects.
  • -
  • The leader cards are the rectangular icons that represent the leaders of each House. They are located at the top of the screen, next to the House sigils. You can tap on them to see their special abilities and use them once per game.
  • -
  • The menu button is located at the top left corner of the screen. It allows you to access various options, such as saving or loading your game, adjusting your settings, viewing the rules, or quitting the game.
  • -
  • The chat button is located at the top right corner of the screen. It allows you to communicate with other players online using text messages or voice chat.
  • -
  • The round tracker is located at the bottom left corner of the screen. It shows the current round number and phase, as well as the remaining time for each phase.
  • -
  • The victory tracker is located at the bottom right corner of the screen. It shows the number of castles or strongholds controlled by each House, as well as their position on the influence tracks.
  • -
-

How to interact with other players and negotiate alliances and betrayals?

-

One of the most exciting and challenging aspects of A Game of Thrones Board Game is interacting with other players and negotiating alliances and betrayals. The game is not only about military conquest, but also about political intrigue and diplomacy. You will have to form alliances with other players to gain their support and trust, but also be ready to betray them when it suits your interests. You will also have to deal with their attempts to deceive or manipulate you, or even attack you without warning.

-

To interact with other players, you can use the chat button on the game screen. You can send text messages or voice messages to all players or to specific players privately. You can also use emojis or stickers to express your emotions or intentions. You can use the chat feature to propose or accept alliances, make deals or promises, share information or secrets, or taunt or threaten your enemies.

-

However, you should remember that nothing is binding in A Game of Thrones Board Game. There is no formal mechanism to enforce alliances or agreements between players. You can break your word or betray your allies at any time, as long as it benefits you. But be careful, as your reputation and trustworthiness may suffer as a result. You may also face retaliation or revenge from your former allies or enemies.

-

How to win the game and claim the Iron Throne?

-

To win A Game of Thrones Board Game Mod APK, you need to be the first player to control 7 castles or strongholds on the map by the end of 10 rounds. If no player achieves this goal by then, the player who controls the most castles or strongholds wins. If there is a tie, the player who has higher position on the Iron Throne influence track wins.

-

To control a castle or stronghold, you need to have one of your units (footman, knight, ship, or siege engine) in its area at the end of a round. To move your units from one area to another, you need to assign them a march order in the Planning Phase and execute it in the Action Phase. To attack an enemy unit in an adjacent area, you need to assign your unit a march order and move it into that area in the Action Phase. To resolve a combat between two units, you need to compare their combat strength (the number on their unit token plus any modifiers from order tokens, leader cards, support orders, etc.) and draw a card from your House deck that adds a random value to your combat strength. The unit with higher total combat strength wins and stays in the area, while the unit with lower total combat strength loses and retreats or is destroyed.

-

To claim the Iron Throne, you need to have higher position on the Iron Throne influence track than any other player. The Iron Throne influence track determines the turn order for each round, as well as who breaks ties in case of equal combat strength or votes. To gain higher position on the Iron Throne influence track, you need to bid your power tokens in a bidding war that occurs when a Clash of Kings card is drawn from the Westeros deck in the Westeros Phase. The player who bids the most power tokens gets the highest position, and so on. You can also gain or lose position on the Iron Throne influence track due to some event cards or leader abilities.

-

Conclusion

-

A Game of Thrones Board Game Mod APK is a great way to enjoy the digital edition of the popular tabletop game by Fantasy Flight Games. You can unlock all the features and options of the game, as well as customize your game settings, graphics, sound, and more. You can also play online with other players who have installed the mod APK, or offline with AI opponents.

-

A Game of Thrones Board Game is a strategy game that simulates the political and military conflicts in Westeros, the fictional continent where A Game of Thrones takes place. You can take control of one of the six Houses of Westeros and compete with other players for the Iron Throne. You will have to use your military, political, and diplomatic skills to outwit and outmaneuver your rivals, while also dealing with the threats from beyond the Wall and across the Narrow Sea.

-

To win A Game of Thrones Board Game Mod APK, you need to be the first player to control 7 castles or strongholds on the map by the end of 10 rounds. If no player achieves this goal by then, the player who controls the most castles or strongholds wins. If there is a tie, the player who has higher position on the Iron Throne influence track wins.

-

Here are some tips and tricks for playing A Game of Thrones Board Game Mod APK:

-
    -
  • Choose your House wisely. Each House has its own strengths and weaknesses, as well as its own starting position and resources on the map. For example, House Stark has strong units and loyal bannermen, but is isolated in the north and vulnerable to wildling attacks. House Lannister has rich lands and powerful allies, but is surrounded by enemies and hated by many. House Baratheon has a strong claim to the throne and a strategic location, but is divided by internal conflicts and lacks naval power. House Greyjoy has a formidable fleet and raiding capabilities, but is poor in resources and far from most castles. House Tyrell has fertile lands and abundant food, but is weak in combat and dependent on alliances. House Martell has a resilient population and a secret weapon, but is distant from the center of power and has few units.
  • -
  • Plan your moves carefully and anticipate your enemies' actions. You need to assign orders to your units on the map using order tokens in the Planning Phase. These orders determine what actions your units can perform in the Action Phase, such as moving, attacking, defending, raiding, supporting, or consolidating power. You need to consider your objectives, resources, and influence tokens when choosing your orders. You also need to predict what orders your enemies will assign to their units and how they will affect your plans.
  • -
  • Interact with other players and negotiate alliances and betrayals. You can communicate with other players using text messages or voice chat in the game screen. You can use this feature to propose or accept alliances, make deals or promises, share information or secrets, or taunt or threaten your enemies. However, you should remember that nothing is binding in A Game of Thrones Board Game. You can break your word or betray your allies at any time, as long as it benefits you. But be careful, as your reputation and trustworthiness may suffer as a result. You may also face retaliation or revenge from your former allies or enemies.
  • -
-

We hope this article has helped you understand how to download and play A Game of Thrones Board Game Mod APK on your Android device. If you are a fan of A Game of Thrones, you will surely enjoy this game and its mod APK version. So, what are you waiting for? Download A Game of Thrones Board Game Mod APK today and claim your rightful place on the Iron Throne!

-

FAQs

-

Here are some common questions and answers about A Game of Thrones Board Game Mod APK:

- - - - - - - - - - - -
Q: Is A Game of Thrones Board Game Mod APK safe to download and install?
A: Yes, A Game of Thrones Board Game Mod APK is safe to download and install from a reliable source like APKMODY. However, you need to follow some precautions before downloading and installing it, such as enabling unknown sources in your device settings, disabling any antivirus or security software on your device, and having a stable internet connection.
Q: Do I need to root my device to use A Game of Thrones Board Game Mod APK?
A: No, you do not need to root your device to use A Game of Thrones Board Game Mod APK. The mod APK works on any Android device that meets the requirements and precautions mentioned above.
Q: What are the differences between A Game of Thrones Board Game Mod APK and the official version?
A: A Game of Thrones Board Game Mod APK is a modified version of the official version that allows you to access some features and options that are not available in the official version. For example, you can unlock all the Houses, maps, scenarios, and expansions in A Game of Thrones Board Game Mod APK, as well as customize your game settings, graphics, sound, and more. You can also play online with other players who have installed the mod APK, or offline with AI opponents.
Q: How can I update A Game of Thrones Board Game Mod APK?
A: To update A Game of Thrones Board Game Mod APK, you need to download and install the latest version of the mod APK file from a reliable source like APKMODY. You do not need to uninstall the previous version of the mod APK, as the new version will overwrite it. However, you may need to backup your game data before updating, as some updates may cause compatibility issues or data loss.
Q: How can I contact the developer of A Game of Thrones Board Game Mod APK?
A: To contact the developer of A Game of Thrones Board Game Mod APK, you can visit their website at Twin Sails Interactive or their Facebook page at Twin Sails Interactive Facebook. You can also send them an email at info@twinsailsinteractive.com. You can use these channels to report any bugs or issues, request new features or improvements, or give feedback or suggestions.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Archero by Habby APK Mirror Download Link and Review.md b/spaces/fatiXbelha/sd/Archero by Habby APK Mirror Download Link and Review.md deleted file mode 100644 index 2dc220bbbd1fcc964666b5f9aaf151470975eaae..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Archero by Habby APK Mirror Download Link and Review.md +++ /dev/null @@ -1,179 +0,0 @@ -
-

Apkmirror Archero: A Guide to Download and Play the Best Action Game of 2019

-

If you are looking for a fun and exciting action game that will test your skills and reflexes, you should definitely check out Apkmirror Archero. This game is a modded version of the original Archero game by Habby, which was voted as the best game of 2019 on Google Play. In this article, we will show you how to download and install Apkmirror Archero APK on your Android device, how to play the game, and why you should play it.

-

What is Apkmirror Archero?

-

A brief introduction to the game and its features

-

Apkmirror Archero is a roguelike action game that puts you in the role of a lone archer who has to fight against waves of enemies in different worlds. You can move or shoot, but not both at the same time, so you have to dodge enemy attacks while unleashing your own arrows. Along the way, you can level up and choose from various skills that will help you survive. You can also equip yourself with different weapons, armor, rings, pets, and more.

-

apkmirror archero


Download Ziphttps://urllie.com/2uNFVn



-

Some of the features of Apkmirror Archero are:

-
    -
  • Random and unique skills to create endless combinations
  • -
  • Beautiful worlds and hundreds of maps to explore
  • -
  • Thousands of monsters and bosses with different patterns and abilities
  • -
  • Invincible weapons and unlimited upgrades
  • -
  • Multiple heroes with different stats and skills
  • -
  • Offline mode and cloud save support
  • -
-

How to download and install Apkmirror Archero APK on your Android device

-

To download and install Apkmirror Archero APK on your Android device, you need to follow these simple steps:

-
    -
  1. Go to [Apkdone.com](^10^) or [APKCombo.com](^12^) or [Softpedia.com](^13^) on your browser.
  2. -
  3. Search for "Apkmirror Archero" or use this link [Apkdone.com/archero](^10^).
  4. -
  5. Download the latest version of Apkmirror Archero APK file.
  6. -
  7. Enable "Unknown Sources" on your device settings if you haven't done so already.
  8. -
  9. Locate the downloaded APK file on your device storage and tap on it.
  10. -
  11. Follow the installation instructions on the screen.
  12. -
  13. Launch the game and enjoy!
  14. -
-

How to play Apkmirror Archero?

-

The basic gameplay mechanics and controls

-

The gameplay mechanics of Apkmirror Archero are very simple and intuitive. You just need to use one finger to control your character. Here are some tips on how to play:

-
    The best tips and tricks to master the game -

    Apkmirror Archero is not an easy game, but with some practice and the right tips and tricks, you can become a pro archer in no time. Here are some of the best tips and tricks to master the game:

    -
      -
    • Reduce lag and frame drops: Apkmirror Archero is not a demanding game, but it can still lag or drop frames on some devices. To avoid this, you can try lowering the graphics quality, turning off sound effects, closing other apps, and clearing your cache.
    • -
    • Upgrade your armor and weapons: As you progress in the game, you will unlock new equipment that can boost your stats and abilities. You can upgrade your armor and weapons using coins and scrolls, which you can get from chests, enemies, and events. Upgrading your equipment will make you stronger and more durable.
    • -
    • Use the stutter-step technique: This is a simple but effective trick that can increase your damage output by 20% or more. The idea is to cancel your attack animation by moving slightly after each shot. This will make you shoot faster and also dodge enemy attacks more easily.
    • -
    • Guaranteed devil spawn: The devil is a mysterious character that can offer you powerful abilities in exchange for some of your max HP. To guarantee his appearance after a boss fight, you have to beat the boss without getting hit by any attacks. He may still appear if you do get hit, but most of the time it will spawn the spinning wheel instead.
    • -
    • Aura stacking: This is a glitch that can give you multiple auras of the same type at once. To do this, you need to have an aura ability (such as fire circle or ice circle) and then get another aura ability from the devil or an angel. Then, force close the game and reopen it. You will see that you have two auras of the same type instead of one.
    • -
    -

    The best equipment, weapons, abilities, and heroes to use

    -

    Apkmirror Archero has a lot of equipment, weapons, abilities, and heroes to choose from, but some are better than others. Here are some of the best ones to use:

    - - - - - - - - -
    CategoryNameDescription
    WeaponStaffThe staff is the best weapon in the game because it has high damage, homing projectiles, diagonal arrows, and elemental effects. It also synergizes well with many abilities such as bounce wall, ricochet, multishot, and diagonal arrows.
    ArmorVest of DexterityThe vest of dexterity is the best armor in the game because it gives you a 7% chance to dodge attacks, which can save your life many times. It also increases your attack speed by 5%, which is always useful.
    RingSerpent RingThe serpent ring is the best ring in the game because it gives you a 7% chance to dodge attacks as well as a 5% damage boost against ranged enemies. You can equip two serpent rings for a total of 14% dodge chance and 10% damage boost.
    PetBatThe bat is the best pet in the game because it has high damage, fast attack speed, homing projectiles, and piercing effect. It also has a small hitbox, which makes it less likely to block your view or get hit by enemy attacks.
    HeroShadeShade is the best hero in the game because she has high attack, high HP, high crit rate, and high crit damage. She also has a unique ability called Shadow Clone that creates a clone of herself that deals 100% of her damage and takes 100% of her damage. This ability can double your damage output and also act as a decoy for enemy attacks.
    AbilityMultishotMultishot is the best ability in the game because it doubles your number of projectiles per shot. This means double the damage, double the chance to trigger elemental effects, and double the coverage area. It also stacks with other abilities such as diagonal arrows or rear arrows for even more projectiles.

    Why should you play Apkmirror Archero?

    -

    The benefits of playing Apkmirror Archero mod APK

    -

    Apkmirror Archero mod APK is not just a regular version of the game, but a modified one that offers you some extra benefits that can enhance your gaming experience. Some of the benefits of playing Apkmirror Archero mod APK are:

    -
      -
    • Unlimited gems and coins: Gems and coins are the main currencies in the game that you can use to buy and upgrade items, unlock heroes, and revive yourself. With Apkmirror Archero mod APK, you can get unlimited gems and coins for free, so you don't have to worry about running out of them or spending real money on them.
    • -
    • All heroes unlocked: Heroes are special characters that you can play as in the game, each with their own stats and skills. Normally, you have to unlock heroes by completing certain stages or paying gems, but with Apkmirror Archero mod APK, you can access all heroes from the start, including the premium ones.
    • -
    • All items unlocked: Items are equipment that you can use to boost your performance in the game, such as weapons, armor, rings, pets, and more. Normally, you have to unlock items by opening chests, completing events, or paying gems, but with Apkmirror Archero mod APK, you can get all items for free, including the rare and legendary ones.
    • -
    • No ads: Ads are annoying interruptions that can ruin your immersion and enjoyment of the game. Normally, you have to watch ads to get rewards or skip them by paying gems, but with Apkmirror Archero mod APK, you can enjoy the game without any ads at all.
    • -
    -

    The positive reviews and ratings from other players

    -

    Apkmirror Archero is not only a great game in itself, but also a popular one among other players. The game has received positive reviews and ratings from thousands of players who have downloaded and played it. Here are some of the comments from other players:

    -

    apkmirror archero mod apk
    -apkmirror archero latest version
    -apkmirror archero download
    -apkmirror archero 3.7.5
    -apkmirror archero update
    -apkmirror archero hack
    -apkmirror archero reddit
    -apkmirror archero 2.5.2
    -apkmirror archero 2.0.2
    -apkmirror archero apk pure
    -apkmirror archero unlimited gems
    -apkmirror archero god mode
    -apkmirror archero cheats
    -apkmirror archero tips
    -apkmirror archero guide
    -apkmirror archero review
    -apkmirror archero gameplay
    -apkmirror archero best skills
    -apkmirror archero weapons
    -apkmirror archero heroes
    -apkmirror archero tier list
    -apkmirror archero codes
    -apkmirror archero events
    -apkmirror archero chapters
    -apkmirror archero wiki
    -apkmirror archero android
    -apkmirror archero ios
    -apkmirror archero pc
    -apkmirror archero emulator
    -apkmirror archero online
    -apkmirror archero multiplayer
    -apkmirror archero co op mode
    -apkmirror archero battle pass
    -apkmirror archero season 25
    -apkmirror archero new update 2023
    -apkmirror archero habby games
    -apkmirror archero free download
    -apkmirror archero old version
    -apkmirror archero beta version
    -apkmirror archero patch notes
    -apkmirror archero error fix
    -apkmirror archero install guide
    -apkmirror archero how to play
    -apkmirror archero fun facts
    -apkmirror archero secrets and easter eggs
    -apkmirror archero fan art and memes
    -apkmirror archero discord server and community

    -
    -

    "This is the best action game I have ever played. The graphics are amazing, the gameplay is smooth and challenging, and the skills are awesome. I love how I can customize my character with different weapons and abilities. I also like how I can play offline and save my progress on the cloud. This game is a masterpiece."

    -- A 5-star review on [Apkdone.com] -
    -
    -

    "I have been playing this game for a long time and I still enjoy it every day. The game is very addictive and fun to play. The levels are randomly generated, so it never gets boring or repetitive. The enemies are diverse and have different patterns and behaviors. The bosses are epic and challenging. The game is also very fair and balanced. It does not force you to pay or watch ads to progress. It rewards you for your skill and effort."

    -- A 5-star review on [APKCombo.com] -
    -
    -

    "This game is amazing. It has everything I want in an action game: fast-paced gameplay, cool graphics, awesome sound effects, and tons of content. The game is very easy to play but hard to master. It requires strategy and reflexes to survive. The game is also very generous with its rewards and updates. It always gives me something new to look forward to."

    -- A 5-star review on [Softpedia.com] -

    The fun and addictive challenges and rewards

    -

    Apkmirror Archero is not just a game that you can play casually, but also a game that you can challenge yourself with. The game has many modes and features that can keep you hooked and motivated. Some of the fun and addictive challenges and rewards are:

    -
      -
    • Endless mode: This is a mode where you can play as long as you can without dying. You can choose from different difficulties and see how far you can go. You can also compete with other players on the global leaderboard and earn gems and coins based on your rank.
    • -
    • Hero mode: This is a mode where you can play with your hero's special skill activated at all times. This makes the game more challenging but also more rewarding. You can earn more coins and scrolls in this mode, as well as unlock new hero chapters.
    • -
    • Hero duel: This is a mode where you can battle against other players in real-time. You can choose from different arenas and use your skills and equipment to defeat your opponent. You can also chat with your opponent and send emojis during the match. You can earn trophies and chests in this mode, as well as unlock new heroes and skins.
    • -
    • Events: These are special modes that are available for a limited time. They offer unique gameplay and rewards that you can't get elsewhere. Some examples of events are Halloween, Christmas, Lunar New Year, Valentine's Day, Easter, and more.
    • -
    • Achievements: These are goals that you can complete to earn gems, coins, scrolls, chests, and more. They range from easy to hard, and from simple to complex. Some examples of achievements are killing a certain number of enemies, clearing a certain number of stages, using a certain number of skills, and more.
    • -
    -

    Conclusion

    -

    A summary of the main points and a call to action

    -

    Apkmirror Archero is a game that you should not miss if you love action games. It is a game that offers you:

    -
      -
    • A thrilling and satisfying gameplay that will test your skills and reflexes
    • -
    • A variety of equipment, weapons, abilities, and heroes to customize your character
    • -
    • A modded version that gives you unlimited gems, coins, items, heroes, and no ads
    • -
    • A positive feedback from other players who have enjoyed the game
    • -
    • A fun and addictive challenges and rewards that will keep you hooked and motivated
    • -
    -

    If you want to download and play Apkmirror Archero APK on your Android device, you can follow the steps we have provided in this article. You can also visit [Apkdone.com] or [APKCombo.com] or [Softpedia.com] for more information and updates about the game.

    -

    What are you waiting for? Download Apkmirror Archero APK now and become the ultimate archer!

    -

    FAQs

    -

    Q1: Is Apkmirror Archero safe and legal to download and play?

    -

    A1: Yes, Apkmirror Archero is safe and legal to download and play. The APK file is scanned for viruses and malware before being uploaded on the website. The modded version does not violate any terms of service or policies of the original game. However, you should always download the APK file from trusted sources such as [Apkdone.com] or [APKCombo.com] or [Softpedia.com] to avoid any risks or issues.

    -

    Q2: What are the differences between Apkmirror Archero and the original Archero game?

    -

    A2: The main differences between Apkmirror Archero and the original Archero game are:

    -
      -
    • Apkmirror Archero gives you unlimited gems, coins, items, heroes, and no ads for free.
    • -
    • Apkmirror Archero has all the latest features and updates of the original game.
    • -
    • Apkmirror Archero has a slightly different interface and graphics than the original game.
    • -
    • Apkmirror Archero may not be compatible with some devices or regions that the original game supports.
    • -
    -

    Q3: How can I update Apkmirror Archero to the latest version?

    -

    A3: To update Apkmirror Archero to the latest version, you need to follow these steps:

    -
      -
    1. Go to [Apkdone.com] or [APKCombo.com] or [Softpedia.com] on your browser.
    2. -
    3. Search for "Apkmirror Archero" or use this link [Apkdone.com/archero].
    4. -
    5. Download the latest version of Apkmirror Archero APK file.
    6. -
    7. Delete the old version of Apkmirror Archero from your device.
    8. -
    9. Follow the installation instructions on the screen.
    10. -
    11. Launch the game and enjoy the new features and improvements.
    12. -
    -

    Q4: How can I play Apkmirror Archero with friends online?

    -

    A4: To play Apkmirror Archero with friends online, you need to follow these steps:

    -
      -
    1. Make sure you and your friends have the same version of Apkmirror Archero installed on your devices.
    2. -
    3. Connect your devices to the same Wi-Fi network or use a hotspot.
    4. -
    5. Launch the game and tap on the "Co-op" button on the main menu.
    6. -
    7. Create a room or join a room created by your friend.
    8. -
    9. Select your hero and equipment and start the game.
    10. -
    11. Work together with your friend to clear the stages and defeat the enemies.
    12. -
    -

    Q5: What are some of the best alternatives to Apkmirror Archero?

    -

    A5: If you are looking for some other games that are similar to Apkmirror Archero, you can try these alternatives:

    -
      -
    • Soul Knight: This is a roguelike shooter game that has pixel graphics, smooth controls, and tons of weapons and characters to choose from. You can also play with up to three friends in co-op mode. You can download it from [Google Play] or [Apkdone.com].
    • -
    • Zen Archer: This is a relaxing archery game that has minimalist graphics, soothing music, and simple gameplay. You just have to aim and shoot at the targets while avoiding obstacles. You can also unlock different bows and arrows with different effects. You can download it from [Google Play] or [APKCombo.com].
    • -
    • Archery King: This is a competitive archery game that has realistic graphics, physics, and controls. You can challenge other players online in various modes and arenas. You can also customize your bow and arrows with different parts and skins. You can download it from [Google Play] or [Softpedia.com].
    • -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download the 3 Movie Love Proposal Ringtone in High Quality.md b/spaces/fatiXbelha/sd/Download the 3 Movie Love Proposal Ringtone in High Quality.md deleted file mode 100644 index 8aca9b7ae8ca2d0490338bf22a5836044e207125..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download the 3 Movie Love Proposal Ringtone in High Quality.md +++ /dev/null @@ -1,121 +0,0 @@ -
    -

    3 Movie Love Proposal Ringtone Download: How to Get the Romantic BGM of Dhanush and Shruti Haasan

    -

    If you are a fan of Tamil movies, you must have watched or heard of 3, a romantic drama film starring Dhanush and Shruti Haasan. The film was released in 2012 and became a hit among the audience, especially for its catchy songs and emotional scenes.

    -

    One of the most memorable scenes in the film is the love proposal scene, where Dhanush confesses his feelings to Shruti Haasan in a classroom. The scene is filled with humor, innocence, and romance, and is accompanied by a beautiful background music (bgm) that enhances the mood.

    -

    3 movie love proposal ringtone download


    Download Filehttps://urllie.com/2uNwP8



    -

    The bgm in the love proposal scene is a soft melody that captures the essence of first love. It has a soothing and catchy tune that makes you want to listen to it over and over again. If you are looking for a way to download this bgm as your ringtone, you have come to the right place. In this article, we will show you how to get 3 movie love proposal ringtone download in easy steps.

    -

    How to Download 3 Movie Love Proposal Ringtone

    -

    There are two main ways to download 3 movie love proposal ringtone: from YouTube or from Prokerala. We will explain both methods in detail below.

    -

    From YouTube

    -

    YouTube is one of the best sources to find any video or audio you want. You can easily find the video of 3 movie love proposal scene on YouTube by searching for keywords like "3 movie love proposal scene" or "3 movie love proposal bgm". For example, you can use this link to watch the video.

    -

    Once you have found the video, you need to convert it to mp3 format so that you can use it as your ringtone. There are many online tools that can help you do this, such as [YTMP3](^5^) or [OnlineVideoConverter](^6^). All you need to do is copy and paste the URL of the video into the tool and click on convert. Then, you can download the mp3 file to your computer.

    -

    After downloading the mp3 file, you need to transfer it to your phone. You can do this by using a USB cable, Bluetooth, or cloud storage. Once you have transferred the file, you can set it as your ringtone by following these steps:

    -
      -
    • Go to Settings > Sound > Phone ringtone.
    • -
    • Browse and select the mp3 file from your phone's storage.
    • -
    • Tap on OK or Save.
    • -
    -

    Congratulations! You have successfully downloaded and set 3 movie love proposal ringtone from YouTube.

    -

    3 movie love proposal bgm ringtone download
    -3 movie love proposal scene ringtone download
    -3 movie love proposal song ringtone download
    -3 movie love proposal music ringtone download
    -3 movie love proposal theme ringtone download
    -3 movie love proposal mp3 ringtone download
    -3 movie love proposal video ringtone download
    -3 movie love proposal instrumental ringtone download
    -3 movie love proposal flute ringtone download
    -3 movie love proposal piano ringtone download
    -3 movie love proposal guitar ringtone download
    -3 movie love proposal violin ringtone download
    -3 movie love proposal tamil ringtone download
    -3 movie love proposal telugu ringtone download
    -3 movie love proposal hindi ringtone download
    -3 movie love proposal kannada ringtone download
    -3 movie love proposal malayalam ringtone download
    -3 movie love proposal english ringtone download
    -3 movie love proposal remix ringtone download
    -3 movie love proposal mashup ringtone download
    -3 movie love proposal female ringtone download
    -3 movie love proposal male ringtone download
    -3 movie love proposal duet ringtone download
    -3 movie love proposal sad ringtone download
    -3 movie love proposal happy ringtone download
    -3 movie love proposal romantic ringtone download
    -3 movie love proposal emotional ringtone download
    -3 movie love proposal cute ringtone download
    -3 movie love proposal funny ringtone download
    -3 movie love proposal best ringtone download
    -3 movie love proposal original ringtone download
    -3 movie love proposal new ringtone download
    -3 movie love proposal latest ringtone download
    -3 movie love proposal popular ringtone download
    -3 movie love proposal free ringtone download
    -3 movie love proposal online ringtone download
    -3 movie love proposal offline ringtone download
    -3 movie love proposal high quality ringtone download
    -3 movie love proposal low quality ringtone download
    -3 movie love proposal hd ringtone download
    -3 movie love proposal mobile ringtone download
    -3 movie love proposal android ringtone download
    -3 movie love proposal iphone ringtone download
    -3 movie love proposal zedge ringtone download
    -3 movie love proposal mobcup ringtone download[^1^]

    -

    From Prokerala

    -

    If you want a simpler and faster way to download 3 movie love proposal ringtone, you can use Prokerala, a website that offers free ringtones for various categories. You can access Prokerala by clicking here. On the homepage, you can see a search box where you can type "3 movie love proposal" and hit enter. You will see a list of results that match your query. You can preview the ringtones by clicking on the play button next to each result.

    -

    Once you have found the ringtone you want, you can download it directly to your phone by clicking on the download button next to the result. You will see a pop-up window that asks you to choose the format of the ringtone. You can choose either mp3 or m4r, depending on your phone's compatibility. Then, you can click on the download link and save the file to your phone's storage.

    -

    After downloading the ringtone, you can set it as your ringtone by following these steps:

    -
      -
    • Go to Settings > Sound > Phone ringtone.
    • -
    • Browse and select the ringtone file from your phone's storage.
    • -
    • Tap on OK or Save.
    • -
    -

    Voila! You have successfully downloaded and set 3 movie love proposal ringtone from Prokerala.

    -

    Conclusion

    -

    In this article, we have shown you how to download 3 movie love proposal ringtone in two easy ways: from YouTube or from Prokerala. Both methods are simple and fast, and you can get the romantic bgm of Dhanush and Shruti Haasan in no time. We hope you enjoy listening to this ringtone and reliving the sweet moments of 3 movie.

    -

    If you liked this article, please share it with your friends and family who are also fans of 3 movie. Also, don't forget to leave a comment below and let us know what you think of this ringtone. We would love to hear from you!

    -

    FAQs

    -

    What is the name of the bgm in 3 movie love proposal scene?

    -

    The name of the bgm in 3 movie love proposal scene is "Kannazhaga". It is composed by Anirudh Ravichander, who also composed all the other songs in the film.

    -

    Who composed the music for 3 movie?

    -

    The music for 3 movie was composed by Anirudh Ravichander, a young and talented music director who made his debut with this film. He is also known for his other hit songs like "Why This Kolaveri Di", "Ethir Neechal", and "Maari".

    -

    Where can I watch 3 movie online?

    -

    You can watch 3 movie online on various streaming platforms like [Hotstar], [Zee5], or [YouTube]. However, you may need to pay a subscription fee or rent the film to watch it legally.

    -

    What are some other romantic ringtones from Tamil movies?

    -

    Some other romantic ringtones from Tamil movies are:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    SongMovieComposer
    Munbe VaaSillunu Oru KaadhalA.R. Rahman
    Anbe En AnbeDhaam DhoomHarris Jayaraj
    Po UraveKaatrin MozhiA.H. Kaashif
    KannammaKaalaSanthosh Narayanan
    Innum Konjam NaeramMaryanA.R. Rahman
    -

    How can I make my own ringtone from any song?

    -

    You can make your own ringtone from any song by using online tools like [Ringtone Maker] or [MP3 Cutter]. All you need to do is upload the song file, select the part you want as your ringtone, and download the edited file. Then, you can transfer it to your phone and set it as your ringtone.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/feizhengcong/video-stable-diffusion/app.py b/spaces/feizhengcong/video-stable-diffusion/app.py deleted file mode 100644 index 245e2d3607874f91473c6fd130957c017a4bae6f..0000000000000000000000000000000000000000 --- a/spaces/feizhengcong/video-stable-diffusion/app.py +++ /dev/null @@ -1,13 +0,0 @@ -import gradio as gr - - -def generate_video(prompt): - return "spring.mp4" - - - -interface = gr.Interface(generate_video, gr.inputs.Textbox(placeholder="people in the street to celebrate spring festival"), gr.outputs.Video()) - -interface.launch() - - diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Boyama 3D Klasik Paintin yeni ve gelimi versiyonu.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Boyama 3D Klasik Paintin yeni ve gelimi versiyonu.md deleted file mode 100644 index f0c0f7d464b0392f7761e332ec7cc418e820b747..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Boyama 3D Klasik Paintin yeni ve gelimi versiyonu.md +++ /dev/null @@ -1,99 +0,0 @@ -
    -

    Boyama 3D Yukle: Nәdir vә Necә İstifadә Edilir?

    -

    Boyama 3D, fiziki boyama ilә möhtәşәm artırılmış reallıq texnologiyasını birlәşdirәn, maraqlı, tәhsilli vә sәhrli bir tәtbiqdir. Bu tәtbiq ilә siz öyrәnmәk, kәşf etmәk vә yaradıcılığınızı göstәrmәk üçün müxtәlif boyama sәhifәlәrindәn istifadә edә bilersiniz. Boyama 3D-ni necә yüklәyib istifadә edƏcƏyiniz haqqında daha çox mәlumat almaq üçün oxumaya davam edin.

    -

    boyama 3d yukle


    Download Filehttps://gohhs.com/2uPmAy



    -

    Boyama 3D-nin XüsusiyyƏtlƏri

    -

    Boyama 3D, sizin üçün bir çox xüsusiyyƏtlƏr tƏklif edir. Bunların bƏzilƏri:

    -

    Fiziki Boyama vƏ MöhtƏşƏm Artırılmış Reallıq BirlƏşdirir

    -

    Boyama 3D ilƏ siz hƏr hansı bir boyama sƏhifƏsini çap edib rƏnglƏyib sonra Quiver tƏtbiqindƏn istifadƏ edib onu canlandıra bilersiniz. Bu, sizin boyamanızın hƏyat bulmasını vƏ sizin onunla interaktiv olmanızı sağlayır. Siz boyamanızın hƏr bir bucağından baxa, onu hƏrƏkЄt etdirЄ, sЄslЄndirЄ v

    Öyrәnmәyi vә Kәşfi Dәstәklәyir

    -

    Boyama 3D ilә siz yalnız rәnglәmirlә kifayЄtlЄnmirsiniz, hЄm dЄ öyrЄnmЄk vЄ kЄşf etmЄk üçün bir çox imkanlardan istifadЄ edirsiniz. Boyama sЄhifЄlЄri sizin maraqlandığınız mövzulara uyğun olaraq hazırlanmışdır. Siz hЄyvanlar, bitkilЄr, mifoloji, tarix, coğrafiya, astronomiya vЄ daha bir çox sahЄdЄ biliklЄrinizi artıra bilersiniz. HЄr bir boyama sЄhifЄsindƏ sizƏ maraqlı faktlar, mәlumatlar vә suallar tƏqdim olunur. Siz bu suallara cavab verƏrƏk öyrәndiklәrinizi yoxlaya bilersiniz.

    -

    Müxtәlif Boyama Paketlәri ilә Yaradıcılığınızı Göstәrin

    -

    Boyama 3D ilә siz yalnız standart boyama sәhifәlәri ilә mЄhdudlaşmırsınız. Siz müxtәlif boyama paketlәri alaraq daha çox sƏhifƏ vƏ xüsusiyyƏt ƏldƏ edƏ bilersiniz. Bu paketlƏr sizin yaradıcılığınızı daha da artırır. Siz öz boyama sƏhifƏlƏrinizi yarada, öz rƏnglƏrinizi seçƏ vƏ öz xüsusi effektlƏrinizi ƏlavƏ edƏ bilersiniz. Siz hatta öz səs və musiqi fayllarınızı da əlavə edərək boyamanızı daha da canlı və maraqlı edə bilersiniz.

    -

    Boyama 3D-ni Necə İstifadə Edirsiniz?

    -

    Boyama 3D-ni istifadə etmək çox asandır. Siz yalnız aşağıdakı addımları izləyin:

    -

    Boyama Səhifələrini Yükləyin və Çap Edin

    -

    Boyama 3D-ni istifadə etmək üçün ilk olaraq boyama səhifələrini yükləməlisiniz. Bu səhifələri Quiver tətbiqinin rəsmi saytından buradan yükləyə bilersiniz. Siz həmçinin Quiver tətbiqindən daxil olaraq da boyama səhifələrini yükləyib çap edə bilersiniz. Boyama səhifələrini yüklədikdən sonra onları standart A4 kağızına çap edin.

    -

    boyama 3d oyunu indir
    -boyama 3d programı indir
    -boyama 3d uygulaması indir
    -boyama 3d apk indir
    -boyama 3d pc indir
    -boyama 3d microsoft store indir
    -boyama 3d google play indir
    -boyama 3d sketchfab indir
    -boyama 3d online oyna
    -boyama 3d nasıl oynanır
    -boyama 3d nasıl indirilir
    -boyama 3d nasıl yüklenir
    -boyama 3d nasıl kullanılır
    -boyama 3d hileleri
    -boyama 3d ipuçları
    -boyama 3d özellikleri
    -boyama 3d incelemesi
    -boyama 3d yorumları
    -boyama 3d puanı
    -boyama 3d güncellemesi
    -boyama 3d yeni sürümü
    -boyama 3d yeni resimleri
    -boyama 3d yeni öğeleri
    -boyama 3d yeni fırçaları
    -boyama 3d yeni alatları
    -boyama 3d modelleri
    -boyama 3d tasarımları
    -boyama 3d sanat eserleri
    -boyama 3d galerisi
    -boyama 3d paylaşımı
    -boyama 3d videoları
    -boyama 3d eğitimi
    -boyama 3d dersleri
    -boyama 3d kursu
    -boyama 3d öğrenme
    -numaralarla boyama 3d indir
    -numaralarla boyama 3d oyna
    -numaralarla boyama 3d apk indir
    -numaralarla boyama 3d uygulaması indir
    -numaralarla boyama 3d google play indir
    -sayılarla boyama 3d indir
    -sayılarla boyama 3d oyna
    -sayılarla boyama 3d apk indir
    -sayılarla boyama 3d uygulaması indir
    -sayılarla boyama 3d google play indir
    -renklerle boyama 3d indir
    -renklerle boyama 3d oyna
    -renklerle boyama 3d apk indir
    -renklerle boyama 3d uygulaması indir
    -renklerle boyama 3d google play indir

    -

    Renglerinizle Sizin Üçün Münasib Olan Sähifäläri Seçin

    -

    Boyama sähifälärini çap etdikdän sonra onları ränglämäyä başlayın. Siz istädiginiz rängläri istädiginiz şäkildä istifädä edä bilärsiniz. Yalnız nöqtäli xättlarla çärçivälänmiş sahälärä räng vermäyin, çünki bu sahälär artırılmış reallıq effektläri üçün nözrded

    Quiver Tətbiqindən İstifadə Edin və Sizin Boyama Səhifələrinizi Canlandırın

    -

    Boyama səhifələrinizi rənglədikdən sonra onları Quiver tətbiqində canlandırmaq üçün hazırsınız. Quiver tətbiqini öz cihazınıza yükləyin və açın. Sonra boyama səhifələrinizdən birini seçin və onun üzərində kamera simgesinə toxunun. Bu, tətbiqin sizin boyamanızı tanımasını və onu artırılmış reallıq effektləri ilə zənginləşdirməsini sağlayır. Siz boyamanızın canlandığını ekranda görüb, onunla oynaya, ona toxuna və onu dinlәyә bilәrsiniz.

    -

    Boyama 3D ilә Neler Yarada Bilirsiniz?

    -

    Boyama 3D ilә siz yalnız rәnglәmirlә kifayЄtlЄnmirsiniz, hЄm dЄ bir çox maraqlı şeylЄr yarada bilersiniz. Bunların bЄzilЄri:

    -

    2D vә 3D Şedevrlәr Yaradın

    -

    Boyama 3D ilә siz hЄr hansı bir boyama sЄhifЄsini 2D vЄ ya 3D şedevrЄ çevirЄ bilersiniz. Siz boyama sЄhifЄlЄrindƏki obyektlЄri rƏnglƏyib sonra onları Quiver tƏtbiqindƏ canlandıraraq onların hƏrƏkЄtli, sƏslƏndirilmiş vƏ interaktiv olmasını sağlaya bilersiniz. Siz hatta öz obyektlƏrinizi yarada bilersiniz. Quiver tƏtbiqindƏ sizin üçün xüsusi olaraq hazırlanmış boş bir boyama sƏhifƏsi var. Bu sƏhifƏdƏ siz istƏdiyiniz şeyi çizib sonra onu canlandıra bilersiniz.

    -

    Eğlenceli Oyunlar Oynayın vә Testlere Cavab Verin

    -

    Boyama 3D ilә siz yalnız boyamanızın zövqünü çıxarmırsınız, hәm dә eğlenceli oyunlar oynayıb testlere cavab verirsiniz. Boyama sәhifәlәri sizin üçün müxtәlif oyunlar vә testler tәqdim edir. Siz hәyvanlarla danışa, bitkilәrlә fotosintez ede, mifoloji mahlukatlarla döyüşe, tarixi hadiseleri izle vә daha bir çox maraqlı fәaliyyetlәrdә iştirak ede bilersiniz. Hәr bir boyama sәhifәsindә sizin bilik vә bacarıqlarınızı ölçmәk üçün suallar da var. Siz bu suallara cavab vererek öyrәndiklәrinizi yoxlaya bilersiniz.

    -

    Dostlarınızla Paylaşın vә Birlәşdirin

    -

    Boyama 3D ilә siz yaratdığınız şeylәri dostlarınızla paylaşa bilersiniz. Siz boyamanızın ekran görüntüsünü ala, videoya çеkе vе ya sos yal medyada paylaşa bilersiniz. Siz hәmçinin dostlarınızın boyama sәhifәlәrini görmәk, onlarla müqayisә etmәk vә birlәşdirmәk üçün Quiver tәtbiqindәn istifadә edә bilersiniz. Siz dostlarınızın boyama sәhifәlәrinin üzәrinә öz rәnglәrinizi vә effektlәrinizi әlavә edərək onlarla əməkdaşlıq edə bilersiniz.

    -

    Boyama 3D-ni Harada Yüklәyә Bilәrsiniz?

    -

    Boyama 3D-ni yüklәmәk üçün sizin cihazınızın Android vә ya iOS əməliyyat sistemlərindən birinə malik olması lazımdır. Siz aşağıdakı linklərdən istifadə edərək boyama 3D-ni öz cihazınıza yükləyə bilersiniz:

    -

    Android Cihazlar Üçün Google Play Store-dan Yüklәyin

    -

    Əgər sizin cihazınız Android əməliyyat sistemlidirsə, siz Google Play Store-dan Boyama 3D-ni yükləyə bilersiniz. Bu linki buradan izləyin və yüklə düyməsinə toxunun. Sonra tətbiqi açın və istifadə etmeye başlayın.

    -

    iOS Cihazlar Üçün App Store-dan Yüklәyin

    -

    Əgər sizin cihazınız iOS əməliyyat sistemlidirsə, siz App Store-dan Boyama 3D-ni yükləyə bilersiniz. Bu linki buradan izlеyin vе yüklе düymеsinе toxunun. Sonra tеtbiqi açın vе istifadе еtmеyе başlayın.

    -

    Xulasa

    -

    Boyama 3D, fiziki boyama ilә artırılmış reallıq texnologiyasını birlәşdirЄn, maraqlı, tЄhsilli vЄ sЄhrli bir tЄtbiqdir. Bu tЄtbiq ilЄ siz öyrЄnmЄk, kЄşf etmЄk vЄ yaradıcılığınızı göstЄrmЄk üçün müxtЄlif boyama sЄhifЄlЄrindЄn istifadЄ edЄ bilersiniz. Boyama 3D-ni yüklЄmЄk üçün sizin cihazınızın Android vЄ ya iOS ЄmЄliyyat sistemlЄrindЄn birinЄ malik olması lazımdır. Siz bu yazıda verilmiş linklЄrdЄn istifadЄ edƏrƏk boyama 3D-ni öz cihazınıza yüklƏyƏ bilersiniz. Boyama 3D ilƏ siz hƏyatınıza rƏng gƏtirƏ bilersiniz.

    -

    FAQ

    -
      -
    • Boyama 3D nƏdir?
    • -
    • Boyama 3D, fiziki boyama ilƏ artırılmış reallıq texnologiyasını birlƏşdirƏn, maraqlı, tƏhsilli vƏ sƏhrli bir tƏtbiqdir.
    • -
    • Boyama 3D-ni necƏ yüklƏyib istifadƏ ed edirsiniz?
    • -
    • Boyama 3D-ni istifadə etmək üçün ilk olaraq boyama səhifələrini yükləyib çap etməlisiniz. Sonra onları rəngləyib Quiver tətbiqindən istifadə edib onları canlandıra bilersiniz.
    • -
    • Boyama 3D ilə neler yara bilersiniz?
    • -
    • Boyama 3D ilə siz 2D və 3D şedevrlər yara, eğlenceli oyunlar oyna, testlere cavab verə və dostlarınızla paylaşa bilersiniz.
    • -
    • Boyama 3D-ni harada yükləyə bilersiniz?
    • -
    • Boyama 3D-ni yükləmək üçün sizin cihazınızın Android və ya iOS əməliyyat sistemlərindən birinə malik olması lazımdır. Siz Google Play Store-dan və ya App Store-dan boyama 3D-ni yükləyə bilersiniz.
    • -
    • Boyama 3D pulsuzdur?
    • -
    • Boyama 3D-nin bazik versiyası pulsuzdur. Bu versiyada siz standart boyama səhifələrindən istifadə edə bilersiniz. Əgər daha çox boyama səhifesi və xüsusiyyət istәyirsinizsә, siz müxtәlif boyama paketlәri ala bilersiniz. Bu paketlәrin qiymәtlәri fәrqli olur.
    • -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fffiloni/Music_Source_Separation/separate_scripts/download_checkpoints.sh b/spaces/fffiloni/Music_Source_Separation/separate_scripts/download_checkpoints.sh deleted file mode 100644 index 6d2f3742d92139f2bfdb4e6070980db9af3cead6..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Music_Source_Separation/separate_scripts/download_checkpoints.sh +++ /dev/null @@ -1,18 +0,0 @@ -#!/bin/bash - -ZENODO_DIR="https://zenodo.org/record/5507029/files" -DOWNLOADED_CHECKPOINT_DIR="./downloaded_checkpoints" - -mkdir -p $DOWNLOADED_CHECKPOINT_DIR - -MODEL_NAME="resunet143_ismir2021_vocals_8.9dB_350k_steps.pth" -wget -O "${DOWNLOADED_CHECKPOINT_DIR}/${MODEL_NAME}" "${ZENODO_DIR}/${MODEL_NAME}?download=1" - -MODEL_NAME="resunet143_ismir2021_accompaniment_16.8dB_350k_steps.pth" -wget -O "${DOWNLOADED_CHECKPOINT_DIR}/${MODEL_NAME}" "${ZENODO_DIR}/${MODEL_NAME}?download=1" - -MODEL_NAME="resunet143_subbtandtime_vocals_8.8dB_350k_steps.pth" -wget -O "${DOWNLOADED_CHECKPOINT_DIR}/${MODEL_NAME}" "${ZENODO_DIR}/${MODEL_NAME}?download=1" - -MODEL_NAME="resunet143_subbtandtime_accompaniment_16.4dB_350k_steps.pth" -wget -O "${DOWNLOADED_CHECKPOINT_DIR}/${MODEL_NAME}" "${ZENODO_DIR}/${MODEL_NAME}?download=1" \ No newline at end of file diff --git a/spaces/fffiloni/ProPainter/share_btn.py b/spaces/fffiloni/ProPainter/share_btn.py deleted file mode 100644 index 03981efd2e48f2484589f2ccc5542675bfaefa17..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/ProPainter/share_btn.py +++ /dev/null @@ -1,69 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getVideoBlobFile(videoEL){ - const res = await fetch(videoEL.src); - const blob = await res.blob(); - const videoId = Date.now() % 200; - const fileName = `propainter-${{videoId}}.mp4`; - const videoBlob = new File([blob], fileName, { type: 'video/mp4' }); - console.log(videoBlob); - return videoBlob; - } - - const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app'); - const outputVideoMasked = gradioEl.querySelector('#res-masked video'); - const outputVideoCleaned = gradioEl.querySelector('#res_cleaned video'); - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const videoOutFileMasked = await getVideoBlobFile(outputVideoMasked); - const dataOutputVidMasked = await uploadFile(videoOutFileMasked); - const videoOutFileCleaned = await getVideoBlobFile(outputVideoCleaned); - const dataOutputVidCleaned = await uploadFile(videoOutFileCleaned); - - const descriptionMd = ` -#### Video Masked with SAM: -${dataOutputVidMasked} - -#### ProPainter result: -${dataOutputVidCleaned} -`; - const params = new URLSearchParams({ - title: "Please provide a title :)", - description: descriptionMd, - }); - const paramsStr = params.toString(); - 
window.open(`https://huggingface.co/spaces/fffiloni/ProPainter/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-parser/node_modules/debug/src/common.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-parser/node_modules/debug/src/common.js deleted file mode 100644 index e3291b20faa1a61fa5acff50d84dba10a97cc3b6..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-parser/node_modules/debug/src/common.js +++ /dev/null @@ -1,274 +0,0 @@ - -/** - * This is the common logic for both the Node.js and web browser - * implementations of `debug()`. - */ - -function setup(env) { - createDebug.debug = createDebug; - createDebug.default = createDebug; - createDebug.coerce = coerce; - createDebug.disable = disable; - createDebug.enable = enable; - createDebug.enabled = enabled; - createDebug.humanize = require('ms'); - createDebug.destroy = destroy; - - Object.keys(env).forEach(key => { - createDebug[key] = env[key]; - }); - - /** - * The currently active debug mode names, and names to skip. - */ - - createDebug.names = []; - createDebug.skips = []; - - /** - * Map of special "%n" handling functions, for the debug "format" argument. - * - * Valid key names are a single, lower or upper-case letter, i.e. "n" and "N". - */ - createDebug.formatters = {}; - - /** - * Selects a color for a debug namespace - * @param {String} namespace The namespace string for the debug instance to be colored - * @return {Number|String} An ANSI color code for the given namespace - * @api private - */ - function selectColor(namespace) { - let hash = 0; - - for (let i = 0; i < namespace.length; i++) { - hash = ((hash << 5) - hash) + namespace.charCodeAt(i); - hash |= 0; // Convert to 32bit integer - } - - return createDebug.colors[Math.abs(hash) % createDebug.colors.length]; - } - createDebug.selectColor = selectColor; - - /** - * Create a debugger with the given `namespace`. - * - * @param {String} namespace - * @return {Function} - * @api public - */ - function createDebug(namespace) { - let prevTime; - let enableOverride = null; - let namespacesCache; - let enabledCache; - - function debug(...args) { - // Disabled? - if (!debug.enabled) { - return; - } - - const self = debug; - - // Set `diff` timestamp - const curr = Number(new Date()); - const ms = curr - (prevTime || curr); - self.diff = ms; - self.prev = prevTime; - self.curr = curr; - prevTime = curr; - - args[0] = createDebug.coerce(args[0]); - - if (typeof args[0] !== 'string') { - // Anything else let's inspect with %O - args.unshift('%O'); - } - - // Apply any `formatters` transformations - let index = 0; - args[0] = args[0].replace(/%([a-zA-Z%])/g, (match, format) => { - // If we encounter an escaped % then don't increase the array index - if (match === '%%') { - return '%'; - } - index++; - const formatter = createDebug.formatters[format]; - if (typeof formatter === 'function') { - const val = args[index]; - match = formatter.call(self, val); - - // Now we need to remove `args[index]` since it's inlined in the `format` - args.splice(index, 1); - index--; - } - return match; - }); - - // Apply env-specific formatting (colors, etc.) 
- createDebug.formatArgs.call(self, args); - - const logFn = self.log || createDebug.log; - logFn.apply(self, args); - } - - debug.namespace = namespace; - debug.useColors = createDebug.useColors(); - debug.color = createDebug.selectColor(namespace); - debug.extend = extend; - debug.destroy = createDebug.destroy; // XXX Temporary. Will be removed in the next major release. - - Object.defineProperty(debug, 'enabled', { - enumerable: true, - configurable: false, - get: () => { - if (enableOverride !== null) { - return enableOverride; - } - if (namespacesCache !== createDebug.namespaces) { - namespacesCache = createDebug.namespaces; - enabledCache = createDebug.enabled(namespace); - } - - return enabledCache; - }, - set: v => { - enableOverride = v; - } - }); - - // Env-specific initialization logic for debug instances - if (typeof createDebug.init === 'function') { - createDebug.init(debug); - } - - return debug; - } - - function extend(namespace, delimiter) { - const newDebug = createDebug(this.namespace + (typeof delimiter === 'undefined' ? ':' : delimiter) + namespace); - newDebug.log = this.log; - return newDebug; - } - - /** - * Enables a debug mode by namespaces. This can include modes - * separated by a colon and wildcards. - * - * @param {String} namespaces - * @api public - */ - function enable(namespaces) { - createDebug.save(namespaces); - createDebug.namespaces = namespaces; - - createDebug.names = []; - createDebug.skips = []; - - let i; - const split = (typeof namespaces === 'string' ? namespaces : '').split(/[\s,]+/); - const len = split.length; - - for (i = 0; i < len; i++) { - if (!split[i]) { - // ignore empty strings - continue; - } - - namespaces = split[i].replace(/\*/g, '.*?'); - - if (namespaces[0] === '-') { - createDebug.skips.push(new RegExp('^' + namespaces.slice(1) + '$')); - } else { - createDebug.names.push(new RegExp('^' + namespaces + '$')); - } - } - } - - /** - * Disable debug output. - * - * @return {String} namespaces - * @api public - */ - function disable() { - const namespaces = [ - ...createDebug.names.map(toNamespace), - ...createDebug.skips.map(toNamespace).map(namespace => '-' + namespace) - ].join(','); - createDebug.enable(''); - return namespaces; - } - - /** - * Returns true if the given mode name is enabled, false otherwise. - * - * @param {String} name - * @return {Boolean} - * @api public - */ - function enabled(name) { - if (name[name.length - 1] === '*') { - return true; - } - - let i; - let len; - - for (i = 0, len = createDebug.skips.length; i < len; i++) { - if (createDebug.skips[i].test(name)) { - return false; - } - } - - for (i = 0, len = createDebug.names.length; i < len; i++) { - if (createDebug.names[i].test(name)) { - return true; - } - } - - return false; - } - - /** - * Convert regexp to namespace - * - * @param {RegExp} regxep - * @return {String} namespace - * @api private - */ - function toNamespace(regexp) { - return regexp.toString() - .substring(2, regexp.toString().length - 2) - .replace(/\.\*\?$/, '*'); - } - - /** - * Coerce `val`. - * - * @param {Mixed} val - * @return {Mixed} - * @api private - */ - function coerce(val) { - if (val instanceof Error) { - return val.stack || val.message; - } - return val; - } - - /** - * XXX DO NOT USE. This is a temporary stub function. - * XXX It WILL be removed in the next major release. - */ - function destroy() { - console.warn('Instance method `debug.destroy()` is deprecated and no longer does anything. 
It will be removed in the next major version of `debug`.'); - } - - createDebug.enable(createDebug.load()); - - return createDebug; -} - -module.exports = setup; diff --git a/spaces/fffiloni/live-ml5-handpose-p5js/style.css b/spaces/fffiloni/live-ml5-handpose-p5js/style.css deleted file mode 100644 index 0ec09fe257d43f3d11dda5ca1dc2c306a58e7836..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/live-ml5-handpose-p5js/style.css +++ /dev/null @@ -1,23 +0,0 @@ -html, body { - margin: 0; - padding: 0; -} - -body { - display: flex; - align-content: center; - justify-content: center; - flex-wrap: wrap; - flex-direction: column; -} - -#canvas{ - position: relative; - top: 0px; - left: 0px; - z-index:1; -} - -* { - font-family: sans-serif; -} \ No newline at end of file diff --git a/spaces/fffiloni/whisper-to-stable-diffusion/app.py b/spaces/fffiloni/whisper-to-stable-diffusion/app.py deleted file mode 100644 index ffc91eb74e3e89054ccb68426668a2abfa425b4a..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/whisper-to-stable-diffusion/app.py +++ /dev/null @@ -1,328 +0,0 @@ -import gradio as gr -#import torch -#import whisper -#from datetime import datetime -from PIL import Image -#import flag -import os -#MY_SECRET_TOKEN=os.environ.get('HF_TOKEN_SD') - -#from diffusers import StableDiffusionPipeline - -whisper = gr.Blocks.load(name="spaces/sanchit-gandhi/whisper-large-v2") -stable_diffusion = gr.Blocks.load(name="spaces/stabilityai/stable-diffusion") -### ———————————————————————————————————————— - -title="Whisper to Stable Diffusion" - -### ———————————————————————————————————————— - -#whisper_model = whisper.load_model("small") - -#device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - -#pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_auth_token=MY_SECRET_TOKEN) -#pipe.to(device) - -### ———————————————————————————————————————— - -def get_images(prompt): - gallery_dir = stable_diffusion(prompt,"", 9, fn_index=2) - return [os.path.join(gallery_dir, img) for img in os.listdir(gallery_dir)] - - -def magic_whisper_to_sd(audio, guidance_scale, nb_iterations, seed): - - whisper_results = translate_better(audio) - prompt = whisper_results[1] - images = get_images(prompt) - - return whisper_results[0], whisper_results[1], images - -#def diffuse(prompt, guidance_scale, nb_iterations, seed): -# -# generator = torch.Generator(device=device).manual_seed(int(seed)) -# -# print(""" -# — -# Sending prompt to Stable Diffusion ... -# — -# """) -# print("prompt: " + prompt) -# print("guidance scale: " + str(guidance_scale)) -# print("inference steps: " + str(nb_iterations)) -# print("seed: " + str(seed)) -# -# images_list = pipe( -# [prompt] * 2, -# guidance_scale=guidance_scale, -# num_inference_steps=nb_iterations, -# generator=generator -# ) -# -# images = [] -# -# safe_image = Image.open(r"unsafe.png") -# -# for i, image in enumerate(images_list["sample"]): -# if(images_list["nsfw_content_detected"][i]): -# images.append(safe_image) -# else: -# images.append(image) -# -# -# print("Stable Diffusion has finished") -# print("———————————————————————————————————————————") -# -# return images - -def translate_better(audio): - print(""" - — - Sending audio to Whisper ... 
- — - """) - transcribe_text_result = whisper(audio, None, "transcribe", api_name="predict") - translate_text_result = whisper(audio, None, "translate", api_name="predict") - print("transcript: " + transcribe_text_result) - print("———————————————————————————————————————————") - print("translated: " + translate_text_result) - - return transcribe_text_result, translate_text_result - - -#def translate(audio): -# print(""" -# — -# Sending audio to Whisper ... -# — -# """) -# # current dateTime -# now = datetime.now() -# # convert to string -# date_time_str = now.strftime("%Y-%m-%d %H:%M:%S") -# print('DateTime String:', date_time_str) -# -# audio = whisper.load_audio(audio) -# audio = whisper.pad_or_trim(audio) -# -# mel = whisper.log_mel_spectrogram(audio).to(whisper_model.device) -# -# _, probs = whisper_model.detect_language(mel) -# -# transcript_options = whisper.DecodingOptions(task="transcribe", fp16 = False) -# translate_options = whisper.DecodingOptions(task="translate", fp16 = False) -# -# transcription = whisper.decode(whisper_model, mel, transcript_options) -# translation = whisper.decode(whisper_model, mel, translate_options) -# -# print("language spoken: " + transcription.language) -# print("transcript: " + transcription.text) -# print("———————————————————————————————————————————") -# print("translated: " + translation.text) -# if transcription.language == "en": -# tr_flag = flag.flag('GB') -# else: -# tr_flag = flag.flag(transcription.language) -# return tr_flag, transcription.text, translation.text - - - -### ———————————————————————————————————————— - -with gr.Blocks(css="style.css") as demo: - with gr.Column(): - gr.HTML(''' -

    - Whisper to Stable Diffusion -

    -

- Ask Stable Diffusion for images by speaking (or singing 🤗) in your native language! Try it in French 😉 -

    - -

- This demo is wired to the official SD Space • Offered by Sylvain @fffiloni • visitor badge
    - — -
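- Concretely, "wired to the official SD Space" appears to mean the app loads the hosted Spaces as callable Gradio interfaces. A minimal sketch, mirroring the `gr.Blocks.load` calls made at the top of this file (the Space names are the ones used in this demo; nothing else is assumed):

```python
import gradio as gr

# Load the hosted Spaces as callable apps (same calls as at the top of this demo).
whisper = gr.Blocks.load(name="spaces/sanchit-gandhi/whisper-large-v2")
stable_diffusion = gr.Blocks.load(name="spaces/stabilityai/stable-diffusion")
```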

    - - ''') -# with gr.Row(elem_id="w2sd_container"): -# with gr.Column(): - - gr.Markdown( - """ - - ## 1. Record audio or Upload an audio file: - """ - ) - - with gr.Tab(label="Record audio input", elem_id="record_tab"): - with gr.Column(): - record_input = gr.Audio( - source="microphone", - type="filepath", - show_label=False, - elem_id="record_btn" - ) - with gr.Row(): - audio_r_translate = gr.Button("Check Whisper first ? 👍", elem_id="check_btn_1") - audio_r_direct_sd = gr.Button("Magic Whisper › SD right now!", elem_id="magic_btn_1") - - with gr.Tab(label="Upload audio input", elem_id="upload_tab"): - with gr.Column(): - upload_input = gr.Audio( - source="upload", - type="filepath", - show_label=False, - elem_id="upload_area" - ) - with gr.Row(): - audio_u_translate = gr.Button("Check Whisper first ? 👍", elem_id="check_btn_2") - audio_u_direct_sd = gr.Button("Magic Whisper › SD right now!", elem_id="magic_btn_2") - - with gr.Accordion(label="Stable Diffusion Settings", elem_id="sd_settings", visible=False): - with gr.Row(): - guidance_scale = gr.Slider(2, 15, value = 7, label = 'Guidance Scale') - nb_iterations = gr.Slider(10, 50, value = 25, step = 1, label = 'Steps') - seed = gr.Slider(label = "Seed", minimum = 0, maximum = 2147483647, step = 1, randomize = True) - - gr.Markdown( - """ - ## 2. Check Whisper output, correct it if necessary: - """ - ) - - with gr.Row(): - - transcripted_output = gr.Textbox( - label="Transcription in your detected spoken language", - lines=3, - elem_id="transcripted" - ) - #language_detected_output = gr.Textbox(label="Native language", elem_id="spoken_lang",lines=3) - - with gr.Column(): - translated_output = gr.Textbox( - label="Transcript translated in English by Whisper", - lines=4, - elem_id="translated" - ) - with gr.Row(): - clear_btn = gr.Button(value="Clear") - diffuse_btn = gr.Button(value="OK, Diffuse this prompt !", elem_id="diffuse_btn") - - clear_btn.click(fn=lambda value: gr.update(value=""), inputs=clear_btn, outputs=translated_output) - - - - - -# with gr.Column(): - - - - gr.Markdown(""" - ## 3. Wait for Stable Diffusion Results ☕️ - Inference time is about ~10 seconds, when it's your turn 😬 - """ - ) - - sd_output = gr.Gallery().style(grid=2, height="auto") - - - gr.Markdown(""" - ### 📌 About the models -

    - Whisper is a general-purpose speech recognition model.

    - It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
    - — -
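- As a rough sketch of how this demo queries Whisper, mirroring the `translate_better` helper defined above (the loaded `whisper` Space and the `api_name="predict"` endpoint come from this file; the audio path is a placeholder):

```python
def transcribe_and_translate(audio_path):
    # Two calls to the loaded Whisper Space: one transcript in the spoken
    # language, one English translation used later as the image prompt.
    transcript = whisper(audio_path, None, "transcribe", api_name="predict")
    translation = whisper(audio_path, None, "translate", api_name="predict")
    return transcript, translation
```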

    -

- Stable Diffusion is a state-of-the-art text-to-image model that generates images from text. -
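- A minimal sketch of how the translated prompt becomes images here, mirroring the `get_images` helper above (the `fn_index=2` endpoint of the loaded SD Space returns a directory of generated images; no other behaviour is assumed):

```python
import os

def prompt_to_images(prompt):
    # Call the loaded Stable Diffusion Space; it returns the path of a
    # gallery directory containing the generated images.
    gallery_dir = stable_diffusion(prompt, "", 9, fn_index=2)
    return [os.path.join(gallery_dir, img) for img in os.listdir(gallery_dir)]
```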

    -
    -
    - LICENSE -

    - The model is licensed with a CreativeML Open RAIL-M license.

    -

- The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in this license.

    -

- The license forbids you from sharing any content that violates any laws, produces harm to a person, disseminates personal information meant for harm, spreads misinformation, or targets vulnerable groups.

    -

- For the full list of restrictions, please read the license. -

    -
    -
    - Biases and content acknowledgment -

- Despite how impressive it is to turn text into images, be aware that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography, and violence.

    -

- The model was trained on the LAION-5B dataset, which scraped non-curated image-text pairs from the internet (with the exception of illegal content, which was removed) and is meant for research purposes.

    -

    You can read more in the model card. -

    -
    -
    - - """, elem_id="about") - - audio_r_translate.click(translate_better, - inputs = record_input, - outputs = [ - #language_detected_output, - transcripted_output, - translated_output - ]) - - audio_u_translate.click(translate_better, - inputs = upload_input, - outputs = [ - #language_detected_output, - transcripted_output, - translated_output - ]) - - audio_r_direct_sd.click(magic_whisper_to_sd, - inputs = [ - record_input, - guidance_scale, - nb_iterations, - seed - ], - outputs = [ - #language_detected_output, - transcripted_output, - translated_output, - sd_output - ]) - - audio_u_direct_sd.click(magic_whisper_to_sd, - inputs = [ - upload_input, - guidance_scale, - nb_iterations, - seed - ], - outputs = [ - #language_detected_output, - transcripted_output, - translated_output, - sd_output - ]) - - diffuse_btn.click(get_images, - inputs = [ - translated_output - ], - outputs = sd_output - ) - gr.HTML(''' - - ''') - - -if __name__ == "__main__": - demo.queue(max_size=32, concurrency_count=20).launch() \ No newline at end of file diff --git "a/spaces/fkhuggingme/gpt-academic/crazy_functions/\346\211\271\351\207\217Markdown\347\277\273\350\257\221.py" "b/spaces/fkhuggingme/gpt-academic/crazy_functions/\346\211\271\351\207\217Markdown\347\277\273\350\257\221.py" deleted file mode 100644 index 26f42cad0c13bf601fc997c4d7cc5b237d2f97df..0000000000000000000000000000000000000000 --- "a/spaces/fkhuggingme/gpt-academic/crazy_functions/\346\211\271\351\207\217Markdown\347\277\273\350\257\221.py" +++ /dev/null @@ -1,186 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -fast_debug = False - -class PaperFileGroup(): - def __init__(self): - self.file_paths = [] - self.file_contents = [] - self.sp_file_contents = [] - self.sp_file_index = [] - self.sp_file_tag = [] - - # count_token - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - self.get_token_num = get_token_num - - def run_file_split(self, max_token_limit=1900): - """ - 将长文本分离开来 - """ - for index, file_content in enumerate(self.file_contents): - if self.get_token_num(file_content) < max_token_limit: - self.sp_file_contents.append(file_content) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index]) - else: - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit) - for j, segment in enumerate(segments): - self.sp_file_contents.append(segment) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.md") - - print('Segmentation: done') - -def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'): - import time, os, re - from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - - # <-------- 读取Markdown文件,删除其中的所有注释 ----------> - pfg = PaperFileGroup() - - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - # 记录删除注释后的文本 - pfg.file_paths.append(fp) - pfg.file_contents.append(file_content) - - # <-------- 拆分过长的Markdown文件 ----------> - pfg.run_file_split(max_token_limit=1500) - n_split = len(pfg.sp_file_contents) - - # <-------- 多线程润色开始 ----------> - if language == 'en->zh': - inputs_array = 
["This is a Markdown file, translate it into Chinese, do not modify any existing Markdown commands:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)] - elif language == 'zh->en': - inputs_array = [f"This is a Markdown file, translate it into English, do not modify any existing Markdown commands:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)] - - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=inputs_array, - inputs_show_user_array=inputs_show_user_array, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[[""] for _ in range(n_split)], - sys_prompt_array=sys_prompt_array, - # max_workers=5, # OpenAI所允许的最大并行过载 - scroller_max_len = 80 - ) - - # <-------- 整理结果,退出 ----------> - create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md" - res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name) - history = gpt_response_collection - chatbot.append((f"{fp}完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -def get_files_from_everything(txt): - import glob, os - - success = True - if txt.startswith('http'): - # 网络的远程文件 - txt = txt.replace("https://github.com/", "https://raw.githubusercontent.com/") - txt = txt.replace("/blob/", "/") - import requests - from toolbox import get_conf - proxies, = get_conf('proxies') - r = requests.get(txt, proxies=proxies) - with open('./gpt_log/temp.md', 'wb+') as f: f.write(r.content) - project_folder = './gpt_log/' - file_manifest = ['./gpt_log/temp.md'] - elif txt.endswith('.md'): - # 直接给定文件 - file_manifest = [txt] - project_folder = os.path.dirname(txt) - elif os.path.exists(txt): - # 本地路径,递归搜索 - project_folder = txt - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)] - else: - success = False - - return success, file_manifest, project_folder - - -@CatchException -def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - import glob, os - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - - success, file_manifest, project_folder = get_files_from_everything(txt) - - if not success: - # 什么都没有 - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en->zh') - - - - - -@CatchException -def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, 
system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - import glob, os - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - success, file_manifest, project_folder = get_files_from_everything(txt) - if not success: - # 什么都没有 - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh->en') \ No newline at end of file diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/keycorridor.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/keycorridor.py deleted file mode 100644 index f51dc8c6a0403667c6bb029f5df411672fd40e1d..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/keycorridor.py +++ /dev/null @@ -1,137 +0,0 @@ -from gym_minigrid.roomgrid import RoomGrid -from gym_minigrid.register import register - -class KeyCorridor(RoomGrid): - """ - A ball is behind a locked door, the key is placed in a - random room. - """ - - def __init__( - self, - num_rows=3, - obj_type="ball", - room_size=6, - seed=None - ): - self.obj_type = obj_type - - super().__init__( - room_size=room_size, - num_rows=num_rows, - max_steps=30*room_size**2, - seed=seed, - ) - - def _gen_grid(self, width, height): - super()._gen_grid(width, height) - - # Connect the middle column rooms into a hallway - for j in range(1, self.num_rows): - self.remove_wall(1, j, 3) - - # Add a locked door on the bottom right - # Add an object behind the locked door - room_idx = self._rand_int(0, self.num_rows) - door, _ = self.add_door(2, room_idx, 2, locked=True) - obj, _ = self.add_object(2, room_idx, kind=self.obj_type) - - # Add a key in a random room on the left side - self.add_object(0, self._rand_int(0, self.num_rows), 'key', door.color) - - # Place the agent in the middle - self.place_agent(1, self.num_rows // 2) - - # Make sure all rooms are accessible - self.connect_all() - - self.obj = obj - self.mission = "pick up the %s %s" % (obj.color, obj.type) - - def step(self, action): - obs, reward, done, info = super().step(action) - - if action == self.actions.pickup: - if self.carrying and self.carrying == self.obj: - reward = self._reward() - done = True - - return obs, reward, done, info - -class KeyCorridorS3R1(KeyCorridor): - def __init__(self, seed=None): - super().__init__( - room_size=3, - num_rows=1, - seed=seed - ) - -class KeyCorridorS3R2(KeyCorridor): - def __init__(self, seed=None): - super().__init__( - room_size=3, - num_rows=2, - seed=seed - ) - -class KeyCorridorS3R3(KeyCorridor): - def __init__(self, seed=None): - super().__init__( - room_size=3, - num_rows=3, - seed=seed - ) - -class KeyCorridorS4R3(KeyCorridor): - def __init__(self, seed=None): - super().__init__( - room_size=4, - num_rows=3, - seed=seed - ) - -class 
KeyCorridorS5R3(KeyCorridor): - def __init__(self, seed=None): - super().__init__( - room_size=5, - num_rows=3, - seed=seed - ) - -class KeyCorridorS6R3(KeyCorridor): - def __init__(self, seed=None): - super().__init__( - room_size=6, - num_rows=3, - seed=seed - ) - -register( - id='MiniGrid-KeyCorridorS3R1-v0', - entry_point='gym_minigrid.envs:KeyCorridorS3R1' -) - -register( - id='MiniGrid-KeyCorridorS3R2-v0', - entry_point='gym_minigrid.envs:KeyCorridorS3R2' -) - -register( - id='MiniGrid-KeyCorridorS3R3-v0', - entry_point='gym_minigrid.envs:KeyCorridorS3R3' -) - -register( - id='MiniGrid-KeyCorridorS4R3-v0', - entry_point='gym_minigrid.envs:KeyCorridorS4R3' -) - -register( - id='MiniGrid-KeyCorridorS5R3-v0', - entry_point='gym_minigrid.envs:KeyCorridorS5R3' -) - -register( - id='MiniGrid-KeyCorridorS6R3-v0', - entry_point='gym_minigrid.envs:KeyCorridorS6R3' -) diff --git a/spaces/fuckyoudeki/AutoGPT/data_ingestion.py b/spaces/fuckyoudeki/AutoGPT/data_ingestion.py deleted file mode 100644 index b89a33dafd15c2e7bded0445a741a4a1c47ed417..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/data_ingestion.py +++ /dev/null @@ -1,96 +0,0 @@ -import argparse -import logging - -from autogpt.commands.file_operations import ingest_file, search_files -from autogpt.config import Config -from autogpt.memory import get_memory - -cfg = Config() - - -def configure_logging(): - logging.basicConfig( - filename="log-ingestion.txt", - filemode="a", - format="%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s", - datefmt="%H:%M:%S", - level=logging.DEBUG, - ) - return logging.getLogger("AutoGPT-Ingestion") - - -def ingest_directory(directory, memory, args): - """ - Ingest all files in a directory by calling the ingest_file function for each file. - - :param directory: The directory containing the files to ingest - :param memory: An object with an add() method to store the chunks in memory - """ - try: - files = search_files(directory) - for file in files: - ingest_file(file, memory, args.max_length, args.overlap) - except Exception as e: - print(f"Error while ingesting directory '{directory}': {str(e)}") - - -def main() -> None: - logger = configure_logging() - - parser = argparse.ArgumentParser( - description="Ingest a file or a directory with multiple files into memory. " - "Make sure to set your .env before running this script." - ) - group = parser.add_mutually_exclusive_group(required=True) - group.add_argument("--file", type=str, help="The file to ingest.") - group.add_argument( - "--dir", type=str, help="The directory containing the files to ingest." 
- ) - parser.add_argument( - "--init", - action="store_true", - help="Init the memory and wipe its content (default: False)", - default=False, - ) - parser.add_argument( - "--overlap", - type=int, - help="The overlap size between chunks when ingesting files (default: 200)", - default=200, - ) - parser.add_argument( - "--max_length", - type=int, - help="The max_length of each chunk when ingesting files (default: 4000)", - default=4000, - ) - - args = parser.parse_args() - - # Initialize memory - memory = get_memory(cfg, init=args.init) - print("Using memory of type: " + memory.__class__.__name__) - - if args.file: - try: - ingest_file(args.file, memory, args.max_length, args.overlap) - print(f"File '{args.file}' ingested successfully.") - except Exception as e: - logger.error(f"Error while ingesting file '{args.file}': {str(e)}") - print(f"Error while ingesting file '{args.file}': {str(e)}") - elif args.dir: - try: - ingest_directory(args.dir, memory, args) - print(f"Directory '{args.dir}' ingested successfully.") - except Exception as e: - logger.error(f"Error while ingesting directory '{args.dir}': {str(e)}") - print(f"Error while ingesting directory '{args.dir}': {str(e)}") - else: - print( - "Please provide either a file path (--file) or a directory name (--dir)" - " inside the auto_gpt_workspace directory as input." - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/giswqs/geospatial-dataviz/pages/03_pmtiles.py b/spaces/giswqs/geospatial-dataviz/pages/03_pmtiles.py deleted file mode 100644 index 53dda38e31a247ae6bf3a3089dcaded3c819f527..0000000000000000000000000000000000000000 --- a/spaces/giswqs/geospatial-dataviz/pages/03_pmtiles.py +++ /dev/null @@ -1,83 +0,0 @@ -import leafmap -import solara -import ipyleaflet - - -zoom = solara.reactive(2) -center = solara.reactive((20, 0)) - - -class Map(leafmap.Map): - def __init__(self, **kwargs): - super().__init__(**kwargs) - # Add what you want below - - self.add_basemap('CartoDB.DarkMatter') - url = "https://storage.googleapis.com/ahp-research/overture/pmtiles/overture.pmtiles" - - style={ - "layers": [ - { - "id": "admins", - "source": "example_source", - "source-layer": "admins", - "type": "fill", - "paint": {"fill-color": "#BDD3C7", "fill-opacity": 0.1}, - }, - { - "id": "buildings", - "source": "example_source", - "source-layer": "buildings", - "type": "fill", - "paint": {"fill-color": "#FFFFB3", "fill-opacity": 0.5}, - }, - { - "id": "places", - "source": "example_source", - "source-layer": "places", - "type": "fill", - "paint": {"fill-color": "#BEBADA", "fill-opacity": 0.5}, - }, - { - "id": "roads", - "source": "example_source", - "source-layer": "roads", - "type": "line", - "paint": {"line-color": "#FB8072"}, - }, - ], - } - - layer = ipyleaflet.PMTilesLayer(url=url, style=style) - self.add(layer) - # self.add_pmtiles(url, name='PMTiles', style=style) - - legend_dict = { - 'admins': 'BDD3C7', - 'buildings': 'FFFFB3', - 'places': 'BEBADA', - 'roads': 'FB8072', - } - - self.add_legend(legend_dict=legend_dict) - - -@solara.component -def Page(): - with solara.Column(style={"min-width": "500px"}): - # solara components support reactive variables - # solara.SliderInt(label="Zoom level", value=zoom, min=1, max=20) - # using 3rd party widget library require wiring up the events manually - # using zoom.value and zoom.set - Map.element( # type: ignore - zoom=zoom.value, - on_zoom=zoom.set, - center=center.value, - on_center=center.set, - scroll_wheel_zoom=True, - toolbar_ctrl=False, - data_ctrl=False, - height="780px", - ) - 
solara.Text(f"Zoom: {zoom.value}") - solara.Text(f"Center: {center.value}") diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Download 3gp Cewek Sma Pipis Hit VERIFIED.md b/spaces/gotiQspiryo/whisper-ui/examples/Download 3gp Cewek Sma Pipis Hit VERIFIED.md deleted file mode 100644 index f319c57f665bbf8237f0308cdeb61d2662cef810..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Download 3gp Cewek Sma Pipis Hit VERIFIED.md +++ /dev/null @@ -1,9 +0,0 @@ - -

    Streaming Nonton Video Bokep xnxx memek pipis sampai muncrat Ngentot Memek Tembem Crot di Dalam Viral, Xbokepfb Gudang Bokep indo,Koleksi vidio bokep Lokal Twitter, bokep indonesia, download bokeb selebgram, perselingkuhan ngentot istri teman, download videobokepgratis bokepcom pembantu, vpn bokep smp, hijablink, bokep abg, terong bokep pns tidur sleep, Kumpulan Film toge lonte blackpink, kakbokep Hijab barat, itubokep Tudung Terlengkap,mahasiswa Mesum duckduckgo xbokep, dokter bokep xnxx susu gede campuran bunda cctv, bos bokepindoxxi Xnxx Mommy pascol father in law sibokep kamasutra java masih Mama XXX, bokep smu abg bugil, bokepdo, ngebokep melayu meki basah, bokep baru, bokep tante bikini indomaret keponakan, Terbaik berjilbab banci indosex, vibokep orang gemuk pidiobokep kontol gede, ojol paksaan party bokep janda pornoindo xtubecinema,net18plus bokep88 kondom seragam no sensor, linkbokep website bokeparap colmek jordi hamil, bigo live bokep ukhti tik-tok bing nyepong kontol panjang hitam, bokepoi bokepjepangjav murid xxxindo, bokep asia biarawati ayam kampus luar negeri smk memek mukung rangda binal dukun bispak, yandek bokep selingkuh sd terlarang dihutan, ojol tante prank bokep69 bahenol videobokep21 ngentot dikebun, Bokep Artis Tiktok guru dan murid tercantik ngentot digubuk yang lagi viraldong, nonton bokep google drive xnxx memek bocil tembem wanita muda bokepsin, xnxx bokep pemerkosaan zbokep mom and son massage movie jilat tiktok ibu guru dan murid, bokep mahasiswi kamera tersembunyi bokepseks pembantu japanese translation, playbokep luna maya dan agatha telegram, vbokep perawan streamingsexindo memek anime gede,

    -

    download 3gp cewek sma pipis hit


    Download ★★★★★ https://urlgoal.com/2uyNbL



    -

    Video bokep perawan islam viral di instagram, bokep bbw simontok indoxxi masturbasi bokep keluarga tempek, bokep sedarah mangolive dengan hewan barat ibu mekicrot, bokepstar love story tante terbaru, nonton xvideos bokebindo dukun bokong semok sama kakek ngewe, pornogrim nonton bokep8net bollywood bebas urut anime, bokep sma bokong besar tidur big boobs, bokepindi dijilat bokep tt besar gede, xnxx bokep kakek sugiono bokepcuy perawan turis, bokep pelajar tik tok telanjang , rame rame xxnxbokep story instagram, Bokep Jilbab ml bokepxxxxx gratis pornhub kamar mandi indobugil, nonton bokepah, situs bokep indonesia perawan malam pertama, bokep china kesakitan, bokep ngentot pacar online 3gp xxxx, xpanas18 memek mentul gundul bulu lebat tebal, xnxx muncrat spa bokep pijat ml tonton cupi cupit bokep menyusui bokep20video, tempek bokepah payudara besar sekolahan dikelas, bokepku nenek tua wibu minecraft, bokep barat hamil tetek kecil, bokep pasutri kacamata kulit hitam, nonton bokephub pipis gadis upin ipin keras melahirkan bagus yoga pussy hot tidur, nonton bokep coli monster di servis cewek cantik, bokep malaysia orang kerdil, bokep stream, bokep stw kimcil lagi tidur di entot istri tetangga memek bokep jembut lebat tanpa sensor.

    -

    3 Cewek Seksi Lagi Lesbian Parah | Nonton film bokep,bokep barat,film bokep barat,video bokep,video bokep barat, video ngentot barat,film bokep full movie,film bokep terbaru,bokep terupdate, nonton bokep indo viral ,western,bokep harian 2020, bokep siswa sma,video,videobacol fun,bokep kakek sugiono,bokep ngentot memek gede, MEMEK ABG SMA, bokep tante hot indo, cerita bokep ibu kandung dengan sd, ngentot terbaru, Poto gadis payudara besar mulus bugil, video abg semok indonesia sek, foto tante cantik, remaja ngento, kumpulan memek miyabi hot, kisah mama hot, bokepin perawan kencing, bokepgw, malam bokep, nonton bokep geratis, nonton video bokep, nonton vidio bokep, mesum indo, abg binal, streaming bokep, nonton bokep online, bokep gw, bokep gue, bokep lk21, bokep bf, bokep 2019, video bokep 2019, download video dewasa, suka ngentot, majalah bokep, berita bokep, liputan bokep, siaran bokep, Sange Berat, Tv bokep, cctv bokep, siaran mesum, video mesum, jilbab mesum, bokep indonesia 2019, Abg Bugil, memek enak, memek gatel, bokep cewek gatel, bokep jepang, bokep indonesia, bokep abg, artis jakarta, bokep cewek bandung, bokep artis online, bokep thailand, bokep india, bokep hd, bokep chinese, bokep barat terbaru, bokep sd, bokep selingkuh, bokep terbaru, bokep china, bokep indonesia terbaru, bokep abg blogspot, bokep pramugari, bokep pelajar indo, bokepdo.com, bokep abg online, abg indo bokep bokep arab, bokep mahasiswi, bokep8, bokep pelajar jepang, video bokep japan, bokep mandarin, situs bokep thailand , jilbab bokep, Simantan.

    -

    Bokepdo, Full Bokep Paling Seru,bokep bokong semok, foto bugil psk, memek psk, memek mentul,memek spg, memek montok, kumpulan video bokepseks terbaru, sange, bahan coli montok, Memek cina lg ngangkang, bokep sedarah, tetek montok abg,bokep tante cantik, montok tante, abg jepang bungil pamer memek, Nonton Bokep, Asian Sex Diary, Bokep Jepang, Bokep Thailand, Download Bokep Selingkuh, Pemerkosaan, Bokep Korea Terbaru, Bokep Gratis 2020, Foto mesum abg bokep, tante semok, www foto sekertaris hot, foto memek indo, ngocokmemek, belajar bokep, Foto bugil gitar spanyol, Foto2 memek montok com, memek janda, cewek cantik mama jepang, Poto ngentot anak sd, memek tembem arab, foto cewe lagi ngewe, bokep jepang perawat, kumpulan memek muncrat,bokep janda baru, jembut memek abg, bokep full, CEWEK KOREA PAMER MEMEK PIPIS, foto toket gede, ngentot anak menantu, gambar ngentot maria ozawa, memek tembem tante, tante crot dalam, memek pipis, bokep ngentot abg indo, foto tante bugil lebat bulu indonesia, Ayah tiri dan anak, cerita seks memeksiana com, bokel sd, vidio bokep jual pacar, MEMEK ABG SMA, bokep tante hot indo, cerita bokep ibu kandung dengan sd, ngentot terbaru, Poto gadis payudara besar mulus bugil, video abg semok indonesia sek, foto tante cantik, remaja ngentot, kumpulan memek miyabi hot, kisah mama hot.

    -

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/models/nat/nat_crf_transformer.py b/spaces/gradio/HuBERT/fairseq/models/nat/nat_crf_transformer.py deleted file mode 100644 index d4b3cd931ceb077eb30db73df1d5d6cd714a86c2..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/models/nat/nat_crf_transformer.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from fairseq.models import register_model, register_model_architecture -from fairseq.models.nat import NATransformerModel, base_architecture -from fairseq.modules import DynamicCRF - - -@register_model("nacrf_transformer") -class NACRFTransformerModel(NATransformerModel): - def __init__(self, args, encoder, decoder): - super().__init__(args, encoder, decoder) - self.crf_layer = DynamicCRF( - num_embedding=len(self.tgt_dict), - low_rank=args.crf_lowrank_approx, - beam_size=args.crf_beam_approx, - ) - - @property - def allow_ensemble(self): - return False - - @staticmethod - def add_args(parser): - NATransformerModel.add_args(parser) - parser.add_argument( - "--crf-lowrank-approx", - type=int, - help="the dimension of low-rank approximation of transition", - ) - parser.add_argument( - "--crf-beam-approx", - type=int, - help="the beam size for apporixmating the normalizing factor", - ) - parser.add_argument( - "--word-ins-loss-factor", - type=float, - help="weights on NAT loss used to co-training with CRF loss.", - ) - - def forward( - self, src_tokens, src_lengths, prev_output_tokens, tgt_tokens, **kwargs - ): - # encoding - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - - # length prediction - length_out = self.decoder.forward_length( - normalize=False, encoder_out=encoder_out - ) - length_tgt = self.decoder.forward_length_prediction( - length_out, encoder_out, tgt_tokens - ) - - # decoding - word_ins_out = self.decoder( - normalize=False, - prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - ) - word_ins_tgt, word_ins_mask = tgt_tokens, tgt_tokens.ne(self.pad) - - # compute the log-likelihood of CRF - crf_nll = -self.crf_layer(word_ins_out, word_ins_tgt, word_ins_mask) - crf_nll = (crf_nll / word_ins_mask.type_as(crf_nll).sum(-1)).mean() - - return { - "word_ins": { - "out": word_ins_out, - "tgt": word_ins_tgt, - "mask": word_ins_mask, - "ls": self.args.label_smoothing, - "nll_loss": True, - "factor": self.args.word_ins_loss_factor, - }, - "word_crf": {"loss": crf_nll}, - "length": { - "out": length_out, - "tgt": length_tgt, - "factor": self.decoder.length_loss_factor, - }, - } - - def forward_decoder(self, decoder_out, encoder_out, decoding_format=None, **kwargs): - output_tokens = decoder_out.output_tokens - output_scores = decoder_out.output_scores - history = decoder_out.history - - # execute the decoder and get emission scores - output_masks = output_tokens.ne(self.pad) - word_ins_out = self.decoder( - normalize=False, prev_output_tokens=output_tokens, encoder_out=encoder_out - ) - - # run viterbi decoding through CRF - _scores, _tokens = self.crf_layer.forward_decoder(word_ins_out, output_masks) - output_tokens.masked_scatter_(output_masks, _tokens[output_masks]) - output_scores.masked_scatter_(output_masks, _scores[output_masks]) - if history is not None: - history.append(output_tokens.clone()) - - return decoder_out._replace( - output_tokens=output_tokens, - 
output_scores=output_scores, - attn=None, - history=history, - ) - - -@register_model_architecture("nacrf_transformer", "nacrf_transformer") -def nacrf_base_architecture(args): - args.crf_lowrank_approx = getattr(args, "crf_lowrank_approx", 32) - args.crf_beam_approx = getattr(args, "crf_beam_approx", 64) - args.word_ins_loss_factor = getattr(args, "word_ins_loss_factor", 0.5) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - base_architecture(args) diff --git a/spaces/gradio/HuBERT/fairseq/tasks/translation_from_pretrained_xlm.py b/spaces/gradio/HuBERT/fairseq/tasks/translation_from_pretrained_xlm.py deleted file mode 100644 index a05f2891524a8b23482e206c1742c3b816b77afb..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/tasks/translation_from_pretrained_xlm.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass -from fairseq.data.legacy.masked_lm_dictionary import MaskedLMDictionary -from fairseq.tasks.translation import TranslationConfig, TranslationTask - -from . import register_task - - -@dataclass -class TranslationFromPretrainedXLMConfig(TranslationConfig): - pass - - -@register_task( - "translation_from_pretrained_xlm", dataclass=TranslationFromPretrainedXLMConfig -) -class TranslationFromPretrainedXLMTask(TranslationTask): - """ - Same as TranslationTask except use the MaskedLMDictionary class so that - we can load data that was binarized with the MaskedLMDictionary class. - - This task should be used for the entire training pipeline when we want to - train an NMT model from a pretrained XLM checkpoint: binarizing NMT data, - training NMT with the pretrained XLM checkpoint, and subsequent evaluation - of that trained model. - """ - - @classmethod - def load_dictionary(cls, filename): - """Load the masked LM dictionary from the filename - - Args: - filename (str): the filename - """ - return MaskedLMDictionary.load(filename) diff --git a/spaces/gradio/HuBERT/train.py b/spaces/gradio/HuBERT/train.py deleted file mode 100644 index 321de3d9b53f8194b58c26f5cb2c03281afc2bb1..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/train.py +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Legacy entry point. Use fairseq_cli/train.py or fairseq-train instead. 
-""" - -from fairseq_cli.train import cli_main - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/gradio/chatbot/README.md b/spaces/gradio/chatbot/README.md deleted file mode 100644 index e1b84e19fc7dbd2ec9ac8f031e0c4bddce9ef4ff..0000000000000000000000000000000000000000 --- a/spaces/gradio/chatbot/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Chatbot -emoji: 🌍 -colorFrom: yellow -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/gradio/fake_diffusion/DESCRIPTION.md b/spaces/gradio/fake_diffusion/DESCRIPTION.md deleted file mode 100644 index 0eab50a7a18c4208939b013dd63eead420f05839..0000000000000000000000000000000000000000 --- a/spaces/gradio/fake_diffusion/DESCRIPTION.md +++ /dev/null @@ -1 +0,0 @@ -This demo uses a fake model to showcase iterative output. The Image output will update every time a generator is returned until the final image. \ No newline at end of file diff --git a/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/lib/renderer/glm.py b/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/lib/renderer/glm.py deleted file mode 100644 index 8be14b50f0d7edcde6328f1f805b392c8e3ab7e2..0000000000000000000000000000000000000000 --- a/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/lib/renderer/glm.py +++ /dev/null @@ -1,125 +0,0 @@ -import numpy as np - - -def vec3(x, y, z): - return np.array([x, y, z], dtype=np.float32) - - -def radians(v): - return np.radians(v) - - -def identity(): - return np.identity(4, dtype=np.float32) - - -def empty(): - return np.zeros([4, 4], dtype=np.float32) - - -def magnitude(v): - return np.linalg.norm(v) - - -def normalize(v): - m = magnitude(v) - return v if m == 0 else v / m - - -def dot(u, v): - return np.sum(u * v) - - -def cross(u, v): - res = vec3(0, 0, 0) - res[0] = u[1] * v[2] - u[2] * v[1] - res[1] = u[2] * v[0] - u[0] * v[2] - res[2] = u[0] * v[1] - u[1] * v[0] - return res - - -# below functions can be optimized - -def translate(m, v): - res = np.copy(m) - res[:, 3] = m[:, 0] * v[0] + m[:, 1] * v[1] + m[:, 2] * v[2] + m[:, 3] - return res - - -def rotate(m, angle, v): - a = angle - c = np.cos(a) - s = np.sin(a) - - axis = normalize(v) - temp = (1 - c) * axis - - rot = empty() - rot[0][0] = c + temp[0] * axis[0] - rot[0][1] = temp[0] * axis[1] + s * axis[2] - rot[0][2] = temp[0] * axis[2] - s * axis[1] - - rot[1][0] = temp[1] * axis[0] - s * axis[2] - rot[1][1] = c + temp[1] * axis[1] - rot[1][2] = temp[1] * axis[2] + s * axis[0] - - rot[2][0] = temp[2] * axis[0] + s * axis[1] - rot[2][1] = temp[2] * axis[1] - s * axis[0] - rot[2][2] = c + temp[2] * axis[2] - - res = empty() - res[:, 0] = m[:, 0] * rot[0][0] + m[:, 1] * rot[0][1] + m[:, 2] * rot[0][2] - res[:, 1] = m[:, 0] * rot[1][0] + m[:, 1] * 
rot[1][1] + m[:, 2] * rot[1][2] - res[:, 2] = m[:, 0] * rot[2][0] + m[:, 1] * rot[2][1] + m[:, 2] * rot[2][2] - res[:, 3] = m[:, 3] - return res - - -def perspective(fovy, aspect, zNear, zFar): - tanHalfFovy = np.tan(fovy / 2) - - res = empty() - res[0][0] = 1 / (aspect * tanHalfFovy) - res[1][1] = 1 / (tanHalfFovy) - res[2][3] = -1 - res[2][2] = - (zFar + zNear) / (zFar - zNear) - res[3][2] = -(2 * zFar * zNear) / (zFar - zNear) - - return res.T - - -def ortho(left, right, bottom, top, zNear, zFar): - # res = np.ones([4, 4], dtype=np.float32) - res = identity() - res[0][0] = 2 / (right - left) - res[1][1] = 2 / (top - bottom) - res[2][2] = - 2 / (zFar - zNear) - res[3][0] = - (right + left) / (right - left) - res[3][1] = - (top + bottom) / (top - bottom) - res[3][2] = - (zFar + zNear) / (zFar - zNear) - return res.T - - -def lookat(eye, center, up): - f = normalize(center - eye) - s = normalize(cross(f, up)) - u = cross(s, f) - - res = identity() - res[0][0] = s[0] - res[1][0] = s[1] - res[2][0] = s[2] - res[0][1] = u[0] - res[1][1] = u[1] - res[2][1] = u[2] - res[0][2] = -f[0] - res[1][2] = -f[1] - res[2][2] = -f[2] - res[3][0] = -dot(s, eye) - res[3][1] = -dot(u, eye) - res[3][2] = -dot(f, eye) - return res.T - - -def transform(d, m): - return np.dot(m, d.T).T diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/utils/app/api.ts b/spaces/gsaivinay/Llama-2-13B-GGML-UI/utils/app/api.ts deleted file mode 100644 index 813c98c8f8a2ac0272fb96bfe365864cd200cc6f..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/utils/app/api.ts +++ /dev/null @@ -1,13 +0,0 @@ -import { Plugin, PluginID } from '@/types/plugin'; - -export const getEndpoint = (plugin: Plugin | null) => { - if (!plugin) { - return 'api/chat'; - } - - if (plugin.id === PluginID.GOOGLE_SEARCH) { - return 'api/google'; - } - - return 'api/chat'; -}; diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/glutil.cpp b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/glutil.cpp deleted file mode 100644 index 2af3e931b6808e2575d8a209d5485746499b3374..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/glutil.cpp +++ /dev/null @@ -1,403 +0,0 @@ -// Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -//------------------------------------------------------------------------ -// Common. -//------------------------------------------------------------------------ - -#include "framework.h" -#include "glutil.h" -#include -#include - -// Create the function pointers. -#define GLUTIL_EXT(return_type, name, ...) return_type (GLAPIENTRY* name)(__VA_ARGS__) = 0; -#include "glutil_extlist.h" -#undef GLUTIL_EXT - -// Track initialization status. -static volatile bool s_glExtInitialized = false; - -// Error strings. 
-const char* getGLErrorString(GLenum err) -{ - switch(err) - { - case GL_NO_ERROR: return "GL_NO_ERROR"; - case GL_INVALID_ENUM: return "GL_INVALID_ENUM"; - case GL_INVALID_VALUE: return "GL_INVALID_VALUE"; - case GL_INVALID_OPERATION: return "GL_INVALID_OPERATION"; - case GL_STACK_OVERFLOW: return "GL_STACK_OVERFLOW"; - case GL_STACK_UNDERFLOW: return "GL_STACK_UNDERFLOW"; - case GL_OUT_OF_MEMORY: return "GL_OUT_OF_MEMORY"; - case GL_INVALID_FRAMEBUFFER_OPERATION: return "GL_INVALID_FRAMEBUFFER_OPERATION"; - case GL_TABLE_TOO_LARGE: return "GL_TABLE_TOO_LARGE"; - case GL_CONTEXT_LOST: return "GL_CONTEXT_LOST"; - } - return "Unknown error"; -} - -//------------------------------------------------------------------------ -// Windows. -//------------------------------------------------------------------------ - -#ifdef _WIN32 - -static CRITICAL_SECTION getInitializedCriticalSection(void) -{ - CRITICAL_SECTION cs; - InitializeCriticalSection(&cs); - return cs; -} - -static CRITICAL_SECTION s_getProcAddressMutex = getInitializedCriticalSection(); - -static void safeGetProcAddress(const char* name, PROC* pfn) -{ - PROC result = wglGetProcAddress(name); - if (!result) - { - LeaveCriticalSection(&s_getProcAddressMutex); // Prepare for thread exit. - LOG(FATAL) << "wglGetProcAddress() failed for '" << name << "'"; - exit(1); // Should never get here but make sure we exit. - } - *pfn = result; -} - -static void initializeGLExtensions(void) -{ - // Use critical section for thread safety. - EnterCriticalSection(&s_getProcAddressMutex); - - // Only dig function pointers if not done already. - if (!s_glExtInitialized) - { - // Generate code to populate the function pointers. -#define GLUTIL_EXT(return_type, name, ...) safeGetProcAddress(#name, (PROC*)&name); -#include "glutil_extlist.h" -#undef GLUTIL_EXT - - // Mark as initialized. - s_glExtInitialized = true; - } - - // Done. - LeaveCriticalSection(&s_getProcAddressMutex); - return; -} - -void setGLContext(GLContext& glctx) -{ - if (!glctx.hglrc) - LOG(FATAL) << "setGLContext() called with null gltcx"; - if (!wglMakeCurrent(glctx.hdc, glctx.hglrc)) - LOG(FATAL) << "wglMakeCurrent() failed when setting GL context"; - - if (glctx.extInitialized) - return; - initializeGLExtensions(); - glctx.extInitialized = 1; -} - -void releaseGLContext(void) -{ - if (!wglMakeCurrent(NULL, NULL)) - LOG(FATAL) << "wglMakeCurrent() failed when releasing GL context"; -} - -extern "C" int set_gpu(const char*); // In setgpu.lib -GLContext createGLContext(int cudaDeviceIdx) -{ - if (cudaDeviceIdx >= 0) - { - char pciBusId[256] = ""; - LOG(INFO) << "Creating GL context for Cuda device " << cudaDeviceIdx; - if (cudaDeviceGetPCIBusId(pciBusId, 255, cudaDeviceIdx)) - { - LOG(INFO) << "PCI bus id query failed"; - } - else - { - int res = set_gpu(pciBusId); - LOG(INFO) << "Selecting device with PCI bus id " << pciBusId << " - " << (res ? 
"failed, expect crash or major slowdown" : "success"); - } - } - - HINSTANCE hInstance = GetModuleHandle(NULL); - WNDCLASS wc = {}; - wc.style = CS_OWNDC; - wc.lpfnWndProc = DefWindowProc; - wc.hInstance = hInstance; - wc.lpszClassName = "__DummyGLClassCPP"; - int res = RegisterClass(&wc); - - HWND hwnd = CreateWindow( - "__DummyGLClassCPP", // lpClassName - "__DummyGLWindowCPP", // lpWindowName - WS_OVERLAPPEDWINDOW, // dwStyle - CW_USEDEFAULT, // x - CW_USEDEFAULT, // y - 0, 0, // nWidth, nHeight - NULL, NULL, // hWndParent, hMenu - hInstance, // hInstance - NULL // lpParam - ); - - PIXELFORMATDESCRIPTOR pfd = {}; - pfd.dwFlags = PFD_SUPPORT_OPENGL; - pfd.iPixelType = PFD_TYPE_RGBA; - pfd.iLayerType = PFD_MAIN_PLANE; - pfd.cColorBits = 32; - pfd.cDepthBits = 24; - pfd.cStencilBits = 8; - - HDC hdc = GetDC(hwnd); - int pixelformat = ChoosePixelFormat(hdc, &pfd); - SetPixelFormat(hdc, pixelformat, &pfd); - - HGLRC hglrc = wglCreateContext(hdc); - LOG(INFO) << std::hex << std::setfill('0') - << "WGL OpenGL context created (hdc: 0x" << std::setw(8) << (uint32_t)(uintptr_t)hdc - << ", hglrc: 0x" << std::setw(8) << (uint32_t)(uintptr_t)hglrc << ")"; - - GLContext glctx = {hdc, hglrc, 0}; - return glctx; -} - -void destroyGLContext(GLContext& glctx) -{ - if (!glctx.hglrc) - LOG(FATAL) << "destroyGLContext() called with null gltcx"; - - // If this is the current context, release it. - if (wglGetCurrentContext() == glctx.hglrc) - releaseGLContext(); - - HWND hwnd = WindowFromDC(glctx.hdc); - if (!hwnd) - LOG(FATAL) << "WindowFromDC() failed"; - if (!ReleaseDC(hwnd, glctx.hdc)) - LOG(FATAL) << "ReleaseDC() failed"; - if (!wglDeleteContext(glctx.hglrc)) - LOG(FATAL) << "wglDeleteContext() failed"; - if (!DestroyWindow(hwnd)) - LOG(FATAL) << "DestroyWindow() failed"; - - LOG(INFO) << std::hex << std::setfill('0') - << "WGL OpenGL context destroyed (hdc: 0x" << std::setw(8) << (uint32_t)(uintptr_t)glctx.hdc - << ", hglrc: 0x" << std::setw(8) << (uint32_t)(uintptr_t)glctx.hglrc << ")"; - - memset(&glctx, 0, sizeof(GLContext)); -} - -#endif // _WIN32 - -//------------------------------------------------------------------------ -// Linux. -//------------------------------------------------------------------------ - -#ifdef __linux__ - -static pthread_mutex_t s_getProcAddressMutex; - -typedef void (*PROCFN)(); - -static void safeGetProcAddress(const char* name, PROCFN* pfn) -{ - PROCFN result = eglGetProcAddress(name); - if (!result) - { - pthread_mutex_unlock(&s_getProcAddressMutex); // Prepare for thread exit. - LOG(FATAL) << "wglGetProcAddress() failed for '" << name << "'"; - exit(1); // Should never get here but make sure we exit. - } - *pfn = result; -} - -static void initializeGLExtensions(void) -{ - pthread_mutex_lock(&s_getProcAddressMutex); - - // Only dig function pointers if not done already. - if (!s_glExtInitialized) - { - // Generate code to populate the function pointers. -#define GLUTIL_EXT(return_type, name, ...) safeGetProcAddress(#name, (PROCFN*)&name); -#include "glutil_extlist.h" -#undef GLUTIL_EXT - - // Mark as initialized. 
- s_glExtInitialized = true; - } - - pthread_mutex_unlock(&s_getProcAddressMutex); - return; -} - -void setGLContext(GLContext& glctx) -{ - if (!glctx.context) - LOG(FATAL) << "setGLContext() called with null gltcx"; - - if (!eglMakeCurrent(glctx.display, EGL_NO_SURFACE, EGL_NO_SURFACE, glctx.context)) - LOG(ERROR) << "eglMakeCurrent() failed when setting GL context"; - - if (glctx.extInitialized) - return; - initializeGLExtensions(); - glctx.extInitialized = 1; -} - -void releaseGLContext(void) -{ - EGLDisplay display = eglGetCurrentDisplay(); - if (display == EGL_NO_DISPLAY) - LOG(WARNING) << "releaseGLContext() called with no active display"; - if (!eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT)) - LOG(FATAL) << "eglMakeCurrent() failed when releasing GL context"; -} - -static EGLDisplay getCudaDisplay(int cudaDeviceIdx) -{ - typedef EGLBoolean (*eglQueryDevicesEXT_t)(EGLint, EGLDeviceEXT, EGLint*); - typedef EGLBoolean (*eglQueryDeviceAttribEXT_t)(EGLDeviceEXT, EGLint, EGLAttrib*); - typedef EGLDisplay (*eglGetPlatformDisplayEXT_t)(EGLenum, void*, const EGLint*); - - eglQueryDevicesEXT_t eglQueryDevicesEXT = (eglQueryDevicesEXT_t)eglGetProcAddress("eglQueryDevicesEXT"); - if (!eglQueryDevicesEXT) - { - LOG(INFO) << "eglGetProcAddress(\"eglQueryDevicesEXT\") failed"; - return 0; - } - - eglQueryDeviceAttribEXT_t eglQueryDeviceAttribEXT = (eglQueryDeviceAttribEXT_t)eglGetProcAddress("eglQueryDeviceAttribEXT"); - if (!eglQueryDeviceAttribEXT) - { - LOG(INFO) << "eglGetProcAddress(\"eglQueryDeviceAttribEXT\") failed"; - return 0; - } - - eglGetPlatformDisplayEXT_t eglGetPlatformDisplayEXT = (eglGetPlatformDisplayEXT_t)eglGetProcAddress("eglGetPlatformDisplayEXT"); - if (!eglGetPlatformDisplayEXT) - { - LOG(INFO) << "eglGetProcAddress(\"eglGetPlatformDisplayEXT\") failed"; - return 0; - } - - int num_devices = 0; - eglQueryDevicesEXT(0, 0, &num_devices); - if (!num_devices) - return 0; - - EGLDisplay display = 0; - EGLDeviceEXT* devices = (EGLDeviceEXT*)malloc(num_devices * sizeof(void*)); - eglQueryDevicesEXT(num_devices, devices, &num_devices); - for (int i=0; i < num_devices; i++) - { - EGLDeviceEXT device = devices[i]; - intptr_t value = -1; - if (eglQueryDeviceAttribEXT(device, EGL_CUDA_DEVICE_NV, &value) && value == cudaDeviceIdx) - { - display = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT, device, 0); - break; - } - } - - free(devices); - return display; -} - -GLContext createGLContext(int cudaDeviceIdx) -{ - EGLDisplay display = 0; - - if (cudaDeviceIdx >= 0) - { - char pciBusId[256] = ""; - LOG(INFO) << "Creating GL context for Cuda device " << cudaDeviceIdx; - display = getCudaDisplay(cudaDeviceIdx); - if (!display) - LOG(INFO) << "Failed, falling back to default display"; - } - - if (!display) - { - display = eglGetDisplay(EGL_DEFAULT_DISPLAY); - if (display == EGL_NO_DISPLAY) - LOG(FATAL) << "eglGetDisplay() failed"; - } - - EGLint major; - EGLint minor; - if (!eglInitialize(display, &major, &minor)) - LOG(FATAL) << "eglInitialize() failed"; - - // Choose configuration. - - const EGLint context_attribs[] = { - EGL_RED_SIZE, 8, - EGL_GREEN_SIZE, 8, - EGL_BLUE_SIZE, 8, - EGL_ALPHA_SIZE, 8, - EGL_DEPTH_SIZE, 24, - EGL_STENCIL_SIZE, 8, - EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, - EGL_SURFACE_TYPE, EGL_PBUFFER_BIT, - EGL_NONE - }; - - EGLConfig config; - EGLint num_config; - if (!eglChooseConfig(display, context_attribs, &config, 1, &num_config)) - LOG(FATAL) << "eglChooseConfig() failed"; - - // Create GL context. 
- - if (!eglBindAPI(EGL_OPENGL_API)) - LOG(FATAL) << "eglBindAPI() failed"; - - EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL); - if (context == EGL_NO_CONTEXT) - LOG(FATAL) << "eglCreateContext() failed"; - - // Done. - - LOG(INFO) << "EGL " << (int)minor << "." << (int)major << " OpenGL context created (disp: 0x" - << std::hex << std::setfill('0') - << std::setw(16) << (uintptr_t)display - << ", ctx: 0x" << std::setw(16) << (uintptr_t)context << ")"; - - GLContext glctx = {display, context, 0}; - return glctx; -} - -void destroyGLContext(GLContext& glctx) -{ - if (!glctx.context) - LOG(FATAL) << "destroyGLContext() called with null gltcx"; - - // If this is the current context, release it. - if (eglGetCurrentContext() == glctx.context) - releaseGLContext(); - - if (!eglDestroyContext(glctx.display, glctx.context)) - LOG(ERROR) << "eglDestroyContext() failed"; - - LOG(INFO) << "EGL OpenGL context destroyed (disp: 0x" - << std::hex << std::setfill('0') - << std::setw(16) << (uintptr_t)glctx.display - << ", ctx: 0x" << std::setw(16) << (uintptr_t)glctx.context << ")"; - - memset(&glctx, 0, sizeof(GLContext)); -} - -//------------------------------------------------------------------------ - -#endif // __linux__ - -//------------------------------------------------------------------------ diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/stylegan2/op/fused_bias_act.cpp b/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/stylegan2/op/fused_bias_act.cpp deleted file mode 100644 index 02be898f970bcc8ea297867fcaa4e71b24b3d949..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/stylegan2/op/fused_bias_act.cpp +++ /dev/null @@ -1,21 +0,0 @@ -#include - - -torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale) { - CHECK_CUDA(input); - CHECK_CUDA(bias); - - return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)"); -} \ No newline at end of file diff --git a/spaces/gyugnsu/DragGan-Inversion/gui_utils/imgui_utils.py b/spaces/gyugnsu/DragGan-Inversion/gui_utils/imgui_utils.py deleted file mode 100644 index 6dbc8baaa2b9d1b23a7d9d6bb0cf11465bd236ec..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/gui_utils/imgui_utils.py +++ /dev/null @@ -1,207 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -import contextlib -import imgui - -# ---------------------------------------------------------------------------- - - -def set_default_style(color_scheme='dark', spacing=9, indent=23, scrollbar=27): - s = imgui.get_style() - s.window_padding = [spacing, spacing] - s.item_spacing = [spacing, spacing] - s.item_inner_spacing = [spacing, spacing] - s.columns_min_spacing = spacing - s.indent_spacing = indent - s.scrollbar_size = scrollbar - s.frame_padding = [4, 3] - s.window_border_size = 1 - s.child_border_size = 1 - s.popup_border_size = 1 - s.frame_border_size = 1 - s.window_rounding = 0 - s.child_rounding = 0 - s.popup_rounding = 3 - s.frame_rounding = 3 - s.scrollbar_rounding = 3 - s.grab_rounding = 3 - - getattr(imgui, f'style_colors_{color_scheme}')(s) - c0 = s.colors[imgui.COLOR_MENUBAR_BACKGROUND] - c1 = s.colors[imgui.COLOR_FRAME_BACKGROUND] - s.colors[imgui.COLOR_POPUP_BACKGROUND] = [ - x * 0.7 + y * 0.3 for x, y in zip(c0, c1)][:3] + [1] - -# ---------------------------------------------------------------------------- - - -@contextlib.contextmanager -def grayed_out(cond=True): - if cond: - s = imgui.get_style() - text = s.colors[imgui.COLOR_TEXT_DISABLED] - grab = s.colors[imgui.COLOR_SCROLLBAR_GRAB] - back = s.colors[imgui.COLOR_MENUBAR_BACKGROUND] - imgui.push_style_color(imgui.COLOR_TEXT, *text) - imgui.push_style_color(imgui.COLOR_CHECK_MARK, *grab) - imgui.push_style_color(imgui.COLOR_SLIDER_GRAB, *grab) - imgui.push_style_color(imgui.COLOR_SLIDER_GRAB_ACTIVE, *grab) - imgui.push_style_color(imgui.COLOR_FRAME_BACKGROUND, *back) - imgui.push_style_color(imgui.COLOR_FRAME_BACKGROUND_HOVERED, *back) - imgui.push_style_color(imgui.COLOR_FRAME_BACKGROUND_ACTIVE, *back) - imgui.push_style_color(imgui.COLOR_BUTTON, *back) - imgui.push_style_color(imgui.COLOR_BUTTON_HOVERED, *back) - imgui.push_style_color(imgui.COLOR_BUTTON_ACTIVE, *back) - imgui.push_style_color(imgui.COLOR_HEADER, *back) - imgui.push_style_color(imgui.COLOR_HEADER_HOVERED, *back) - imgui.push_style_color(imgui.COLOR_HEADER_ACTIVE, *back) - imgui.push_style_color(imgui.COLOR_POPUP_BACKGROUND, *back) - yield - imgui.pop_style_color(14) - else: - yield - -# ---------------------------------------------------------------------------- - - -@contextlib.contextmanager -def item_width(width=None): - if width is not None: - imgui.push_item_width(width) - yield - imgui.pop_item_width() - else: - yield - -# ---------------------------------------------------------------------------- - - -def scoped_by_object_id(method): - def decorator(self, *args, **kwargs): - imgui.push_id(str(id(self))) - res = method(self, *args, **kwargs) - imgui.pop_id() - return res - return decorator - -# ---------------------------------------------------------------------------- - - -def button(label, width=0, enabled=True): - with grayed_out(not enabled): - clicked = imgui.button(label, width=width) - clicked = clicked and enabled - return clicked - -# ---------------------------------------------------------------------------- - - -def collapsing_header(text, visible=None, flags=0, default=False, enabled=True, show=True): - expanded = False - if show: - if default: - flags |= imgui.TREE_NODE_DEFAULT_OPEN - if not enabled: - flags |= imgui.TREE_NODE_LEAF - with grayed_out(not enabled): - expanded, visible = imgui.collapsing_header( - text, visible=visible, flags=flags) - expanded = expanded and enabled - return expanded, visible - -# ---------------------------------------------------------------------------- - - -def popup_button(label, 
width=0, enabled=True): - if button(label, width, enabled): - imgui.open_popup(label) - opened = imgui.begin_popup(label) - return opened - -# ---------------------------------------------------------------------------- - - -def input_text(label, value, buffer_length, flags, width=None, help_text=''): - old_value = value - color = list(imgui.get_style().colors[imgui.COLOR_TEXT]) - if value == '': - color[-1] *= 0.5 - with item_width(width): - imgui.push_style_color(imgui.COLOR_TEXT, *color) - value = value if value != '' else help_text - changed, value = imgui.input_text(label, value, buffer_length, flags) - value = value if value != help_text else '' - imgui.pop_style_color(1) - if not flags & imgui.INPUT_TEXT_ENTER_RETURNS_TRUE: - changed = (value != old_value) - return changed, value - -# ---------------------------------------------------------------------------- - - -def drag_previous_control(enabled=True): - dragging = False - dx = 0 - dy = 0 - if imgui.begin_drag_drop_source(imgui.DRAG_DROP_SOURCE_NO_PREVIEW_TOOLTIP): - if enabled: - dragging = True - dx, dy = imgui.get_mouse_drag_delta() - imgui.reset_mouse_drag_delta() - imgui.end_drag_drop_source() - return dragging, dx, dy - -# ---------------------------------------------------------------------------- - - -def drag_button(label, width=0, enabled=True): - clicked = button(label, width=width, enabled=enabled) - dragging, dx, dy = drag_previous_control(enabled=enabled) - return clicked, dragging, dx, dy - -# ---------------------------------------------------------------------------- - - -def drag_hidden_window(label, x, y, width, height, enabled=True): - imgui.push_style_color(imgui.COLOR_WINDOW_BACKGROUND, 0, 0, 0, 0) - imgui.push_style_color(imgui.COLOR_BORDER, 0, 0, 0, 0) - imgui.set_next_window_position(x, y) - imgui.set_next_window_size(width, height) - imgui.begin(label, closable=False, flags=( - imgui.WINDOW_NO_TITLE_BAR | imgui.WINDOW_NO_RESIZE | imgui.WINDOW_NO_MOVE)) - dragging, dx, dy = drag_previous_control(enabled=enabled) - imgui.end() - imgui.pop_style_color(2) - return dragging, dx, dy - -# ---------------------------------------------------------------------------- - - -def click_hidden_window(label, x, y, width, height, img_w, img_h, enabled=True): - imgui.push_style_color(imgui.COLOR_WINDOW_BACKGROUND, 0, 0, 0, 0) - imgui.push_style_color(imgui.COLOR_BORDER, 0, 0, 0, 0) - imgui.set_next_window_position(x, y) - imgui.set_next_window_size(width, height) - imgui.begin(label, closable=False, flags=( - imgui.WINDOW_NO_TITLE_BAR | imgui.WINDOW_NO_RESIZE | imgui.WINDOW_NO_MOVE)) - clicked, down = False, False - img_x, img_y = 0, 0 - if imgui.is_mouse_down(): - posx, posy = imgui.get_mouse_pos() - if posx >= x and posx < x + width and posy >= y and posy < y + height: - if imgui.is_mouse_clicked(): - clicked = True - down = True - img_x = round((posx - x) / (width - 1) * (img_w - 1)) - img_y = round((posy - y) / (height - 1) * (img_h - 1)) - imgui.end() - imgui.pop_style_color(2) - return clicked, down, img_x, img_y - -# ---------------------------------------------------------------------------- diff --git a/spaces/h2oai/h2ogpt-chatbot2/src/iterators/iterator_pipe.py b/spaces/h2oai/h2ogpt-chatbot2/src/iterators/iterator_pipe.py deleted file mode 100644 index 90883b08ee6c5fbb7a575a7f1176f124b4d66134..0000000000000000000000000000000000000000 --- a/spaces/h2oai/h2ogpt-chatbot2/src/iterators/iterator_pipe.py +++ /dev/null @@ -1,93 +0,0 @@ -import queue -import asyncio - - -class IteratorPipe: - """ - Iterator Pipe creates 
an iterator that can be fed in data from another block of code or thread of execution - """ - - def __init__(self, sentinel=object()): - self._q = queue.Queue() - self._sentinel = sentinel - self._sentinel_pushed = False - self._closed = False - - def __iter__(self): - return self - - def __next__(self): - if self._closed: - raise StopIteration - - data = self._q.get(block=True) - if data is self._sentinel: - self._closed = True - raise StopIteration - - return data - - def put(self, data) -> bool: - """ - Pushes next item to Iterator and returns True - If iterator has been closed via close(), doesn't push anything and returns False - """ - if self._sentinel_pushed: - return False - - self._q.put(data) - return True - - def close(self): - """ - Close is idempotent. Calling close multiple times is safe - Iterator will raise StopIteration only after all elements pushed before close have been iterated - """ - # make close idempotent - if not self._sentinel_pushed: - self._sentinel_pushed = True - self._q.put(self._sentinel) - - -class AsyncIteratorPipe: - - def __init__(self, sentinel=object()): - self._q = asyncio.Queue() - self._sentinel = sentinel - self._sentinel_pushed = False - self._closed = False - - def __aiter__(self): - return self - - async def __anext__(self): - if self._closed: - raise StopAsyncIteration - - data = await self._q.get() - if data is self._sentinel: - self._closed = True - raise StopAsyncIteration - - return data - - async def put(self, data) -> bool: - """ - Pushes next item to Iterator and returns True - If iterator has been closed via close(), doesn't push anything and returns False - """ - if self._sentinel_pushed: - return False - - await self._q.put(data) - return True - - async def close(self): - """ - Close is idempotent. 
Calling close multiple times is safe - Iterator will raise StopIteration only after all elements pushed before close have been iterated - """ - # make close idempotent - if not self._sentinel_pushed: - self._sentinel_pushed = True - await self._q.put(self._sentinel) diff --git a/spaces/haakohu/deep_privacy2/dp2/detection/models/mask_rcnn.py b/spaces/haakohu/deep_privacy2/dp2/detection/models/mask_rcnn.py deleted file mode 100644 index ed64706c0036d6dcc2355c8ce2f830bd8a22c3e3..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/dp2/detection/models/mask_rcnn.py +++ /dev/null @@ -1,78 +0,0 @@ -import torch -import tops -from detectron2.modeling import build_model -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.structures import Boxes -from detectron2.data import MetadataCatalog -from detectron2 import model_zoo -from typing import Dict -from detectron2.data.transforms import ResizeShortestEdge -from torchvision.transforms.functional import resize - - -model_urls = { - "COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml": "https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x/139653917/model_final_2d9806.pkl", - -} - - -class MaskRCNNDetector: - - def __init__( - self, - cfg_name: str = "COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml", - score_thres: float = 0.9, - class_filter=["person"], # ["car", "bicycle","truck", "bus", "backpack"] - fp16_inference: bool = False - ) -> None: - cfg = model_zoo.get_config(cfg_name) - cfg.MODEL.DEVICE = str(tops.get_device()) - cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = score_thres - cfg.freeze() - self.cfg = cfg - with tops.logger.capture_log_stdout(): - self.model = build_model(cfg) - DetectionCheckpointer(self.model).load(model_urls[cfg_name]) - self.model.eval() - self.input_format = cfg.INPUT.FORMAT - self.class_names = MetadataCatalog.get(cfg.DATASETS.TRAIN[0]).thing_classes - self.class_to_keep = set([self.class_names.index(cls_) for cls_ in class_filter]) - self.person_class = self.class_names.index("person") - self.fp16_inference = fp16_inference - tops.logger.log("Mask R-CNN built.") - - def __call__(self, *args, **kwargs): - return self.forward(*args, **kwargs) - - def resize_im(self, im): - H, W = im.shape[1:] - newH, newW = ResizeShortestEdge.get_output_shape( - H, W, self.cfg.INPUT.MIN_SIZE_TEST, self.cfg.INPUT.MAX_SIZE_TEST) - return resize( - im, (newH, newW), antialias=True) - - @torch.no_grad() - def forward(self, im: torch.Tensor): - if self.input_format == "BGR": - im = im.flip(0) - else: - assert self.input_format == "RGB" - H, W = im.shape[-2:] - im = self.resize_im(im) - with torch.cuda.amp.autocast(enabled=self.fp16_inference): - output = self.model([{"image": im, "height": H, "width": W}])[0]["instances"] - scores = output.get("scores") - N = len(scores) - classes = output.get("pred_classes") - idx2keep = [i for i in range(N) if classes[i].tolist() in self.class_to_keep] - classes = classes[idx2keep] - assert isinstance(output.get("pred_boxes"), Boxes) - segmentation = output.get("pred_masks")[idx2keep] - assert segmentation.dtype == torch.bool - is_person = classes == self.person_class - return { - "scores": output.get("scores")[idx2keep], - "segmentation": segmentation, - "classes": output.get("pred_classes")[idx2keep], - "is_person": is_person - } diff --git a/spaces/hackathon-pln-es/BioMedIA/article_app.py b/spaces/hackathon-pln-es/BioMedIA/article_app.py deleted file mode 100644 index 
496f1b3212b3b7dcd5952374278d5cf6ac68153e..0000000000000000000000000000000000000000 --- a/spaces/hackathon-pln-es/BioMedIA/article_app.py +++ /dev/null @@ -1,355 +0,0 @@ -article = """ - - -

    This app is developed by the aforementioned members of IIC - Instituto de Ingeniería del Conocimiento as part of the Somos PLN Hackathon 2022. - -

    -Objectives and Motivation -

    - -It has been shown recently that research in the biomedical field is essential for the sustainability of society. There is so much information on the Internet about this topic that we thought it would be possible to build a big database of biomedical texts, retrieve the most relevant documents for a given question and, with all that information, generate a concise answer that tries to convey the documents' information while being self-explanatory. With such a tool, biomedical researchers or professionals could quickly identify the key points for answering a question, therefore accelerating their research process. It would also put important health-related information in the hands of everyone, which we think can have a very good impact on society. Health is a hot topic today but should always be at the top of our priorities, so providing quick and easy access to understandable answers that turn complex information into simple explanations is, in our opinion, a step in the right direction. - -We identified the need for strong, intelligent information retrieval systems. Imagine a Siri that could generate coherent answers to your questions, instead of simply running a Google search for you. That is the technology we envision, to which we would like the Spanish NLP community to get a step closer. The Hackathon Somos NLP 2022 is in fact intended to boost NLP tools in Spanish, as there is an imbalance between the number of Spanish speakers and the percentage of Spanish models and datasets in the hub. - -The main technical objective of this app is to expand the existing tools for long-form question answering in Spanish by introducing new generative methods together with a complete architecture of well-performing models, producing interesting results in a variety of examples we tried. In fact, multiple methods that are novel in Spanish have been introduced to build this app. - -Most existing systems currently rely on Sentence Transformers for passage retrieval (which we wanted to improve by creating Dense Passage Retrieval in Spanish) and use extractive question answering methods, which means that the user needs to look through the top answers and then form a final answer in their mind that contains all of that information. This is, to the best of our knowledge, the first time Dense Passage Retrieval models have been trained in Spanish with large datasets, and the first time a generative question answering model in Spanish has been released. - -To do that, the first restriction we found was the scarcity of datasets for the task, which is exacerbated by the domain gap to the biomedical domain. We overcame this restriction by applying translation models from Transformers (specified in each dataset) to translate BioAsq to Spanish, and by doing the same with LFQA (more info in the attached datasets). BioAsq is a large question answering dataset in English for the biomedical domain, containing more than 35k question-answer-context triplets for training. We then used our translated version of BioAsq together with SQAC (15k triplets) and SQUAD-ES (87.5k train triplets), which also has a portion related to the biomedical domain. 
This was very useful for training extractive QA models to provide to the community (you can find some at https://huggingface.co/IIC), but also for building a Dense Passage Retrieval (DPR) dataset to train a DPR model, which is key for our app: without almost perfect information for answering a question, the generative model will not produce any reliable answer. - -The fragility of the solution we devised, and therefore also its most beautiful side when it works, is that every piece must work perfectly for the final answer to be correct. If our Speech2Text system is not good enough, the transcribed text will reach the DPR corrupted, no relevant documents will be retrieved, and the answer will be poor. Similarly, if the DPR is not correctly trained and cannot identify the relevant passages for a query, the result will be bad. This also served as a motivation, as the technical difficulty was completely worth it in case it worked. Moreover, it would serve as our contribution to the NLP community in Spanish. To build this app we used much of what we learned in the private sector about building systems that rely on multiple models, delivering top-performing models for question answering related tasks to the community and thus participating in the open source culture and the expansion of knowledge. Another objective, then, was to give a practical example of good practices, which fits the didactic character of both the organization and the Hackathon. - -Regarding Speech2Text, there were existing solutions trained on Common Voice; however, there were no Spanish models trained with bigger datasets like MultiLibrispeech-es, which we used following the results reported in Meta's paper (more info in the linked wav2vec2 model above). We also decided to train the large version of wav2vec2, as the other available ASR models were 300M parameter models, so we wanted to improve on that part too, not only on the dataset used. We obtained a WER of 0.073, which is arguably low compared to the rest of the existing models on ASR datasets in Spanish. Further research should be done to compare all of these models, but that was out of scope for this project. - -Another contribution we wanted to make with this project was a well-performing ranker in Spanish. This is a piece we place after the DPR to rank the retrieved passages by relevance to the query and select the top ones. Although there are multilingual open source solutions, there are no Spanish monolingual models in this regard. For that, we trained a CrossEncoder, for which we automatically translated MS MARCO with Transformers; it has around 200k query-passage pairs if we take the 1 positive to 4 negatives ratio from the papers. MS MARCO is the dataset typically used in English to train cross-encoders for ranking. - -Finally, there are no generative question answering datasets in Spanish. For that reason, we used LFQA, as mentioned above. It has over 400k data instances, which we also translated with Transformers. Our translation methods needed to work correctly, since the passages were too long for the maximum sequence length of the translation model and there were 400k x 3 (answer, question, passages) texts to translate. We solved those problems with intelligent text splitting and reconstruction and an efficient configuration for the translation process. 
Thanks to this dataset we could train two generative models, for which we used our expertise in generative language models to train them effectively. -The reason for including audio as a possible input and output is that we wanted to make the app much more accessible to everyone. With this app we want to put biomedical knowledge in Spanish within everyone's reach. - -
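As a rough illustration of the text splitting and reconstruction mentioned above, the sketch below chunks a long English passage on sentence boundaries so each piece fits the translator's maximum sequence length, translates the pieces, and stitches them back together. The checkpoint name, the 400-token budget and the helper itself are assumptions for illustration, not the exact pipeline we ran.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-es"  # assumed checkpoint; the exact model is noted in each dataset card
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

def translate_long_text(text: str, max_tokens: int = 400) -> str:
    """Split a long passage into sentence-based chunks that fit the model, translate each, rejoin."""
    sentences = text.split(". ")
    chunks, current = [], []
    for sentence in sentences:
        current.append(sentence)
        if len(tokenizer.tokenize(". ".join(current))) > max_tokens:
            # the last sentence pushed the chunk over the budget, so close the chunk without it
            current.pop()
            if current:
                chunks.append(". ".join(current))
            current = [sentence]
    if current:
        chunks.append(". ".join(current))

    translated = []
    for chunk in chunks:
        batch = tokenizer(chunk, return_tensors="pt", truncation=True)
        generated = model.generate(**batch, max_length=512)
        translated.append(tokenizer.decode(generated[0], skip_special_tokens=True))
    return " ".join(translated)
```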

    -System Architecture -

    -Below you can find all the pieces that form the system. This section is minimalist so that the user can get a broad view of the general inner working of the app, and then travel through each model and dataset where they will find much more information on each piece of the system. - - - -
      -
    1. Speech2Text: For this we fine-tuned a multilingual Wav2Vec2, as explained in the attached link. We use this model to process audio questions. More info: https://huggingface.co/IIC/wav2vec2-spanish-multilibrispeech
    2. Dense Passage Retrieval (DPR) for Context: Dense Passage Retrieval is a methodology developed by Facebook which is currently the SoTA for passage retrieval, that is, the task of getting the most relevant passages to answer a given question. You can find details about how it was trained here: https://huggingface.co/IIC/dpr-spanish-passage_encoder-allqa-base.
    3. Dense Passage Retrieval (DPR) for Question: It is actually part of the same system as the above. For more details, go to https://huggingface.co/IIC/dpr-spanish-question_encoder-allqa-base .
    4. Sentence Encoder Ranker: Used to rerank the candidate contexts retrieved by DPR before the generative model sees them. It selects the top 5 passages for the model to read; it is the final filter before the generative model. For this we tried 3 different configurations and human-checked (that's us seriously playing with our toy) the answer results, as generated answers depended heavily on this piece of the puzzle. The first option, before we trained our own CrossEncoder, was a multilingual sentence transformer trained on multilingual MS MARCO. This worked more or less fine, although it was noticeably not specialized in Spanish. We then tried our own CrossEncoder, trained on our translation of MS MARCO to Spanish: https://huggingface.co/datasets/IIC/msmarco_es. It worked better than the sentence transformer. Then it occurred to us, by looking at their rank distributions for the same passages, that by multiplying their similarity scores element by element we could obtain a less biased ranking, in which only documents that both rankers agree are important appear at the top. We tried this and it showed much better results, so we kept both systems with the posterior multiplication of similarities (a minimal sketch of this step is shown right after this list).
    5. Generative Long-Form Question Answering Model: For this we used either mT5 (the one attached) or mBART. This generative model receives the most relevant passages and uses them to generate an answer to the question. In https://huggingface.co/IIC/mt5-base-lfqa-es and https://huggingface.co/IIC/mbart-large-lfqa-es there are more details about how we trained them.
    6. Text2Speech: For this we used Meta's text2speech service on Huggingface, as text2speech classes are not yet implemented on the main branch of Transformers. This piece was a must to provide a voice-to-voice service so that the app is almost fully accessible. As future work, as soon as text2speech classes are implemented in Transformers, we will train our own models to replace this piece.
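A minimal sketch of the reranking step described in point 4: the same (question, passage) pairs are scored by a multilingual sentence transformer and by our Spanish CrossEncoder, and the two scores are multiplied element by element so that only passages both rankers like stay on top. The checkpoint names are placeholders (the CrossEncoder id in particular is hypothetical), and the sketch assumes both rankers return non-negative relevance scores.

```python
import numpy as np
from sentence_transformers import CrossEncoder, SentenceTransformer, util

question = "¿Qué es el lupus?"
passages = ["pasaje 1 ...", "pasaje 2 ...", "pasaje 3 ..."]  # candidates already retrieved by DPR + FAISS

# Placeholder checkpoints: a multilingual sentence transformer and a Spanish CrossEncoder.
bi_encoder = SentenceTransformer("sentence-transformers/distiluse-base-multilingual-cased-v1")
cross_encoder = CrossEncoder("IIC/crossencoder-msmarco-es")  # hypothetical id

# Similarity according to the bi-encoder (cosine between embeddings).
question_emb = bi_encoder.encode(question, convert_to_tensor=True)
passage_embs = bi_encoder.encode(passages, convert_to_tensor=True)
bi_scores = util.cos_sim(question_emb, passage_embs).cpu().numpy().ravel()

# Similarity according to the CrossEncoder (question and passage scored jointly).
ce_scores = cross_encoder.predict([(question, p) for p in passages])

# Element-wise product: a passage only stays on top if both rankers rate it as relevant.
final_scores = bi_scores * ce_scores
top5 = np.argsort(-final_scores)[:5]
best_passages = [passages[i] for i in top5]
```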
    - -Apart from those pieces, this system could not respond in less than a minute on CPU if we didn't use some indexing tricks on the dataset with FAISS. We need to look for relevant passages to answer questions over 1.5M semi-long documents, which means that if we wanted to compare the question vector as encoded by DPR against all of those vectors, we would have to perform over 1.5M comparisons. Instead of that, we created a FAISS index optimized for very fast search, configured as follows: -
      -
    • A dimensionality reduction method is applied to represent each one of the 1.5M documents as a vector of 128 elements, which after some quantization algorithms requires only 32 bytes of memory per vector.
    • Document vectors are clustered with k-means into about 5K clusters.
    • At query time, the query vector follows the same pipeline, and relevant documents from the same cluster are retrieved.
    -Using this strategy we managed to reduce the passage retrieval time to milliseconds. This is key, since the large generative language models we use already take a long time on CPU, so we alleviate that restriction by reducing the retrieval time. - -
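For reference, an index with roughly this configuration can be sketched with the faiss library as below. Only the 128-dimension reduction, the ~5K clusters and the 32-byte codes follow the description above; the embedding dimensionality, the stand-in data and the nprobe value are assumptions for illustration.

```python
import numpy as np
import faiss

d = 768  # dimensionality of the DPR passage embeddings (assumption)
passage_embeddings = np.random.rand(100_000, d).astype("float32")  # stand-in for the ~1.5M real vectors

# "OPQ32_128" reduces each vector to 128 dimensions, "IVF5000" clusters them into ~5K cells,
# and "PQ32" compresses every vector to 32 bytes. (Training on a small stand-in set may warn.)
index = faiss.index_factory(d, "OPQ32_128,IVF5000,PQ32")
index.train(passage_embeddings)
index.add(passage_embeddings)

# At query time the question embedding goes through the same transform and only a few
# clusters are visited, instead of comparing against all 1.5M vectors.
faiss.extract_index_ivf(index).nprobe = 16
question_embedding = np.random.rand(1, d).astype("float32")
distances, ids = index.search(question_embedding, 5)  # top-5 candidate passages
```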

    -Datasets used and created -

    -We uploaded, and in some cases created, datasets in Spanish to be able to build such a system. - -
      -
    1. Spanish Biomedical Crawled Corpus. Used for finding answers to questions about biomedicine. (More info in https://huggingface.co/datasets/IIC/spanish_biomedical_crawled_corpus .)
    2. LFQA_Spanish. Used for training the generative model. (More info in https://huggingface.co/datasets/IIC/lfqa_spanish )
    3. SQUADES. Used to train the DPR models. (More info in https://huggingface.co/datasets/squad_es .)
    4. BioAsq22-Spanish. Used to train the DPR models. (More info in https://huggingface.co/datasets/IIC/bioasq22_es .)
    5. SQAC (Spanish Question Answering Corpus). Used to train the DPR models. (More info in https://huggingface.co/datasets/PlanTL-GOB-ES/SQAC .)
    6. MSMARCO-ES. Used to train CrossEncoder in Spanish for Ranker. (More info in https://huggingface.co/datasets/IIC/msmarco_es .)
    7. MultiLibrispeech. Used to train the Speech2Text model in Spanish. (More info in https://huggingface.co/datasets/multilingual_librispeech .)
    - -
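All of these are hosted on the Hugging Face Hub, so a quick way to inspect them is with the datasets library; the snippet below is a minimal example, and the available splits and column names should be checked on each dataset card.

```python
from datasets import load_dataset

# Dataset ids as given in the links above.
bioasq_es = load_dataset("IIC/bioasq22_es")
lfqa_es = load_dataset("IIC/lfqa_spanish")

print(bioasq_es)            # available splits and sizes
print(lfqa_es["train"][0])  # one question/answer/context example (assuming a "train" split)
```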

    -Objetivos del Desarrollo Sostenible -

    - -
      -
    1. Salud y bienestar: pretendemos con nuestro sistema mejorar la búsqueda de información acerca de la salud y el sector biomédico, ayudando tanto a investigadores biomédicos a indagar en una gran base de datos sobre el tema, pudiendo acelerar así el proceso de investigación y desarrollo en este ámbito, como a cualquier individuo que quiera conocer mejor acerca de la salud y de los temas relacionados. De esta manera usamos la IA para promover tanto el conocimiento como la exploración en el campo de la BioMedicina en castellano.
    2. Educación de calidad: al ofrecer al mundo un sistema avanzado de consulta de información, ayudamos a complementar y mejorar los sistemas de calidad actuales del mundo biomédico, pues los alumnos tienen un sistema para aprender sobre este campo interactuando a través de nuestros modelos con una gran base de conocimiento en este tema.
    3. Reducción de las desigualdades: Al hacer un sistema end-to-end de voz a voz, en el que no sería necesario usar el teclado (*), promovemos la accesibilidad a la herramienta. Esto tiene la intención de que personas que no puedan o padezcan impedimentos al leer o escribir tengan la oportunidad de interactuar con BioMedIA. Vimos la necesidad de hacer este sistema lo más flexible posible, para que fuera fácil interactuar con él independientemente de las dificultades o limitaciones físicas que pudieran tener las personas. Al incluir una salida de voz, aquellos que tengan problemas de visión también podrán recibir respuestas a sus dudas. Esto reduce las desigualdades de acceso a la herramienta de las personas con alguno de esos impedimentos. Además, generando una herramienta gratuita de acceso al conocimiento disponible en cualquier parte del mundo con acceso a Internet, reducimos las desigualdades de acceso a la información.
    - -

    -Contact -

    - - -

    - -(*) Nótese que en la demo actual del sistema el usuario necesita realizar una mínima interacción por teclado y ratón. Esto es debido a una limitación de diseño de los spaces de Huggingface. No obstante, las tecnologías desarrolladas sí permitirían su integración en un sistema de interacción pura por voz. -""" -# 1HOzvvgDLFNTK7tYAY1dRzNiLjH41fZks -# 1kvHDFUPPnf1kM5EKlv5Ife2KcZZvva_1 -description = """ - - - - -

    BioMedIA: Abstractive Question Answering for the BioMedical Domain in Spanish

    -

    Esta aplicación utiliza un avanzado sistema de búsqueda para obtener textos relevantes acerca de tu pregunta, usando toda esa información para tratar de condensarla en una explicación coherente y autocontenida. Más detalles y ejemplos de preguntas en la sección inferior. -El sistema generativo puede tardar entre 20 y 50 segundos en general, por lo que en esos ratos mientras esperas las respuestas, te invitamos a que bucees por el artículo que hemos dejado debajo de los ejemplos de la App, en el que podrás descubrir más detalles acerca de cómo funciona 📖 🤓. -Los miembros del equipo: -

    -Esperamos que disfrutéis y curioseéis con ella 💗

    -""" - - -examples = [ - [ - "¿Cuáles son los efectos secundarios más ampliamente reportados en el tratamiento de la enfermedad de Crohn?", - "vacio.flac", - "vacio.flac", - 60, - 8, - 3, - 1.0, - 250, - False, - ], - [ - "¿Para qué sirve la tecnología CRISPR?", - "vacio.flac", - "vacio.flac", - 60, - 8, - 3, - 1.0, - 250, - False, - ], - [ - "¿Qué es el lupus?", - "vacio.flac", - "vacio.flac", - 60, - 8, - 3, - 1.0, - 250, - False, - ], - [ - "¿Qué es la anorexia?", - "vacio.flac", - "vacio.flac", - 60, - 8, - 3, - 1.0, - 250, - False, - ], - [ - "¿Por qué sentimos ansiedad?", - "vacio.flac", - "vacio.flac", - 50, - 8, - 3, - 1.0, - 250, - False, - ], - [ - "¿Qué es la gripe aviar?", - "vacio.flac", - "vacio.flac", - 50, - 8, - 3, - 1.0, - 250, - False, - ], - [ - "¿Qué es la tecnología CRISPR?", - "vacio.flac", - "vacio.flac", - 50, - 8, - 3, - 1.0, - 250, - False, - ], - [ - "¿Cómo se genera la apendicitis?", - "vacio.flac", - "vacio.flac", - 50, - 8, - 3, - 1.0, - 250, - False, - ], - [ - "¿Qué es la mesoterapia?", - "vacio.flac", - "vacio.flac", - 50, - 8, - 3, - 1.0, - 250, - False, - ], - [ - "¿Qué alternativas al Paracetamol existen para el dolor de cabeza?", - "vacio.flac", - "vacio.flac", - 80, - 8, - 3, - 1.0, - 250, - False - ], - [ - "¿Cuáles son los principales tipos de disartria del trastorno del habla motor?", - "vacio.flac", - "vacio.flac", - 50, - 8, - 3, - 1.0, - 250, - False - ], - [ - "¿Es la esclerosis tuberosa una enfermedad genética?", - "vacio.flac", - "vacio.flac", - 50, - 8, - 3, - 1.0, - 250, - False - ], - [ - "¿Cuál es la función de la proteína Mis18?", - "vacio.flac", - "vacio.flac", - 50, - 8, - 3, - 1.0, - 250, - False - ], - [ - "¿Cuáles son las principales causas de muerte?", - "vacio.flac", - "vacio.flac", - 50, - 8, - 3, - 1.0, - 250, - False - ], - [ - "¿Qué deficiencia es la causa del síndrome de piernas inquietas?", - "vacio.flac", - "vacio.flac", - 50, - 8, - 3, - 1.0, - 250, - False - ], - [ - "¿Cuál es la función del 6SRNA en las bacterias?", - "vacio.flac", - "vacio.flac", - 60, - 8, - 3, - 1.0, - 250, - False, - ], - [ - "¿Por qué los humanos desarrollamos diabetes?", - "vacio.flac", - "vacio.flac", - 50, - 10, - 3, - 1.0, - 250, - False, - ], - [ - "¿Qué factores de riesgo aumentan la probabilidad de sufrir un ataque al corazón?", - "vacio.flac", - "vacio.flac", - 80, - 8, - 3, - 1.0, - 250, - False - ], - [ - "¿Cómo funcionan las vacunas?", - "vacio.flac", - "vacio.flac", - 90, - 8, - 3, - 1.0, - 250, - False - ], - [ - "¿Tienen conciencia los animales?", - "vacio.flac", - "vacio.flac", - 70, - 8, - 3, - 1.0, - 250, - False - ], -] diff --git a/spaces/hackathon-pln-es/Sentence-Embedding-Bertin/README.md b/spaces/hackathon-pln-es/Sentence-Embedding-Bertin/README.md deleted file mode 100644 index f0ac5223ab48f55533e5f1807520700c86e7d6c0..0000000000000000000000000000000000000000 --- a/spaces/hackathon-pln-es/Sentence-Embedding-Bertin/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sentence Embedding Bertin -emoji: 💻 -colorFrom: green -colorTo: red -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/hahahafofo/ChatPDF/README.md b/spaces/hahahafofo/ChatPDF/README.md deleted file mode 100644 index 76f9db9402d03687b1899db1c19262ada2df4335..0000000000000000000000000000000000000000 --- a/spaces/hahahafofo/ChatPDF/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatPDF -emoji: ⚡ -colorFrom: yellow 
-colorTo: green -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/processing/text.py b/spaces/hamelcubsfan/AutoGPT/autogpt/processing/text.py deleted file mode 100644 index 52add81401775c1b111512d8149f86a175fd9acb..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/autogpt/processing/text.py +++ /dev/null @@ -1,132 +0,0 @@ -"""Text processing functions""" -from typing import Dict, Generator, Optional - -from selenium.webdriver.remote.webdriver import WebDriver - -from autogpt.config import Config -from autogpt.llm_utils import create_chat_completion -from autogpt.memory import get_memory - -CFG = Config() -MEMORY = get_memory(CFG) - - -def split_text(text: str, max_length: int = 8192) -> Generator[str, None, None]: - """Split text into chunks of a maximum length - - Args: - text (str): The text to split - max_length (int, optional): The maximum length of each chunk. Defaults to 8192. - - Yields: - str: The next chunk of text - - Raises: - ValueError: If the text is longer than the maximum length - """ - paragraphs = text.split("\n") - current_length = 0 - current_chunk = [] - - for paragraph in paragraphs: - if current_length + len(paragraph) + 1 <= max_length: - current_chunk.append(paragraph) - current_length += len(paragraph) + 1 - else: - yield "\n".join(current_chunk) - current_chunk = [paragraph] - current_length = len(paragraph) + 1 - - if current_chunk: - yield "\n".join(current_chunk) - - -def summarize_text( - url: str, text: str, question: str, driver: Optional[WebDriver] = None -) -> str: - """Summarize text using the OpenAI API - - Args: - url (str): The url of the text - text (str): The text to summarize - question (str): The question to ask the model - driver (WebDriver): The webdriver to use to scroll the page - - Returns: - str: The summary of the text - """ - if not text: - return "Error: No text to summarize" - - text_length = len(text) - print(f"Text length: {text_length} characters") - - summaries = [] - chunks = list(split_text(text)) - scroll_ratio = 1 / len(chunks) - - for i, chunk in enumerate(chunks): - if driver: - scroll_to_percentage(driver, scroll_ratio * i) - print(f"Adding chunk {i + 1} / {len(chunks)} to memory") - - memory_to_add = f"Source: {url}\n" f"Raw content part#{i + 1}: {chunk}" - - MEMORY.add(memory_to_add) - - print(f"Summarizing chunk {i + 1} / {len(chunks)}") - messages = [create_message(chunk, question)] - - summary = create_chat_completion( - model=CFG.fast_llm_model, - messages=messages, - ) - summaries.append(summary) - print(f"Added chunk {i + 1} summary to memory") - - memory_to_add = f"Source: {url}\n" f"Content summary part#{i + 1}: {summary}" - - MEMORY.add(memory_to_add) - - print(f"Summarized {len(chunks)} chunks.") - - combined_summary = "\n".join(summaries) - messages = [create_message(combined_summary, question)] - - return create_chat_completion( - model=CFG.fast_llm_model, - messages=messages, - ) - - -def scroll_to_percentage(driver: WebDriver, ratio: float) -> None: - """Scroll to a percentage of the page - - Args: - driver (WebDriver): The webdriver to use - ratio (float): The percentage to scroll to - - Raises: - ValueError: If the ratio is not between 0 and 1 - """ - if ratio < 0 or ratio > 1: - raise ValueError("Percentage should be between 0 and 1") - driver.execute_script(f"window.scrollTo(0, 
document.body.scrollHeight * {ratio});") - - -def create_message(chunk: str, question: str) -> Dict[str, str]: - """Create a message for the chat completion - - Args: - chunk (str): The chunk of text to summarize - question (str): The question to answer - - Returns: - Dict[str, str]: The message to send to the chat completion - """ - return { - "role": "user", - "content": f'"""{chunk}""" Using the above text, answer the following' - f' question: "{question}" -- if the question cannot be answered using the text,' - " summarize the text.", - } diff --git a/spaces/haofeixu/unimatch/unimatch/matching.py b/spaces/haofeixu/unimatch/unimatch/matching.py deleted file mode 100644 index 595437f2307202ab36d7c2ee3dfa0ab44e4dc830..0000000000000000000000000000000000000000 --- a/spaces/haofeixu/unimatch/unimatch/matching.py +++ /dev/null @@ -1,279 +0,0 @@ -import torch -import torch.nn.functional as F - -from .geometry import coords_grid, generate_window_grid, normalize_coords - - -def global_correlation_softmax(feature0, feature1, - pred_bidir_flow=False, - ): - # global correlation - b, c, h, w = feature0.shape - feature0 = feature0.view(b, c, -1).permute(0, 2, 1) # [B, H*W, C] - feature1 = feature1.view(b, c, -1) # [B, C, H*W] - - correlation = torch.matmul(feature0, feature1).view(b, h, w, h, w) / (c ** 0.5) # [B, H, W, H, W] - - # flow from softmax - init_grid = coords_grid(b, h, w).to(correlation.device) # [B, 2, H, W] - grid = init_grid.view(b, 2, -1).permute(0, 2, 1) # [B, H*W, 2] - - correlation = correlation.view(b, h * w, h * w) # [B, H*W, H*W] - - if pred_bidir_flow: - correlation = torch.cat((correlation, correlation.permute(0, 2, 1)), dim=0) # [2*B, H*W, H*W] - init_grid = init_grid.repeat(2, 1, 1, 1) # [2*B, 2, H, W] - grid = grid.repeat(2, 1, 1) # [2*B, H*W, 2] - b = b * 2 - - prob = F.softmax(correlation, dim=-1) # [B, H*W, H*W] - - correspondence = torch.matmul(prob, grid).view(b, h, w, 2).permute(0, 3, 1, 2) # [B, 2, H, W] - - # when predicting bidirectional flow, flow is the concatenation of forward flow and backward flow - flow = correspondence - init_grid - - return flow, prob - - -def local_correlation_softmax(feature0, feature1, local_radius, - padding_mode='zeros', - ): - b, c, h, w = feature0.size() - coords_init = coords_grid(b, h, w).to(feature0.device) # [B, 2, H, W] - coords = coords_init.view(b, 2, -1).permute(0, 2, 1) # [B, H*W, 2] - - local_h = 2 * local_radius + 1 - local_w = 2 * local_radius + 1 - - window_grid = generate_window_grid(-local_radius, local_radius, - -local_radius, local_radius, - local_h, local_w, device=feature0.device) # [2R+1, 2R+1, 2] - window_grid = window_grid.reshape(-1, 2).repeat(b, 1, 1, 1) # [B, 1, (2R+1)^2, 2] - sample_coords = coords.unsqueeze(-2) + window_grid # [B, H*W, (2R+1)^2, 2] - - sample_coords_softmax = sample_coords - - # exclude coords that are out of image space - valid_x = (sample_coords[:, :, :, 0] >= 0) & (sample_coords[:, :, :, 0] < w) # [B, H*W, (2R+1)^2] - valid_y = (sample_coords[:, :, :, 1] >= 0) & (sample_coords[:, :, :, 1] < h) # [B, H*W, (2R+1)^2] - - valid = valid_x & valid_y # [B, H*W, (2R+1)^2], used to mask out invalid values when softmax - - # normalize coordinates to [-1, 1] - sample_coords_norm = normalize_coords(sample_coords, h, w) # [-1, 1] - window_feature = F.grid_sample(feature1, sample_coords_norm, - padding_mode=padding_mode, align_corners=True - ).permute(0, 2, 1, 3) # [B, H*W, C, (2R+1)^2] - feature0_view = feature0.permute(0, 2, 3, 1).view(b, h * w, 1, c) # [B, H*W, 1, C] - - corr = 
torch.matmul(feature0_view, window_feature).view(b, h * w, -1) / (c ** 0.5) # [B, H*W, (2R+1)^2] - - # mask invalid locations - corr[~valid] = -1e9 - - prob = F.softmax(corr, -1) # [B, H*W, (2R+1)^2] - - correspondence = torch.matmul(prob.unsqueeze(-2), sample_coords_softmax).squeeze(-2).view( - b, h, w, 2).permute(0, 3, 1, 2) # [B, 2, H, W] - - flow = correspondence - coords_init - match_prob = prob - - return flow, match_prob - - -def local_correlation_with_flow(feature0, feature1, - flow, - local_radius, - padding_mode='zeros', - dilation=1, - ): - b, c, h, w = feature0.size() - coords_init = coords_grid(b, h, w).to(feature0.device) # [B, 2, H, W] - coords = coords_init.view(b, 2, -1).permute(0, 2, 1) # [B, H*W, 2] - - local_h = 2 * local_radius + 1 - local_w = 2 * local_radius + 1 - - window_grid = generate_window_grid(-local_radius, local_radius, - -local_radius, local_radius, - local_h, local_w, device=feature0.device) # [2R+1, 2R+1, 2] - window_grid = window_grid.reshape(-1, 2).repeat(b, 1, 1, 1) # [B, 1, (2R+1)^2, 2] - sample_coords = coords.unsqueeze(-2) + window_grid * dilation # [B, H*W, (2R+1)^2, 2] - - # flow can be zero when using features after transformer - if not isinstance(flow, float): - sample_coords = sample_coords + flow.view( - b, 2, -1).permute(0, 2, 1).unsqueeze(-2) # [B, H*W, (2R+1)^2, 2] - else: - assert flow == 0. - - # normalize coordinates to [-1, 1] - sample_coords_norm = normalize_coords(sample_coords, h, w) # [-1, 1] - window_feature = F.grid_sample(feature1, sample_coords_norm, - padding_mode=padding_mode, align_corners=True - ).permute(0, 2, 1, 3) # [B, H*W, C, (2R+1)^2] - feature0_view = feature0.permute(0, 2, 3, 1).view(b, h * w, 1, c) # [B, H*W, 1, C] - - corr = torch.matmul(feature0_view, window_feature).view(b, h * w, -1) / (c ** 0.5) # [B, H*W, (2R+1)^2] - - corr = corr.view(b, h, w, -1).permute(0, 3, 1, 2).contiguous() # [B, (2R+1)^2, H, W] - - return corr - - -def global_correlation_softmax_stereo(feature0, feature1, - ): - # global correlation on horizontal direction - b, c, h, w = feature0.shape - - x_grid = torch.linspace(0, w - 1, w, device=feature0.device) # [W] - - feature0 = feature0.permute(0, 2, 3, 1) # [B, H, W, C] - feature1 = feature1.permute(0, 2, 1, 3) # [B, H, C, W] - - correlation = torch.matmul(feature0, feature1) / (c ** 0.5) # [B, H, W, W] - - # mask subsequent positions to make disparity positive - mask = torch.triu(torch.ones((w, w)), diagonal=1).type_as(feature0) # [W, W] - valid_mask = (mask == 0).unsqueeze(0).unsqueeze(0).repeat(b, h, 1, 1) # [B, H, W, W] - - correlation[~valid_mask] = -1e9 - - prob = F.softmax(correlation, dim=-1) # [B, H, W, W] - - correspondence = (x_grid.view(1, 1, 1, w) * prob).sum(-1) # [B, H, W] - - # NOTE: unlike flow, disparity is typically positive - disparity = x_grid.view(1, 1, w).repeat(b, h, 1) - correspondence # [B, H, W] - - return disparity.unsqueeze(1), prob # feature resolution - - -def local_correlation_softmax_stereo(feature0, feature1, local_radius, - ): - b, c, h, w = feature0.size() - coords_init = coords_grid(b, h, w).to(feature0.device) # [B, 2, H, W] - coords = coords_init.view(b, 2, -1).permute(0, 2, 1).contiguous() # [B, H*W, 2] - - local_h = 1 - local_w = 2 * local_radius + 1 - - window_grid = generate_window_grid(0, 0, - -local_radius, local_radius, - local_h, local_w, device=feature0.device) # [1, 2R+1, 2] - window_grid = window_grid.reshape(-1, 2).repeat(b, 1, 1, 1) # [B, 1, (2R+1), 2] - sample_coords = coords.unsqueeze(-2) + window_grid # [B, H*W, (2R+1), 2] - - 
sample_coords_softmax = sample_coords - - # exclude coords that are out of image space - valid_x = (sample_coords[:, :, :, 0] >= 0) & (sample_coords[:, :, :, 0] < w) # [B, H*W, (2R+1)^2] - valid_y = (sample_coords[:, :, :, 1] >= 0) & (sample_coords[:, :, :, 1] < h) # [B, H*W, (2R+1)^2] - - valid = valid_x & valid_y # [B, H*W, (2R+1)^2], used to mask out invalid values when softmax - - # normalize coordinates to [-1, 1] - sample_coords_norm = normalize_coords(sample_coords, h, w) # [-1, 1] - window_feature = F.grid_sample(feature1, sample_coords_norm, - padding_mode='zeros', align_corners=True - ).permute(0, 2, 1, 3) # [B, H*W, C, (2R+1)] - feature0_view = feature0.permute(0, 2, 3, 1).contiguous().view(b, h * w, 1, c) # [B, H*W, 1, C] - - corr = torch.matmul(feature0_view, window_feature).view(b, h * w, -1) / (c ** 0.5) # [B, H*W, (2R+1)] - - # mask invalid locations - corr[~valid] = -1e9 - - prob = F.softmax(corr, -1) # [B, H*W, (2R+1)] - - correspondence = torch.matmul(prob.unsqueeze(-2), - sample_coords_softmax).squeeze(-2).view( - b, h, w, 2).permute(0, 3, 1, 2).contiguous() # [B, 2, H, W] - - flow = correspondence - coords_init # flow at feature resolution - match_prob = prob - - flow_x = -flow[:, :1] # [B, 1, H, W] - - return flow_x, match_prob - - -def correlation_softmax_depth(feature0, feature1, - intrinsics, - pose, - depth_candidates, - depth_from_argmax=False, - pred_bidir_depth=False, - ): - b, c, h, w = feature0.size() - assert depth_candidates.dim() == 4 # [B, D, H, W] - scale_factor = c ** 0.5 - - if pred_bidir_depth: - feature0, feature1 = torch.cat((feature0, feature1), dim=0), torch.cat((feature1, feature0), dim=0) - intrinsics = intrinsics.repeat(2, 1, 1) - pose = torch.cat((pose, torch.inverse(pose)), dim=0) - depth_candidates = depth_candidates.repeat(2, 1, 1, 1) - - # depth candidates are actually inverse depth - warped_feature1 = warp_with_pose_depth_candidates(feature1, intrinsics, pose, - 1. 
/ depth_candidates, - ) # [B, C, D, H, W] - - correlation = (feature0.unsqueeze(2) * warped_feature1).sum(1) / scale_factor # [B, D, H, W] - - match_prob = F.softmax(correlation, dim=1) # [B, D, H, W] - - # for cross-task transfer (flow -> depth), extract depth with argmax at test time - if depth_from_argmax: - index = torch.argmax(match_prob, dim=1, keepdim=True) - depth = torch.gather(depth_candidates, dim=1, index=index) - else: - depth = (match_prob * depth_candidates).sum(dim=1, keepdim=True) # [B, 1, H, W] - - return depth, match_prob - - -def warp_with_pose_depth_candidates(feature1, intrinsics, pose, depth, - clamp_min_depth=1e-3, - ): - """ - feature1: [B, C, H, W] - intrinsics: [B, 3, 3] - pose: [B, 4, 4] - depth: [B, D, H, W] - """ - - assert intrinsics.size(1) == intrinsics.size(2) == 3 - assert pose.size(1) == pose.size(2) == 4 - assert depth.dim() == 4 - - b, d, h, w = depth.size() - c = feature1.size(1) - - with torch.no_grad(): - # pixel coordinates - grid = coords_grid(b, h, w, homogeneous=True, device=depth.device) # [B, 3, H, W] - # back project to 3D and transform viewpoint - points = torch.inverse(intrinsics).bmm(grid.view(b, 3, -1)) # [B, 3, H*W] - points = torch.bmm(pose[:, :3, :3], points).unsqueeze(2).repeat( - 1, 1, d, 1) * depth.view(b, 1, d, h * w) # [B, 3, D, H*W] - points = points + pose[:, :3, -1:].unsqueeze(-1) # [B, 3, D, H*W] - # reproject to 2D image plane - points = torch.bmm(intrinsics, points.view(b, 3, -1)).view(b, 3, d, h * w) # [B, 3, D, H*W] - pixel_coords = points[:, :2] / points[:, -1:].clamp(min=clamp_min_depth) # [B, 2, D, H*W] - - # normalize to [-1, 1] - x_grid = 2 * pixel_coords[:, 0] / (w - 1) - 1 - y_grid = 2 * pixel_coords[:, 1] / (h - 1) - 1 - - grid = torch.stack([x_grid, y_grid], dim=-1) # [B, D, H*W, 2] - - # sample features - warped_feature = F.grid_sample(feature1, grid.view(b, d * h, w, 2), mode='bilinear', - padding_mode='zeros', - align_corners=True).view(b, c, d, h, w) # [B, C, D, H, W] - - return warped_feature diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/backbone/swint_v2_vl.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/backbone/swint_v2_vl.py deleted file mode 100644 index b415e07ebed3c1d8e2e5491ea8cf1fb5d1462a78..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/backbone/swint_v2_vl.py +++ /dev/null @@ -1,861 +0,0 @@ -# -------------------------------------------------------- -# Swin Transformer -# modified from https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/master/mmdet/models/backbones/swin_transformer.py -# -------------------------------------------------------- - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -import numpy as np -from einops import rearrange -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - - -class Mlp(nn.Module): - """ Multilayer perceptron.""" - - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, 
window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """ Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. Default: 0.0 - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0., - ntext=None, dim_text=None): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - if ntext is not None: - self.qkv_text = nn.Linear(dim_text, dim * 3, bias=qkv_bias) - self.proj_text = nn.Linear(dim, dim_text) - - self.i2t_relative_position_bias = nn.Parameter( - torch.zeros(2, num_heads, ntext)) # (2, nH, ntext) - self.t2t_relative_position_bias = nn.Parameter( - torch.zeros(num_heads, ntext, ntext)) # (nH, ntext, ntext) - trunc_normal_(self.i2t_relative_position_bias, std=.02) - 
trunc_normal_(self.t2t_relative_position_bias, std=.02) - - def forward(self, x, mask=None, x_text=None, mask_text=None): - """ Forward function. - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - x_text: input text features with shape of (B_text, N_text, C_text) - mask_text: (0/-inf) mask with shape of (B_text, N_text) or None; TODO: support casual mask - """ - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - - if x_text is not None: - B_text, N_text, C_text = x_text.shape - nW = B_ // B_text # number of windows - assert B_text * nW == B_, "B_ is not a multiplier of B_text in window attention" - # notice that after qkv_text, the hidden dimension is C instead of C_text - qkv_text = self.qkv_text(x_text).reshape(B_text, N_text, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, - 1, 4) - q_text, k_text, v_text = qkv_text[0], qkv_text[1], qkv_text[ - 2] # make torchscript happy (cannot use tensor as tuple) - - # image to text attention - attn_i2t = (q @ torch.repeat_interleave(k_text, nW, dim=0).transpose(-2, -1)) # B_, nH, N, N_text - # add image to text bias and text_mask - if mask_text is not None: - mask_and_i2t_bias = mask_text.view(B_text, 1, 1, N_text) + self.i2t_relative_position_bias[:1].expand( - B_text, -1, -1).unsqueeze(-2) # B_text, nH, 1, N_text - else: - mask_and_i2t_bias = self.i2t_relative_position_bias[:1].expand(B_text, -1, -1).unsqueeze( - -2) # B_text, nH, 1, N_text - attn_i2t = attn_i2t + torch.repeat_interleave(mask_and_i2t_bias, nW, dim=0) - - attn = torch.cat((attn, attn_i2t), dim=-1) # B_, nH, N, N+N_text - - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - if x_text is None: - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - else: - x = ( - attn @ torch.cat((v, torch.repeat_interleave(v_text, nW, dim=0)), dim=-2) - ).transpose(1, 2).reshape(B_, N, C) - - # compute attn_t2i - q_text = q_text * self.scale - - kv = qkv[1:].reshape(2, B_text, nW, self.num_heads, N, C // self.num_heads).transpose(2, 3) - k, v = kv[0].reshape(B_text, self.num_heads, nW * N, -1), kv[1].reshape(B_text, self.num_heads, nW * N, -1) - attn_t2i = (q_text @ k.transpose(-2, -1)) - mask_t2i = self.i2t_relative_position_bias[1:].expand(B_text, -1, -1).unsqueeze(-1) # B_text, nH, N_text, 1 - attn_t2i = attn_t2i + mask_t2i - - attn_t2t = (q_text @ k_text.transpose(-2, -1)) - # add relative positional bias - attn_t2t = attn_t2t + self.t2t_relative_position_bias.unsqueeze(0) - if mask_text is not None: - attn_t2t = attn_t2t + mask_text.view(B_text, 1, 1, N_text) - - attn_t = torch.cat((attn_t2i, attn_t2t), dim=-1) # B_text, nH, N_text, N+N_text - attn_t = self.softmax(attn_t) - attn_t = self.attn_drop(attn_t) - - x_text = ( - attn_t @ 
torch.cat((v, v_text), dim=-2) - ).transpose(1, 2).reshape(B_text, N_text, C) - - x_text = self.proj_text(x_text) - x_text = self.proj_drop(x_text) - - x = self.proj(x) - x = self.proj_drop(x) - return x, x_text - - -class SwinTransformerBlock(nn.Module): - """ Swin Transformer Block. - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.GELU, norm_layer=nn.LayerNorm, layer_scale=False, ntext=None, dim_text=None): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop, - ntext=ntext, dim_text=dim_text - ) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - self.H = None - self.W = None - - self.gamma = 1.0 - if layer_scale: - self.gamma = nn.Parameter( - 1e-4*torch.ones(dim), requires_grad=True - ) - - if dim_text is not None: - self.norm1_text = norm_layer(dim_text) - self.norm2_text = norm_layer(dim_text) - mlp_hidden_dim_text = int(dim_text * mlp_ratio) - self.mlp_text = Mlp(in_features=dim_text, hidden_features=mlp_hidden_dim_text, act_layer=act_layer, - drop=drop) - self.gamma_text = 1.0 - if layer_scale: - self.gamma_text = nn.Parameter( - 1e-4*torch.ones(dim_text), requires_grad=True - ) - - def forward(self, x, mask_matrix, x_text, mask_text): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. - x_text: Input text feature, tensor size (B, L_text, C_text). L_text: Number of text tokens. - mask_text: text mask (vector right now). 
- """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - if x_text is not None: - B, L_text, C_text = x_text.shape - shortcut_text = x_text - x_text = self.norm1_text(x_text) - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows, x_text = self.attn(x_windows, mask=attn_mask, x_text=x_text, - mask_text=mask_text) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(self.gamma*x) - x = x + self.drop_path(self.gamma*self.mlp(self.norm2(x))) - - if x_text is not None: - x_text = shortcut_text + self.drop_path(self.gamma_text*x_text) - x_text = x_text + self.drop_path(self.gamma_text*self.mlp_text(self.norm2_text(x_text))) - - return x, x_text - - -class PatchMerging(nn.Module): - """ Patch Merging Layer - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. 
- drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop=0., - attn_drop=0., - drop_path=0., - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False, - layer_scale=False, - ntext=None, - dim_text=None): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer, - layer_scale=layer_scale, - ntext=ntext, - dim_text=dim_text) - for i in range(depth)]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(patch_size=3, in_chans=dim, embed_dim=dim*2, - stride=2, padding=1, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W, x_text=None, mask_text=None): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - x_text: input text features with shape of (B_text, N_text, C_text) - mask_text: (0/-inf) mask with shape of (B_text, N_text) or None; - """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x, x_text = checkpoint.checkpoint(blk, x, attn_mask, x_text, mask_text) - else: - x, x_text = blk(x, attn_mask, x_text, mask_text) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww, x_text - else: - return x, H, W, x, H, W, x_text - - -# class PatchEmbed(nn.Module): -# """ Image to Patch Embedding -# Args: -# patch_size (int): Patch token size. Default: 4. -# in_chans (int): Number of input image channels. Default: 3. -# embed_dim (int): Number of linear projection output channels. Default: 96. 
-# norm_layer (nn.Module, optional): Normalization layer. Default: None -# """ -# -# def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): -# super().__init__() -# patch_size = to_2tuple(patch_size) -# self.patch_size = patch_size -# -# self.in_chans = in_chans -# self.embed_dim = embed_dim -# -# self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) -# if norm_layer is not None: -# self.norm = norm_layer(embed_dim) -# else: -# self.norm = None -# -# def forward(self, x): -# """Forward function.""" -# # padding -# _, _, H, W = x.size() -# if W % self.patch_size[1] != 0: -# x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) -# if H % self.patch_size[0] != 0: -# x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) -# -# x = self.proj(x) # B C Wh Ww -# if self.norm is not None: -# Wh, Ww = x.size(2), x.size(3) -# x = x.flatten(2).transpose(1, 2) -# x = self.norm(x) -# x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) -# -# return x - - -class ConvEmbed(nn.Module): - """ Image to Patch Embedding - """ - - def __init__( - self, - patch_size=7, - in_chans=3, - embed_dim=64, - stride=4, - padding=2, - norm_layer=None - ): - super().__init__() - self.patch_size = patch_size - self.embed_dim = embed_dim - - self.proj = nn.Conv2d( - in_chans, embed_dim, - kernel_size=patch_size, - stride=stride, - padding=padding - ) - self.norm = norm_layer(embed_dim) if norm_layer else None - - def forward(self, x, H=None, W=None): - restore_hw = False - if H is None and W is None and len(x.size()) == 4: - _, _, H, W = x.size() - if W % self.patch_size != 0: - x = F.pad(x, (0, self.patch_size - W % self.patch_size)) - if H % self.patch_size != 0: - x = F.pad(x, (0, 0, 0, self.patch_size - H % self.patch_size)) - restore_hw = True - - if len(x.size()) == 3: - x = rearrange( - x, 'b (h w) c -> b c h w', - h=H, - w=W - ) - x = self.proj(x) # B C Wh Ww - B, C, Wh, Ww = x.shape - x = rearrange(x, 'b c h w -> b (h w) c') - if self.norm: - x = self.norm(x) - - if restore_hw: - x = rearrange( - x, 'b (h w) c -> b c h w', - h=Wh, - w=Ww - ) - - return x - - -class SwinTransformer(nn.Module): - """ Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. - attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. 
- out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - pretrain_img_size=224, - patch_size=7, - patch_padding=2, - patch_stride=4, - in_chans=3, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - frozen_stages=-1, - use_checkpoint=False, - layer_scale=False, - out_features=["stage2", "stage3", "stage4", "stage5"], - out_norm=True, - backbone_arch="SWINT-FPN-RETINANET", - max_query_len=None, - lang_dim=None): - super(SwinTransformer, self).__init__() - - print("VISION BACKBONE USE GRADIENT CHECKPOINTING: ", use_checkpoint) - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.frozen_stages = frozen_stages - - self.out_features = out_features - self.out_norm = out_norm - - # split image into non-overlapping patches - # self.patch_embed = PatchEmbed( - # patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, - # norm_layer=norm_layer if self.patch_norm else None) - self.patch_embed = ConvEmbed( - patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, padding=patch_padding, - norm_layer=norm_layer if self.patch_norm else None - ) - - # absolute position embedding - if self.ape: - pretrain_img_size = to_2tuple(pretrain_img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]] - - self.absolute_pos_embed = nn.Parameter( - torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])) - trunc_normal_(self.absolute_pos_embed, std=.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - self._out_feature_strides = {} - self._out_feature_channels = {} - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - if i_layer < self.num_layers - 1: - ntext, dim_text = None, None - else: - ntext, dim_text = max_query_len, lang_dim - layer = BasicLayer( - dim=int(embed_dim * 2 ** i_layer), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], - norm_layer=norm_layer, - downsample=ConvEmbed if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint and i_layer > self.frozen_stages - 1, - layer_scale=layer_scale, - ntext=ntext, - dim_text=dim_text - ) - self.layers.append(layer) - - stage = f'stage{i_layer + 2}' - if stage in self.out_features: - self._out_feature_channels[stage] = embed_dim * 2 ** i_layer - self._out_feature_strides[stage] = 4 * 2 ** i_layer - - num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] - self.num_features = num_features - - # add a norm layer for each output - if self.out_norm: - for i_layer in range(self.num_layers): - stage = f'stage{i_layer + 2}' - if stage in self.out_features: - if i_layer == 0 and 
backbone_arch.endswith("RETINANET"): - layer = nn.Identity() - else: - layer = norm_layer(num_features[i_layer]) - layer_name = f'norm{i_layer}' - self.add_module(layer_name, layer) - - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - - def _init_weights(m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - self.apply(_init_weights) - - def forward(self, inputs): - """Forward function.""" - x = inputs["img"] - language_dict_features = inputs["lang"] - - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic') - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - x_text = language_dict_features['hidden'] - if "masks" in language_dict_features: - mask_text = 1.0 - language_dict_features["masks"] # (B, N_text) 0 means not to be masked out - mask_text.masked_fill_(mask_text.bool(), -float('inf')) - else: - mask_text = None - - outs = [] - for i in range(self.num_layers): - layer = self.layers[i] - if i < self.num_layers - 1: - x_out, H, W, x, Wh, Ww, _ = layer(x, Wh, Ww, x_text=None, mask_text=None) - else: - x_out, H, W, x, Wh, Ww, x_text = layer(x, Wh, Ww, x_text=x_text, mask_text=mask_text) - name = f'stage{i + 2}' - if name in self.out_features: - if self.out_norm: - norm_layer = getattr(self, f'norm{i}') - x_out = norm_layer(x_out) - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - outs.append(out) - - # the backbone only update the "hidden" field, currently - language_dict_features['hidden'] = x_text - - return outs, language_dict_features - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - - -def build_swint_backbone(cfg): - """ - Create a SwinT instance from config. - - Returns: - VoVNet: a :class:`VoVNet` instance. 
- """ - return SwinTransformer( - patch_size=7, - patch_padding=2, - patch_stride=4, - in_chans=3, - embed_dim=cfg.MODEL.SWINT.EMBED_DIM, - depths=cfg.MODEL.SWINT.DEPTHS, - num_heads=cfg.MODEL.SWINT.NUM_HEADS, - window_size=cfg.MODEL.SWINT.WINDOW_SIZE, - mlp_ratio=cfg.MODEL.SWINT.MLP_RATIO, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=cfg.MODEL.SWINT.DROP_PATH_RATE, - norm_layer=nn.LayerNorm, - ape=cfg.MODEL.SWINT.APE, - patch_norm=True, - frozen_stages=cfg.MODEL.BACKBONE.FREEZE_CONV_BODY_AT, - backbone_arch=cfg.MODEL.BACKBONE.CONV_BODY, - use_checkpoint=cfg.MODEL.BACKBONE.USE_CHECKPOINT, - layer_scale=cfg.MODEL.SWINT.LAYER_SCALE, - out_features=cfg.MODEL.BACKBONE.OUT_FEATURES, - out_norm=cfg.MODEL.SWINT.OUT_NORM, - max_query_len=cfg.MODEL.LANGUAGE_BACKBONE.MAX_QUERY_LEN, - lang_dim=cfg.MODEL.LANGUAGE_BACKBONE.LANG_DIM - ) \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/datasets/__init__.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/datasets/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/README.md b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/README.md deleted file mode 100644 index 36263bd87401a98f273831f4ec98fcb5c65d3412..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/README.md +++ /dev/null @@ -1,31 +0,0 @@ - -Here are a few projects that are built on detectron2. -They are examples of how to use detectron2 as a library, to make your projects more -maintainable. - -## Projects by Facebook - -Note that these are research projects, and therefore may not have the same level -of support or stability of detectron2. - -+ [DensePose: Dense Human Pose Estimation In The Wild](DensePose) -+ [Scale-Aware Trident Networks for Object Detection](TridentNet) -+ [TensorMask: A Foundation for Dense Object Segmentation](TensorMask) -+ [Mesh R-CNN](https://github.com/facebookresearch/meshrcnn) -+ [PointRend: Image Segmentation as Rendering](PointRend) -+ [Momentum Contrast for Unsupervised Visual Representation Learning](https://github.com/facebookresearch/moco/tree/master/detection) - - -## External Projects - -External projects in the community that use detectron2: - - - -+ [VoVNet backbones](https://github.com/youngwanLEE/vovnet-detectron2). -+ [AdelaiDet](https://github.com/aim-uofa/adet), a detection toolbox from the Universtiy of Adelaide. 
-+ [CenterMask : Real-Time Anchor-Free Instance Segmentation](https://github.com/youngwanLEE/centermask2) diff --git a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/segment/__init__.py b/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/segment/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/hkunlp/Binder/utils/mmqa/__init__.py b/spaces/hkunlp/Binder/utils/mmqa/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/cascade/__init__.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/cascade/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/housexu123/bingo-2.0/src/components/ui/separator.tsx b/spaces/housexu123/bingo-2.0/src/components/ui/separator.tsx deleted file mode 100644 index 6c55e0b2ca8e2436658a06748aadbff7cd700db0..0000000000000000000000000000000000000000 --- a/spaces/housexu123/bingo-2.0/src/components/ui/separator.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SeparatorPrimitive from '@radix-ui/react-separator' - -import { cn } from '@/lib/utils' - -const Separator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->( - ( - { className, orientation = 'horizontal', decorative = true, ...props }, - ref - ) => ( - - ) -) -Separator.displayName = SeparatorPrimitive.Root.displayName - -export { Separator } diff --git a/spaces/hu-po/speech2speech/src/openailib.py b/spaces/hu-po/speech2speech/src/openailib.py deleted file mode 100644 index 75acdf1eca37f2813c94799d865671c9a7cd81c2..0000000000000000000000000000000000000000 --- a/spaces/hu-po/speech2speech/src/openailib.py +++ /dev/null @@ -1,58 +0,0 @@ -import logging -import os - -from .utils import timeit - -import openai - -logging.basicConfig(level=logging.INFO) -log = logging.getLogger(__name__) - - -def set_openai_key(openai_api_key_textbox = None): - log.info(f"Setting OpenAI key.") - if openai_api_key_textbox is not None: - os.environ["OPENAI_API_KEY"] = openai_api_key_textbox - try: - openai.api_key = os.getenv("OPENAI_API_KEY") - except KeyError as e: - log.warning("OPENAI_API_KEY not found in environment variables.") - pass - -set_openai_key() - -@timeit -def speech_to_text(audio_path): - log.info("Transcribing audio...") - transcript = openai.Audio.transcribe("whisper-1", open(audio_path, "rb")) - text = transcript["text"] - log.info(f"Transcript: \n\t{text}") - return text - - -@timeit -def top_response(prompt, system=None, model="gpt-3.5-turbo", max_tokens=20, temperature=0.8): - _prompt = [ - { - "role": "user", - "content": prompt, - }, - ] - if system: - _prompt = [ - { - "role": "system", - "content": system, - }, - ] + _prompt - log.info(f"API call to {model} with prompt: \n\n\t{_prompt}\n\n") - _response = openai.ChatCompletion.create( - model=model, - messages=_prompt, - temperature=temperature, - n=1, - max_tokens=max_tokens, - ) - log.info(f"API reponse: \n\t{_response}") - response: str = _response['choices'][0]['message']['content'] - return response diff --git a/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/assets/_layout-e397cbf6.css 
b/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/assets/_layout-e397cbf6.css deleted file mode 100644 index 7d58639711a30204720b05b7a39e65f3ffde151a..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/assets/_layout-e397cbf6.css +++ /dev/null @@ -1 +0,0 @@ -*,:before,:after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}:before,:after{--tw-content: ""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,"Apple Color Emoji","Segoe UI Emoji",Segoe UI Symbol,"Noto Color Emoji"}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,samp,pre{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dl,dd,h1,h2,h3,h4,h5,h6,hr,figure,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}ol,ul,menu{list-style:none;margin:0;padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}button,[role=button]{cursor:pointer}:disabled{cursor:default}img,svg,video,canvas,audio,iframe,embed,object{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[type=text],[type=email],[type=url],[type=password],[type=number],[type=date],[type=datetime-local],[type=month],[type=search],[type=tel],[type=time],[type=week],[multiple],textarea,select{-webkit-appearance:none;-moz-appearance:none;appearance:none;background-color:#fff;border-color:#6b7280;border-width:1px;border-radius:0;padding:.5rem .75rem;font-size:1rem;line-height:1.5rem;--tw-shadow: 0 0 #0000}[type=text]:focus,[type=email]:focus,[type=url]:focus,[type=password]:focus,[type=number]:focus,[type=date]:focus,[type=datetime-local]:focus,[type=month]:focus,[type=search]:focus,[type=tel]:focus,[type=time]:focus,[type=week]:focus,[multiple]:focus,textarea:focus,select:focus{outline:2px solid transparent;outline-offset:2px;--tw-ring-inset: var(--tw-empty, );--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: #2563eb;--tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow: 
var(--tw-ring-inset) 0 0 0 calc(1px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow);border-color:#2563eb}input::-moz-placeholder,textarea::-moz-placeholder{color:#6b7280;opacity:1}input::placeholder,textarea::placeholder{color:#6b7280;opacity:1}::-webkit-datetime-edit-fields-wrapper{padding:0}::-webkit-date-and-time-value{min-height:1.5em}::-webkit-datetime-edit,::-webkit-datetime-edit-year-field,::-webkit-datetime-edit-month-field,::-webkit-datetime-edit-day-field,::-webkit-datetime-edit-hour-field,::-webkit-datetime-edit-minute-field,::-webkit-datetime-edit-second-field,::-webkit-datetime-edit-millisecond-field,::-webkit-datetime-edit-meridiem-field{padding-top:0;padding-bottom:0}select{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' fill='none' viewBox='0 0 20 20'%3e%3cpath stroke='%236b7280' stroke-linecap='round' stroke-linejoin='round' stroke-width='1.5' d='M6 8l4 4 4-4'/%3e%3c/svg%3e");background-position:right .5rem center;background-repeat:no-repeat;background-size:1.5em 1.5em;padding-right:2.5rem;-webkit-print-color-adjust:exact;print-color-adjust:exact}[multiple]{background-image:initial;background-position:initial;background-repeat:unset;background-size:initial;padding-right:.75rem;-webkit-print-color-adjust:unset;print-color-adjust:unset}[type=checkbox],[type=radio]{-webkit-appearance:none;-moz-appearance:none;appearance:none;padding:0;-webkit-print-color-adjust:exact;print-color-adjust:exact;display:inline-block;vertical-align:middle;background-origin:border-box;-webkit-user-select:none;-moz-user-select:none;user-select:none;flex-shrink:0;height:1rem;width:1rem;color:#2563eb;background-color:#fff;border-color:#6b7280;border-width:1px;--tw-shadow: 0 0 #0000}[type=checkbox]{border-radius:0}[type=radio]{border-radius:100%}[type=checkbox]:focus,[type=radio]:focus{outline:2px solid transparent;outline-offset:2px;--tw-ring-inset: var(--tw-empty, );--tw-ring-offset-width: 2px;--tw-ring-offset-color: #fff;--tw-ring-color: #2563eb;--tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow)}[type=checkbox]:checked,[type=radio]:checked{border-color:transparent;background-color:currentColor;background-size:100% 100%;background-position:center;background-repeat:no-repeat}[type=checkbox]:checked{background-image:url("data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3cpath d='M12.207 4.793a1 1 0 010 1.414l-5 5a1 1 0 01-1.414 0l-2-2a1 1 0 011.414-1.414L6.5 9.086l4.293-4.293a1 1 0 011.414 0z'/%3e%3c/svg%3e")}[type=radio]:checked{background-image:url("data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3ccircle cx='8' cy='8' r='3'/%3e%3c/svg%3e")}[type=checkbox]:checked:hover,[type=checkbox]:checked:focus,[type=radio]:checked:hover,[type=radio]:checked:focus{border-color:transparent;background-color:currentColor}[type=checkbox]:indeterminate{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' fill='none' viewBox='0 0 16 16'%3e%3cpath stroke='white' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='M4 8h8'/%3e%3c/svg%3e");border-color:transparent;background-color:currentColor;background-size:100% 
100%;background-position:center;background-repeat:no-repeat}[type=checkbox]:indeterminate:hover,[type=checkbox]:indeterminate:focus{border-color:transparent;background-color:currentColor}[type=file]{background:unset;border-color:inherit;border-width:0;border-radius:0;padding:0;font-size:unset;line-height:inherit}[type=file]:focus{outline:1px solid ButtonText;outline:1px auto -webkit-focus-ring-color}*,:before,:after{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::-webkit-backdrop{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.absolute{position:absolute}.relative{position:relative}.bottom-0{bottom:0px}.top-0{top:0px}.z-0{z-index:0}.z-50{z-index:50}.col-span-6{grid-column:span 6 / span 6}.col-span-4{grid-column:span 4 / span 4}.col-span-2{grid-column:span 2 / span 
2}.row-start-1{grid-row-start:1}.row-start-3{grid-row-start:3}.row-start-2{grid-row-start:2}.row-start-4{grid-row-start:4}.m-1{margin:.25rem}.mx-auto{margin-left:auto;margin-right:auto}.my-3{margin-top:.75rem;margin-bottom:.75rem}.my-10{margin-top:2.5rem;margin-bottom:2.5rem}.mr-1\.5{margin-right:.375rem}.mr-1{margin-right:.25rem}.ml-1\.5{margin-left:.375rem}.ml-1{margin-left:.25rem}.ml-2{margin-left:.5rem}.ml-3{margin-left:.75rem}.mt-6{margin-top:1.5rem}.mb-4{margin-bottom:1rem}.block{display:block}.inline-block{display:inline-block}.flex{display:flex}.grid{display:grid}.aspect-square{aspect-ratio:1 / 1}.h-3{height:.75rem}.w-full{width:100%}.w-3{width:.75rem}.min-w-\[9ch\]{min-width:9ch}.min-w-\[3ch\]{min-width:3ch}.max-w-full{max-width:100%}.max-w-prose{max-width:65ch}.max-w-\[100px\]{max-width:100px}.max-w-screen-md{max-width:768px}.max-w-\[1rem\]{max-width:1rem}.grow{flex-grow:1}.rotate-180{--tw-rotate: 180deg;transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skew(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}.transform{transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skew(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}@-webkit-keyframes spin{to{transform:rotate(360deg)}}@keyframes spin{to{transform:rotate(360deg)}}.animate-spin{-webkit-animation:spin 1s linear infinite;animation:spin 1s linear infinite}.cursor-pointer{cursor:pointer}.select-none{-webkit-user-select:none;-moz-user-select:none;user-select:none}.grid-cols-6{grid-template-columns:repeat(6,minmax(0,1fr))}.flex-col{flex-direction:column}.items-center{align-items:center}.justify-center{justify-content:center}.justify-around{justify-content:space-around}.gap-3{gap:.75rem}.gap-4{gap:1rem}.space-x-2>:not([hidden])~:not([hidden]){--tw-space-x-reverse: 0;margin-right:calc(.5rem * var(--tw-space-x-reverse));margin-left:calc(.5rem * calc(1 - var(--tw-space-x-reverse)))}.rounded-full{border-radius:9999px}.rounded-2xl{border-radius:1rem}.rounded-lg{border-radius:.5rem}.border{border-width:1px}.border-2{border-width:2px}.border-b{border-bottom-width:1px}.border-black{--tw-border-opacity: 1;border-color:rgb(0 0 0 / var(--tw-border-opacity))}.border-gray-200{--tw-border-opacity: 1;border-color:rgb(229 231 235 / var(--tw-border-opacity))}.bg-white{--tw-bg-opacity: 1;background-color:rgb(255 255 255 / var(--tw-bg-opacity))}.bg-black{--tw-bg-opacity: 1;background-color:rgb(0 0 0 / var(--tw-bg-opacity))}.bg-slate-900{--tw-bg-opacity: 1;background-color:rgb(15 23 42 / var(--tw-bg-opacity))}.px-2{padding-left:.5rem;padding-right:.5rem}.py-2{padding-top:.5rem;padding-bottom:.5rem}.px-3{padding-left:.75rem;padding-right:.75rem}.py-8{padding-top:2rem;padding-bottom:2rem}.py-3{padding-top:.75rem;padding-bottom:.75rem}.px-2\.5{padding-left:.625rem;padding-right:.625rem}.py-1{padding-top:.25rem;padding-bottom:.25rem}.pl-1{padding-left:.25rem}.pb-3{padding-bottom:.75rem}.text-center{text-align:center}.text-right{text-align:right}.text-xs{font-size:.75rem;line-height:1rem}.text-base{font-size:1rem;line-height:1.5rem}.text-3xl{font-size:1.875rem;line-height:2.25rem}.text-sm{font-size:.875rem;line-height:1.25rem}.font-bold{font-weight:700}.font-semibold{font-weight:600}.uppercase{text-transform:uppercase}.italic{font-style:italic}.leading-normal{line-height:1.5}.text-black{--tw-text-opacity: 1;color:rgb(0 0 0 / var(--tw-text-opacity))}.text-white{--tw-text-opacity: 
1;color:rgb(255 255 255 / var(--tw-text-opacity))}.underline{text-decoration-line:underline}.shadow-sm{--tw-shadow: 0 1px 2px 0 rgb(0 0 0 / .05);--tw-shadow-colored: 0 1px 2px 0 var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.filter{filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow)}.line-clamp-3{overflow:hidden;display:-webkit-box;-webkit-box-orient:vertical;-webkit-line-clamp:3}.placeholder\:text-white::-moz-placeholder{--tw-text-opacity: 1;color:rgb(255 255 255 / var(--tw-text-opacity))}.placeholder\:text-white::placeholder{--tw-text-opacity: 1;color:rgb(255 255 255 / var(--tw-text-opacity))}.placeholder\:text-opacity-30::-moz-placeholder{--tw-text-opacity: .3}.placeholder\:text-opacity-30::placeholder{--tw-text-opacity: .3}.hover\:bg-gray-50:hover{--tw-bg-opacity: 1;background-color:rgb(249 250 251 / var(--tw-bg-opacity))}.hover\:text-gray-500:hover{--tw-text-opacity: 1;color:rgb(107 114 128 / var(--tw-text-opacity))}.hover\:no-underline:hover{text-decoration-line:none}.focus\:border-gray-400:focus{--tw-border-opacity: 1;border-color:rgb(156 163 175 / var(--tw-border-opacity))}.focus\:outline-none:focus{outline:2px solid transparent;outline-offset:2px}.disabled\:opacity-50:disabled{opacity:.5}@media (prefers-color-scheme: dark){.dark\:border-white{--tw-border-opacity: 1;border-color:rgb(255 255 255 / var(--tw-border-opacity))}.dark\:bg-\[rgb\(11\,15\,25\)\]{--tw-bg-opacity: 1;background-color:rgb(11 15 25 / var(--tw-bg-opacity))}.dark\:bg-white{--tw-bg-opacity: 1;background-color:rgb(255 255 255 / var(--tw-bg-opacity))}.dark\:bg-black{--tw-bg-opacity: 1;background-color:rgb(0 0 0 / var(--tw-bg-opacity))}.dark\:bg-slate-300{--tw-bg-opacity: 1;background-color:rgb(203 213 225 / var(--tw-bg-opacity))}.dark\:text-white{--tw-text-opacity: 1;color:rgb(255 255 255 / var(--tw-text-opacity))}.dark\:text-black{--tw-text-opacity: 1;color:rgb(0 0 0 / var(--tw-text-opacity))}.dark\:placeholder\:text-black::-moz-placeholder{--tw-text-opacity: 1;color:rgb(0 0 0 / var(--tw-text-opacity))}.dark\:placeholder\:text-black::placeholder{--tw-text-opacity: 1;color:rgb(0 0 0 / var(--tw-text-opacity))}.dark\:placeholder\:text-opacity-10::-moz-placeholder{--tw-text-opacity: .1}.dark\:placeholder\:text-opacity-10::placeholder{--tw-text-opacity: .1}.dark\:hover\:bg-gray-800:hover{--tw-bg-opacity: 1;background-color:rgb(31 41 55 / var(--tw-bg-opacity))}}@media (min-width: 640px){.sm\:justify-center{justify-content:center}}@media (min-width: 768px){.md\:col-span-4{grid-column:span 4 / span 4}.md\:col-span-2{grid-column:span 2 / span 2}.md\:col-span-5{grid-column:span 5 / span 5}.md\:col-span-1{grid-column:span 1 / span 1}.md\:col-start-5{grid-column-start:5}.md\:row-start-2{grid-row-start:2}.md\:justify-end{justify-content:flex-end}} diff --git a/spaces/hussain-shk/IndiSent/scripts/postprocess_score.py b/spaces/hussain-shk/IndiSent/scripts/postprocess_score.py deleted file mode 100644 index f73002d98b59c3477dc664163095a60c163f8748..0000000000000000000000000000000000000000 --- a/spaces/hussain-shk/IndiSent/scripts/postprocess_score.py +++ /dev/null @@ -1,48 +0,0 @@ -import sys - -def postprocess( - infname, outfname, input_size -): - """ - parse fairseq interactive output, convert script back to native Indic script (in case of Indic languages) and detokenize. 
- - infname: fairseq log file - outfname: output file of translation (sentences not translated contain the dummy string 'DUMMY_OUTPUT' - input_size: expected number of output sentences - """ - - consolidated_testoutput = [] - # with open(infname,'r',encoding='utf-8') as infile: - # consolidated_testoutput= list(map(lambda x: x.strip(), filter(lambda x: x.startswith('H-'),infile) )) - # consolidated_testoutput.sort(key=lambda x: int(x.split('\t')[0].split('-')[1])) - # consolidated_testoutput=[ x.split('\t')[2] for x in consolidated_testoutput ] - - consolidated_testoutput = [(x, 0.0, "") for x in range(input_size)] - temp_testoutput = [] - with open(infname, "r", encoding="utf-8") as infile: - temp_testoutput = list( - map( - lambda x: x.strip().split("\t"), - filter(lambda x: x.startswith("H-"), infile), - ) - ) - temp_testoutput = list( - map(lambda x: (int(x[0].split("-")[1]), float(x[1]), x[2]), temp_testoutput) - ) - for sid, score, hyp in temp_testoutput: - consolidated_testoutput[sid] = (sid, score, hyp) - #consolidated_testoutput = [x[2] for x in consolidated_testoutput] - - with open(outfname, "w", encoding="utf-8") as outfile: - for (sid, score, hyp) in consolidated_testoutput: - outfile.write("{}\n".format(score)) - -if __name__ == "__main__": - - infname = sys.argv[1] - outfname = sys.argv[2] - input_size = int(sys.argv[3]) - - postprocess( - infname, outfname, input_size - ) diff --git a/spaces/hysts/age-estimation-APPA-REAL/README.md b/spaces/hysts/age-estimation-APPA-REAL/README.md deleted file mode 100644 index 081c0a0a0605e0e75d898a51ed3cbac5c5dfc97f..0000000000000000000000000000000000000000 --- a/spaces/hysts/age-estimation-APPA-REAL/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Age Estimation APPA REAL -emoji: 🦀 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/igtsolutions/igtsolutions/style.css b/spaces/igtsolutions/igtsolutions/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/igtsolutions/igtsolutions/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/imseldrith/DeepFakeAI/DeepFakeAI/wording.py b/spaces/imseldrith/DeepFakeAI/DeepFakeAI/wording.py deleted file mode 100644 index 1d70363ea7546eeb3b3ec224eb04848db727718e..0000000000000000000000000000000000000000 --- a/spaces/imseldrith/DeepFakeAI/DeepFakeAI/wording.py +++ /dev/null @@ -1,88 +0,0 @@ -WORDING =\ -{ - 'python_not_supported': 'Python version is not supported, upgrade to {version} or higher', - 'ffmpeg_not_installed': 'FFMpeg is not installed', - 'source_help': 'select a source image', - 'target_help': 'select a target image or video', - 'output_help': 'specify the output file or directory', - 'frame_processors_help': 'choose from the available frame processors (choices: {choices}, ...)', - 'ui_layouts_help': 'choose from the available ui layouts (choices: {choices}, ...)', - 'keep_fps_help': 'preserve the frames per second (fps) of 
the target', - 'keep_temp_help': 'retain temporary frames after processing', - 'skip_audio_help': 'omit audio from the target', - 'face_recognition_help': 'specify the method for face recognition', - 'face_analyser_direction_help': 'specify the direction used for face analysis', - 'face_analyser_age_help': 'specify the age used for face analysis', - 'face_analyser_gender_help': 'specify the gender used for face analysis', - 'reference_face_position_help': 'specify the position of the reference face', - 'reference_face_distance_help': 'specify the distance between the reference face and the target face', - 'reference_frame_number_help': 'specify the number of the reference frame', - 'trim_frame_start_help': 'specify the start frame for extraction', - 'trim_frame_end_help': 'specify the end frame for extraction', - 'temp_frame_format_help': 'specify the image format used for frame extraction', - 'temp_frame_quality_help': 'specify the image quality used for frame extraction', - 'output_video_encoder_help': 'specify the encoder used for the output video', - 'output_video_quality_help': 'specify the quality used for the output video', - 'max_memory_help': 'specify the maximum amount of ram to be used (in gb)', - 'execution_providers_help': 'choose from the available execution providers (choices: {choices}, ...)', - 'execution_thread_count_help': 'specify the number of execution threads', - 'execution_queue_count_help': 'specify the number of execution queries', - 'creating_temp': 'Creating temporary resources', - 'extracting_frames_fps': 'Extracting frames with {fps} FPS', - 'processing': 'Processing', - 'downloading': 'Downloading', - 'temp_frames_not_found': 'Temporary frames not found', - 'creating_video_fps': 'Creating video with {fps} FPS', - 'creating_video_failed': 'Creating video failed', - 'skipping_audio': 'Skipping audio', - 'restoring_audio': 'Restoring audio', - 'clearing_temp': 'Clearing temporary resources', - 'processing_image_succeed': 'Processing to image succeed', - 'processing_image_failed': 'Processing to image failed', - 'processing_video_succeed': 'Processing to video succeed', - 'processing_video_failed': 'Processing to video failed', - 'select_image_source': 'Select an image for source path', - 'select_image_or_video_target': 'Select an image or video for target path', - 'no_source_face_detected': 'No source face detected', - 'frame_processor_not_loaded': 'Frame processor {frame_processor} could not be loaded', - 'frame_processor_not_implemented': 'Frame processor {frame_processor} not implemented correctly', - 'ui_layout_not_loaded': 'UI layout {ui_layout} could not be loaded', - 'ui_layout_not_implemented': 'UI layout {ui_layout} not implemented correctly', - 'start_button_label': 'START', - 'clear_button_label': 'CLEAR', - 'benchmark_result_dataframe_label': 'BENCHMARK RESULT', - 'benchmark_cycles_slider_label': 'BENCHMARK CYCLES', - 'execution_providers_checkbox_group_label': 'EXECUTION PROVIDERS', - 'execution_thread_count_slider_label': 'EXECUTION THREAD COUNT', - 'execution_queue_count_slider_label': 'EXECUTION QUEUE COUNT', - 'face_analyser_direction_dropdown_label': 'FACE ANALYSER DIRECTION', - 'face_analyser_age_dropdown_label': 'FACE ANALYSER AGE', - 'face_analyser_gender_dropdown_label': 'FACE ANALYSER GENDER', - 'reference_face_gallery_label': 'REFERENCE FACE', - 'face_recognition_dropdown_label': 'FACE RECOGNITION', - 'reference_face_distance_slider_label': 'REFERENCE FACE DISTANCE', - 'output_image_or_video_label': 'OUTPUT', - 
'output_video_encoder_dropdown_label': 'OUTPUT VIDEO ENCODER', - 'output_video_quality_slider_label': 'OUTPUT VIDEO QUALITY', - 'preview_image_label': 'PREVIEW', - 'preview_frame_slider_label': 'PREVIEW FRAME', - 'frame_processors_checkbox_group_label': 'FRAME PROCESSORS', - 'keep_fps_checkbox_label': 'KEEP FPS', - 'keep_temp_checkbox_label': 'KEEP TEMP', - 'skip_audio_checkbox_label': 'SKIP AUDIO', - 'temp_frame_format_dropdown_label': 'TEMP FRAME FORMAT', - 'temp_frame_quality_slider_label': 'TEMP FRAME QUALITY', - 'trim_frame_start_slider_label': 'TRIM FRAME START', - 'trim_frame_end_slider_label': 'TRIM FRAME END', - 'source_file_label': 'SOURCE', - 'target_file_label': 'TARGET', - 'point': '.', - 'comma': ',', - 'colon': ':', - 'question_mark': '?', - 'exclamation_mark': '!' -} - - -def get(key : str) -> str: - return WORDING[key] diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Alphacam 2013 R1 SP2 CRACK - [JO3K].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Alphacam 2013 R1 SP2 CRACK - [JO3K].md deleted file mode 100644 index dfbf2e22811c5b60fda94807899ad93da1bef063..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Alphacam 2013 R1 SP2 CRACK - [JO3K].md +++ /dev/null @@ -1,58 +0,0 @@ -
    -

    Alphacam 2013 R1 SP2 CRACK - [JO3K]: A Review of the CAD/CAM Software

    - -

If you are looking for CAD/CAM software that can handle any CNC machine and any design project, you might want to check out Alphacam 2013 R1 SP2 CRACK - [JO3K]. This is a package that contains a cracked copy of Alphacam 2013 R1 SP2, a leading CAD/CAM program for the wood, stone, metal, and composite industries. It circulates on various torrent sites and is downloaded with a torrent client.

    - -

    In this article, we will review the features, benefits, and drawbacks of Alphacam 2013 R1 SP2 CRACK - [JO3K], and help you decide if it is the right software for you.

    -

    Alphacam 2013 R1 SP2 CRACK - [JO3K]


    Download Zip ☆☆☆☆☆ https://urlin.us/2uEyzq



    - -

    What is Alphacam 2013 R1 SP2 CRACK - [JO3K]?

    - -

    Alphacam 2013 R1 SP2 CRACK - [JO3K] is a software package that contains the cracked version of Alphacam 2013 R1 SP2, as well as the setup files and multilanguage support. You can download it from various torrent sites using a torrent client.

    - -

Alphacam 2013 R1 SP2 is the latest update to Alphacam 2013, a CAD/CAM package that provides solutions for the wood, stone, metal, and composite industries. It offers a user-friendly interface, a wide range of modules and toolpaths, and industry-specific features and functions.

    - -

    The cracked version of Alphacam 2013 R1 SP2 allows you to use the software without activation or license. You can also access all the features and modules of the software without any limitations or restrictions.

    - -

    What are the features of Alphacam 2013 R1 SP2 CRACK - [JO3K]?

    - -

    Alphacam 2013 R1 SP2 CRACK - [JO3K] has many features that make it a powerful and versatile CAD/CAM software. Some of them are:

    - -
      -
    • Modules: The software has different modules for different applications, such as milling, turning, routing, profiling, nesting, art, wire EDM, etc. You can choose the module that suits your needs and preferences.
    • -
    • Toolpaths: The software has various toolpaths for different operations, such as contouring, pocketing, drilling, engraving, sawing, etc. You can customize the toolpaths according to your requirements and specifications.
    • -
    • Simulation: The software has a simulation feature that allows you to preview and verify your toolpaths before sending them to the CNC machine. You can check for errors, collisions, or interferences and optimize your toolpaths accordingly.
    • -
    • Automation: The software has an automation feature that allows you to automate your tasks and workflows using macros, scripts, or custom commands. You can save time and effort by automating repetitive or complex tasks.
    • -
    • Integration: The software has an integration feature that allows you to import and export files from other CAD/CAM software or formats. You can also connect your software to your CNC machine using post processors or drivers.
    • -
    - -

    What are the benefits of Alphacam 2013 R1 SP2 CRACK - [JO3K]?

    - -

    Alphacam 2013 R1 SP2 CRACK - [JO3K] has many benefits that make it a worthwhile software to use. Some of them are:

    - -
      -
    • Compatibility: You can use the software with any CNC machine and any design project. You can also work with any file format that is supported by Alphacam, such as DWG, DXF, IGES, STL, etc.
    • -
    • Versatility: You can handle any application and operation with ease. You can also access different modules and toolpaths for different industries and materials.
    • -
    • Productivity: You can improve your workflow and efficiency with various tools and features that help you create, edit, simulate, automate, and integrate your toolpaths.
    • -
    • Affordability: You can save money by downloading the software from torrent sites for free. You can also use the software without activation or license.
    • -
    - -

    What are the drawbacks of Alphacam 2013 R1 SP2 CRACK - [JO3K]?

    - -

    Alphacam 2013 R1 SP2 CRACK - [JO3K] also has some drawbacks that you should be aware of before using it. Some of them are:

    -

    - -
      -
    • Risk of malware: You might download malicious files or viruses along with the software from torrent sites. You should always scan your files before opening them.
    • -
    • Lack of support: You might not get any technical support or updates from Alphacam if you use the cracked version of their software.
    • -
    • Limited functionality: You might not be able to access some advanced features or tools that are available in the latest version of Alphacam or other CAD/CAM software.
    • -
    • Ethical issues: You might be violating the terms and conditions of Alphacam or infringing their intellectual property rights by using their software without paying for it.
    • -
    - -

    Conclusion

    - -

    In conclusion, Alphacam 2013 R1 SP2 CRACK - [JO3K] is a software package that contains the cracked version of Alphacam 2013 R1 SP2, as well as the setup files and multilanguage support. It is a CAD/CAM software that provides solutions for wood, stone, metal, and composite industries. It has many features, benefits, and drawbacks that you should consider before using it.

    - -

    If you are looking for CAD/CAM software that can handle any CNC machine and any design project, you might want to try Alphacam 2013 R1 SP2 CRACK - [JO3K]. However, if you want a CAD/CAM package with fuller functionality and official support from Alphacam, you should look at other options.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Blu Meri Pyaari Bindu Hd Movie 1080p Hindi Movies VERIFIED.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Blu Meri Pyaari Bindu Hd Movie 1080p Hindi Movies VERIFIED.md deleted file mode 100644 index aa987370102bed88f875e65608b10eab9b9f2d7f..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Blu Meri Pyaari Bindu Hd Movie 1080p Hindi Movies VERIFIED.md +++ /dev/null @@ -1,14 +0,0 @@ -

    Blu Meri Pyaari Bindu Hd Movie 1080p Hindi Movies


    DOWNLOAD ->->->-> https://urlin.us/2uExAH



    - - 60: .. who stay in Rajasthan for a year .. 61: ... then move to India. Later I changed to writing. He spent almost all his time reading. 4fefd39f24
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Masterwriter 2.0 Activation Code Serial Number.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Masterwriter 2.0 Activation Code Serial Number.md deleted file mode 100644 index 8efb85c6b27c07916f0f331f0bf1a6cd77781ef4..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Masterwriter 2.0 Activation Code Serial Number.md +++ /dev/null @@ -1,117 +0,0 @@ -
    -

    How to Download and Install Masterwriter 2.0 with a Valid Activation Code and Serial Number

    - -

    Masterwriter 2.0 is a writing program that helps writers, musicians, and poets create original and creative content. It has features such as word families, phrases, synonyms, rhymes, speech types, pop culture, and more. It also helps you organize your projects, save your work online, and access it from any device.

    -

    Masterwriter 2.0 Activation Code Serial Number


    Download ★★★★★ https://urlin.us/2uEwrp



    - -

    If you want to use Masterwriter 2.0, you need to have a valid activation code and serial number. These are codes that you can get from the official website or from other sources. They are used to verify your purchase and activate the software on your computer.

    - -

    In this article, we will show you how to download and install Masterwriter 2.0 with a valid activation code and serial number. We will also give you some tips and tricks for successful installation and usage.

    - -

    Step 1: Download Masterwriter 2.0

    - -

    The first step is to download Masterwriter 2.0 from the official website or from other sources. You can choose between the 32-bit or the 64-bit version depending on your operating system. The file size is about 200 MB.

    - -

    To download Masterwriter 2.0 from the official website, you need to go to https://masterwriter.com/ and click on the "Download" button. You will be asked to enter your email address and password if you have an account, or to create one if you don't. You will also be asked to choose your payment method and enter your billing information.

    -

    - -

    To download Masterwriter 2.0 from other sources, you need to find a reliable website that offers the software for free or for a discounted price. You can use search engines or online forums to find such websites. However, be careful of scams, viruses, and malware that may harm your computer or steal your personal information.

    - -

    Step 2: Install Masterwriter 2.0

    - -

    The second step is to install Masterwriter 2.0 on your computer. You need to have administrator rights to do this.

    - -

    To install Masterwriter 2.0, you need to double-click on the downloaded file and follow the instructions on the screen. You will be asked to accept the terms and conditions, choose the destination folder, and create a shortcut on your desktop.

    - -

    The installation process may take a few minutes depending on your computer speed and internet connection.

    - -

    Step 3: Activate Masterwriter 2.0

    - -

    The third step is to activate Masterwriter 2.0 with a valid activation code and serial number. You need to have an internet connection to do this.

    - -

    To activate Masterwriter 2.0, you need to launch the software and enter your activation code and serial number when prompted. You can find these codes in your email confirmation, in your online account, or in other sources where you got the software.

    - -

    The activation code is a 16-digit alphanumeric code that looks like this: XXXX-XXXX-XXXX-XXXX

    - -

    The serial number is a 10-digit numeric code that looks like this: XXXXXXXXXX

    - -

    After entering these codes, click on the "Activate" button and wait for the verification process to complete.
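If you keep your codes in a text file, a quick format check can catch typos before you submit them. The following Python sketch only checks the shape described above; the function and variable names are illustrative, and it does not (and cannot) verify that a code is genuine:

```python
import re

# Shape check only, based on the formats quoted above.
ACTIVATION_RE = re.compile(r"^[A-Z0-9]{4}(-[A-Z0-9]{4}){3}$")
SERIAL_RE = re.compile(r"^\d{10}$")

def looks_well_formed(activation_code: str, serial_number: str) -> bool:
    """Return True if both codes match the documented format."""
    return bool(ACTIVATION_RE.match(activation_code.strip().upper())
                and SERIAL_RE.match(serial_number.strip()))

print(looks_well_formed("AB12-CD34-EF56-GH78", "0123456789"))  # True
print(looks_well_formed("AB12CD34EF56GH78", "12345"))          # False
```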

    - -

    Tips and Tricks for Successful Installation and Usage

    - -

    Here are some tips and tricks for successful installation and usage of Masterwriter 2.0:

    - -
      -
    • Make sure you have enough disk space and memory on your computer before downloading and installing the software.
    • -
    • Make sure you have a stable internet connection during the download, installation, and activation process.
    • -
    • Make sure you have a backup of your activation code and serial number in case you lose them or need to reinstall the software.
    • -
    • Make sure you update the software regularly to get the latest features and bug fixes.
    • -
    • Make sure you use the software legally and ethically according to the terms and conditions.
    • -
    • Make sure you enjoy using the software and unleash your creativity!
    • -
    - -

    We hope this article helped you to download and install Masterwriter 2.0 with a valid activation code and serial number. If you have any questions or problems, please contact the customer support team at support@masterwriter.com.

    -

    Benefits of Using Masterwriter 2.0

    - -

    Masterwriter 2.0 is not just a piece of software but a tool that can help you improve your writing skills and creativity. Here are some of the benefits of using Masterwriter 2.0:

    - -
      -
    • It helps you to find the right words, phrases, and expressions for your writing. You can use the word families, phrases, synonyms, rhymes, speech types, and pop culture features to expand your vocabulary and avoid repetition.
    • -
    • It helps you to write faster and easier. You can use the templates, outlines, and examples to get started with your writing project. You can also use the project manager, notes, and online backup to organize your work and access it from anywhere.
    • -
    • It helps you to write better and more original. You can use the dictionary, thesaurus, spell checker, grammar checker, and style checker to improve your writing quality and accuracy. You can also use the plagiarism checker to ensure your work is unique and original.
    • -
    • It helps you to write for different purposes and audiences. You can use the genre-specific features, such as songwriting, poetry, fiction, non-fiction, screenwriting, and academic writing to tailor your writing to your specific needs and goals.
    • -
    • It helps you to write with more confidence and inspiration. You can use the motivational quotes, inspirational images, and creative prompts to overcome writer's block and spark your imagination.
    • -
    - -

    With Masterwriter 2.0, you can write anything from songs, poems, stories, essays, articles, speeches, scripts, and more. You can also export your work to various formats, such as Word, PDF, HTML, RTF, TXT, etc.

    - -

    Conclusion

    - -

    Masterwriter 2.0 is a software that can help you create original and creative content with ease and efficiency. It has many features that can help you find the right words, write faster and easier, write better and more original, write for different purposes and audiences, and write with more confidence and inspiration.

    - -

    If you want to use Masterwriter 2.0, you need to have a valid activation code and serial number. These are codes that you can get from the official website or from other sources. They are used to verify your purchase and activate the software on your computer.

    - -

    In this article, we showed you how to download and install Masterwriter 2.0 with a valid activation code and serial number. We also gave you some tips and tricks for successful installation and usage.

    - -

    We hope this article helped you to learn more about Masterwriter 2.0 and how to use it effectively. If you have any questions or problems, please contact the customer support team at support@masterwriter.com.

    -

    How to Use Masterwriter 2.0 Effectively

    - -

    Masterwriter 2.0 is a software that can help you create original and creative content with ease and efficiency. However, to get the most out of it, you need to know how to use it effectively. Here are some tips on how to use Masterwriter 2.0 effectively:

    - -
      -
    • Explore the features and functions of Masterwriter 2.0. Masterwriter 2.0 has many features and functions that can help you with your writing project. You can access them from the main menu, the toolbar, or the right-click menu. You can also use the help menu or the online tutorials to learn more about them.
    • -
    • Customize the settings and preferences of Masterwriter 2.0. Masterwriter 2.0 allows you to customize the settings and preferences of the software according to your needs and preferences. You can change the font size, color, style, and layout of the interface. You can also adjust the language, dictionary, thesaurus, spell checker, grammar checker, style checker, and plagiarism checker options.
    • -
    • Use the search and filter options of Masterwriter 2.0. Masterwriter 2.0 allows you to search and filter the content and information that you need for your writing project. You can use the search box or the advanced search option to find words, phrases, synonyms, rhymes, speech types, pop culture, and more. You can also use the filter option to narrow down your results by genre, category, subcategory, mood, tone, etc.
    • -
    • Use the drag and drop feature of Masterwriter 2.0. Masterwriter 2.0 allows you to drag and drop the content and information that you need for your writing project from the software to your word processor or other applications. You can also drag and drop images, audio files, video files, and other media files from your computer or online sources to your writing project.
    • -
    • Use the feedback and review features of Masterwriter 2.0. Masterwriter 2.0 allows you to get feedback and review your writing project before you finalize it. You can use the dictionary, thesaurus, spell checker, grammar checker, style checker, and plagiarism checker features to check your writing quality and accuracy. You can also use the word count, readability score, keyword density, and other statistics features to measure your writing performance.
    • -
    - -

    By using these tips, you can use Masterwriter 2.0 effectively and efficiently.

    - -

    Frequently Asked Questions about Masterwriter 2.0

    - -

    Here are some frequently asked questions about Masterwriter 2.0:

    - -
      -
    1. What are the system requirements for Masterwriter 2.0?
    2. -

      Masterwriter 2.0 is compatible with Windows XP/Vista/7/8/10 and Mac OS X 10.6 or higher. You need to have at least 1 GB of RAM and 500 MB of free disk space on your computer.

      -
    3. How much does Masterwriter 2.0 cost?
    4. -

      Masterwriter 2.0 costs $99 for a one-year subscription or $199 for a lifetime license. You can also get a free trial for 10 days or a discounted price for students, teachers, schools, and organizations.

      -
    5. How can I contact Masterwriter 2.0 customer support?
    6. -

      You can contact Masterwriter 2.0 customer support by email at support@masterwriter.com, by phone at (805) 892-2656 or (866) 848-8484 (toll-free), or by mail at P.O.Box 60850 Santa Barbara CA 93160 USA.

      -
    7. What are some alternatives to Masterwriter 2.0?
    8. -

      Some alternatives to Masterwriter 2.0 are Scrivener, ProWritingAid, Grammarly, Hemingway Editor, Evernote, Google Docs, etc.

      -
    - -

    We hope this article answered some of your questions about Masterwriter 2.0.

    -

    Conclusion

    - -

    Masterwriter 2.0 is a software that can help you create original and creative content with ease and efficiency. It has many features and functions that can help you find the right words, write faster and easier, write better and more original, write for different purposes and audiences, and write with more confidence and inspiration.

    - -

    If you want to use Masterwriter 2.0, you need to have a valid activation code and serial number. These are codes that you can get from the official website or from other sources. They are used to verify your purchase and activate the software on your computer.

    - -

    In this article, we showed you how to download and install Masterwriter 2.0 with a valid activation code and serial number. We also gave you some tips and tricks for successful installation and usage. We also answered some frequently asked questions about Masterwriter 2.0.

    - -

    We hope this article helped you to learn more about Masterwriter 2.0 and how to use it effectively. If you have any questions or problems, please contact the customer support team at support@masterwriter.com.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Mise A Jour RT4 RT5 8.11 CD 3293 Ref 6574PK.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Mise A Jour RT4 RT5 8.11 CD 3293 Ref 6574PK.md deleted file mode 100644 index a188278d9b92279f8eb5badbaa37f765d389a095..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Mise A Jour RT4 RT5 8.11 CD 3293 Ref 6574PK.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Mise a jour RT4 RT5 8.11 CD 3293 Ref 6574PK


    Download Ziphttps://urlin.us/2uEvfZ



    -
    -mbaacc nodvd 1.4 · Mise a jour RT4 RT5 8.11 CD 3293 Ref 6574PK · Tdu Bmw E36 M3 3.2 Hd Download Game Pc. grotassegbull's Ownd. 1fdad05405
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Animaplanos Modulo 6 Solucion De Todos Los Ejercicios.md b/spaces/inreVtussa/clothingai/Examples/Animaplanos Modulo 6 Solucion De Todos Los Ejercicios.md deleted file mode 100644 index 93e09dc6b50c199809be0ec0b5ed47dd045234d4..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Animaplanos Modulo 6 Solucion De Todos Los Ejercicios.md +++ /dev/null @@ -1,149 +0,0 @@ -
    -

    Animaplanos Module 6: Solutions to All the Exercises

    - -

    Do you like mathematics and want to learn how to solve all the exercises in Animaplanos Module 6? Then this article is for you. Here we will show you complete, detailed solutions for every exercise in this module, which will help you improve your mathematical understanding and reasoning.

    - -

    Animaplanos is a complementary training program for practicing teachers that aims to strengthen their pedagogical and didactic skills in mathematics. Animaplanos is based on the use of geometric planes with animated figures, which represent mathematical concepts and operations in a playful, creative way.

    -

    animaplanos modulo 6 solucion de todos los ejercicios


    Download File 🗹 https://tiurll.com/2uCiuJ



    - -

    Module 6 of Animaplanos covers topics such as natural numbers, fractions, decimals, percentages, proportionality, plane geometry, measures of length, area and perimeter, graphs, and statistics. Each topic has a series of exercises that must be solved by following the instructions and the steps indicated in the workbook.

    - -

    How do you solve the exercises of Animaplanos Module 6?

    - -

    To solve the exercises of Animaplanos Module 6, keep the following in mind:

    - -
      -
    • Read each exercise carefully and understand what it asks for.
    • -
    • Use pencil and paper to do the necessary calculations and write down the answers.
    • -
    • Follow the numerical order of the exercises and do not skip any.
    • -
    • Check that the answers are correct and consistent with the exercise.
    • -
    • Mark on the corresponding plane the figure that represents the answer to the exercise.
    • -
    - -

    Below we show some examples of solved exercises from Animaplanos Module 6, with their solutions and explanations.

    - -

    Example 1: Exercise 1

    - -

    Exercise 1 says: "Write the number that corresponds to each letter".

    - -

    The solution is:

    - - - - -
| A | B | C | D | E | F |
| --- | --- | --- | --- | --- | --- |
| 2 | 4 | 6 | 8 | 10 | 12 |
    - -

    The explanation is: notice that the sequence consists of even numbers that increase by two each time. Therefore, the next even number is assigned to each letter, starting from 2.
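If you want to check the pattern yourself, the mapping can be reproduced with a few lines of Python (a minimal sketch; the variable names are only illustrative and are not part of the Animaplanos workbook):

```python
# Assign consecutive even numbers, starting at 2, to the letters A to F.
letters = ["A", "B", "C", "D", "E", "F"]
solution = {letter: 2 * (index + 1) for index, letter in enumerate(letters)}
print(solution)  # {'A': 2, 'B': 4, 'C': 6, 'D': 8, 'E': 10, 'F': 12}
```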

    - -

    Example 2: Exercise 5

    - -

    Exercise 5 says: "Write the decimal number that is formed by joining the digits of the natural number 1234".

    - -

    The solution is: 0.1234

    -

    - -

    The explanation is: the natural number 1234 has four digits. To form a decimal number with those digits, place a decimal point after the zero and then write the digits in the same order.
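The same construction can be checked numerically; this short Python sketch (illustrative only) divides the number by a power of ten equal to its number of digits:

```python
# Build the decimal 0.1234 from the digits of the natural number 1234.
n = 1234
decimal_value = n / 10 ** len(str(n))  # 1234 / 10000
print(decimal_value)  # 0.1234
```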

    - -

    Example 3: Exercise 9

    - -

    Exercise 9 says: "Calculate the area of the rectangle whose base measures 8 cm and whose height measures 5 cm".

    - -

    The solution is: 40 cm²

    - -

    The explanation is: remember that the area of a rectangle is obtained by multiplying the base by the height. Applying the formula A = b x h and substituting the given values gives A = 8 x 5 = 40 cm².
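As a quick check, the formula can be evaluated directly (again a minimal Python sketch with illustrative variable names):

```python
# Area of a rectangle: A = base * height.
base_cm = 8
height_cm = 5
area_cm2 = base_cm * height_cm
print(f"A = {base_cm} x {height_cm} = {area_cm2} cm^2")  # A = 8 x 5 = 40 cm^2
```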

    - -

    Where can you find more solutions for Animaplanos Module 6?

    - -

    If you want to see more solutions for Animaplanos Module 6, you can visit the following websites:

    - - - -

    We hope this article has been useful for learning how to solve all the exercises in Animaplanos Module 6. Remember that practice makes perfect, so do not get discouraged if you make mistakes or find an exercise hard to understand. The important thing is to keep trying and to enjoy the learning process.

    -

    What difficulties can come up when solving Animaplanos Module 6?

    - -

    When solving Animaplanos Module 6, some difficulties can appear that must be overcome with patience and perseverance. Some of them are:

    - -
      -
    • Confusing or forgetting the rules and properties of mathematical operations, such as order of operations, commutativity, associativity, and distributivity.
    • -
    • Making calculation or copying errors when carrying out the operations, such as adding or subtracting incorrectly, multiplying or dividing by zero, or swapping or omitting digits.
    • -
    • Not properly understanding the statement of the exercise or what it asks for, due to a lack of attention, vocabulary, or reading comprehension.
    • -
    • Not knowing how to start or how to continue the exercise, due to a lack of strategies, organization, or creativity.
    • -
    • Not checking that the answer is correct and consistent with the exercise, due to a lack of time, review, or confidence.
    • -
    - -

    To overcome these difficulties, the following advice is recommended:

    - -
      -
    • Review and memorize the rules and properties of mathematical operations, and apply them correctly in each case.
    • -
    • Do mental or written calculations carefully and precisely, and review them before writing the final answer.
    • -
    • Read the statement of the exercise and what it asks for attentively and with understanding, and clear up any doubts you have.
    • -
    • Find and follow a plan or strategy to solve the exercise, and organize the steps and the data you use.
    • -
    • Check that the answer is correct and consistent with the exercise, and correct any errors you find.
    • -
    - -

    What testimonials are there about Animaplanos Module 6?

    - -

    Animaplanos Module 6 has received many positive testimonials from the students and teachers who have used it. Some of them are:

    - -
    "Animaplanos Modulo 6 me ha ayudado mucho a mejorar mis habilidades matemáticas. Me gusta mucho porque es divertido y me hace pensar. He aprendido a resolver problemas que antes me parecían difíciles o aburridos. Me siento más seguro y más motivado para seguir aprendiendo matemáticas." (Estudiante de grado 6°)
    - -
    "Animaplanos Modulo 6 es una excelente herramienta para enseñar y aprender matemáticas. Me ha facilitado mucho el trabajo como docente, ya que me permite orientar a mis estudiantes de forma dinámica y efectiva. He visto cómo mis estudiantes han mejorado su rendimiento y su actitud hacia las matemáticas. Estoy muy satisfecha con los resultados que hemos obtenido." (Docente de grado 6°)
    - -
    "Animaplanos Modulo 6 es una propuesta didáctica innovadora y atractiva para las matemáticas. Me ha sorprendido gratamente la forma en que integra el cálculo mental, la geometría y el razonamiento lógico. He disfrutado mucho resolviendo los ejercicios y descubriendo las figuras animadas. He ampliado mis conocimientos y mis competencias matemáticas." (Estudiante de grado 6°)
    - -

    How can you contact Animaplanos?

    - -

    If you want to contact Animaplanos to get more information, make suggestions, resolve doubts, or send comments, you can do so in the following ways:

    - -
      -
    • You can visit the official Animaplanos website, www.animaplanos.com, where you will find all the information about the program, the modules, the resources, and the latest news.
    • -
    • You can send an email to info@animaplanos.com, where you can communicate directly with the Animaplanos team.
    • -
    • You can follow Animaplanos on social media (Facebook, Twitter, Instagram), where you can keep up to date with news, activities, and events related to Animaplanos.
    • -
    - -

    Animaplanos is waiting for you to have fun and learn with its recreational math modules. Don't miss it!

    -

    What resources can be used to complement Animaplanos Module 6?

    - -

    In addition to Animaplanos Module 6, other resources can be used to complement the learning of mathematics. Some of them are:

    - - - -

    These resources can be used to support, reinforce, or extend Animaplanos Module 6, according to each student's needs and interests.

    - -

    What advice can help you get the most out of Animaplanos Module 6?

    - -

    To get the most out of Animaplanos Module 6, you can follow this advice:

    - -
      -
    • Set aside time daily or weekly to solve the exercises of Animaplanos Module 6, according to each student's pace and level.
    • -
    • Find a quiet, comfortable, well-lit place to work on the exercises of Animaplanos Module 6, without distractions or interruptions.
    • -
    • Have the necessary materials at hand for the exercises of Animaplanos Module 6, such as pencil, paper, eraser, ruler, compass, and calculator.
    • -
    • Ask the teacher or a classmate about any doubts or difficulties that come up while solving the exercises of Animaplanos Module 6.
    • -
    • Combine the exercises of Animaplanos Module 6 with other resources that complement the learning of mathematics, such as videos, games, and books.
    • -
    • Enjoy the process of solving the exercises of Animaplanos Module 6, without pressure or stress, but with curiosity and enthusiasm.
    • -
    - -

    Animaplanos Module 6 is an opportunity to learn and have fun with mathematics. Don't miss it!

    -

    Conclusion

    - -

    Animaplanos Module 6 is a complementary training program for practicing teachers that aims to strengthen their pedagogical and didactic skills in mathematics. Animaplanos Module 6 is based on the use of geometric planes with animated figures, which represent mathematical concepts and operations in a playful, creative way.

    - -

    Animaplanos Module 6 covers topics such as natural numbers, fractions, decimals, percentages, proportionality, plane geometry, measures of length, area and perimeter, graphs, and statistics. Each topic has a series of exercises that must be solved by following the instructions and the steps indicated in the workbook.

    - -

    Animaplanos Module 6 has many benefits for students and teachers: it develops logical-mathematical thinking, makes mathematical concepts and operations easier to understand, encourages interest and motivation in mathematics, promotes self-assessment and feedback, and strengthens teachers' pedagogical and didactic skills.

    - -

    Animaplanos Module 6 is available in different formats and media, such as print, digital, audiovisual, and interactive. In addition, it can be complemented with other resources that offer information, activities, and exercises on the topics of Animaplanos Module 6.

    - -

    Animaplanos Module 6 can be put to best use by following some advice, such as setting aside time daily or weekly to solve the exercises, finding a quiet and comfortable place to work on them, having the necessary materials at hand, asking about any doubts or difficulties that come up, combining the exercises with other resources that complement the learning, and enjoying the process of solving them.

    - -

    Animaplanos Module 6 is an opportunity to learn and have fun with mathematics. Don't miss it!

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Darksiders Ii Ps3 Duplex Duplex Darksiders2 R78 95.md b/spaces/inreVtussa/clothingai/Examples/Darksiders Ii Ps3 Duplex Duplex Darksiders2 R78 95.md deleted file mode 100644 index 5b7271baa844a9fc43e6841ce805617b2969fd8c..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Darksiders Ii Ps3 Duplex Duplex Darksiders2 R78 95.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Darksiders Ii Ps3 Duplex Duplex Darksiders2 R78 95


    DOWNLOADhttps://tiurll.com/2uCjz9



    -
    -Find Darksiders 2 PS3 for sale among a wide selection on eBay. ... PS3 DARKSIDERS II LIMITED EDITION PS3 6171947. Used. 9.95 EUR. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/ismot/1702t1/preprocessing/pano_lsd_align.py b/spaces/ismot/1702t1/preprocessing/pano_lsd_align.py deleted file mode 100644 index 2594029f5cb1a9d1afd580ef59a579af41580376..0000000000000000000000000000000000000000 --- a/spaces/ismot/1702t1/preprocessing/pano_lsd_align.py +++ /dev/null @@ -1,911 +0,0 @@ -''' -This script is helper function for preprocessing. -Most of the code are converted from LayoutNet official's matlab code. -All functions, naming rule and data flow follow official for easier -converting and comparing. -Code is not optimized for python or numpy yet. -''' - -import sys -import numpy as np -from scipy.ndimage import map_coordinates -import cv2 -from pylsd import lsd - - -def computeUVN(n, in_, planeID): - ''' - compute v given u and normal. - ''' - if planeID == 2: - n = np.array([n[1], n[2], n[0]]) - elif planeID == 3: - n = np.array([n[2], n[0], n[1]]) - bc = n[0] * np.sin(in_) + n[1] * np.cos(in_) - bs = n[2] - out = np.arctan(-bc / (bs + 1e-9)) - return out - - -def computeUVN_vec(n, in_, planeID): - ''' - vectorization version of computeUVN - @n N x 3 - @in_ MN x 1 - @planeID N - ''' - n = n.copy() - if (planeID == 2).sum(): - n[planeID == 2] = np.roll(n[planeID == 2], 2, axis=1) - if (planeID == 3).sum(): - n[planeID == 3] = np.roll(n[planeID == 3], 1, axis=1) - n = np.repeat(n, in_.shape[0] // n.shape[0], axis=0) - assert n.shape[0] == in_.shape[0] - bc = n[:, [0]] * np.sin(in_) + n[:, [1]] * np.cos(in_) - bs = n[:, [2]] - out = np.arctan(-bc / (bs + 1e-9)) - return out - - -def xyz2uvN(xyz, planeID=1): - ID1 = (int(planeID) - 1 + 0) % 3 - ID2 = (int(planeID) - 1 + 1) % 3 - ID3 = (int(planeID) - 1 + 2) % 3 - normXY = np.sqrt(xyz[:, [ID1]] ** 2 + xyz[:, [ID2]] ** 2) - normXY[normXY < 0.000001] = 0.000001 - normXYZ = np.sqrt(xyz[:, [ID1]] ** 2 + xyz[:, [ID2]] ** 2 + xyz[:, [ID3]] ** 2) - v = np.arcsin(xyz[:, [ID3]] / normXYZ) - u = np.arcsin(xyz[:, [ID1]] / normXY) - valid = (xyz[:, [ID2]] < 0) & (u >= 0) - u[valid] = np.pi - u[valid] - valid = (xyz[:, [ID2]] < 0) & (u <= 0) - u[valid] = -np.pi - u[valid] - uv = np.hstack([u, v]) - uv[np.isnan(uv[:, 0]), 0] = 0 - return uv - - -def uv2xyzN(uv, planeID=1): - ID1 = (int(planeID) - 1 + 0) % 3 - ID2 = (int(planeID) - 1 + 1) % 3 - ID3 = (int(planeID) - 1 + 2) % 3 - xyz = np.zeros((uv.shape[0], 3)) - xyz[:, ID1] = np.cos(uv[:, 1]) * np.sin(uv[:, 0]) - xyz[:, ID2] = np.cos(uv[:, 1]) * np.cos(uv[:, 0]) - xyz[:, ID3] = np.sin(uv[:, 1]) - return xyz - - -def uv2xyzN_vec(uv, planeID): - ''' - vectorization version of uv2xyzN - @uv N x 2 - @planeID N - ''' - assert (planeID.astype(int) != planeID).sum() == 0 - planeID = planeID.astype(int) - ID1 = (planeID - 1 + 0) % 3 - ID2 = (planeID - 1 + 1) % 3 - ID3 = (planeID - 1 + 2) % 3 - ID = np.arange(len(uv)) - xyz = np.zeros((len(uv), 3)) - xyz[ID, ID1] = np.cos(uv[:, 1]) * np.sin(uv[:, 0]) - xyz[ID, ID2] = np.cos(uv[:, 1]) * np.cos(uv[:, 0]) - xyz[ID, ID3] = np.sin(uv[:, 1]) - return xyz - - -def warpImageFast(im, XXdense, YYdense): - minX = max(1., np.floor(XXdense.min()) - 1) - minY = max(1., np.floor(YYdense.min()) - 1) - - maxX = min(im.shape[1], np.ceil(XXdense.max()) + 1) - maxY = min(im.shape[0], np.ceil(YYdense.max()) + 1) - - im = im[int(round(minY-1)):int(round(maxY)), - int(round(minX-1)):int(round(maxX))] - - assert XXdense.shape == YYdense.shape - out_shape = XXdense.shape - coordinates = [ - (YYdense - minY).reshape(-1), - (XXdense - minX).reshape(-1), - ] - im_warp = np.stack([ - map_coordinates(im[..., c], coordinates, 
order=1).reshape(out_shape) - for c in range(im.shape[-1])], - axis=-1) - - return im_warp - - -def rotatePanorama(img, vp=None, R=None): - ''' - Rotate panorama - if R is given, vp (vanishing point) will be overlooked - otherwise R is computed from vp - ''' - sphereH, sphereW, C = img.shape - - # new uv coordinates - TX, TY = np.meshgrid(range(1, sphereW + 1), range(1, sphereH + 1)) - TX = TX.reshape(-1, 1, order='F') - TY = TY.reshape(-1, 1, order='F') - ANGx = (TX - sphereW/2 - 0.5) / sphereW * np.pi * 2 - ANGy = -(TY - sphereH/2 - 0.5) / sphereH * np.pi - uvNew = np.hstack([ANGx, ANGy]) - xyzNew = uv2xyzN(uvNew, 1) - - # rotation matrix - if R is None: - R = np.linalg.inv(vp.T) - - xyzOld = np.linalg.solve(R, xyzNew.T).T - uvOld = xyz2uvN(xyzOld, 1) - - Px = (uvOld[:, 0] + np.pi) / (2*np.pi) * sphereW + 0.5 - Py = (-uvOld[:, 1] + np.pi/2) / np.pi * sphereH + 0.5 - - Px = Px.reshape(sphereH, sphereW, order='F') - Py = Py.reshape(sphereH, sphereW, order='F') - - # boundary - imgNew = np.zeros((sphereH+2, sphereW+2, C), np.float64) - imgNew[1:-1, 1:-1, :] = img - imgNew[1:-1, 0, :] = img[:, -1, :] - imgNew[1:-1, -1, :] = img[:, 0, :] - imgNew[0, 1:sphereW//2+1, :] = img[0, sphereW-1:sphereW//2-1:-1, :] - imgNew[0, sphereW//2+1:-1, :] = img[0, sphereW//2-1::-1, :] - imgNew[-1, 1:sphereW//2+1, :] = img[-1, sphereW-1:sphereW//2-1:-1, :] - imgNew[-1, sphereW//2+1:-1, :] = img[0, sphereW//2-1::-1, :] - imgNew[0, 0, :] = img[0, 0, :] - imgNew[-1, -1, :] = img[-1, -1, :] - imgNew[0, -1, :] = img[0, -1, :] - imgNew[-1, 0, :] = img[-1, 0, :] - - rotImg = warpImageFast(imgNew, Px+1, Py+1) - - return rotImg - - -def imgLookAt(im, CENTERx, CENTERy, new_imgH, fov): - sphereH = im.shape[0] - sphereW = im.shape[1] - warped_im = np.zeros((new_imgH, new_imgH, 3)) - TX, TY = np.meshgrid(range(1, new_imgH + 1), range(1, new_imgH + 1)) - TX = TX.reshape(-1, 1, order='F') - TY = TY.reshape(-1, 1, order='F') - TX = TX - 0.5 - new_imgH/2 - TY = TY - 0.5 - new_imgH/2 - r = new_imgH / 2 / np.tan(fov/2) - - # convert to 3D - R = np.sqrt(TY ** 2 + r ** 2) - ANGy = np.arctan(- TY / r) - ANGy = ANGy + CENTERy - - X = np.sin(ANGy) * R - Y = -np.cos(ANGy) * R - Z = TX - - INDn = np.nonzero(np.abs(ANGy) > np.pi/2) - - # project back to sphere - ANGx = np.arctan(Z / -Y) - RZY = np.sqrt(Z ** 2 + Y ** 2) - ANGy = np.arctan(X / RZY) - - ANGx[INDn] = ANGx[INDn] + np.pi - ANGx = ANGx + CENTERx - - INDy = np.nonzero(ANGy < -np.pi/2) - ANGy[INDy] = -np.pi - ANGy[INDy] - ANGx[INDy] = ANGx[INDy] + np.pi - - INDx = np.nonzero(ANGx <= -np.pi); ANGx[INDx] = ANGx[INDx] + 2 * np.pi - INDx = np.nonzero(ANGx > np.pi); ANGx[INDx] = ANGx[INDx] - 2 * np.pi - INDx = np.nonzero(ANGx > np.pi); ANGx[INDx] = ANGx[INDx] - 2 * np.pi - INDx = np.nonzero(ANGx > np.pi); ANGx[INDx] = ANGx[INDx] - 2 * np.pi - - Px = (ANGx + np.pi) / (2*np.pi) * sphereW + 0.5 - Py = ((-ANGy) + np.pi/2) / np.pi * sphereH + 0.5 - - INDxx = np.nonzero(Px < 1) - Px[INDxx] = Px[INDxx] + sphereW - im = np.concatenate([im, im[:, :2]], 1) - - Px = Px.reshape(new_imgH, new_imgH, order='F') - Py = Py.reshape(new_imgH, new_imgH, order='F') - - warped_im = warpImageFast(im, Px, Py) - - return warped_im - - -def separatePano(panoImg, fov, x, y, imgSize=320): - '''cut a panorama image into several separate views''' - assert x.shape == y.shape - if not isinstance(fov, np.ndarray): - fov = fov * np.ones_like(x) - - sepScene = [ - { - 'img': imgLookAt(panoImg.copy(), xi, yi, imgSize, fovi), - 'vx': xi, - 'vy': yi, - 'fov': fovi, - 'sz': imgSize, - } - for xi, yi, fovi in zip(x, y, fov) 
- ] - - return sepScene - - -def lsdWrap(img): - ''' - Opencv implementation of - Rafael Grompone von Gioi, Jérémie Jakubowicz, Jean-Michel Morel, and Gregory Randall, - LSD: a Line Segment Detector, Image Processing On Line, vol. 2012. - [Rafael12] http://www.ipol.im/pub/art/2012/gjmr-lsd/?utm_source=doi - @img - input image - ''' - if len(img.shape) == 3: - img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) - - lines = lsd(img, quant=0.7) - if lines is None: - return np.zeros_like(img), np.array([]) - edgeMap = np.zeros_like(img) - for i in range(lines.shape[0]): - pt1 = (int(lines[i, 0]), int(lines[i, 1])) - pt2 = (int(lines[i, 2]), int(lines[i, 3])) - width = lines[i, 4] - cv2.line(edgeMap, pt1, pt2, 255, int(np.ceil(width / 2))) - edgeList = np.concatenate([lines, np.ones_like(lines[:, :2])], 1) - return edgeMap, edgeList - - -def edgeFromImg2Pano(edge): - edgeList = edge['edgeLst'] - if len(edgeList) == 0: - return np.array([]) - - vx = edge['vx'] - vy = edge['vy'] - fov = edge['fov'] - imH, imW = edge['img'].shape - - R = (imW/2) / np.tan(fov/2) - - # im is the tangent plane, contacting with ball at [x0 y0 z0] - x0 = R * np.cos(vy) * np.sin(vx) - y0 = R * np.cos(vy) * np.cos(vx) - z0 = R * np.sin(vy) - vecposX = np.array([np.cos(vx), -np.sin(vx), 0]) - vecposY = np.cross(np.array([x0, y0, z0]), vecposX) - vecposY = vecposY / np.sqrt(vecposY @ vecposY.T) - vecposX = vecposX.reshape(1, -1) - vecposY = vecposY.reshape(1, -1) - Xc = (0 + imW-1) / 2 - Yc = (0 + imH-1) / 2 - - vecx1 = edgeList[:, [0]] - Xc - vecy1 = edgeList[:, [1]] - Yc - vecx2 = edgeList[:, [2]] - Xc - vecy2 = edgeList[:, [3]] - Yc - - vec1 = np.tile(vecx1, [1, 3]) * vecposX + np.tile(vecy1, [1, 3]) * vecposY - vec2 = np.tile(vecx2, [1, 3]) * vecposX + np.tile(vecy2, [1, 3]) * vecposY - coord1 = [[x0, y0, z0]] + vec1 - coord2 = [[x0, y0, z0]] + vec2 - - normal = np.cross(coord1, coord2, axis=1) - normal = normal / np.linalg.norm(normal, axis=1, keepdims=True) - - panoList = np.hstack([normal, coord1, coord2, edgeList[:, [-1]]]) - - return panoList - - -def _intersection(range1, range2): - if range1[1] < range1[0]: - range11 = [range1[0], 1] - range12 = [0, range1[1]] - else: - range11 = range1 - range12 = [0, 0] - - if range2[1] < range2[0]: - range21 = [range2[0], 1] - range22 = [0, range2[1]] - else: - range21 = range2 - range22 = [0, 0] - - b = max(range11[0], range21[0]) < min(range11[1], range21[1]) - if b: - return b - b2 = max(range12[0], range22[0]) < min(range12[1], range22[1]) - b = b or b2 - return b - - -def _insideRange(pt, range): - if range[1] > range[0]: - b = pt >= range[0] and pt <= range[1] - else: - b1 = pt >= range[0] and pt <= 1 - b2 = pt >= 0 and pt <= range[1] - b = b1 or b2 - return b - - -def combineEdgesN(edges): - ''' - Combine some small line segments, should be very conservative - OUTPUT - lines: combined line segments - ori_lines: original line segments - line format [nx ny nz projectPlaneID umin umax LSfov score] - ''' - arcList = [] - for edge in edges: - panoLst = edge['panoLst'] - if len(panoLst) == 0: - continue - arcList.append(panoLst) - arcList = np.vstack(arcList) - - # ori lines - numLine = len(arcList) - ori_lines = np.zeros((numLine, 8)) - areaXY = np.abs(arcList[:, 2]) - areaYZ = np.abs(arcList[:, 0]) - areaZX = np.abs(arcList[:, 1]) - planeIDs = np.argmax(np.stack([areaXY, areaYZ, areaZX], -1), 1) + 1 # XY YZ ZX - - for i in range(numLine): - ori_lines[i, :3] = arcList[i, :3] - ori_lines[i, 3] = planeIDs[i] - coord1 = arcList[i, 3:6] - coord2 = arcList[i, 6:9] - uv = 
xyz2uvN(np.stack([coord1, coord2]), planeIDs[i]) - umax = uv[:, 0].max() + np.pi - umin = uv[:, 0].min() + np.pi - if umax - umin > np.pi: - ori_lines[i, 4:6] = np.array([umax, umin]) / 2 / np.pi - else: - ori_lines[i, 4:6] = np.array([umin, umax]) / 2 / np.pi - ori_lines[i, 6] = np.arccos(( - np.dot(coord1, coord2) / (np.linalg.norm(coord1) * np.linalg.norm(coord2)) - ).clip(-1, 1)) - ori_lines[i, 7] = arcList[i, 9] - - # additive combination - lines = ori_lines.copy() - for _ in range(3): - numLine = len(lines) - valid_line = np.ones(numLine, bool) - for i in range(numLine): - if not valid_line[i]: - continue - dotProd = (lines[:, :3] * lines[[i], :3]).sum(1) - valid_curr = np.logical_and((np.abs(dotProd) > np.cos(np.pi / 180)), valid_line) - valid_curr[i] = False - for j in np.nonzero(valid_curr)[0]: - range1 = lines[i, 4:6] - range2 = lines[j, 4:6] - valid_rag = _intersection(range1, range2) - if not valid_rag: - continue - - # combine - I = np.argmax(np.abs(lines[i, :3])) - if lines[i, I] * lines[j, I] > 0: - nc = lines[i, :3] * lines[i, 6] + lines[j, :3] * lines[j, 6] - else: - nc = lines[i, :3] * lines[i, 6] - lines[j, :3] * lines[j, 6] - nc = nc / np.linalg.norm(nc) - - if _insideRange(range1[0], range2): - nrmin = range2[0] - else: - nrmin = range1[0] - - if _insideRange(range1[1], range2): - nrmax = range2[1] - else: - nrmax = range1[1] - - u = np.array([[nrmin], [nrmax]]) * 2 * np.pi - np.pi - v = computeUVN(nc, u, lines[i, 3]) - xyz = uv2xyzN(np.hstack([u, v]), lines[i, 3]) - l = np.arccos(np.dot(xyz[0, :], xyz[1, :]).clip(-1, 1)) - scr = (lines[i,6]*lines[i,7] + lines[j,6]*lines[j,7]) / (lines[i,6]+lines[j,6]) - - lines[i] = [*nc, lines[i, 3], nrmin, nrmax, l, scr] - valid_line[j] = False - - lines = lines[valid_line] - - return lines, ori_lines - - -def icosahedron2sphere(level): - # this function use a icosahedron to sample uniformly on a sphere - a = 2 / (1 + np.sqrt(5)) - M = np.array([ - 0, a, -1, a, 1, 0, -a, 1, 0, - 0, a, 1, -a, 1, 0, a, 1, 0, - 0, a, 1, 0, -a, 1, -1, 0, a, - 0, a, 1, 1, 0, a, 0, -a, 1, - 0, a, -1, 0, -a, -1, 1, 0, -a, - 0, a, -1, -1, 0, -a, 0, -a, -1, - 0, -a, 1, a, -1, 0, -a, -1, 0, - 0, -a, -1, -a, -1, 0, a, -1, 0, - -a, 1, 0, -1, 0, a, -1, 0, -a, - -a, -1, 0, -1, 0, -a, -1, 0, a, - a, 1, 0, 1, 0, -a, 1, 0, a, - a, -1, 0, 1, 0, a, 1, 0, -a, - 0, a, 1, -1, 0, a, -a, 1, 0, - 0, a, 1, a, 1, 0, 1, 0, a, - 0, a, -1, -a, 1, 0, -1, 0, -a, - 0, a, -1, 1, 0, -a, a, 1, 0, - 0, -a, -1, -1, 0, -a, -a, -1, 0, - 0, -a, -1, a, -1, 0, 1, 0, -a, - 0, -a, 1, -a, -1, 0, -1, 0, a, - 0, -a, 1, 1, 0, a, a, -1, 0]) - - coor = M.T.reshape(3, 60, order='F').T - coor, idx = np.unique(coor, return_inverse=True, axis=0) - tri = idx.reshape(3, 20, order='F').T - - # extrude - coor = list(coor / np.tile(np.linalg.norm(coor, axis=1, keepdims=True), (1, 3))) - - for _ in range(level): - triN = [] - for t in range(len(tri)): - n = len(coor) - coor.append((coor[tri[t, 0]] + coor[tri[t, 1]]) / 2) - coor.append((coor[tri[t, 1]] + coor[tri[t, 2]]) / 2) - coor.append((coor[tri[t, 2]] + coor[tri[t, 0]]) / 2) - - triN.append([n, tri[t, 0], n+2]) - triN.append([n, tri[t, 1], n+1]) - triN.append([n+1, tri[t, 2], n+2]) - triN.append([n, n+1, n+2]) - tri = np.array(triN) - - # uniquefy - coor, idx = np.unique(coor, return_inverse=True, axis=0) - tri = idx[tri] - - # extrude - coor = list(coor / np.tile(np.sqrt(np.sum(coor * coor, 1, keepdims=True)), (1, 3))) - - return np.array(coor), np.array(tri) - - -def curveFitting(inputXYZ, weight): - ''' - @inputXYZ: N x 3 - @weight : N x 1 - ''' - l = 
np.linalg.norm(inputXYZ, axis=1, keepdims=True) - inputXYZ = inputXYZ / l - weightXYZ = inputXYZ * weight - XX = np.sum(weightXYZ[:, 0] ** 2) - YY = np.sum(weightXYZ[:, 1] ** 2) - ZZ = np.sum(weightXYZ[:, 2] ** 2) - XY = np.sum(weightXYZ[:, 0] * weightXYZ[:, 1]) - YZ = np.sum(weightXYZ[:, 1] * weightXYZ[:, 2]) - ZX = np.sum(weightXYZ[:, 2] * weightXYZ[:, 0]) - - A = np.array([ - [XX, XY, ZX], - [XY, YY, YZ], - [ZX, YZ, ZZ]]) - U, S, Vh = np.linalg.svd(A) - outputNM = Vh[-1, :] - outputNM = outputNM / np.linalg.norm(outputNM) - - return outputNM - - -def sphereHoughVote(segNormal, segLength, segScores, binRadius, orthTolerance, candiSet, force_unempty=True): - # initial guess - numLinesg = len(segNormal) - - voteBinPoints = candiSet.copy() - voteBinPoints = voteBinPoints[~(voteBinPoints[:,2] < 0)] - reversValid = (segNormal[:, 2] < 0).reshape(-1) - segNormal[reversValid] = -segNormal[reversValid] - - voteBinUV = xyz2uvN(voteBinPoints) - numVoteBin = len(voteBinPoints) - voteBinValues = np.zeros(numVoteBin) - for i in range(numLinesg): - tempNorm = segNormal[[i]] - tempDots = (voteBinPoints * tempNorm).sum(1) - - valid = np.abs(tempDots) < np.cos((90 - binRadius) * np.pi / 180) - - voteBinValues[valid] = voteBinValues[valid] + segScores[i] * segLength[i] - - checkIDs1 = np.nonzero(voteBinUV[:, [1]] > np.pi / 3)[0] - voteMax = 0 - checkID1Max = 0 - checkID2Max = 0 - checkID3Max = 0 - - for j in range(len(checkIDs1)): - checkID1 = checkIDs1[j] - vote1 = voteBinValues[checkID1] - if voteBinValues[checkID1] == 0 and force_unempty: - continue - checkNormal = voteBinPoints[[checkID1]] - dotProduct = (voteBinPoints * checkNormal).sum(1) - checkIDs2 = np.nonzero(np.abs(dotProduct) < np.cos((90 - orthTolerance) * np.pi / 180))[0] - - for i in range(len(checkIDs2)): - checkID2 = checkIDs2[i] - if voteBinValues[checkID2] == 0 and force_unempty: - continue - vote2 = vote1 + voteBinValues[checkID2] - cpv = np.cross(voteBinPoints[checkID1], voteBinPoints[checkID2]).reshape(1, 3) - cpn = np.linalg.norm(cpv) - dotProduct = (voteBinPoints * cpv).sum(1) / cpn - checkIDs3 = np.nonzero(np.abs(dotProduct) > np.cos(orthTolerance * np.pi / 180))[0] - - for k in range(len(checkIDs3)): - checkID3 = checkIDs3[k] - if voteBinValues[checkID3] == 0 and force_unempty: - continue - vote3 = vote2 + voteBinValues[checkID3] - if vote3 > voteMax: - lastStepCost = vote3 - voteMax - if voteMax != 0: - tmp = (voteBinPoints[[checkID1Max, checkID2Max, checkID3Max]] * \ - voteBinPoints[[checkID1, checkID2, checkID3]]).sum(1) - lastStepAngle = np.arccos(tmp.clip(-1, 1)) - else: - lastStepAngle = np.zeros(3) - - checkID1Max = checkID1 - checkID2Max = checkID2 - checkID3Max = checkID3 - - voteMax = vote3 - - if checkID1Max == 0: - print('[WARN] sphereHoughVote: no orthogonal voting exist', file=sys.stderr) - return None, 0, 0 - initXYZ = voteBinPoints[[checkID1Max, checkID2Max, checkID3Max]] - - # refine - refiXYZ = np.zeros((3, 3)) - dotprod = (segNormal * initXYZ[[0]]).sum(1) - valid = np.abs(dotprod) < np.cos((90 - binRadius) * np.pi / 180) - validNm = segNormal[valid] - validWt = segLength[valid] * segScores[valid] - validWt = validWt / validWt.max() - refiNM = curveFitting(validNm, validWt) - refiXYZ[0] = refiNM.copy() - - dotprod = (segNormal * initXYZ[[1]]).sum(1) - valid = np.abs(dotprod) < np.cos((90 - binRadius) * np.pi / 180) - validNm = segNormal[valid] - validWt = segLength[valid] * segScores[valid] - validWt = validWt / validWt.max() - validNm = np.vstack([validNm, refiXYZ[[0]]]) - validWt = np.vstack([validWt, 
validWt.sum(0, keepdims=1) * 0.1]) - refiNM = curveFitting(validNm, validWt) - refiXYZ[1] = refiNM.copy() - - refiNM = np.cross(refiXYZ[0], refiXYZ[1]) - refiXYZ[2] = refiNM / np.linalg.norm(refiNM) - - return refiXYZ, lastStepCost, lastStepAngle - - -def findMainDirectionEMA(lines): - '''compute vp from set of lines''' - - # initial guess - segNormal = lines[:, :3] - segLength = lines[:, [6]] - segScores = np.ones((len(lines), 1)) - - shortSegValid = (segLength < 5 * np.pi / 180).reshape(-1) - segNormal = segNormal[~shortSegValid, :] - segLength = segLength[~shortSegValid] - segScores = segScores[~shortSegValid] - - numLinesg = len(segNormal) - candiSet, tri = icosahedron2sphere(3) - ang = np.arccos((candiSet[tri[0,0]] * candiSet[tri[0,1]]).sum().clip(-1, 1)) / np.pi * 180 - binRadius = ang / 2 - initXYZ, score, angle = sphereHoughVote(segNormal, segLength, segScores, 2*binRadius, 2, candiSet) - - if initXYZ is None: - print('[WARN] findMainDirectionEMA: initial failed', file=sys.stderr) - return None, score, angle - - # iterative refine - iter_max = 3 - candiSet, tri = icosahedron2sphere(5) - numCandi = len(candiSet) - angD = np.arccos((candiSet[tri[0, 0]] * candiSet[tri[0, 1]]).sum().clip(-1, 1)) / np.pi * 180 - binRadiusD = angD / 2 - curXYZ = initXYZ.copy() - tol = np.linspace(4*binRadius, 4*binRadiusD, iter_max) # shrink down ls and candi - for it in range(iter_max): - dot1 = np.abs((segNormal * curXYZ[[0]]).sum(1)) - dot2 = np.abs((segNormal * curXYZ[[1]]).sum(1)) - dot3 = np.abs((segNormal * curXYZ[[2]]).sum(1)) - valid1 = dot1 < np.cos((90 - tol[it]) * np.pi / 180) - valid2 = dot2 < np.cos((90 - tol[it]) * np.pi / 180) - valid3 = dot3 < np.cos((90 - tol[it]) * np.pi / 180) - valid = valid1 | valid2 | valid3 - - if np.sum(valid) == 0: - print('[WARN] findMainDirectionEMA: zero line segments for voting', file=sys.stderr) - break - - subSegNormal = segNormal[valid] - subSegLength = segLength[valid] - subSegScores = segScores[valid] - - dot1 = np.abs((candiSet * curXYZ[[0]]).sum(1)) - dot2 = np.abs((candiSet * curXYZ[[1]]).sum(1)) - dot3 = np.abs((candiSet * curXYZ[[2]]).sum(1)) - valid1 = dot1 > np.cos(tol[it] * np.pi / 180) - valid2 = dot2 > np.cos(tol[it] * np.pi / 180) - valid3 = dot3 > np.cos(tol[it] * np.pi / 180) - valid = valid1 | valid2 | valid3 - - if np.sum(valid) == 0: - print('[WARN] findMainDirectionEMA: zero line segments for voting', file=sys.stderr) - break - - subCandiSet = candiSet[valid] - - tcurXYZ, _, _ = sphereHoughVote(subSegNormal, subSegLength, subSegScores, 2*binRadiusD, 2, subCandiSet) - - if tcurXYZ is None: - print('[WARN] findMainDirectionEMA: no answer found', file=sys.stderr) - break - curXYZ = tcurXYZ.copy() - - mainDirect = curXYZ.copy() - mainDirect[0] = mainDirect[0] * np.sign(mainDirect[0,2]) - mainDirect[1] = mainDirect[1] * np.sign(mainDirect[1,2]) - mainDirect[2] = mainDirect[2] * np.sign(mainDirect[2,2]) - - uv = xyz2uvN(mainDirect) - I1 = np.argmax(uv[:,1]) - J = np.setdiff1d(np.arange(3), I1) - I2 = np.argmin(np.abs(np.sin(uv[J,0]))) - I2 = J[I2] - I3 = np.setdiff1d(np.arange(3), np.hstack([I1, I2])) - mainDirect = np.vstack([mainDirect[I1], mainDirect[I2], mainDirect[I3]]) - - mainDirect[0] = mainDirect[0] * np.sign(mainDirect[0,2]) - mainDirect[1] = mainDirect[1] * np.sign(mainDirect[1,1]) - mainDirect[2] = mainDirect[2] * np.sign(mainDirect[2,0]) - - mainDirect = np.vstack([mainDirect, -mainDirect]) - - return mainDirect, score, angle - - -def multi_linspace(start, stop, num): - div = (num - 1) - y = np.arange(0, num, dtype=np.float64) - 
steps = (stop - start) / div - return steps.reshape(-1, 1) * y + start.reshape(-1, 1) - - -def assignVanishingType(lines, vp, tol, area=10): - numLine = len(lines) - numVP = len(vp) - typeCost = np.zeros((numLine, numVP)) - # perpendicular - for vid in range(numVP): - cosint = (lines[:, :3] * vp[[vid]]).sum(1) - typeCost[:, vid] = np.arcsin(np.abs(cosint).clip(-1, 1)) - - # infinity - u = np.stack([lines[:, 4], lines[:, 5]], -1) - u = u.reshape(-1, 1) * 2 * np.pi - np.pi - v = computeUVN_vec(lines[:, :3], u, lines[:, 3]) - xyz = uv2xyzN_vec(np.hstack([u, v]), np.repeat(lines[:, 3], 2)) - xyz = multi_linspace(xyz[0::2].reshape(-1), xyz[1::2].reshape(-1), 100) - xyz = np.vstack([blk.T for blk in np.split(xyz, numLine)]) - xyz = xyz / np.linalg.norm(xyz, axis=1, keepdims=True) - for vid in range(numVP): - ang = np.arccos(np.abs((xyz * vp[[vid]]).sum(1)).clip(-1, 1)) - notok = (ang < area * np.pi / 180).reshape(numLine, 100).sum(1) != 0 - typeCost[notok, vid] = 100 - - I = typeCost.min(1) - tp = typeCost.argmin(1) - tp[I > tol] = numVP + 1 - - return tp, typeCost - - -def refitLineSegmentB(lines, vp, vpweight=0.1): - ''' - Refit direction of line segments - INPUT: - lines: original line segments - vp: vannishing point - vpweight: if set to 0, lines will not change; if set to inf, lines will - be forced to pass vp - ''' - numSample = 100 - numLine = len(lines) - xyz = np.zeros((numSample+1, 3)) - wei = np.ones((numSample+1, 1)) - wei[numSample] = vpweight * numSample - lines_ali = lines.copy() - for i in range(numLine): - n = lines[i, :3] - sid = lines[i, 4] * 2 * np.pi - eid = lines[i, 5] * 2 * np.pi - if eid < sid: - x = np.linspace(sid, eid + 2 * np.pi, numSample) % (2 * np.pi) - else: - x = np.linspace(sid, eid, numSample) - u = -np.pi + x.reshape(-1, 1) - v = computeUVN(n, u, lines[i, 3]) - xyz[:numSample] = uv2xyzN(np.hstack([u, v]), lines[i, 3]) - xyz[numSample] = vp - outputNM = curveFitting(xyz, wei) - lines_ali[i, :3] = outputNM - - return lines_ali - - -def paintParameterLine(parameterLine, width, height): - lines = parameterLine.copy() - panoEdgeC = np.zeros((height, width)) - - num_sample = max(height, width) - for i in range(len(lines)): - n = lines[i, :3] - sid = lines[i, 4] * 2 * np.pi - eid = lines[i, 5] * 2 * np.pi - if eid < sid: - x = np.linspace(sid, eid + 2 * np.pi, num_sample) - x = x % (2 * np.pi) - else: - x = np.linspace(sid, eid, num_sample) - u = -np.pi + x.reshape(-1, 1) - v = computeUVN(n, u, lines[i, 3]) - xyz = uv2xyzN(np.hstack([u, v]), lines[i, 3]) - uv = xyz2uvN(xyz, 1) - m = np.minimum(np.floor((uv[:,0] + np.pi) / (2 * np.pi) * width) + 1, - width).astype(np.int32) - n = np.minimum(np.floor(((np.pi / 2) - uv[:, 1]) / np.pi * height) + 1, - height).astype(np.int32) - panoEdgeC[n-1, m-1] = i - - return panoEdgeC - - -def panoEdgeDetection(img, viewSize=320, qError=0.7, refineIter=3): - ''' - line detection on panorama - INPUT: - img: image waiting for detection, double type, range 0~1 - viewSize: image size of croped views - qError: set smaller if more line segment wanted - OUTPUT: - oLines: detected line segments - vp: vanishing point - views: separate views of panorama - edges: original detection of line segments in separate views - panoEdge: image for visualize line segments - ''' - cutSize = viewSize - fov = np.pi / 3 - xh = np.arange(-np.pi, np.pi*5/6, np.pi/6) - yh = np.zeros(xh.shape[0]) - xp = np.array([-3/3, -2/3, -1/3, 0/3, 1/3, 2/3, -3/3, -2/3, -1/3, 0/3, 1/3, 2/3]) * np.pi - yp = np.array([ 1/4, 1/4, 1/4, 1/4, 1/4, 1/4, -1/4, -1/4, -1/4, -1/4, -1/4, 
-1/4]) * np.pi - x = np.concatenate([xh, xp, [0, 0]]) - y = np.concatenate([yh, yp, [np.pi/2., -np.pi/2]]) - - sepScene = separatePano(img.copy(), fov, x, y, cutSize) - edge = [] - for i, scene in enumerate(sepScene): - edgeMap, edgeList = lsdWrap(scene['img']) - edge.append({ - 'img': edgeMap, - 'edgeLst': edgeList, - 'vx': scene['vx'], - 'vy': scene['vy'], - 'fov': scene['fov'], - }) - edge[-1]['panoLst'] = edgeFromImg2Pano(edge[-1]) - lines, olines = combineEdgesN(edge) - - clines = lines.copy() - for _ in range(refineIter): - mainDirect, score, angle = findMainDirectionEMA(clines) - - tp, typeCost = assignVanishingType(lines, mainDirect[:3], 0.1, 10) - lines1 = lines[tp==0] - lines2 = lines[tp==1] - lines3 = lines[tp==2] - - lines1rB = refitLineSegmentB(lines1, mainDirect[0], 0) - lines2rB = refitLineSegmentB(lines2, mainDirect[1], 0) - lines3rB = refitLineSegmentB(lines3, mainDirect[2], 0) - - clines = np.vstack([lines1rB, lines2rB, lines3rB]) - - panoEdge1r = paintParameterLine(lines1rB, img.shape[1], img.shape[0]) - panoEdge2r = paintParameterLine(lines2rB, img.shape[1], img.shape[0]) - panoEdge3r = paintParameterLine(lines3rB, img.shape[1], img.shape[0]) - panoEdger = np.stack([panoEdge1r, panoEdge2r, panoEdge3r], -1) - - # output - olines = clines - vp = mainDirect - views = sepScene - edges = edge - panoEdge = panoEdger - - return olines, vp, views, edges, panoEdge, score, angle - - -if __name__ == '__main__': - - # disable OpenCV3's non thread safe OpenCL option - cv2.ocl.setUseOpenCL(False) - - import os - import argparse - import PIL - from PIL import Image - import time - - parser = argparse.ArgumentParser() - parser.add_argument('--i', required=True) - parser.add_argument('--o_prefix', required=True) - parser.add_argument('--qError', default=0.7, type=float) - parser.add_argument('--refineIter', default=3, type=int) - args = parser.parse_args() - - # Read image - img_ori = np.array(Image.open(args.i).resize((1024, 512))) - - # Vanishing point estimation & Line segments detection - s_time = time.time() - olines, vp, views, edges, panoEdge, score, angle = panoEdgeDetection(img_ori, - qError=args.qError, - refineIter=args.refineIter) - print('Elapsed time: %.2f' % (time.time() - s_time)) - panoEdge = (panoEdge > 0) - - print('Vanishing point:') - for v in vp[2::-1]: - print('%.6f %.6f %.6f' % tuple(v)) - - # Visualization - edg = rotatePanorama(panoEdge.astype(np.float64), vp[2::-1]) - img = rotatePanorama(img_ori / 255.0, vp[2::-1]) - one = img.copy() * 0.5 - one[(edg > 0.5).sum(-1) > 0] = 0 - one[edg[..., 0] > 0.5, 0] = 1 - one[edg[..., 1] > 0.5, 1] = 1 - one[edg[..., 2] > 0.5, 2] = 1 - Image.fromarray((edg * 255).astype(np.uint8)).save('%s_edg.png' % args.o_prefix) - Image.fromarray((img * 255).astype(np.uint8)).save('%s_img.png' % args.o_prefix) - Image.fromarray((one * 255).astype(np.uint8)).save('%s_one.png' % args.o_prefix) diff --git a/spaces/jackli888/stable-diffusion-webui/extensions-builtin/ScuNET/scunet_model_arch.py b/spaces/jackli888/stable-diffusion-webui/extensions-builtin/ScuNET/scunet_model_arch.py deleted file mode 100644 index 43ca8d36fe57a12dcad58e8b06ee2e0774494b0e..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/extensions-builtin/ScuNET/scunet_model_arch.py +++ /dev/null @@ -1,265 +0,0 @@ -# -*- coding: utf-8 -*- -import numpy as np -import torch -import torch.nn as nn -from einops import rearrange -from einops.layers.torch import Rearrange -from timm.models.layers import trunc_normal_, DropPath - - -class 
WMSA(nn.Module): - """ Self-attention module in Swin Transformer - """ - - def __init__(self, input_dim, output_dim, head_dim, window_size, type): - super(WMSA, self).__init__() - self.input_dim = input_dim - self.output_dim = output_dim - self.head_dim = head_dim - self.scale = self.head_dim ** -0.5 - self.n_heads = input_dim // head_dim - self.window_size = window_size - self.type = type - self.embedding_layer = nn.Linear(self.input_dim, 3 * self.input_dim, bias=True) - - self.relative_position_params = nn.Parameter( - torch.zeros((2 * window_size - 1) * (2 * window_size - 1), self.n_heads)) - - self.linear = nn.Linear(self.input_dim, self.output_dim) - - trunc_normal_(self.relative_position_params, std=.02) - self.relative_position_params = torch.nn.Parameter( - self.relative_position_params.view(2 * window_size - 1, 2 * window_size - 1, self.n_heads).transpose(1, - 2).transpose( - 0, 1)) - - def generate_mask(self, h, w, p, shift): - """ generating the mask of SW-MSA - Args: - shift: shift parameters in CyclicShift. - Returns: - attn_mask: should be (1 1 w p p), - """ - # supporting square. - attn_mask = torch.zeros(h, w, p, p, p, p, dtype=torch.bool, device=self.relative_position_params.device) - if self.type == 'W': - return attn_mask - - s = p - shift - attn_mask[-1, :, :s, :, s:, :] = True - attn_mask[-1, :, s:, :, :s, :] = True - attn_mask[:, -1, :, :s, :, s:] = True - attn_mask[:, -1, :, s:, :, :s] = True - attn_mask = rearrange(attn_mask, 'w1 w2 p1 p2 p3 p4 -> 1 1 (w1 w2) (p1 p2) (p3 p4)') - return attn_mask - - def forward(self, x): - """ Forward pass of Window Multi-head Self-attention module. - Args: - x: input tensor with shape of [b h w c]; - attn_mask: attention mask, fill -inf where the value is True; - Returns: - output: tensor shape [b h w c] - """ - if self.type != 'W': x = torch.roll(x, shifts=(-(self.window_size // 2), -(self.window_size // 2)), dims=(1, 2)) - x = rearrange(x, 'b (w1 p1) (w2 p2) c -> b w1 w2 p1 p2 c', p1=self.window_size, p2=self.window_size) - h_windows = x.size(1) - w_windows = x.size(2) - # square validation - # assert h_windows == w_windows - - x = rearrange(x, 'b w1 w2 p1 p2 c -> b (w1 w2) (p1 p2) c', p1=self.window_size, p2=self.window_size) - qkv = self.embedding_layer(x) - q, k, v = rearrange(qkv, 'b nw np (threeh c) -> threeh b nw np c', c=self.head_dim).chunk(3, dim=0) - sim = torch.einsum('hbwpc,hbwqc->hbwpq', q, k) * self.scale - # Adding learnable relative embedding - sim = sim + rearrange(self.relative_embedding(), 'h p q -> h 1 1 p q') - # Using Attn Mask to distinguish different subwindows. 
- if self.type != 'W': - attn_mask = self.generate_mask(h_windows, w_windows, self.window_size, shift=self.window_size // 2) - sim = sim.masked_fill_(attn_mask, float("-inf")) - - probs = nn.functional.softmax(sim, dim=-1) - output = torch.einsum('hbwij,hbwjc->hbwic', probs, v) - output = rearrange(output, 'h b w p c -> b w p (h c)') - output = self.linear(output) - output = rearrange(output, 'b (w1 w2) (p1 p2) c -> b (w1 p1) (w2 p2) c', w1=h_windows, p1=self.window_size) - - if self.type != 'W': output = torch.roll(output, shifts=(self.window_size // 2, self.window_size // 2), - dims=(1, 2)) - return output - - def relative_embedding(self): - cord = torch.tensor(np.array([[i, j] for i in range(self.window_size) for j in range(self.window_size)])) - relation = cord[:, None, :] - cord[None, :, :] + self.window_size - 1 - # negative is allowed - return self.relative_position_params[:, relation[:, :, 0].long(), relation[:, :, 1].long()] - - -class Block(nn.Module): - def __init__(self, input_dim, output_dim, head_dim, window_size, drop_path, type='W', input_resolution=None): - """ SwinTransformer Block - """ - super(Block, self).__init__() - self.input_dim = input_dim - self.output_dim = output_dim - assert type in ['W', 'SW'] - self.type = type - if input_resolution <= window_size: - self.type = 'W' - - self.ln1 = nn.LayerNorm(input_dim) - self.msa = WMSA(input_dim, input_dim, head_dim, window_size, self.type) - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.ln2 = nn.LayerNorm(input_dim) - self.mlp = nn.Sequential( - nn.Linear(input_dim, 4 * input_dim), - nn.GELU(), - nn.Linear(4 * input_dim, output_dim), - ) - - def forward(self, x): - x = x + self.drop_path(self.msa(self.ln1(x))) - x = x + self.drop_path(self.mlp(self.ln2(x))) - return x - - -class ConvTransBlock(nn.Module): - def __init__(self, conv_dim, trans_dim, head_dim, window_size, drop_path, type='W', input_resolution=None): - """ SwinTransformer and Conv Block - """ - super(ConvTransBlock, self).__init__() - self.conv_dim = conv_dim - self.trans_dim = trans_dim - self.head_dim = head_dim - self.window_size = window_size - self.drop_path = drop_path - self.type = type - self.input_resolution = input_resolution - - assert self.type in ['W', 'SW'] - if self.input_resolution <= self.window_size: - self.type = 'W' - - self.trans_block = Block(self.trans_dim, self.trans_dim, self.head_dim, self.window_size, self.drop_path, - self.type, self.input_resolution) - self.conv1_1 = nn.Conv2d(self.conv_dim + self.trans_dim, self.conv_dim + self.trans_dim, 1, 1, 0, bias=True) - self.conv1_2 = nn.Conv2d(self.conv_dim + self.trans_dim, self.conv_dim + self.trans_dim, 1, 1, 0, bias=True) - - self.conv_block = nn.Sequential( - nn.Conv2d(self.conv_dim, self.conv_dim, 3, 1, 1, bias=False), - nn.ReLU(True), - nn.Conv2d(self.conv_dim, self.conv_dim, 3, 1, 1, bias=False) - ) - - def forward(self, x): - conv_x, trans_x = torch.split(self.conv1_1(x), (self.conv_dim, self.trans_dim), dim=1) - conv_x = self.conv_block(conv_x) + conv_x - trans_x = Rearrange('b c h w -> b h w c')(trans_x) - trans_x = self.trans_block(trans_x) - trans_x = Rearrange('b h w c -> b c h w')(trans_x) - res = self.conv1_2(torch.cat((conv_x, trans_x), dim=1)) - x = x + res - - return x - - -class SCUNet(nn.Module): - # def __init__(self, in_nc=3, config=[2, 2, 2, 2, 2, 2, 2], dim=64, drop_path_rate=0.0, input_resolution=256): - def __init__(self, in_nc=3, config=None, dim=64, drop_path_rate=0.0, input_resolution=256): - super(SCUNet, self).__init__() 
- if config is None: - config = [2, 2, 2, 2, 2, 2, 2] - self.config = config - self.dim = dim - self.head_dim = 32 - self.window_size = 8 - - # drop path rate for each layer - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(config))] - - self.m_head = [nn.Conv2d(in_nc, dim, 3, 1, 1, bias=False)] - - begin = 0 - self.m_down1 = [ConvTransBlock(dim // 2, dim // 2, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 else 'SW', input_resolution) - for i in range(config[0])] + \ - [nn.Conv2d(dim, 2 * dim, 2, 2, 0, bias=False)] - - begin += config[0] - self.m_down2 = [ConvTransBlock(dim, dim, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 else 'SW', input_resolution // 2) - for i in range(config[1])] + \ - [nn.Conv2d(2 * dim, 4 * dim, 2, 2, 0, bias=False)] - - begin += config[1] - self.m_down3 = [ConvTransBlock(2 * dim, 2 * dim, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 else 'SW', input_resolution // 4) - for i in range(config[2])] + \ - [nn.Conv2d(4 * dim, 8 * dim, 2, 2, 0, bias=False)] - - begin += config[2] - self.m_body = [ConvTransBlock(4 * dim, 4 * dim, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 else 'SW', input_resolution // 8) - for i in range(config[3])] - - begin += config[3] - self.m_up3 = [nn.ConvTranspose2d(8 * dim, 4 * dim, 2, 2, 0, bias=False), ] + \ - [ConvTransBlock(2 * dim, 2 * dim, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 else 'SW', input_resolution // 4) - for i in range(config[4])] - - begin += config[4] - self.m_up2 = [nn.ConvTranspose2d(4 * dim, 2 * dim, 2, 2, 0, bias=False), ] + \ - [ConvTransBlock(dim, dim, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 else 'SW', input_resolution // 2) - for i in range(config[5])] - - begin += config[5] - self.m_up1 = [nn.ConvTranspose2d(2 * dim, dim, 2, 2, 0, bias=False), ] + \ - [ConvTransBlock(dim // 2, dim // 2, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 else 'SW', input_resolution) - for i in range(config[6])] - - self.m_tail = [nn.Conv2d(dim, in_nc, 3, 1, 1, bias=False)] - - self.m_head = nn.Sequential(*self.m_head) - self.m_down1 = nn.Sequential(*self.m_down1) - self.m_down2 = nn.Sequential(*self.m_down2) - self.m_down3 = nn.Sequential(*self.m_down3) - self.m_body = nn.Sequential(*self.m_body) - self.m_up3 = nn.Sequential(*self.m_up3) - self.m_up2 = nn.Sequential(*self.m_up2) - self.m_up1 = nn.Sequential(*self.m_up1) - self.m_tail = nn.Sequential(*self.m_tail) - # self.apply(self._init_weights) - - def forward(self, x0): - - h, w = x0.size()[-2:] - paddingBottom = int(np.ceil(h / 64) * 64 - h) - paddingRight = int(np.ceil(w / 64) * 64 - w) - x0 = nn.ReplicationPad2d((0, paddingRight, 0, paddingBottom))(x0) - - x1 = self.m_head(x0) - x2 = self.m_down1(x1) - x3 = self.m_down2(x2) - x4 = self.m_down3(x3) - x = self.m_body(x4) - x = self.m_up3(x + x4) - x = self.m_up2(x + x3) - x = self.m_up1(x + x2) - x = self.m_tail(x + x1) - - x = x[..., :h, :w] - - return x - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) \ No newline at end of file diff --git a/spaces/jcenaa/Segment-Any-RGBD/third_party/CLIP/clip/simple_tokenizer.py b/spaces/jcenaa/Segment-Any-RGBD/third_party/CLIP/clip/simple_tokenizer.py deleted file mode 100644 index 
56d17512b06afb700e7834e4f3f6515c315ebb74..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/third_party/CLIP/clip/simple_tokenizer.py +++ /dev/null @@ -1,150 +0,0 @@ -import gzip -import html -import os -from functools import lru_cache - -import ftfy -import regex as re - - -@lru_cache() -def default_bpe(): - return os.path.join( - os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz" - ) - - -@lru_cache() -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a corresponding list of unicode strings. - The reversible bpe codes work on unicode strings. - This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. - When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. - This is a signficant percentage of your normal, say, 32K bpe vocab. - To avoid that, we want lookup tables between utf-8 bytes and unicode strings. - And avoids mapping to whitespace/control characters the bpe code barfs on. - """ - bs = ( - list(range(ord("!"), ord("~") + 1)) - + list(range(ord("¡"), ord("¬") + 1)) - + list(range(ord("®"), ord("ÿ") + 1)) - ) - cs = bs[:] - n = 0 - for b in range(2 ** 8): - if b not in bs: - bs.append(b) - cs.append(2 ** 8 + n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """Return set of symbol pairs in a word. - Word is represented as tuple of symbols (symbols being variable-length strings). - """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -def basic_clean(text): - text = ftfy.fix_text(text) - text = html.unescape(html.unescape(text)) - return text.strip() - - -def whitespace_clean(text): - text = re.sub(r"\s+", " ", text) - text = text.strip() - return text - - -class SimpleTokenizer(object): - def __init__(self, bpe_path: str = default_bpe()): - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - merges = gzip.open(bpe_path).read().decode("utf-8").split("\n") - merges = merges[1 : 49152 - 256 - 2 + 1] - merges = [tuple(merge.split()) for merge in merges] - vocab = list(bytes_to_unicode().values()) - vocab = vocab + [v + "" for v in vocab] - for merge in merges: - vocab.append("".join(merge)) - vocab.extend(["<|startoftext|>", "<|endoftext|>"]) - self.encoder = dict(zip(vocab, range(len(vocab)))) - self.decoder = {v: k for k, v in self.encoder.items()} - self.bpe_ranks = dict(zip(merges, range(len(merges)))) - self.cache = { - "<|startoftext|>": "<|startoftext|>", - "<|endoftext|>": "<|endoftext|>", - } - self.pat = re.compile( - r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", - re.IGNORECASE, - ) - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token[:-1]) + (token[-1] + "",) - pairs = get_pairs(word) - - if not pairs: - return token + "" - - while True: - bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word) - 1 and word[i + 1] == second: - new_word.append(first + second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if 
len(word) == 1: - break - else: - pairs = get_pairs(word) - word = " ".join(word) - self.cache[token] = word - return word - - def encode(self, text): - bpe_tokens = [] - text = whitespace_clean(basic_clean(text)).lower() - for token in re.findall(self.pat, text): - token = "".join(self.byte_encoder[b] for b in token.encode("utf-8")) - bpe_tokens.extend( - self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" ") - ) - return bpe_tokens - - def decode(self, tokens): - text = "".join([self.decoder[token] for token in tokens]) - text = ( - bytearray([self.byte_decoder[c] for c in text]) - .decode("utf-8", errors="replace") - .replace("", " ") - ) - return text diff --git a/spaces/jennysun/jwsun-multisubject-render-model/gligen/ldm/modules/image_degradation/utils_image.py b/spaces/jennysun/jwsun-multisubject-render-model/gligen/ldm/modules/image_degradation/utils_image.py deleted file mode 100644 index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000 --- a/spaces/jennysun/jwsun-multisubject-render-model/gligen/ldm/modules/image_degradation/utils_image.py +++ /dev/null @@ -1,916 +0,0 @@ -import os -import math -import random -import numpy as np -import torch -import cv2 -from torchvision.utils import make_grid -from datetime import datetime -#import matplotlib.pyplot as plt # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py - - -os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" - - -''' -# -------------------------------------------- -# Kai Zhang (github: https://github.com/cszn) -# 03/Mar/2019 -# -------------------------------------------- -# https://github.com/twhui/SRGAN-pyTorch -# https://github.com/xinntao/BasicSR -# -------------------------------------------- -''' - - -IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif'] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def get_timestamp(): - return datetime.now().strftime('%y%m%d-%H%M%S') - - -def imshow(x, title=None, cbar=False, figsize=None): - plt.figure(figsize=figsize) - plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray') - if title: - plt.title(title) - if cbar: - plt.colorbar() - plt.show() - - -def surf(Z, cmap='rainbow', figsize=None): - plt.figure(figsize=figsize) - ax3 = plt.axes(projection='3d') - - w, h = Z.shape[:2] - xx = np.arange(0,w,1) - yy = np.arange(0,h,1) - X, Y = np.meshgrid(xx, yy) - ax3.plot_surface(X,Y,Z,cmap=cmap) - #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap) - plt.show() - - -''' -# -------------------------------------------- -# get image pathes -# -------------------------------------------- -''' - - -def get_image_paths(dataroot): - paths = None # return None if dataroot is None - if dataroot is not None: - paths = sorted(_get_paths_from_images(dataroot)) - return paths - - -def _get_paths_from_images(path): - assert os.path.isdir(path), '{:s} is not a valid directory'.format(path) - images = [] - for dirpath, _, fnames in sorted(os.walk(path)): - for fname in sorted(fnames): - if is_image_file(fname): - img_path = os.path.join(dirpath, fname) - images.append(img_path) - assert images, '{:s} has no valid image file'.format(path) - return images - - -''' -# -------------------------------------------- -# split large images into small images -# -------------------------------------------- -''' - - -def patches_from_image(img, p_size=512, p_overlap=64, p_max=800): - w, h = img.shape[:2] - patches = [] - if w > p_max and h > 
p_max: - w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=np.int)) - h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=np.int)) - w1.append(w-p_size) - h1.append(h-p_size) -# print(w1) -# print(h1) - for i in w1: - for j in h1: - patches.append(img[i:i+p_size, j:j+p_size,:]) - else: - patches.append(img) - - return patches - - -def imssave(imgs, img_path): - """ - imgs: list, N images of size WxHxC - """ - img_name, ext = os.path.splitext(os.path.basename(img_path)) - - for i, img in enumerate(imgs): - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png') - cv2.imwrite(new_path, img) - - -def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000): - """ - split the large images from original_dataroot into small overlapped images with size (p_size)x(p_size), - and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max) - will be splitted. - Args: - original_dataroot: - taget_dataroot: - p_size: size of small images - p_overlap: patch size in training is a good choice - p_max: images with smaller size than (p_max)x(p_max) keep unchanged. - """ - paths = get_image_paths(original_dataroot) - for img_path in paths: - # img_name, ext = os.path.splitext(os.path.basename(img_path)) - img = imread_uint(img_path, n_channels=n_channels) - patches = patches_from_image(img, p_size, p_overlap, p_max) - imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path))) - #if original_dataroot == taget_dataroot: - #del img_path - -''' -# -------------------------------------------- -# makedir -# -------------------------------------------- -''' - - -def mkdir(path): - if not os.path.exists(path): - os.makedirs(path) - - -def mkdirs(paths): - if isinstance(paths, str): - mkdir(paths) - else: - for path in paths: - mkdir(path) - - -def mkdir_and_rename(path): - if os.path.exists(path): - new_name = path + '_archived_' + get_timestamp() - print('Path already exists. 
Rename it to [{:s}]'.format(new_name)) - os.rename(path, new_name) - os.makedirs(path) - - -''' -# -------------------------------------------- -# read image from path -# opencv is fast, but read BGR numpy image -# -------------------------------------------- -''' - - -# -------------------------------------------- -# get uint8 image of size HxWxn_channles (RGB) -# -------------------------------------------- -def imread_uint(path, n_channels=3): - # input: path - # output: HxWx3(RGB or GGG), or HxWx1 (G) - if n_channels == 1: - img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE - img = np.expand_dims(img, axis=2) # HxWx1 - elif n_channels == 3: - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG - else: - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB - return img - - -# -------------------------------------------- -# matlab's imwrite -# -------------------------------------------- -def imsave(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - -def imwrite(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - - - -# -------------------------------------------- -# get single image of size HxWxn_channles (BGR) -# -------------------------------------------- -def read_img(path): - # read image by cv2 - # return: Numpy float32, HWC, BGR, [0,1] - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE - img = img.astype(np.float32) / 255. - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - # some images have 4 channels - if img.shape[2] > 3: - img = img[:, :, :3] - return img - - -''' -# -------------------------------------------- -# image format conversion -# -------------------------------------------- -# numpy(single) <---> numpy(unit) -# numpy(single) <---> tensor -# numpy(unit) <---> tensor -# -------------------------------------------- -''' - - -# -------------------------------------------- -# numpy(single) [0, 1] <---> numpy(unit) -# -------------------------------------------- - - -def uint2single(img): - - return np.float32(img/255.) - - -def single2uint(img): - - return np.uint8((img.clip(0, 1)*255.).round()) - - -def uint162single(img): - - return np.float32(img/65535.) - - -def single2uint16(img): - - return np.uint16((img.clip(0, 1)*65535.).round()) - - -# -------------------------------------------- -# numpy(unit) (HxWxC or HxW) <---> tensor -# -------------------------------------------- - - -# convert uint to 4-dimensional torch tensor -def uint2tensor4(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0) - - -# convert uint to 3-dimensional torch tensor -def uint2tensor3(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.) 
- - -# convert 2/3/4-dimensional torch tensor to uint -def tensor2uint(img): - img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - return np.uint8((img*255.0).round()) - - -# -------------------------------------------- -# numpy(single) (HxWxC) <---> tensor -# -------------------------------------------- - - -# convert single (HxWxC) to 3-dimensional torch tensor -def single2tensor3(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float() - - -# convert single (HxWxC) to 4-dimensional torch tensor -def single2tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0) - - -# convert torch tensor to single -def tensor2single(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - - return img - -# convert torch tensor to single -def tensor2single3(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - elif img.ndim == 2: - img = np.expand_dims(img, axis=2) - return img - - -def single2tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0) - - -def single32tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0) - - -def single42tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float() - - -# from skimage.io import imread, imsave -def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)): - ''' - Converts a torch Tensor into an image Numpy array of BGR channel order - Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order - Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default) - ''' - tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp - tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1] - n_dim = tensor.dim() - if n_dim == 4: - n_img = len(tensor) - img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 3: - img_np = tensor.numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 2: - img_np = tensor.numpy() - else: - raise TypeError( - 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim)) - if out_type == np.uint8: - img_np = (img_np * 255.0).round() - # Important. Unlike matlab, numpy.unit8() WILL NOT round by default. - return img_np.astype(out_type) - - -''' -# -------------------------------------------- -# Augmentation, flipe and/or rotate -# -------------------------------------------- -# The following two are enough. 
-# (1) augmet_img: numpy image of WxHxC or WxH -# (2) augment_img_tensor4: tensor image 1xCxWxH -# -------------------------------------------- -''' - - -def augment_img(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return np.flipud(np.rot90(img)) - elif mode == 2: - return np.flipud(img) - elif mode == 3: - return np.rot90(img, k=3) - elif mode == 4: - return np.flipud(np.rot90(img, k=2)) - elif mode == 5: - return np.rot90(img) - elif mode == 6: - return np.rot90(img, k=2) - elif mode == 7: - return np.flipud(np.rot90(img, k=3)) - - -def augment_img_tensor4(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return img.rot90(1, [2, 3]).flip([2]) - elif mode == 2: - return img.flip([2]) - elif mode == 3: - return img.rot90(3, [2, 3]) - elif mode == 4: - return img.rot90(2, [2, 3]).flip([2]) - elif mode == 5: - return img.rot90(1, [2, 3]) - elif mode == 6: - return img.rot90(2, [2, 3]) - elif mode == 7: - return img.rot90(3, [2, 3]).flip([2]) - - -def augment_img_tensor(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - img_size = img.size() - img_np = img.data.cpu().numpy() - if len(img_size) == 3: - img_np = np.transpose(img_np, (1, 2, 0)) - elif len(img_size) == 4: - img_np = np.transpose(img_np, (2, 3, 1, 0)) - img_np = augment_img(img_np, mode=mode) - img_tensor = torch.from_numpy(np.ascontiguousarray(img_np)) - if len(img_size) == 3: - img_tensor = img_tensor.permute(2, 0, 1) - elif len(img_size) == 4: - img_tensor = img_tensor.permute(3, 2, 0, 1) - - return img_tensor.type_as(img) - - -def augment_img_np3(img, mode=0): - if mode == 0: - return img - elif mode == 1: - return img.transpose(1, 0, 2) - elif mode == 2: - return img[::-1, :, :] - elif mode == 3: - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 4: - return img[:, ::-1, :] - elif mode == 5: - img = img[:, ::-1, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 6: - img = img[:, ::-1, :] - img = img[::-1, :, :] - return img - elif mode == 7: - img = img[:, ::-1, :] - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - - -def augment_imgs(img_list, hflip=True, rot=True): - # horizontal flip OR rotate - hflip = hflip and random.random() < 0.5 - vflip = rot and random.random() < 0.5 - rot90 = rot and random.random() < 0.5 - - def _augment(img): - if hflip: - img = img[:, ::-1, :] - if vflip: - img = img[::-1, :, :] - if rot90: - img = img.transpose(1, 0, 2) - return img - - return [_augment(img) for img in img_list] - - -''' -# -------------------------------------------- -# modcrop and shave -# -------------------------------------------- -''' - - -def modcrop(img_in, scale): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - if img.ndim == 2: - H, W = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r] - elif img.ndim == 3: - H, W, C = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r, :] - else: - raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim)) - return img - - -def shave(img_in, border=0): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - h, w = img.shape[:2] - img = img[border:h-border, border:w-border] - return img - - -''' -# -------------------------------------------- -# image processing process on numpy image -# channel_convert(in_c, tar_type, img_list): -# rgb2ycbcr(img, only_y=True): -# bgr2ycbcr(img, only_y=True): -# 
ycbcr2rgb(img): -# -------------------------------------------- -''' - - -def rgb2ycbcr(img, only_y=True): - '''same as matlab rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def ycbcr2rgb(img): - '''same as matlab ycbcr2rgb - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def bgr2ycbcr(img, only_y=True): - '''bgr version of rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], - [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def channel_convert(in_c, tar_type, img_list): - # conversion among BGR, gray and y - if in_c == 3 and tar_type == 'gray': # BGR to gray - gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list] - return [np.expand_dims(img, axis=2) for img in gray_list] - elif in_c == 3 and tar_type == 'y': # BGR to y - y_list = [bgr2ycbcr(img, only_y=True) for img in img_list] - return [np.expand_dims(img, axis=2) for img in y_list] - elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR - return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list] - else: - return img_list - - -''' -# -------------------------------------------- -# metric, PSNR and SSIM -# -------------------------------------------- -''' - - -# -------------------------------------------- -# PSNR -# -------------------------------------------- -def calculate_psnr(img1, img2, border=0): - # img1 and img2 have range [0, 255] - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = img2[border:h-border, border:w-border] - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - mse = np.mean((img1 - img2)**2) - if mse == 0: - return float('inf') - return 20 * math.log10(255.0 / math.sqrt(mse)) - - -# -------------------------------------------- -# SSIM -# -------------------------------------------- -def calculate_ssim(img1, img2, border=0): - '''calculate SSIM - the same outputs as MATLAB's - img1, img2: [0, 255] - ''' - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = 
img2[border:h-border, border:w-border] - - if img1.ndim == 2: - return ssim(img1, img2) - elif img1.ndim == 3: - if img1.shape[2] == 3: - ssims = [] - for i in range(3): - ssims.append(ssim(img1[:,:,i], img2[:,:,i])) - return np.array(ssims).mean() - elif img1.shape[2] == 1: - return ssim(np.squeeze(img1), np.squeeze(img2)) - else: - raise ValueError('Wrong input image dimensions.') - - -def ssim(img1, img2): - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * - (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -''' -# -------------------------------------------- -# matlab's bicubic imresize (numpy and torch) [0, 1] -# -------------------------------------------- -''' - - -# matlab 'imresize' function, now only support 'bicubic' -def cubic(x): - absx = torch.abs(x) - absx2 = absx**2 - absx3 = absx**3 - return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \ - (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx)) - - -def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing): - if (scale < 1) and (antialiasing): - # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width - kernel_width = kernel_width / scale - - # Output-space coordinates - x = torch.linspace(1, out_length, out_length) - - # Input-space coordinates. Calculate the inverse mapping such that 0.5 - # in output space maps to 0.5 in input space, and 0.5+scale in output - # space maps to 1.5 in input space. - u = x / scale + 0.5 * (1 - 1 / scale) - - # What is the left-most pixel that can be involved in the computation? - left = torch.floor(u - kernel_width / 2) - - # What is the maximum number of pixels that can be involved in the - # computation? Note: it's OK to use an extra pixel here; if the - # corresponding weights are all zero, it will be eliminated at the end - # of this function. - P = math.ceil(kernel_width) + 2 - - # The indices of the input pixels involved in computing the k-th output - # pixel are in row k of the indices matrix. - indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view( - 1, P).expand(out_length, P) - - # The weights used to compute the k-th output pixel are in row k of the - # weights matrix. - distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices - # apply cubic kernel - if (scale < 1) and (antialiasing): - weights = scale * cubic(distance_to_center * scale) - else: - weights = cubic(distance_to_center) - # Normalize the weights matrix so that each row sums to 1. - weights_sum = torch.sum(weights, 1).view(out_length, 1) - weights = weights / weights_sum.expand(out_length, P) - - # If a column in weights is all zero, get rid of it. only consider the first and last column. 
- weights_zero_tmp = torch.sum((weights == 0), 0) - if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6): - indices = indices.narrow(1, 1, P - 2) - weights = weights.narrow(1, 1, P - 2) - if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6): - indices = indices.narrow(1, 0, P - 2) - weights = weights.narrow(1, 0, P - 2) - weights = weights.contiguous() - indices = indices.contiguous() - sym_len_s = -indices.min() + 1 - sym_len_e = indices.max() - in_length - indices = indices + sym_len_s - 1 - return weights, indices, int(sym_len_s), int(sym_len_e) - - -# -------------------------------------------- -# imresize for tensor image [0, 1] -# -------------------------------------------- -def imresize(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: pytorch tensor, CHW or HW [0,1] - # output: CHW or HW [0,1] w/o round - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(0) - in_C, in_H, in_W = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W) - img_aug.narrow(1, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:, :sym_len_Hs, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[:, -sym_len_He:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(in_C, out_H, in_W) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We) - out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :, :sym_len_Ws] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, :, -sym_len_We:] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(in_C, out_H, out_W) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - return out_2 - - -# -------------------------------------------- -# imresize for numpy image [0, 1] -# -------------------------------------------- -def 
imresize_np(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: Numpy, HWC or HW [0,1] - # output: HWC or HW [0,1] w/o round - img = torch.from_numpy(img) - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(2) - - in_H, in_W, in_C = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C) - img_aug.narrow(0, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:sym_len_Hs, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[-sym_len_He:, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(out_H, in_W, in_C) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C) - out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :sym_len_Ws, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, -sym_len_We:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(out_H, out_W, in_C) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - - return out_2.numpy() - - -if __name__ == '__main__': - print('---') -# img = imread_uint('test.bmp', 3) -# img = uint2single(img) -# img_bicubic = imresize_np(img, 1/4) \ No newline at end of file diff --git a/spaces/jgurzoni/image_background_swapper/models/ade20k/segm_lib/nn/modules/unittest.py b/spaces/jgurzoni/image_background_swapper/models/ade20k/segm_lib/nn/modules/unittest.py deleted file mode 100644 index 0675c022e4ba85d38d1f813490f6740150909524..0000000000000000000000000000000000000000 --- a/spaces/jgurzoni/image_background_swapper/models/ade20k/segm_lib/nn/modules/unittest.py +++ /dev/null @@ -1,29 +0,0 @@ -# -*- coding: utf-8 -*- -# File : unittest.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. 
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import unittest - -import numpy as np -from torch.autograd import Variable - - -def as_numpy(v): - if isinstance(v, Variable): - v = v.data - return v.cpu().numpy() - - -class TorchTestCase(unittest.TestCase): - def assertTensorClose(self, a, b, atol=1e-3, rtol=1e-3): - npa, npb = as_numpy(a), as_numpy(b) - self.assertTrue( - np.allclose(npa, npb, atol=atol), - 'Tensor close check failed\n{}\n{}\nadiff={}, rdiff={}'.format(a, b, np.abs(npa - npb).max(), np.abs((npa - npb) / np.fmax(npa, 1e-5)).max()) - ) diff --git a/spaces/jiejiejie0420/bingo/src/components/ui/select.tsx b/spaces/jiejiejie0420/bingo/src/components/ui/select.tsx deleted file mode 100644 index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000 --- a/spaces/jiejiejie0420/bingo/src/components/ui/select.tsx +++ /dev/null @@ -1,123 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SelectPrimitive from '@radix-ui/react-select' - -import { cn } from '@/lib/utils' -import { - IconArrowDown, - IconCheck, - IconChevronUpDown -} from '@/components/ui/icons' - -const Select = SelectPrimitive.Root - -const SelectGroup = SelectPrimitive.Group - -const SelectValue = SelectPrimitive.Value - -const SelectTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = 'popper', ...props }, ref) => ( - - - - {children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator -} diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/feaLib/error.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/feaLib/error.py deleted file mode 100644 index a2c5f9dbb9f9c943bf57d047af0456070d319e5d..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/feaLib/error.py +++ /dev/null @@ -1,22 +0,0 @@ -class FeatureLibError(Exception): - def __init__(self, message, location): - Exception.__init__(self, message) - self.location = location - - def __str__(self): - message = Exception.__str__(self) - if self.location: - return f"{self.location}: {message}" - else: - return message - - -class IncludedFeaNotFound(FeatureLibError): - def __str__(self): - assert self.location is not None - - message = ( - "The following feature file should be included but cannot be found: " - f"{Exception.__str__(self)}" - ) - return 
f"{self.location}: {message}" diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/varLib/mvar.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/varLib/mvar.py deleted file mode 100644 index 653aeb45e0ff18e06c2dd04ad58085d77a73c1b5..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/varLib/mvar.py +++ /dev/null @@ -1,40 +0,0 @@ -MVAR_ENTRIES = { - "hasc": ("OS/2", "sTypoAscender"), # horizontal ascender - "hdsc": ("OS/2", "sTypoDescender"), # horizontal descender - "hlgp": ("OS/2", "sTypoLineGap"), # horizontal line gap - "hcla": ("OS/2", "usWinAscent"), # horizontal clipping ascent - "hcld": ("OS/2", "usWinDescent"), # horizontal clipping descent - "vasc": ("vhea", "ascent"), # vertical ascender - "vdsc": ("vhea", "descent"), # vertical descender - "vlgp": ("vhea", "lineGap"), # vertical line gap - "hcrs": ("hhea", "caretSlopeRise"), # horizontal caret rise - "hcrn": ("hhea", "caretSlopeRun"), # horizontal caret run - "hcof": ("hhea", "caretOffset"), # horizontal caret offset - "vcrs": ("vhea", "caretSlopeRise"), # vertical caret rise - "vcrn": ("vhea", "caretSlopeRun"), # vertical caret run - "vcof": ("vhea", "caretOffset"), # vertical caret offset - "xhgt": ("OS/2", "sxHeight"), # x height - "cpht": ("OS/2", "sCapHeight"), # cap height - "sbxs": ("OS/2", "ySubscriptXSize"), # subscript em x size - "sbys": ("OS/2", "ySubscriptYSize"), # subscript em y size - "sbxo": ("OS/2", "ySubscriptXOffset"), # subscript em x offset - "sbyo": ("OS/2", "ySubscriptYOffset"), # subscript em y offset - "spxs": ("OS/2", "ySuperscriptXSize"), # superscript em x size - "spys": ("OS/2", "ySuperscriptYSize"), # superscript em y size - "spxo": ("OS/2", "ySuperscriptXOffset"), # superscript em x offset - "spyo": ("OS/2", "ySuperscriptYOffset"), # superscript em y offset - "strs": ("OS/2", "yStrikeoutSize"), # strikeout size - "stro": ("OS/2", "yStrikeoutPosition"), # strikeout offset - "unds": ("post", "underlineThickness"), # underline size - "undo": ("post", "underlinePosition"), # underline offset - #'gsp0': ('gasp', 'gaspRange[0].rangeMaxPPEM'), # gaspRange[0] - #'gsp1': ('gasp', 'gaspRange[1].rangeMaxPPEM'), # gaspRange[1] - #'gsp2': ('gasp', 'gaspRange[2].rangeMaxPPEM'), # gaspRange[2] - #'gsp3': ('gasp', 'gaspRange[3].rangeMaxPPEM'), # gaspRange[3] - #'gsp4': ('gasp', 'gaspRange[4].rangeMaxPPEM'), # gaspRange[4] - #'gsp5': ('gasp', 'gaspRange[5].rangeMaxPPEM'), # gaspRange[5] - #'gsp6': ('gasp', 'gaspRange[6].rangeMaxPPEM'), # gaspRange[6] - #'gsp7': ('gasp', 'gaspRange[7].rangeMaxPPEM'), # gaspRange[7] - #'gsp8': ('gasp', 'gaspRange[8].rangeMaxPPEM'), # gaspRange[8] - #'gsp9': ('gasp', 'gaspRange[9].rangeMaxPPEM'), # gaspRange[9] -} diff --git a/spaces/johnyang/ChatPaper111/backend.py b/spaces/johnyang/ChatPaper111/backend.py deleted file mode 100644 index afe1242df3a833bd4d6488aa2e69a469148c97f1..0000000000000000000000000000000000000000 --- a/spaces/johnyang/ChatPaper111/backend.py +++ /dev/null @@ -1,81 +0,0 @@ -from flask import jsonify, Flask, request -from embedding_model import HuggingfaceSentenceTransformerModel -from similarity_metric import CosineSimilarity -from pdf_parser import GrobidSciPDFPaser -from chatbot import OpenAIChatbot -from chat_pdf import ChatPDF -from config import DEFAULT_ENGINE, MAX_TOKEN_MODEL_MAP, DEFAULT_TEMPERATURE, DEFAULT_TOP_P, DEFAULT_PRESENCE_PENALTY, DEFAULT_FREQUENCY_PENALTY, DEFAULT_REPLY_COUNT -app = Flask(__name__) 
-chatpdf_pool = {} - -embedding_model = HuggingfaceSentenceTransformerModel() -simi_metric = CosineSimilarity() - - -@app.route("/query/", methods=['POST', 'GET']) -def query(): - api_key = request.headers.get('Api-Key') - pdf_link = request.json['pdf_link'] - user_stamp = request.json['user_stamp'] - user_query = request.json['user_query'] - print( - "api_key", api_key, - "pdf_link", pdf_link, - "user_stamp", user_stamp, - "user_query", user_query - ) - - chat_pdf = None - if user_stamp not in chatpdf_pool: - print(f"User {user_stamp} not in pool, creating new chatpdf") - # Initialize the ChatPDF - bot = OpenAIChatbot( - api_key=api_key, - engine=DEFAULT_ENGINE, - proxy=None, - max_tokens=4000, - temperature=DEFAULT_TEMPERATURE, - top_p=DEFAULT_TOP_P, - presence_penalty=DEFAULT_PRESENCE_PENALTY, - frequency_penalty=DEFAULT_FREQUENCY_PENALTY, - reply_count=DEFAULT_REPLY_COUNT - ) - - pdf = GrobidSciPDFPaser( - pdf_link=pdf_link - ) - chat_pdf = ChatPDF( - pdf=pdf, - bot=bot, - embedding_model=embedding_model, - similarity_metric=simi_metric, - user_stamp=user_stamp - ) - chatpdf_pool[user_stamp] = chat_pdf - else: - print("user_stamp", user_stamp, "already exists") - chat_pdf = chatpdf_pool[user_stamp] - - try: - response = chat_pdf.chat(user_query) - code = 200 - json_dict = { - "code": code, - "response": response - } - except Exception as e: - code = 500 - json_dict = { - "code": code, - "response": str(e) - } - return jsonify(json_dict) - - -# @app.route("/", methods=['GET']) -# def index(): -# return "Hello World!" - - -if __name__ == '__main__': - app.run(host='0.0.0.0', port=5000, debug=False) diff --git a/spaces/jone/Music_Source_Separation/scripts/3_create_evaluation_audios/vctk-musdb18/create_evaluation_audios.sh b/spaces/jone/Music_Source_Separation/scripts/3_create_evaluation_audios/vctk-musdb18/create_evaluation_audios.sh deleted file mode 100644 index b12a57c6e2ddafe7e9db2d9240b58d00898b2c8a..0000000000000000000000000000000000000000 --- a/spaces/jone/Music_Source_Separation/scripts/3_create_evaluation_audios/vctk-musdb18/create_evaluation_audios.sh +++ /dev/null @@ -1,19 +0,0 @@ -#!/bin/bash -VCTK_DATASET_DIR=${1:-"./datasets/vctk"} -MUSDB18_DATASET_DIR=${2:-"./datasets/musdb18"} -WORKSPACE=${3:-"./workspaces/bytesep"} - -SAMPLE_RATE=44100 -CHANNELS=2 -EVALUATION_SEGMENTS_NUM=100 - -EVLUATION_AUDIOS_DIR="${WORKSPACE}/evaluation_audios/vctk-musdb18" - -python3 bytesep/dataset_creation/create_evaluation_audios/vctk-musdb18.py \ - --vctk_dataset_dir=$VCTK_DATASET_DIR \ - --musdb18_dataset_dir=$MUSDB18_DATASET_DIR \ - --evaluation_audios_dir=$EVLUATION_AUDIOS_DIR \ - --sample_rate=$SAMPLE_RATE \ - --channels=$CHANNELS \ - --evaluation_segments_num=$EVALUATION_SEGMENTS_NUM - \ No newline at end of file diff --git a/spaces/juancopi81/youtube-music-transcribe/t5x/contrib/moe/trainer_test.py b/spaces/juancopi81/youtube-music-transcribe/t5x/contrib/moe/trainer_test.py deleted file mode 100644 index f7dcc94c9305ecc10c9eaac1150b49effea055b6..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/youtube-music-transcribe/t5x/contrib/moe/trainer_test.py +++ /dev/null @@ -1,166 +0,0 @@ -# Copyright 2022 The T5X Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tests for trainer.""" - -import contextlib - -from absl.testing import absltest -from flax import optim -import jax -import numpy as np -from t5x import metrics as metrics_lib -from t5x import models as models_lib -from t5x import train_state as train_state_lib -from t5x.contrib.moe import partitioning -from t5x.contrib.moe import trainer as trainer_lib -import tensorflow as tf - -mock = absltest.mock -jax.config.parse_flags_with_absl() - - -# Make `log_elapsed_time` a no-op to simplify mocking of `time.time()`. -@contextlib.contextmanager -def fake_log_elapsed_time(_): - yield - - -jax._src.dispatch.log_elapsed_time = fake_log_elapsed_time - - -def fake_accum_grads(model, optimizer, batch, rng, num_microbatches, - data_partition_spec): - del model, num_microbatches, rng, data_partition_spec - # Add `i` to each optimzer value. - i = batch['i'].sum() - grad_accum = jax.tree_map(lambda x: i, optimizer) - # Add j to each metric. - j = batch['j'].sum() - metrics = { - 'loss': metrics_lib.Sum.from_model_output(j), - 'accuracy': metrics_lib.Sum.from_model_output(j) - } - return grad_accum, metrics, None - - -def fake_apply_grads(optimizer, - grad_accum, - metrics, - learning_rate, - weight_metrics_computer, - other_state_variables=None): - del weight_metrics_computer - del other_state_variables - metrics['learning_rate'] = metrics_lib.Sum.from_model_output(learning_rate) - optimizer = jax.tree_multimap(lambda x, g: x + g, optimizer, grad_accum) - return optimizer, metrics - - -class MoeTrainerTest(absltest.TestCase): - - def setUp(self): - super().setUp() - self.init_optimizer = optim.Optimizer( - optim.GradientDescent(), - state=optim.OptimizerState( - step=0, param_states={ - 'expert_bias': 0, - 'kernel': 0 - }), - target={ - 'expert_bias': np.zeros(4), - 'kernel': np.zeros((2, 4)) - }) - self.init_train_state = train_state_lib.FlaxOptimTrainState( - self.init_optimizer) - train_state_axes = jax.tree_map(lambda x: None, self.init_train_state) - model_dir = self.create_tempdir().full_path - - mapfn = lambda i: {'i': [tf.cast(i, tf.int32)], 'j': [tf.cast(1, tf.int32)]} - self.dataset = tf.data.Dataset.range(6).map(mapfn).batch( - 2, drop_remainder=True) - - num_experts = 10 - self.test_trainer = trainer_lib.MoeTrainer( - model=mock.create_autospec(models_lib.BaseModel, instance=True), - train_state=self.init_train_state, - partitioner=partitioning.MoePjitPartitioner( - num_experts=num_experts, num_partitions=1), - eval_names=['task1', 'task2'], - summary_dir=model_dir, - train_state_axes=train_state_axes, - rng=np.ones(2, np.uint32), - learning_rate_fn=lambda step: 2 * step, - num_microbatches=None, - num_experts=num_experts) - - @mock.patch('time.time') - @mock.patch('t5x.trainer.accumulate_grads_microbatched', fake_accum_grads) - @mock.patch('t5x.trainer.apply_grads', fake_apply_grads) - @mock.patch('absl.logging.log', lambda *_: None) # avoids time.time() calls - def _test_train(self, precompile, mock_time=None): - trainer = self.test_trainer - initial_rng = trainer._base_rng - - if precompile: - mock_time.side_effect = [0, 1] - 
trainer.compile_train(next(self.dataset.as_numpy_iterator())) - trainer._compiled_train_step = mock.Mock( - side_effect=trainer._compiled_train_step) - - trainer._partitioned_train_step = mock.Mock( - side_effect=trainer._partitioned_train_step) - - # train start, logging, train end, logging - mock_time.side_effect = [1, 5] - num_steps = 2 - trainer.train(self.dataset.as_numpy_iterator(), num_steps) - - # Base rng must remain the same. - np.testing.assert_array_equal(trainer._base_rng, initial_rng) - - expected_optimizer = optim.Optimizer( - self.init_optimizer.optimizer_def, - state=optim.OptimizerState( - step=[6], - param_states={ - 'expert_bias': 60, # 10 * (0+1+2+3) = 60 - 'kernel': 6 # 0+1+2+3 = 6 - }), - target={ - 'expert_bias': 60 * np.ones(4), - 'kernel': 6 * np.ones((2, 4)) - }) - expected_train_state = train_state_lib.FlaxOptimTrainState( - expected_optimizer) - jax.tree_multimap(np.testing.assert_allclose, trainer.train_state, - expected_train_state) - - if precompile: - self.assertEqual(trainer._compiled_train_step.call_count, num_steps) - trainer._partitioned_train_step.assert_not_called() - else: - self.assertIsNone(trainer._compiled_train_step) - self.assertEqual(trainer._partitioned_train_step.call_count, num_steps) - - def test_train_noprecompile(self): - self._test_train(False) - - def test_train_precompile(self): - self._test_train(True) - - -if __name__ == '__main__': - absltest.main() diff --git a/spaces/junkmind/SOTER/training/datasets/validation_set.py b/spaces/junkmind/SOTER/training/datasets/validation_set.py deleted file mode 100644 index fa28f0889acb86d37fc4f0b8d9a9373ab0cc6ba4..0000000000000000000000000000000000000000 --- a/spaces/junkmind/SOTER/training/datasets/validation_set.py +++ /dev/null @@ -1,60 +0,0 @@ - - -PUBLIC_SET = {'tjuihawuqm', 'prwsfljdjo', 'scrbqgpvzz', 'ziipxxchai', 'uubgqnvfdl', 'wclvkepakb', 'xjvxtuakyd', - 'qlvsqdroqo', 'bcbqxhziqz', 'yzuestxcbq', 'hxwtsaydal', 'kqlvggiqee', 'vtunvalyji', 'mohiqoogpb', - 'siebfpwuhu', 'cekwtyxdoo', 'hszwwswewp', 'orekjthsef', 'huvlwkxoxm', 'fmhiujydwo', 'lhvjzhjxdp', - 'ibxfxggtqh', 'bofrwgeyjo', 'rmufsuogzn', 'zbgssotnjm', 'dpevefkefv', 'sufvvwmbha', 'ncoeewrdlo', - 'qhsehzgxqj', 'yxadevzohx', 'aomqqjipcp', 'pcyswtgick', 'wfzjxzhdkj', 'rcjfxxhcal', 'lnjkpdviqb', - 'xmkwsnuzyq', 'ouaowjmigq', 'bkuzquigyt', 'vwxednhlwz', 'mszblrdprw', 'blnmxntbey', 'gccnvdoknm', - 'mkzaekkvej', 'hclsparpth', 'eryjktdexi', 'hfsvqabzfq', 'acazlolrpz', 'yoyhmxtrys', 'rerpivllud', - 'elackxuccp', 'zgbhzkditd', 'vjljdfopjg', 'famlupsgqm', 'nymodlmxni', 'qcbkztamqc', 'qclpbcbgeq', - 'lpkgabskbw', 'mnowxangqx', 'czfqlbcfpa', 'qyyhuvqmyf', 'toinozytsp', 'ztyvglkcsf', 'nplviymzlg', - 'opvqdabdap', 'uxuvkrjhws', 'mxahsihabr', 'cqxxumarvp', 'ptbfnkajyi', 'njzshtfmcw', 'dcqodpzomd', - 'ajiyrjfyzp', 'ywauoonmlr', 'gochxzemmq', 'lpgxwdgnio', 'hnfwagcxdf', 'gfcycflhbo', 'gunamloolc', - 'yhjlnisfel', 'srfefmyjvt', 'evysmtpnrf', 'aktnlyqpah', 'gpsxfxrjrr', 'zfobicuigx', 'mnzabbkpmt', - 'rfjuhbnlro', 'zuwwbbusgl', 'csnkohqxdv', 'bzvzpwrabw', 'yietrwuncf', 'wynotylpnm', 'ekboxwrwuv', - 'rcecrgeotc', 'rklawjhbpv', 'ilqwcbprqa', 'jsysgmycsx', 'sqixhnilfm', 'wnlubukrki', 'nikynwcvuh', - 'sjkfxrlxxs', 'btdxnajogv', 'wjhpisoeaj', 'dyjklprkoc', 'qlqhjcshpk', 'jyfvaequfg', 'dozjwhnedd', - 'owaogcehvc', 'oyqgwjdwaj', 'vvfszaosiv', 'kmcdjxmnoa', 'jiswxuqzyz', 'ddtbarpcgo', 'wqysrieiqu', - 'xcruhaccxc', 'honxqdilvv', 'nxgzmgzkfv', 'cxsvvnxpyz', 'demuhxssgl', 'hzoiotcykp', 'fwykevubzy', - 'tejfudfgpq', 'kvmpmhdxly', 'oojxonbgow', 'vurjckblge', 
'oysopgovhu', 'khpipxnsvx', 'pqthmvwonf', - 'fddmkqjwsh', 'pcoxcmtroa', 'cnxccbjlct', 'ggzjfrirjh', 'jquevmhdvc', 'ecumyiowzs', 'esmqxszybs', - 'mllzkpgatp', 'ryxaqpfubf', 'hbufmvbium', 'vdtsbqidjb', 'sjwywglgym', 'qxyrtwozyw', 'upmgtackuf', - 'ucthmsajay', 'zgjosltkie', 'snlyjbnpgw', 'nswtvttxre', 'iznnzjvaxc', 'jhczqfefgw', 'htzbnroagi', - 'pdswwyyntw', 'uvrzaczrbx', 'vbcgoyxsvn', 'hzssdinxec', 'novarhxpbj', 'vizerpsvbz', 'jawgcggquk', - 'iorbtaarte', 'yarpxfqejd', 'vhbbwdflyh', 'rrrfjhugvb', 'fneqiqpqvs', 'jytrvwlewz', 'bfjsthfhbd', - 'rxdoimqble', 'ekelfsnqof', 'uqvxjfpwdo', 'cjkctqqakb', 'tynfsthodx', 'yllztsrwjw', 'bktkwbcawi', - 'wcqvzujamg', 'bcvheslzrq', 'aqrsylrzgi', 'sktpeppbkc', 'mkmgcxaztt', 'etdliwticv', 'hqzwudvhih', - 'swsaoktwgi', 'temjefwaas', 'papagllumt', 'xrtvqhdibb', 'oelqpetgwj', 'ggdpclfcgk', 'imdmhwkkni', - 'lebzjtusnr', 'xhtppuyqdr', 'nxzgekegsp', 'waucvvmtkq', 'rnfcjxynfa', 'adohdulfwb', 'tjywwgftmv', - 'fjrueenjyp', 'oaguiggjyv', 'ytopzxrswu', 'yxvmusxvcz', 'rukyxomwcx', 'qdqdsaiitt', 'mxlipjhmqk', - 'voawxrmqyl', 'kezwvsxxzj', 'oocincvedt', 'qooxnxqqjb', 'mwwploizlj', 'yaxgpxhavq', 'uhakqelqri', - 'bvpeerislp', 'bkcyglmfci', 'jyoxdvxpza', 'gkutjglghz', 'knxltsvzyu', 'ybbrkacebd', 'apvzjkvnwn', - 'ahjnxtiamx', 'hsbljbsgxr', 'fnxgqcvlsd', 'xphdfgmfmz', 'scbdenmaed', 'ywxpquomgt', 'yljecirelf', - 'wcvsqnplsk', 'vmxfwxgdei', 'icbsahlivv', 'yhylappzid', 'irqzdokcws', 'petmyhjclt', 'rmlzgerevr', - 'qarqtkvgby', 'nkhzxomani', 'viteugozpv', 'qhkzlnzruj', 'eisofhptvk', 'gqnaxievjx', 'heiyoojifp', - 'zcxcmneefk', 'wvgviwnwob', 'gcdtglsoqj', 'yqhouqakbx', 'fopjiyxiqd', 'hierggamuo', 'ypbtpunjvm', - 'sjinmmbipg', 'kmqkiihrmj', 'wmoqzxddkb', 'lnhkjhyhvw', 'wixbuuzygv', 'fsdrwikhge', 'sfsayjgzrh', - 'pqdeutauqc', 'frqfsucgao', 'pdufsewrec', 'bfdopzvxbi', 'shnsajrsow', 'rvvpazsffd', 'pxcfrszlgi', - 'itfsvvmslp', 'ayipraspbn', 'prhmixykhr', 'doniqevxeg', 'dvtpwatuja', 'jiavqbrkyk', 'ipkpxvwroe', - 'syxobtuucp', 'syuxttuyhm', 'nwvsbmyndn', 'eqslzbqfea', 'ytddugrwph', 'vokrpfjpeb', 'bdshuoldwx', - 'fmvvmcbdrw', 'bnuwxhfahw', 'gbnzicjyhz', 'txnmkabufs', 'gfdjzwnpyp', 'hweshqpfwe', 'dxgnpnowgk', - 'xugmhbetrw', 'rktrpsdlci', 'nthpnwylxo', 'ihglzxzroo', 'ocgdbrgmtq', 'ruhtnngrqv', 'xljemofssi', - 'zxacihctqp', 'ghnpsltzyn', 'lbigytrrtr', 'ndikguxzek', 'mdfndlljvt', 'lyoslorecs', 'oefukgnvel', - 'zmxeiipnqb', 'cosghhimnd', 'alrtntfxtd', 'eywdmustbb', 'ooafcxxfrs', 'fqgypsunzr', 'hevcclcklc', - 'uhrqlmlclw', 'ipvwtgdlre', 'wcssbghcpc', 'didzujjhtg', 'fjxovgmwnm', 'dmmvuaikkv', 'hitfycdavv', - 'zyufpqvpyu', 'coujjnypba', 'temeqbmzxu', 'apedduehoy', 'iksxzpqxzi', 'kwfdyqofzw', 'aassnaulhq', - 'eyguqfmgzh', 'yiykshcbaz', 'sngjsueuhs', 'okgelildpc', 'ztyuiqrhdk', 'tvhjcfnqtg', 'gfgcwxkbjd', - 'lbfqksftuo', 'kowiwvrjht', 'dkuqbduxev', 'mwnibuujwz', 'sodvtfqbpf', 'hsbwhlolsn', 'qsjiypnjwi', - 'blszgmxkvu', 'ystdtnetgj', 'rfwxcinshk', 'vnlzxqwthl', 'ljouzjaqqe', 'gahgyuwzbu', 'xxzefxwyku', - 'xitgdpzbxv', 'sylnrepacf', 'igpvrfjdzc', 'nxnmkytwze', 'psesikjaxx', 'dvwpvqdflx', 'bjyaxvggle', - 'dpmgoiwhuf', 'wadvzjhwtw', 'kcjvhgvhpt', 'eppyqpgewp', 'tyjpjpglgx', 'cekarydqba', 'dvkdfhrpph', - 'cnpanmywno', 'ljauauuyka', 'hicjuubiau', 'cqhwesrciw', 'dnmowthjcj', 'lujvyveojc', 'wndursivcx', - 'espkiocpxq', 'jsbpkpxwew', 'dsnxgrfdmd', 'hyjqolupxn', 'xdezcezszc', 'axfhbpkdlc', 'qqnlrngaft', - 'coqwgzpbhx', 'ncmpqwmnzb', 'sznkemeqro', 'omphqltjdd', 'uoccaiathd', 'jzmzdispyo', 'pxjkzvqomp', - 'udxqbhgvvx', 'dzkyxbbqkr', 'dtozwcapoa', 'qswlzfgcgj', 'tgawasvbbr', 'lmdyicksrv', 'fzvpbrzssi', - 
'dxfdovivlw', 'zzmgnglanj', 'vssmlqoiti', 'vajkicalux', 'ekvwecwltj', 'ylxwcwhjjd', 'keioymnobc', - 'usqqvxcjmg', 'phjvutxpoi', 'nycmyuzpml', 'bwdmzwhdnw', 'fxuxxtryjn', 'orixbcfvdz', 'hefisnapds', - 'fpevfidstw', 'halvwiltfs', 'dzojiwfvba', 'ojsxxkalat', 'esjdyghhog', 'ptbnewtvon', 'hcanfkwivl', - 'yronlutbgm', 'llplvmcvbl', 'yxirnfyijn', 'nwvloufjty', 'rtpbawlmxr', 'aayfryxljh', 'zfrrixsimm', - 'txmnoyiyte'} diff --git a/spaces/kaicheng/ChatGPT_ad/modules/models/StableLM.py b/spaces/kaicheng/ChatGPT_ad/modules/models/StableLM.py deleted file mode 100644 index f4affc3699e335f1e42bf5fc8c93e92a41d027fe..0000000000000000000000000000000000000000 --- a/spaces/kaicheng/ChatGPT_ad/modules/models/StableLM.py +++ /dev/null @@ -1,93 +0,0 @@ -import torch -from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer -import time -import numpy as np -from torch.nn import functional as F -import os -from .base_model import BaseLLMModel -from threading import Thread - -STABLELM_MODEL = None -STABLELM_TOKENIZER = None - - -class StopOnTokens(StoppingCriteria): - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: - stop_ids = [50278, 50279, 50277, 1, 0] - for stop_id in stop_ids: - if input_ids[0][-1] == stop_id: - return True - return False - - -class StableLM_Client(BaseLLMModel): - def __init__(self, model_name, user_name="") -> None: - super().__init__(model_name=model_name, user=user_name) - global STABLELM_MODEL, STABLELM_TOKENIZER - print(f"Starting to load StableLM to memory") - if model_name == "StableLM": - model_name = "stabilityai/stablelm-tuned-alpha-7b" - else: - model_name = f"models/{model_name}" - if STABLELM_MODEL is None: - STABLELM_MODEL = AutoModelForCausalLM.from_pretrained( - model_name, torch_dtype=torch.float16).cuda() - if STABLELM_TOKENIZER is None: - STABLELM_TOKENIZER = AutoTokenizer.from_pretrained(model_name) - self.generator = pipeline( - 'text-generation', model=STABLELM_MODEL, tokenizer=STABLELM_TOKENIZER, device=0) - print(f"Sucessfully loaded StableLM to the memory") - self.system_prompt = """StableAssistant -- StableAssistant is A helpful and harmless Open Source AI Language Model developed by Stability and CarperAI. -- StableAssistant is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user. -- StableAssistant is more than just an information source, StableAssistant is also able to write poetry, short stories, and make jokes. 
-- StableAssistant will refuse to participate in anything that could harm a human.""" - self.max_generation_token = 1024 - self.top_p = 0.95 - self.temperature = 1.0 - - def _get_stablelm_style_input(self): - history = self.history + [{"role": "assistant", "content": ""}] - print(history) - messages = self.system_prompt + \ - "".join(["".join(["<|USER|>"+history[i]["content"], "<|ASSISTANT|>"+history[i + 1]["content"]]) - for i in range(0, len(history), 2)]) - return messages - - def _generate(self, text, bad_text=None): - stop = StopOnTokens() - result = self.generator(text, max_new_tokens=self.max_generation_token, num_return_sequences=1, num_beams=1, do_sample=True, - temperature=self.temperature, top_p=self.top_p, top_k=1000, stopping_criteria=StoppingCriteriaList([stop])) - return result[0]["generated_text"].replace(text, "") - - def get_answer_at_once(self): - messages = self._get_stablelm_style_input() - return self._generate(messages), len(messages) - - def get_answer_stream_iter(self): - stop = StopOnTokens() - messages = self._get_stablelm_style_input() - - # model_inputs = tok([messages], return_tensors="pt")['input_ids'].cuda()[:, :4096-1024] - model_inputs = STABLELM_TOKENIZER( - [messages], return_tensors="pt").to("cuda") - streamer = TextIteratorStreamer( - STABLELM_TOKENIZER, timeout=10., skip_prompt=True, skip_special_tokens=True) - generate_kwargs = dict( - model_inputs, - streamer=streamer, - max_new_tokens=self.max_generation_token, - do_sample=True, - top_p=self.top_p, - top_k=1000, - temperature=self.temperature, - num_beams=1, - stopping_criteria=StoppingCriteriaList([stop]) - ) - t = Thread(target=STABLELM_MODEL.generate, kwargs=generate_kwargs) - t.start() - - partial_text = "" - for new_text in streamer: - partial_text += new_text - yield partial_text diff --git a/spaces/kamayali/anything-v4.0/README.md b/spaces/kamayali/anything-v4.0/README.md deleted file mode 100644 index 9a8f1761dce1e4c6b1cff0767e763ba5afba5f54..0000000000000000000000000000000000000000 --- a/spaces/kamayali/anything-v4.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anything V4.0 -emoji: ⚡ -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.16.1b1 -app_file: app.py -pinned: false -duplicated_from: akhaliq/anything-v4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/karay/diar_speech/backend.py b/spaces/karay/diar_speech/backend.py deleted file mode 100644 index 321b3915632108dd9600298855ed428f89676c4a..0000000000000000000000000000000000000000 --- a/spaces/karay/diar_speech/backend.py +++ /dev/null @@ -1,201 +0,0 @@ -import os -# os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID' -# os.environ['CUDA_VISIBLE_DEVICES']='1' -# os.environ['CUDA_LAUNCH_BLOCKING'] = "1" - - -import argparse -import logging -import tempfile -from os.path import join as opj -import re - -import torch -import torchaudio -import numpy as np - -import whisper - -from pyannote.audio import Pipeline - -from denoiser.audio import Audioset -from denoiser import distrib, pretrained -from denoiser.audio import Audioset, find_audio_files -from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip - -from moviepy.editor import AudioFileClip, concatenate_audioclips - - -logger = logging.getLogger(__name__) - -def add_flags(parser): - """ - Add the flags for the argument parser that are related to model loading and evaluation" - """ - device = "cuda" if torch.cuda.is_available() else "cpu" - # device = "cpu" - 
pretrained.add_model_flags(parser) - parser.add_argument('--device', default=device) - parser.add_argument('--num_workers', type=int, default=0) - parser.add_argument('--streaming', action="store_true", - help="true streaming evaluation for Demucs") - - -parser = argparse.ArgumentParser( - 'denoiser.enhance', - description="Speech enhancement using Demucs - Generate enhanced files") -add_flags(parser) -parser.add_argument("--batch_size", default=1, type=int, help="batch size") -parser.add_argument('-v', '--verbose', action='store_const', const=logging.DEBUG, - default=logging.INFO, help="more loggging") - -group = parser.add_mutually_exclusive_group() -group.add_argument("--noisy_dir", type=str, default=None, - help="directory including noisy wav files") -group.add_argument("--noisy_json", type=str, default=None, - help="json file including noisy wav files") - -args = parser.parse_args() - -denoise_model = pretrained.get_model(args).to(args.device) -denoise_model.eval() -whisper_model = whisper.load_model("large").to(args.device) -whisper_model.eval() - -def split_audio(tmpdirname, video, chunk_size=120): - """ - Split audio into chunks of chunk_size - """ - path = opj(tmpdirname, 'noisy_chunks') - os.makedirs(path) - audio = AudioFileClip(video.name) - with tempfile.NamedTemporaryFile(suffix=".wav", delete=True) as audio_fp: - audio.write_audiofile(audio_fp.name) - - # round duration to the next whole integer - for i, chunk in enumerate(np.arange(0, audio.duration, chunk_size)): - ffmpeg_extract_subclip(audio_fp.name, chunk, min(chunk + chunk_size, audio.duration), - targetname=opj(path, f'{i:09}.wav')) - return audio.duration - - -def get_speakers(tmpdirname, use_auth_token=True): - files = find_audio_files(opj(tmpdirname, 'noisy_chunks')) - dset = Audioset(files, with_path=True, - sample_rate=denoise_model.sample_rate, channels=denoise_model.chin, convert=True) - - loader = distrib.loader(dset, batch_size=1) - distrib.barrier() - - print('removing noise...') - enhanced_chunks = [] - with tempfile.TemporaryDirectory() as denoised_tmpdirname: - for data in loader: - noisy_signals, filenames = data - noisy_signals = noisy_signals.to(args.device) - - with torch.no_grad(): - wav = denoise_model(noisy_signals).squeeze(0) - wav = wav / max(wav.abs().max().item(), 1) - - name = opj(denoised_tmpdirname, filenames[0].split('/')[-1]) - torchaudio.save(name, wav.cpu(), denoise_model.sample_rate) - enhanced_chunks.append(name) - - print('reassembling chunks...') - clips = [AudioFileClip(c) for c in sorted(enhanced_chunks)] - final_clip = concatenate_audioclips(clips) - cleaned_path = opj(tmpdirname, 'cleaned.wav') - final_clip.write_audiofile(cleaned_path) - print('identifying speakers...') - - pipeline = Pipeline.from_pretrained('pyannote/speaker-diarization', use_auth_token=use_auth_token) - - return str(pipeline({'uri': '', 'audio': cleaned_path})).split('\n'), cleaned_path - -def get_subtitles(timecodes, clened_audio_path, language='en'): - if(args.device == 'cpu'): - options = whisper.DecodingOptions(language=language, fp16=False) - else: - options = whisper.DecodingOptions(language=language) - - # mels = [] - timeline = {} - prev_speaker = None - prev_start = 0 - for line in timecodes: - start, end = re.findall(r'\d{2}:\d{2}:\d{2}.\d{3}', line) - start = str_to_seconds(start) - end = str_to_seconds(end) - speaker = re.findall(r'\w+$', line)[0] - - # extract a segment of the audio for a speaker - with tempfile.NamedTemporaryFile(suffix=".wav", delete=True) as audio_fp: - 
ffmpeg_extract_subclip(clened_audio_path, start, end, - targetname=audio_fp.name) - - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio_fp.name) - audio = whisper.pad_or_trim(audio) - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(whisper_model.device) - # decode the audio - result = whisper.decode(whisper_model, mel, options) - - if(speaker == prev_speaker): - timeline[prev_start]['text'] += f' <{seconds_to_str(start)}>{result.text}' - timeline[prev_start]['end'] = end - else: - timeline[start] = { 'end': end, - 'speaker': speaker, - 'text': f'{speaker}: {result.text}'} - prev_start = start - - prev_speaker = speaker - - # mels = torch.stack(mels) - # print('mel shape', mels.shape) - # print('speakers length', len(speakers)) - # print('chunks', len(torch.chunk(mels, 1000))) - - # decode the audio - # options = whisper.DecodingOptions(language='en') - # for mel, start in zip(torch.chunk(mels, 1000), timeline.keys()): - # result = whisper.decode(whisper_model, mel, options) - # timeline[start]['text'] = [r.text for r in result].join(' ') - - - # subtitles = [num for sublist in results for num in sublist] - - return timeline - -def str_to_seconds(time_str): - h, m, s = time_str.split(':') - return int(h) * 3600 + int(m) * 60 + float(s) - -def seconds_to_str(seconds): - m, s = divmod(seconds, 60) - h, m = divmod(m, 60) - # with milliseconds - return f'{int(h):02}:{int(m):02}:{s:06.3f}' - - -def timeline_to_vtt(timeline): - vtt = 'WEBVTT\n\n' - for start in sorted(timeline.keys()): - end = timeline[start]['end'] - text = timeline[start]['text'] - vtt += f'{seconds_to_str(start)} --> {seconds_to_str(end)}\n' - vtt += text+'\n\n' - return vtt - -def calc_speaker_percentage(timeline, duration): - percentages = [] - end = 0 - for start in sorted(timeline.keys()): - if(start > end): - percentages.append(['Background', 100*(start-end)/duration]) - end = timeline[start]['end'] - speaker = timeline[start]['speaker'] - percentages.append([speaker, 100*(end-start)/duration]) - return percentages diff --git a/spaces/kcagle/AutoGPT/autogpt/logs.py b/spaces/kcagle/AutoGPT/autogpt/logs.py deleted file mode 100644 index 35037404a98f7be9b7d577b625cc190ca27f4566..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/autogpt/logs.py +++ /dev/null @@ -1,332 +0,0 @@ -"""Logging module for Auto-GPT.""" -import json -import logging -import os -import random -import re -import time -import traceback -from logging import LogRecord - -from colorama import Fore, Style - -from autogpt.config import Config, Singleton -from autogpt.speech import say_text - -CFG = Config() - - -class Logger(metaclass=Singleton): - """ - Logger that handle titles in different colors. 
- Outputs logs in console, activity.log, and errors.log - For console handler: simulates typing - """ - - def __init__(self): - # create log directory if it doesn't exist - this_files_dir_path = os.path.dirname(__file__) - log_dir = os.path.join(this_files_dir_path, "../logs") - if not os.path.exists(log_dir): - os.makedirs(log_dir) - - log_file = "activity.log" - error_file = "error.log" - - console_formatter = AutoGptFormatter("%(title_color)s %(message)s") - - # Create a handler for console which simulate typing - self.typing_console_handler = TypingConsoleHandler() - self.typing_console_handler.setLevel(logging.INFO) - self.typing_console_handler.setFormatter(console_formatter) - - # Create a handler for console without typing simulation - self.console_handler = ConsoleHandler() - self.console_handler.setLevel(logging.DEBUG) - self.console_handler.setFormatter(console_formatter) - - # Info handler in activity.log - self.file_handler = logging.FileHandler( - os.path.join(log_dir, log_file), "a", "utf-8" - ) - self.file_handler.setLevel(logging.DEBUG) - info_formatter = AutoGptFormatter( - "%(asctime)s %(levelname)s %(title)s %(message_no_color)s" - ) - self.file_handler.setFormatter(info_formatter) - - # Error handler error.log - error_handler = logging.FileHandler( - os.path.join(log_dir, error_file), "a", "utf-8" - ) - error_handler.setLevel(logging.ERROR) - error_formatter = AutoGptFormatter( - "%(asctime)s %(levelname)s %(module)s:%(funcName)s:%(lineno)d %(title)s" - " %(message_no_color)s" - ) - error_handler.setFormatter(error_formatter) - - self.typing_logger = logging.getLogger("TYPER") - self.typing_logger.addHandler(self.typing_console_handler) - self.typing_logger.addHandler(self.file_handler) - self.typing_logger.addHandler(error_handler) - self.typing_logger.setLevel(logging.DEBUG) - - self.logger = logging.getLogger("LOGGER") - self.logger.addHandler(self.console_handler) - self.logger.addHandler(self.file_handler) - self.logger.addHandler(error_handler) - self.logger.setLevel(logging.DEBUG) - - def typewriter_log( - self, title="", title_color="", content="", speak_text=False, level=logging.INFO - ): - if speak_text and CFG.speak_mode: - say_text(f"{title}. {content}") - - if content: - if isinstance(content, list): - content = " ".join(content) - else: - content = "" - - self.typing_logger.log( - level, content, extra={"title": title, "color": title_color} - ) - - def debug( - self, - message, - title="", - title_color="", - ): - self._log(title, title_color, message, logging.DEBUG) - - def warn( - self, - message, - title="", - title_color="", - ): - self._log(title, title_color, message, logging.WARN) - - def error(self, title, message=""): - self._log(title, Fore.RED, message, logging.ERROR) - - def _log(self, title="", title_color="", message="", level=logging.INFO): - if message: - if isinstance(message, list): - message = " ".join(message) - self.logger.log(level, message, extra={"title": title, "color": title_color}) - - def set_level(self, level): - self.logger.setLevel(level) - self.typing_logger.setLevel(level) - - def double_check(self, additionalText=None): - if not additionalText: - additionalText = ( - "Please ensure you've setup and configured everything" - " correctly. Read https://github.com/Torantulino/Auto-GPT#readme to " - "double check. You can also create a github issue or join the discord" - " and ask there!" 
- ) - - self.typewriter_log("DOUBLE CHECK CONFIGURATION", Fore.YELLOW, additionalText) - - -""" -Output stream to console using simulated typing -""" - - -class TypingConsoleHandler(logging.StreamHandler): - def emit(self, record): - min_typing_speed = 0.05 - max_typing_speed = 0.01 - - msg = self.format(record) - try: - words = msg.split() - for i, word in enumerate(words): - print(word, end="", flush=True) - if i < len(words) - 1: - print(" ", end="", flush=True) - typing_speed = random.uniform(min_typing_speed, max_typing_speed) - time.sleep(typing_speed) - # type faster after each word - min_typing_speed = min_typing_speed * 0.95 - max_typing_speed = max_typing_speed * 0.95 - print() - except Exception: - self.handleError(record) - - -class ConsoleHandler(logging.StreamHandler): - def emit(self, record) -> None: - msg = self.format(record) - try: - print(msg) - except Exception: - self.handleError(record) - - -class AutoGptFormatter(logging.Formatter): - """ - Allows to handle custom placeholders 'title_color' and 'message_no_color'. - To use this formatter, make sure to pass 'color', 'title' as log extras. - """ - - def format(self, record: LogRecord) -> str: - if hasattr(record, "color"): - record.title_color = ( - getattr(record, "color") - + getattr(record, "title") - + " " - + Style.RESET_ALL - ) - else: - record.title_color = getattr(record, "title") - if hasattr(record, "msg"): - record.message_no_color = remove_color_codes(getattr(record, "msg")) - else: - record.message_no_color = "" - return super().format(record) - - -def remove_color_codes(s: str) -> str: - ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])") - return ansi_escape.sub("", s) - - -logger = Logger() - - -def print_assistant_thoughts(ai_name, assistant_reply): - """Prints the assistant's thoughts to the console""" - from autogpt.json_utils.json_fix_llm import ( - attempt_to_fix_json_by_finding_outermost_brackets, - fix_and_parse_json, - ) - - try: - try: - # Parse and print Assistant response - assistant_reply_json = fix_and_parse_json(assistant_reply) - except json.JSONDecodeError: - logger.error("Error: Invalid JSON in assistant thoughts\n", assistant_reply) - assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply - ) - if isinstance(assistant_reply_json, str): - assistant_reply_json = fix_and_parse_json(assistant_reply_json) - - # Check if assistant_reply_json is a string and attempt to parse - # it into a JSON object - if isinstance(assistant_reply_json, str): - try: - assistant_reply_json = json.loads(assistant_reply_json) - except json.JSONDecodeError: - logger.error("Error: Invalid JSON\n", assistant_reply) - assistant_reply_json = ( - attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply_json - ) - ) - - assistant_thoughts_reasoning = None - assistant_thoughts_plan = None - assistant_thoughts_speak = None - assistant_thoughts_criticism = None - if not isinstance(assistant_reply_json, dict): - assistant_reply_json = {} - assistant_thoughts = assistant_reply_json.get("thoughts", {}) - assistant_thoughts_text = assistant_thoughts.get("text") - - if assistant_thoughts: - assistant_thoughts_reasoning = assistant_thoughts.get("reasoning") - assistant_thoughts_plan = assistant_thoughts.get("plan") - assistant_thoughts_criticism = assistant_thoughts.get("criticism") - assistant_thoughts_speak = assistant_thoughts.get("speak") - - logger.typewriter_log( - f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}" - ) - 
logger.typewriter_log( - "REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}" - ) - - if assistant_thoughts_plan: - logger.typewriter_log("PLAN:", Fore.YELLOW, "") - # If it's a list, join it into a string - if isinstance(assistant_thoughts_plan, list): - assistant_thoughts_plan = "\n".join(assistant_thoughts_plan) - elif isinstance(assistant_thoughts_plan, dict): - assistant_thoughts_plan = str(assistant_thoughts_plan) - - # Split the input_string using the newline character and dashes - lines = assistant_thoughts_plan.split("\n") - for line in lines: - line = line.lstrip("- ") - logger.typewriter_log("- ", Fore.GREEN, line.strip()) - - logger.typewriter_log( - "CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}" - ) - # Speak the assistant's thoughts - if CFG.speak_mode and assistant_thoughts_speak: - say_text(assistant_thoughts_speak) - else: - logger.typewriter_log("SPEAK:", Fore.YELLOW, f"{assistant_thoughts_speak}") - - return assistant_reply_json - except json.decoder.JSONDecodeError: - logger.error("Error: Invalid JSON\n", assistant_reply) - if CFG.speak_mode: - say_text( - "I have received an invalid JSON response from the OpenAI API." - " I cannot ignore this response." - ) - - # All other errors, return "Error: + error message" - except Exception: - call_stack = traceback.format_exc() - logger.error("Error: \n", call_stack) - - -def print_assistant_thoughts( - ai_name: object, assistant_reply_json_valid: object -) -> None: - assistant_thoughts_reasoning = None - assistant_thoughts_plan = None - assistant_thoughts_speak = None - assistant_thoughts_criticism = None - - assistant_thoughts = assistant_reply_json_valid.get("thoughts", {}) - assistant_thoughts_text = assistant_thoughts.get("text") - if assistant_thoughts: - assistant_thoughts_reasoning = assistant_thoughts.get("reasoning") - assistant_thoughts_plan = assistant_thoughts.get("plan") - assistant_thoughts_criticism = assistant_thoughts.get("criticism") - assistant_thoughts_speak = assistant_thoughts.get("speak") - logger.typewriter_log( - f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}" - ) - logger.typewriter_log("REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}") - if assistant_thoughts_plan: - logger.typewriter_log("PLAN:", Fore.YELLOW, "") - # If it's a list, join it into a string - if isinstance(assistant_thoughts_plan, list): - assistant_thoughts_plan = "\n".join(assistant_thoughts_plan) - elif isinstance(assistant_thoughts_plan, dict): - assistant_thoughts_plan = str(assistant_thoughts_plan) - - # Split the input_string using the newline character and dashes - lines = assistant_thoughts_plan.split("\n") - for line in lines: - line = line.lstrip("- ") - logger.typewriter_log("- ", Fore.GREEN, line.strip()) - logger.typewriter_log("CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}") - # Speak the assistant's thoughts - if CFG.speak_mode and assistant_thoughts_speak: - say_text(assistant_thoughts_speak) diff --git a/spaces/keivalya/alternovation/README.md b/spaces/keivalya/alternovation/README.md deleted file mode 100644 index 13ab024e7cb9433024324d3e1c0de8245a3e7ab5..0000000000000000000000000000000000000000 --- a/spaces/keivalya/alternovation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Alternovation -emoji: 📉 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/kepl/gpt/client/css/dropdown.css b/spaces/kepl/gpt/client/css/dropdown.css deleted file mode 100644 index 302e911e84d171c55384732f759a79ce195abca5..0000000000000000000000000000000000000000 --- a/spaces/kepl/gpt/client/css/dropdown.css +++ /dev/null @@ -1,10 +0,0 @@ -.dropdown { - border: 1px solid var(--conversations); -} - -@media screen and (max-width: 990px) { - .dropdown { - padding: 4px 8px; - font-size: 0.75rem; - } -} diff --git a/spaces/keras-io/dual-encoder-image-search/README.md b/spaces/keras-io/dual-encoder-image-search/README.md deleted file mode 100644 index 676882aa672b0e3dd0a4d06d92ebc6dd1074c3a5..0000000000000000000000000000000000000000 --- a/spaces/keras-io/dual-encoder-image-search/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Natural language image search with a Dual Encoder -emoji: 🔦 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.0.24 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/keras-io/molecular-property-prediction/README.md b/spaces/keras-io/molecular-property-prediction/README.md deleted file mode 100644 index 5eb15beb322802b94f27c339e601d232ced9a3c2..0000000000000000000000000000000000000000 --- a/spaces/keras-io/molecular-property-prediction/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Molecular Property Prediction -emoji: 🔎 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.0.15 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/speaker_encoder/config.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/speaker_encoder/config.py deleted file mode 100644 index 1c21312f3de971bfa008254c6035cebc09f05e4c..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/speaker_encoder/config.py +++ /dev/null @@ -1,45 +0,0 @@ -librispeech_datasets = { - "train": { - "clean": ["LibriSpeech/train-clean-100", "LibriSpeech/train-clean-360"], - "other": ["LibriSpeech/train-other-500"] - }, - "test": { - "clean": ["LibriSpeech/test-clean"], - "other": ["LibriSpeech/test-other"] - }, - "dev": { - "clean": ["LibriSpeech/dev-clean"], - "other": ["LibriSpeech/dev-other"] - }, -} -libritts_datasets = { - "train": { - "clean": ["LibriTTS/train-clean-100", "LibriTTS/train-clean-360"], - "other": ["LibriTTS/train-other-500"] - }, - "test": { - "clean": ["LibriTTS/test-clean"], - "other": ["LibriTTS/test-other"] - }, - "dev": { - "clean": ["LibriTTS/dev-clean"], - "other": ["LibriTTS/dev-other"] - }, -} -voxceleb_datasets = { - "voxceleb1" : { - "train": ["VoxCeleb1/wav"], - "test": ["VoxCeleb1/test_wav"] - }, - "voxceleb2" : { - "train": ["VoxCeleb2/dev/aac"], - "test": ["VoxCeleb2/test_wav"] - } -} - -other_datasets = [ - "LJSpeech-1.1", - "VCTK-Corpus/wav48", -] - -anglophone_nationalites = ["australia", "canada", "ireland", "uk", "usa"] diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/speaker_encoder/data_objects/utterance.py b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/speaker_encoder/data_objects/utterance.py deleted file mode 100644 index 0768c3420f422a7464f305b4c1fb6752c57ceda7..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/speaker_encoder/data_objects/utterance.py +++ /dev/null @@ -1,26 +0,0 @@ -import numpy as np - - -class Utterance: - def __init__(self, 
frames_fpath, wave_fpath): - self.frames_fpath = frames_fpath - self.wave_fpath = wave_fpath - - def get_frames(self): - return np.load(self.frames_fpath) - - def random_partial(self, n_frames): - """ - Crops the frames into a partial utterance of n_frames - - :param n_frames: The number of frames of the partial utterance - :return: the partial utterance frames and a tuple indicating the start and end of the - partial utterance in the complete utterance. - """ - frames = self.get_frames() - if frames.shape[0] == n_frames: - start = 0 - else: - start = np.random.randint(0, frames.shape[0] - n_frames) - end = start + n_frames - return frames[start:end], (start, end) \ No newline at end of file diff --git a/spaces/kevinwang676/VITS2-Mandarin/text/sanskrit.py b/spaces/kevinwang676/VITS2-Mandarin/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VITS2-Mandarin/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/speech_synthesis/preprocessing/denoiser/demucs.py b/spaces/koajoel/PolyFormer/fairseq/examples/speech_synthesis/preprocessing/denoiser/demucs.py deleted file mode 100644 index 3f70e73d6a37d32e05b6cf0e87f42e13c467cd52..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/speech_synthesis/preprocessing/denoiser/demucs.py +++ /dev/null @@ -1,473 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
-# author: adefossez - -import math -import time - -import torch as th -from torch import nn -from torch.nn import functional as F - -from .resample import downsample2, upsample2 -from .utils import capture_init - - -class BLSTM(nn.Module): - def __init__(self, dim, layers=2, bi=True): - super().__init__() - klass = nn.LSTM - self.lstm = klass( - bidirectional=bi, num_layers=layers, hidden_size=dim, input_size=dim - ) - self.linear = None - if bi: - self.linear = nn.Linear(2 * dim, dim) - - def forward(self, x, hidden=None): - x, hidden = self.lstm(x, hidden) - if self.linear: - x = self.linear(x) - return x, hidden - - -def rescale_conv(conv, reference): - std = conv.weight.std().detach() - scale = (std / reference)**0.5 - conv.weight.data /= scale - if conv.bias is not None: - conv.bias.data /= scale - - -def rescale_module(module, reference): - for sub in module.modules(): - if isinstance(sub, (nn.Conv1d, nn.ConvTranspose1d)): - rescale_conv(sub, reference) - - -class Demucs(nn.Module): - """ - Demucs speech enhancement model. - Args: - - chin (int): number of input channels. - - chout (int): number of output channels. - - hidden (int): number of initial hidden channels. - - depth (int): number of layers. - - kernel_size (int): kernel size for each layer. - - stride (int): stride for each layer. - - causal (bool): if false, uses BiLSTM instead of LSTM. - - resample (int): amount of resampling to apply to the input/output. - Can be one of 1, 2 or 4. - - growth (float): number of channels is multiplied by this for every layer. - - max_hidden (int): maximum number of channels. Can be useful to - control the size/speed of the model. - - normalize (bool): if true, normalize the input. - - glu (bool): if true uses GLU instead of ReLU in 1x1 convolutions. - - rescale (float): controls custom weight initialization. - See https://arxiv.org/abs/1911.13254. - - floor (float): stability flooring when normalizing. - - """ - @capture_init - def __init__(self, - chin=1, - chout=1, - hidden=48, - depth=5, - kernel_size=8, - stride=4, - causal=True, - resample=4, - growth=2, - max_hidden=10_000, - normalize=True, - glu=True, - rescale=0.1, - floor=1e-3): - - super().__init__() - if resample not in [1, 2, 4]: - raise ValueError("Resample should be 1, 2 or 4.") - - self.chin = chin - self.chout = chout - self.hidden = hidden - self.depth = depth - self.kernel_size = kernel_size - self.stride = stride - self.causal = causal - self.floor = floor - self.resample = resample - self.normalize = normalize - - self.encoder = nn.ModuleList() - self.decoder = nn.ModuleList() - activation = nn.GLU(1) if glu else nn.ReLU() - ch_scale = 2 if glu else 1 - - for index in range(depth): - encode = [] - encode += [ - nn.Conv1d(chin, hidden, kernel_size, stride), - nn.ReLU(), - nn.Conv1d(hidden, hidden * ch_scale, 1), activation, - ] - self.encoder.append(nn.Sequential(*encode)) - - decode = [] - decode += [ - nn.Conv1d(hidden, ch_scale * hidden, 1), activation, - nn.ConvTranspose1d(hidden, chout, kernel_size, stride), - ] - if index > 0: - decode.append(nn.ReLU()) - self.decoder.insert(0, nn.Sequential(*decode)) - chout = hidden - chin = hidden - hidden = min(int(growth * hidden), max_hidden) - - self.lstm = BLSTM(chin, bi=not causal) - if rescale: - rescale_module(self, reference=rescale) - - def valid_length(self, length): - """ - Return the nearest valid length to use with the model so that - there is no time steps left over in a convolutions, e.g. for all - layers, size of the input - kernel_size % stride = 0. 
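-
-        As an illustrative worked example (added for clarity, not part of the
-        original docstring): with the defaults ``depth=5, kernel_size=8,
-        stride=4, resample=4``, a one-second 16 kHz input (16000 samples) is
-        upsampled to 64000 samples, reduced by the five strided convolutions
-        to 62 frames, expanded back to 64852 samples and downsampled again,
-        so ``valid_length(16000)`` returns 16213.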
- - If the mixture has a valid length, the estimated sources - will have exactly the same length. - """ - length = math.ceil(length * self.resample) - for _ in range(self.depth): - length = math.ceil((length - self.kernel_size) / self.stride) + 1 - length = max(length, 1) - for _ in range(self.depth): - length = (length - 1) * self.stride + self.kernel_size - length = int(math.ceil(length / self.resample)) - return int(length) - - @property - def total_stride(self): - return self.stride ** self.depth // self.resample - - def forward(self, mix): - if mix.dim() == 2: - mix = mix.unsqueeze(1) - - if self.normalize: - mono = mix.mean(dim=1, keepdim=True) - std = mono.std(dim=-1, keepdim=True) - mix = mix / (self.floor + std) - else: - std = 1 - length = mix.shape[-1] - x = mix - x = F.pad(x, (0, self.valid_length(length) - length)) - if self.resample == 2: - x = upsample2(x) - elif self.resample == 4: - x = upsample2(x) - x = upsample2(x) - skips = [] - for encode in self.encoder: - x = encode(x) - skips.append(x) - x = x.permute(2, 0, 1) - x, _ = self.lstm(x) - x = x.permute(1, 2, 0) - for decode in self.decoder: - skip = skips.pop(-1) - x = x + skip[..., :x.shape[-1]] - x = decode(x) - if self.resample == 2: - x = downsample2(x) - elif self.resample == 4: - x = downsample2(x) - x = downsample2(x) - - x = x[..., :length] - return std * x - - -def fast_conv(conv, x): - """ - Faster convolution evaluation if either kernel size is 1 - or length of sequence is 1. - """ - batch, chin, length = x.shape - chout, chin, kernel = conv.weight.shape - assert batch == 1 - if kernel == 1: - x = x.view(chin, length) - out = th.addmm(conv.bias.view(-1, 1), - conv.weight.view(chout, chin), x) - elif length == kernel: - x = x.view(chin * kernel, 1) - out = th.addmm(conv.bias.view(-1, 1), - conv.weight.view(chout, chin * kernel), x) - else: - out = conv(x) - return out.view(batch, chout, -1) - - -class DemucsStreamer: - """ - Streaming implementation for Demucs. It supports being fed with any amount - of audio at a time. You will get back as much audio as possible at that - point. - - Args: - - demucs (Demucs): Demucs model. - - dry (float): amount of dry (e.g. input) signal to keep. 0 is maximum - noise removal, 1 just returns the input signal. Small values > 0 - allows to limit distortions. - - num_frames (int): number of frames to process at once. Higher values - will increase overall latency but improve the real time factor. - - resample_lookahead (int): extra lookahead used for the resampling. - - resample_buffer (int): size of the buffer of previous inputs/outputs - kept for resampling. 
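-
-    A minimal usage sketch (added for illustration; it mirrors the ``test()``
-    helper at the bottom of this file and assumes a loaded ``demucs`` model
-    plus a mono waveform tensor ``wav`` at the model's sample rate)::
-
-        streamer = DemucsStreamer(demucs, num_frames=1)
-        chunks = []
-        frame_size = streamer.total_length  # the first call needs the full context window
-        with th.no_grad():
-            while wav.shape[1] > 0:
-                chunks.append(streamer.feed(wav[:, :frame_size]))
-                wav = wav[:, frame_size:]
-                frame_size = streamer.demucs.total_stride  # later calls only need one stride
-            chunks.append(streamer.flush())  # zero-pad and emit the remaining tail
-        enhanced = th.cat(chunks, 1)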
- """ - def __init__(self, demucs, - dry=0, - num_frames=1, - resample_lookahead=64, - resample_buffer=256): - device = next(iter(demucs.parameters())).device - self.demucs = demucs - self.lstm_state = None - self.conv_state = None - self.dry = dry - self.resample_lookahead = resample_lookahead - resample_buffer = min(demucs.total_stride, resample_buffer) - self.resample_buffer = resample_buffer - self.frame_length = demucs.valid_length(1) + \ - demucs.total_stride * (num_frames - 1) - self.total_length = self.frame_length + self.resample_lookahead - self.stride = demucs.total_stride * num_frames - self.resample_in = th.zeros(demucs.chin, resample_buffer, device=device) - self.resample_out = th.zeros( - demucs.chin, resample_buffer, device=device - ) - - self.frames = 0 - self.total_time = 0 - self.variance = 0 - self.pending = th.zeros(demucs.chin, 0, device=device) - - bias = demucs.decoder[0][2].bias - weight = demucs.decoder[0][2].weight - chin, chout, kernel = weight.shape - self._bias = bias.view(-1, 1).repeat(1, kernel).view(-1, 1) - self._weight = weight.permute(1, 2, 0).contiguous() - - def reset_time_per_frame(self): - self.total_time = 0 - self.frames = 0 - - @property - def time_per_frame(self): - return self.total_time / self.frames - - def flush(self): - """ - Flush remaining audio by padding it with zero. Call this - when you have no more input and want to get back the last chunk of audio. - """ - pending_length = self.pending.shape[1] - padding = th.zeros( - self.demucs.chin, self.total_length, device=self.pending.device - ) - out = self.feed(padding) - return out[:, :pending_length] - - def feed(self, wav): - """ - Apply the model to mix using true real time evaluation. - Normalization is done online as is the resampling. - """ - begin = time.time() - demucs = self.demucs - resample_buffer = self.resample_buffer - stride = self.stride - resample = demucs.resample - - if wav.dim() != 2: - raise ValueError("input wav should be two dimensional.") - chin, _ = wav.shape - if chin != demucs.chin: - raise ValueError(f"Expected {demucs.chin} channels, got {chin}") - - self.pending = th.cat([self.pending, wav], dim=1) - outs = [] - while self.pending.shape[1] >= self.total_length: - self.frames += 1 - frame = self.pending[:, :self.total_length] - dry_signal = frame[:, :stride] - if demucs.normalize: - mono = frame.mean(0) - variance = (mono**2).mean() - self.variance = variance / self.frames + \ - (1 - 1 / self.frames) * self.variance - frame = frame / (demucs.floor + math.sqrt(self.variance)) - frame = th.cat([self.resample_in, frame], dim=-1) - self.resample_in[:] = frame[:, stride - resample_buffer:stride] - - if resample == 4: - frame = upsample2(upsample2(frame)) - elif resample == 2: - frame = upsample2(frame) - # remove pre sampling buffer - frame = frame[:, resample * resample_buffer:] - # remove extra samples after window - frame = frame[:, :resample * self.frame_length] - - out, extra = self._separate_frame(frame) - padded_out = th.cat([self.resample_out, out, extra], 1) - self.resample_out[:] = out[:, -resample_buffer:] - if resample == 4: - out = downsample2(downsample2(padded_out)) - elif resample == 2: - out = downsample2(padded_out) - else: - out = padded_out - - out = out[:, resample_buffer // resample:] - out = out[:, :stride] - - if demucs.normalize: - out *= math.sqrt(self.variance) - out = self.dry * dry_signal + (1 - self.dry) * out - outs.append(out) - self.pending = self.pending[:, stride:] - - self.total_time += time.time() - begin - if outs: - out = 
th.cat(outs, 1) - else: - out = th.zeros(chin, 0, device=wav.device) - return out - - def _separate_frame(self, frame): - demucs = self.demucs - skips = [] - next_state = [] - first = self.conv_state is None - stride = self.stride * demucs.resample - x = frame[None] - for idx, encode in enumerate(demucs.encoder): - stride //= demucs.stride - length = x.shape[2] - if idx == demucs.depth - 1: - # This is sligthly faster for the last conv - x = fast_conv(encode[0], x) - x = encode[1](x) - x = fast_conv(encode[2], x) - x = encode[3](x) - else: - if not first: - prev = self.conv_state.pop(0) - prev = prev[..., stride:] - tgt = (length - demucs.kernel_size) // demucs.stride + 1 - missing = tgt - prev.shape[-1] - offset = length - demucs.kernel_size - \ - demucs.stride * (missing - 1) - x = x[..., offset:] - x = encode[1](encode[0](x)) - x = fast_conv(encode[2], x) - x = encode[3](x) - if not first: - x = th.cat([prev, x], -1) - next_state.append(x) - skips.append(x) - - x = x.permute(2, 0, 1) - x, self.lstm_state = demucs.lstm(x, self.lstm_state) - x = x.permute(1, 2, 0) - # In the following, x contains only correct samples, i.e. the one - # for which each time position is covered by two window of the upper - # layer. extra contains extra samples to the right, and is used only as - # a better padding for the online resampling. - extra = None - for idx, decode in enumerate(demucs.decoder): - skip = skips.pop(-1) - x += skip[..., :x.shape[-1]] - x = fast_conv(decode[0], x) - x = decode[1](x) - - if extra is not None: - skip = skip[..., x.shape[-1]:] - extra += skip[..., :extra.shape[-1]] - extra = decode[2](decode[1](decode[0](extra))) - x = decode[2](x) - next_state.append( - x[..., -demucs.stride:] - decode[2].bias.view(-1, 1) - ) - if extra is None: - extra = x[..., -demucs.stride:] - else: - extra[..., :demucs.stride] += next_state[-1] - x = x[..., :-demucs.stride] - - if not first: - prev = self.conv_state.pop(0) - x[..., :demucs.stride] += prev - if idx != demucs.depth - 1: - x = decode[3](x) - extra = decode[3](extra) - self.conv_state = next_state - return x[0], extra[0] - - -def test(): - import argparse - parser = argparse.ArgumentParser( - "denoiser.demucs", - description="Benchmark the streaming Demucs implementation, as well as " - "checking the delta with the offline implementation.") - parser.add_argument("--depth", default=5, type=int) - parser.add_argument("--resample", default=4, type=int) - parser.add_argument("--hidden", default=48, type=int) - parser.add_argument("--sample_rate", default=16000, type=float) - parser.add_argument("--device", default="cpu") - parser.add_argument("-t", "--num_threads", type=int) - parser.add_argument("-f", "--num_frames", type=int, default=1) - args = parser.parse_args() - if args.num_threads: - th.set_num_threads(args.num_threads) - sr = args.sample_rate - sr_ms = sr / 1000 - demucs = Demucs( - depth=args.depth, hidden=args.hidden, resample=args.resample - ).to(args.device) - x = th.randn(1, int(sr * 4)).to(args.device) - out = demucs(x[None])[0] - streamer = DemucsStreamer(demucs, num_frames=args.num_frames) - out_rt = [] - frame_size = streamer.total_length - with th.no_grad(): - while x.shape[1] > 0: - out_rt.append(streamer.feed(x[:, :frame_size])) - x = x[:, frame_size:] - frame_size = streamer.demucs.total_stride - out_rt.append(streamer.flush()) - out_rt = th.cat(out_rt, 1) - model_size = sum(p.numel() for p in demucs.parameters()) * 4 / 2**20 - initial_lag = streamer.total_length / sr_ms - tpf = 1000 * streamer.time_per_frame - 
print(f"model size: {model_size:.1f}MB, ", end='') - print(f"delta batch/streaming: {th.norm(out - out_rt) / th.norm(out):.2%}") - print(f"initial lag: {initial_lag:.1f}ms, ", end='') - print(f"stride: {streamer.stride * args.num_frames / sr_ms:.1f}ms") - print(f"time per frame: {tpf:.1f}ms, ", end='') - rtf = (1000 * streamer.time_per_frame) / (streamer.stride / sr_ms) - print(f"RTF: {rtf:.2f}") - print(f"Total lag with computation: {initial_lag + tpf:.1f}ms") - - -if __name__ == "__main__": - test() diff --git a/spaces/kquote03/lama-video-watermark-remover/bin/gen_debug_mask_dataset.py b/spaces/kquote03/lama-video-watermark-remover/bin/gen_debug_mask_dataset.py deleted file mode 100644 index 738f76875c82aa412063bb5bff15e69c46f20362..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/bin/gen_debug_mask_dataset.py +++ /dev/null @@ -1,61 +0,0 @@ -#!/usr/bin/env python3 - -import glob -import os - -import PIL.Image as Image -import cv2 -import numpy as np -import tqdm -import shutil - - -from saicinpainting.evaluation.utils import load_yaml - - -def generate_masks_for_img(infile, outmask_pattern, mask_size=200, step=0.5): - inimg = Image.open(infile) - width, height = inimg.size - step_abs = int(mask_size * step) - - mask = np.zeros((height, width), dtype='uint8') - mask_i = 0 - - for start_vertical in range(0, height - step_abs, step_abs): - for start_horizontal in range(0, width - step_abs, step_abs): - mask[start_vertical:start_vertical + mask_size, start_horizontal:start_horizontal + mask_size] = 255 - - cv2.imwrite(outmask_pattern.format(mask_i), mask) - - mask[start_vertical:start_vertical + mask_size, start_horizontal:start_horizontal + mask_size] = 0 - mask_i += 1 - - -def main(args): - if not args.indir.endswith('/'): - args.indir += '/' - if not args.outdir.endswith('/'): - args.outdir += '/' - - config = load_yaml(args.config) - - in_files = list(glob.glob(os.path.join(args.indir, '**', f'*{config.img_ext}'), recursive=True)) - for infile in tqdm.tqdm(in_files): - outimg = args.outdir + infile[len(args.indir):] - outmask_pattern = outimg[:-len(config.img_ext)] + '_mask{:04d}.png' - - os.makedirs(os.path.dirname(outimg), exist_ok=True) - shutil.copy2(infile, outimg) - - generate_masks_for_img(infile, outmask_pattern, **config.gen_kwargs) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('config', type=str, help='Path to config for dataset generation') - aparser.add_argument('indir', type=str, help='Path to folder with images') - aparser.add_argument('outdir', type=str, help='Path to folder to store aligned images and masks to') - - main(aparser.parse_args()) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/main.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/main.py deleted file mode 100644 index 7faac5adac3908909d9f6b9fac330ab9b51bc02f..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/main.py +++ /dev/null @@ -1,331 +0,0 @@ -from __future__ import annotations - -from collections.abc import Callable, Generator, Iterable, Mapping, MutableMapping -from contextlib import contextmanager -from typing import Any - -from . 
import helpers, presets # noqa F401 -from .common import normalize_url, utils # noqa F401 -from .parser_block import ParserBlock # noqa F401 -from .parser_core import ParserCore # noqa F401 -from .parser_inline import ParserInline # noqa F401 -from .renderer import RendererHTML, RendererProtocol -from .rules_core.state_core import StateCore -from .token import Token -from .utils import OptionsDict - -try: - import linkify_it -except ModuleNotFoundError: - linkify_it = None - - -_PRESETS = { - "default": presets.default.make(), - "js-default": presets.js_default.make(), - "zero": presets.zero.make(), - "commonmark": presets.commonmark.make(), - "gfm-like": presets.gfm_like.make(), -} - - -class MarkdownIt: - def __init__( - self, - config: str | Mapping = "commonmark", - options_update: Mapping | None = None, - *, - renderer_cls: Callable[[MarkdownIt], RendererProtocol] = RendererHTML, - ): - """Main parser class - - :param config: name of configuration to load or a pre-defined dictionary - :param options_update: dictionary that will be merged into ``config["options"]`` - :param renderer_cls: the class to load as the renderer: - ``self.renderer = renderer_cls(self) - """ - # add modules - self.utils = utils - self.helpers: Any = helpers - - # initialise classes - self.inline = ParserInline() - self.block = ParserBlock() - self.core = ParserCore() - self.renderer = renderer_cls(self) - self.linkify = linkify_it.LinkifyIt() if linkify_it else None - - # set the configuration - if options_update and not isinstance(options_update, Mapping): - # catch signature change where renderer_cls was not used as a key-word - raise TypeError( - f"options_update should be a mapping: {options_update}" - "\n(Perhaps you intended this to be the renderer_cls?)" - ) - self.configure(config, options_update=options_update) - - def __repr__(self) -> str: - return f"{self.__class__.__module__}.{self.__class__.__name__}()" - - def __getitem__(self, name: str) -> Any: - return { - "inline": self.inline, - "block": self.block, - "core": self.core, - "renderer": self.renderer, - }[name] - - def set(self, options: MutableMapping) -> None: - """Set parser options (in the same format as in constructor). - Probably, you will never need it, but you can change options after constructor call. - - __Note:__ To achieve the best possible performance, don't modify a - `markdown-it` instance options on the fly. If you need multiple configurations - it's best to create multiple instances and initialize each with separate config. - """ - self.options = OptionsDict(options) - - def configure( - self, presets: str | Mapping, options_update: Mapping | None = None - ) -> MarkdownIt: - """Batch load of all options and component settings. - This is an internal method, and you probably will not need it. - But if you will - see available presets and data structure - [here](https://github.com/markdown-it/markdown-it/tree/master/lib/presets) - - We strongly recommend to use presets instead of direct config loads. - That will give better compatibility with next versions. 
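-
-        A small illustrative example (added for clarity; ``"zero"`` and
-        ``"commonmark"`` are preset names defined above, while ``"html"`` is
-        assumed to be a valid option key)::
-
-            md = MarkdownIt("zero")
-            md.configure("commonmark", options_update={"html": False})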
- """ - if isinstance(presets, str): - if presets not in _PRESETS: - raise KeyError(f"Wrong `markdown-it` preset '{presets}', check name") - config = _PRESETS[presets] - else: - config = presets - - if not config: - raise ValueError("Wrong `markdown-it` config, can't be empty") - - options = config.get("options", {}) or {} - if options_update: - options = {**options, **options_update} - - self.set(options) - - if "components" in config: - for name, component in config["components"].items(): - rules = component.get("rules", None) - if rules: - self[name].ruler.enableOnly(rules) - rules2 = component.get("rules2", None) - if rules2: - self[name].ruler2.enableOnly(rules2) - - return self - - def get_all_rules(self) -> dict[str, list[str]]: - """Return the names of all active rules.""" - rules = { - chain: self[chain].ruler.get_all_rules() - for chain in ["core", "block", "inline"] - } - rules["inline2"] = self.inline.ruler2.get_all_rules() - return rules - - def get_active_rules(self) -> dict[str, list[str]]: - """Return the names of all active rules.""" - rules = { - chain: self[chain].ruler.get_active_rules() - for chain in ["core", "block", "inline"] - } - rules["inline2"] = self.inline.ruler2.get_active_rules() - return rules - - def enable( - self, names: str | Iterable[str], ignoreInvalid: bool = False - ) -> MarkdownIt: - """Enable list or rules. (chainable) - - :param names: rule name or list of rule names to enable. - :param ignoreInvalid: set `true` to ignore errors when rule not found. - - It will automatically find appropriate components, - containing rules with given names. If rule not found, and `ignoreInvalid` - not set - throws exception. - - Example:: - - md = MarkdownIt().enable(['sub', 'sup']).disable('smartquotes') - - """ - result = [] - - if isinstance(names, str): - names = [names] - - for chain in ["core", "block", "inline"]: - result.extend(self[chain].ruler.enable(names, True)) - result.extend(self.inline.ruler2.enable(names, True)) - - missed = [name for name in names if name not in result] - if missed and not ignoreInvalid: - raise ValueError(f"MarkdownIt. Failed to enable unknown rule(s): {missed}") - - return self - - def disable( - self, names: str | Iterable[str], ignoreInvalid: bool = False - ) -> MarkdownIt: - """The same as [[MarkdownIt.enable]], but turn specified rules off. (chainable) - - :param names: rule name or list of rule names to disable. - :param ignoreInvalid: set `true` to ignore errors when rule not found. - - """ - result = [] - - if isinstance(names, str): - names = [names] - - for chain in ["core", "block", "inline"]: - result.extend(self[chain].ruler.disable(names, True)) - result.extend(self.inline.ruler2.disable(names, True)) - - missed = [name for name in names if name not in result] - if missed and not ignoreInvalid: - raise ValueError(f"MarkdownIt. Failed to disable unknown rule(s): {missed}") - return self - - @contextmanager - def reset_rules(self) -> Generator[None, None, None]: - """A context manager, that will reset the current enabled rules on exit.""" - chain_rules = self.get_active_rules() - yield - for chain, rules in chain_rules.items(): - if chain != "inline2": - self[chain].ruler.enableOnly(rules) - self.inline.ruler2.enableOnly(chain_rules["inline2"]) - - def add_render_rule(self, name: str, function: Callable, fmt: str = "html") -> None: - """Add a rule for rendering a particular Token type. 
- - Only applied when ``renderer.__output__ == fmt`` - """ - if self.renderer.__output__ == fmt: - self.renderer.rules[name] = function.__get__(self.renderer) # type: ignore - - def use(self, plugin: Callable, *params, **options) -> MarkdownIt: - """Load specified plugin with given params into current parser instance. (chainable) - - It's just a sugar to call `plugin(md, params)` with curring. - - Example:: - - def func(tokens, idx): - tokens[idx].content = tokens[idx].content.replace('foo', 'bar') - md = MarkdownIt().use(plugin, 'foo_replace', 'text', func) - - """ - plugin(self, *params, **options) - return self - - def parse(self, src: str, env: MutableMapping | None = None) -> list[Token]: - """Parse the source string to a token stream - - :param src: source string - :param env: environment sandbox - - Parse input string and return list of block tokens (special token type - "inline" will contain list of inline tokens). - - `env` is used to pass data between "distributed" rules and return additional - metadata like reference info, needed for the renderer. It also can be used to - inject data in specific cases. Usually, you will be ok to pass `{}`, - and then pass updated object to renderer. - """ - env = {} if env is None else env - if not isinstance(env, MutableMapping): - raise TypeError(f"Input data should be a MutableMapping, not {type(env)}") - if not isinstance(src, str): - raise TypeError(f"Input data should be a string, not {type(src)}") - state = StateCore(src, self, env) - self.core.process(state) - return state.tokens - - def render(self, src: str, env: MutableMapping | None = None) -> Any: - """Render markdown string into html. It does all magic for you :). - - :param src: source string - :param env: environment sandbox - :returns: The output of the loaded renderer - - `env` can be used to inject additional metadata (`{}` by default). - But you will not need it with high probability. See also comment - in [[MarkdownIt.parse]]. - """ - env = {} if env is None else env - return self.renderer.render(self.parse(src, env), self.options, env) - - def parseInline(self, src: str, env: MutableMapping | None = None) -> list[Token]: - """The same as [[MarkdownIt.parse]] but skip all block rules. - - :param src: source string - :param env: environment sandbox - - It returns the - block tokens list with the single `inline` element, containing parsed inline - tokens in `children` property. Also updates `env` object. - """ - env = {} if env is None else env - if not isinstance(env, MutableMapping): - raise TypeError(f"Input data should be an MutableMapping, not {type(env)}") - if not isinstance(src, str): - raise TypeError(f"Input data should be a string, not {type(src)}") - state = StateCore(src, self, env) - state.inlineMode = True - self.core.process(state) - return state.tokens - - def renderInline(self, src: str, env: MutableMapping | None = None) -> Any: - """Similar to [[MarkdownIt.render]] but for single paragraph content. - - :param src: source string - :param env: environment sandbox - - Similar to [[MarkdownIt.render]] but for single paragraph content. Result - will NOT be wrapped into `
<p>
    ` tags. - """ - env = {} if env is None else env - return self.renderer.render(self.parseInline(src, env), self.options, env) - - # link methods - - def validateLink(self, url: str) -> bool: - """Validate if the URL link is allowed in output. - - This validator can prohibit more than really needed to prevent XSS. - It's a tradeoff to keep code simple and to be secure by default. - - Note: the url should be normalized at this point, and existing entities decoded. - """ - return normalize_url.validateLink(url) - - def normalizeLink(self, url: str) -> str: - """Normalize destination URLs in links - - :: - - [label]: destination 'title' - ^^^^^^^^^^^ - """ - return normalize_url.normalizeLink(url) - - def normalizeLinkText(self, link: str) -> str: - """Normalize autolink content - - :: - - - ~~~~~~~~~~~ - """ - return normalize_url.normalizeLinkText(link) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_wx.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_wx.py deleted file mode 100644 index eeed515aafa207273f553b75e71da9b8380a5199..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_wx.py +++ /dev/null @@ -1,1390 +0,0 @@ -""" -A wxPython backend for matplotlib. - -Originally contributed by Jeremy O'Donoghue (jeremy@o-donoghue.com) and John -Hunter (jdhunter@ace.bsd.uchicago.edu). - -Copyright (C) Jeremy O'Donoghue & John Hunter, 2003-4. -""" - -import functools -import logging -import math -import pathlib -import sys -import weakref - -import numpy as np -import PIL - -import matplotlib as mpl -from matplotlib.backend_bases import ( - _Backend, FigureCanvasBase, FigureManagerBase, - GraphicsContextBase, MouseButton, NavigationToolbar2, RendererBase, - TimerBase, ToolContainerBase, cursors, - CloseEvent, KeyEvent, LocationEvent, MouseEvent, ResizeEvent) - -from matplotlib import _api, cbook, backend_tools -from matplotlib._pylab_helpers import Gcf -from matplotlib.path import Path -from matplotlib.transforms import Affine2D - -import wx - -_log = logging.getLogger(__name__) - -# the True dots per inch on the screen; should be display dependent; see -# http://groups.google.com/d/msg/comp.lang.postscript/-/omHAc9FEuAsJ?hl=en -# for some info about screen dpi -PIXELS_PER_INCH = 75 - - -@_api.deprecated("3.6") -def error_msg_wx(msg, parent=None): - """Signal an error condition with a popup error dialog.""" - dialog = wx.MessageDialog(parent=parent, - message=msg, - caption='Matplotlib backend_wx error', - style=wx.OK | wx.CENTRE) - dialog.ShowModal() - dialog.Destroy() - return None - - -# lru_cache holds a reference to the App and prevents it from being gc'ed. -@functools.lru_cache(1) -def _create_wxapp(): - wxapp = wx.App(False) - wxapp.SetExitOnFrameDelete(True) - cbook._setup_new_guiapp() - return wxapp - - -class TimerWx(TimerBase): - """Subclass of `.TimerBase` using wx.Timer events.""" - - def __init__(self, *args, **kwargs): - self._timer = wx.Timer() - self._timer.Notify = self._on_timer - super().__init__(*args, **kwargs) - - def _timer_start(self): - self._timer.Start(self._interval, self._single) - - def _timer_stop(self): - self._timer.Stop() - - def _timer_set_interval(self): - if self._timer.IsRunning(): - self._timer_start() # Restart with new interval. 
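
TimerWx above is the wx-specific implementation of Matplotlib's TimerBase. In normal use it is not constructed directly; it is obtained from the canvas, which instantiates its backend's `_timer_cls`. The following is a minimal sketch (not part of the deleted module) of how such a timer is typically driven through the public canvas API; the figure, the `blink` callback, and the 500 ms interval are illustrative choices only.

import matplotlib
matplotlib.use("WXAgg")          # assumes wxPython is installed; any wx-based backend yields a TimerWx
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot([0, 1, 2], [0, 1, 0])

def blink():
    # Toggle the line on every tick and ask the canvas to repaint lazily.
    line.set_visible(not line.get_visible())
    fig.canvas.draw_idle()

timer = fig.canvas.new_timer(interval=500)   # FigureCanvasBase.new_timer -> TimerWx on this backend
timer.add_callback(blink)
timer.start()
plt.show()
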
- - -@_api.deprecated( - "2.0", name="wx", obj_type="backend", removal="the future", - alternative="wxagg", - addendum="See the Matplotlib usage FAQ for more info on backends.") -class RendererWx(RendererBase): - """ - The renderer handles all the drawing primitives using a graphics - context instance that controls the colors/styles. It acts as the - 'renderer' instance used by many classes in the hierarchy. - """ - # In wxPython, drawing is performed on a wxDC instance, which will - # generally be mapped to the client area of the window displaying - # the plot. Under wxPython, the wxDC instance has a wx.Pen which - # describes the colour and weight of any lines drawn, and a wxBrush - # which describes the fill colour of any closed polygon. - - # Font styles, families and weight. - fontweights = { - 100: wx.FONTWEIGHT_LIGHT, - 200: wx.FONTWEIGHT_LIGHT, - 300: wx.FONTWEIGHT_LIGHT, - 400: wx.FONTWEIGHT_NORMAL, - 500: wx.FONTWEIGHT_NORMAL, - 600: wx.FONTWEIGHT_NORMAL, - 700: wx.FONTWEIGHT_BOLD, - 800: wx.FONTWEIGHT_BOLD, - 900: wx.FONTWEIGHT_BOLD, - 'ultralight': wx.FONTWEIGHT_LIGHT, - 'light': wx.FONTWEIGHT_LIGHT, - 'normal': wx.FONTWEIGHT_NORMAL, - 'medium': wx.FONTWEIGHT_NORMAL, - 'semibold': wx.FONTWEIGHT_NORMAL, - 'bold': wx.FONTWEIGHT_BOLD, - 'heavy': wx.FONTWEIGHT_BOLD, - 'ultrabold': wx.FONTWEIGHT_BOLD, - 'black': wx.FONTWEIGHT_BOLD, - } - fontangles = { - 'italic': wx.FONTSTYLE_ITALIC, - 'normal': wx.FONTSTYLE_NORMAL, - 'oblique': wx.FONTSTYLE_SLANT, - } - - # wxPython allows for portable font styles, choosing them appropriately for - # the target platform. Map some standard font names to the portable styles. - # QUESTION: Is it wise to agree to standard fontnames across all backends? - fontnames = { - 'Sans': wx.FONTFAMILY_SWISS, - 'Roman': wx.FONTFAMILY_ROMAN, - 'Script': wx.FONTFAMILY_SCRIPT, - 'Decorative': wx.FONTFAMILY_DECORATIVE, - 'Modern': wx.FONTFAMILY_MODERN, - 'Courier': wx.FONTFAMILY_MODERN, - 'courier': wx.FONTFAMILY_MODERN, - } - - def __init__(self, bitmap, dpi): - """Initialise a wxWindows renderer instance.""" - super().__init__() - _log.debug("%s - __init__()", type(self)) - self.width = bitmap.GetWidth() - self.height = bitmap.GetHeight() - self.bitmap = bitmap - self.fontd = {} - self.dpi = dpi - self.gc = None - - def flipy(self): - # docstring inherited - return True - - @_api.deprecated("3.6") - def offset_text_height(self): - return True - - def get_text_width_height_descent(self, s, prop, ismath): - # docstring inherited - - if ismath: - s = cbook.strip_math(s) - - if self.gc is None: - gc = self.new_gc() - else: - gc = self.gc - gfx_ctx = gc.gfx_ctx - font = self.get_wx_font(s, prop) - gfx_ctx.SetFont(font, wx.BLACK) - w, h, descent, leading = gfx_ctx.GetFullTextExtent(s) - - return w, h, descent - - def get_canvas_width_height(self): - # docstring inherited - return self.width, self.height - - def handle_clip_rectangle(self, gc): - new_bounds = gc.get_clip_rectangle() - if new_bounds is not None: - new_bounds = new_bounds.bounds - gfx_ctx = gc.gfx_ctx - if gfx_ctx._lastcliprect != new_bounds: - gfx_ctx._lastcliprect = new_bounds - if new_bounds is None: - gfx_ctx.ResetClip() - else: - gfx_ctx.Clip(new_bounds[0], - self.height - new_bounds[1] - new_bounds[3], - new_bounds[2], new_bounds[3]) - - @staticmethod - def convert_path(gfx_ctx, path, transform): - wxpath = gfx_ctx.CreatePath() - for points, code in path.iter_segments(transform): - if code == Path.MOVETO: - wxpath.MoveToPoint(*points) - elif code == Path.LINETO: - wxpath.AddLineToPoint(*points) - 
elif code == Path.CURVE3: - wxpath.AddQuadCurveToPoint(*points) - elif code == Path.CURVE4: - wxpath.AddCurveToPoint(*points) - elif code == Path.CLOSEPOLY: - wxpath.CloseSubpath() - return wxpath - - def draw_path(self, gc, path, transform, rgbFace=None): - # docstring inherited - gc.select() - self.handle_clip_rectangle(gc) - gfx_ctx = gc.gfx_ctx - transform = transform + \ - Affine2D().scale(1.0, -1.0).translate(0.0, self.height) - wxpath = self.convert_path(gfx_ctx, path, transform) - if rgbFace is not None: - gfx_ctx.SetBrush(wx.Brush(gc.get_wxcolour(rgbFace))) - gfx_ctx.DrawPath(wxpath) - else: - gfx_ctx.StrokePath(wxpath) - gc.unselect() - - def draw_image(self, gc, x, y, im): - bbox = gc.get_clip_rectangle() - if bbox is not None: - l, b, w, h = bbox.bounds - else: - l = 0 - b = 0 - w = self.width - h = self.height - rows, cols = im.shape[:2] - bitmap = wx.Bitmap.FromBufferRGBA(cols, rows, im.tobytes()) - gc.select() - gc.gfx_ctx.DrawBitmap(bitmap, int(l), int(self.height - b), - int(w), int(-h)) - gc.unselect() - - def draw_text(self, gc, x, y, s, prop, angle, ismath=False, mtext=None): - # docstring inherited - - if ismath: - s = cbook.strip_math(s) - _log.debug("%s - draw_text()", type(self)) - gc.select() - self.handle_clip_rectangle(gc) - gfx_ctx = gc.gfx_ctx - - font = self.get_wx_font(s, prop) - color = gc.get_wxcolour(gc.get_rgb()) - gfx_ctx.SetFont(font, color) - - w, h, d = self.get_text_width_height_descent(s, prop, ismath) - x = int(x) - y = int(y - h) - - if angle == 0.0: - gfx_ctx.DrawText(s, x, y) - else: - rads = math.radians(angle) - xo = h * math.sin(rads) - yo = h * math.cos(rads) - gfx_ctx.DrawRotatedText(s, x - xo, y - yo, rads) - - gc.unselect() - - def new_gc(self): - # docstring inherited - _log.debug("%s - new_gc()", type(self)) - self.gc = GraphicsContextWx(self.bitmap, self) - self.gc.select() - self.gc.unselect() - return self.gc - - def get_wx_font(self, s, prop): - """Return a wx font. Cache font instances for efficiency.""" - _log.debug("%s - get_wx_font()", type(self)) - key = hash(prop) - font = self.fontd.get(key) - if font is not None: - return font - size = self.points_to_pixels(prop.get_size_in_points()) - # Font colour is determined by the active wx.Pen - # TODO: It may be wise to cache font information - self.fontd[key] = font = wx.Font( # Cache the font and gc. - pointSize=round(size), - family=self.fontnames.get(prop.get_name(), wx.ROMAN), - style=self.fontangles[prop.get_style()], - weight=self.fontweights[prop.get_weight()]) - return font - - def points_to_pixels(self, points): - # docstring inherited - return points * (PIXELS_PER_INCH / 72.0 * self.dpi / 72.0) - - -class GraphicsContextWx(GraphicsContextBase): - """ - The graphics context provides the color, line styles, etc. - - This class stores a reference to a wxMemoryDC, and a - wxGraphicsContext that draws to it. Creating a wxGraphicsContext - seems to be fairly heavy, so these objects are cached based on the - bitmap object that is passed in. - - The base GraphicsContext stores colors as an RGB tuple on the unit - interval, e.g., (0.5, 0.0, 1.0). wxPython uses an int interval, but - since wxPython colour management is rather simple, I have not chosen - to implement a separate colour manager class. 
- """ - _capd = {'butt': wx.CAP_BUTT, - 'projecting': wx.CAP_PROJECTING, - 'round': wx.CAP_ROUND} - - _joind = {'bevel': wx.JOIN_BEVEL, - 'miter': wx.JOIN_MITER, - 'round': wx.JOIN_ROUND} - - _cache = weakref.WeakKeyDictionary() - - def __init__(self, bitmap, renderer): - super().__init__() - # assert self.Ok(), "wxMemoryDC not OK to use" - _log.debug("%s - __init__(): %s", type(self), bitmap) - - dc, gfx_ctx = self._cache.get(bitmap, (None, None)) - if dc is None: - dc = wx.MemoryDC(bitmap) - gfx_ctx = wx.GraphicsContext.Create(dc) - gfx_ctx._lastcliprect = None - self._cache[bitmap] = dc, gfx_ctx - - self.bitmap = bitmap - self.dc = dc - self.gfx_ctx = gfx_ctx - self._pen = wx.Pen('BLACK', 1, wx.SOLID) - gfx_ctx.SetPen(self._pen) - self.renderer = renderer - - def select(self): - """Select the current bitmap into this wxDC instance.""" - if sys.platform == 'win32': - self.dc.SelectObject(self.bitmap) - self.IsSelected = True - - def unselect(self): - """Select a Null bitmap into this wxDC instance.""" - if sys.platform == 'win32': - self.dc.SelectObject(wx.NullBitmap) - self.IsSelected = False - - def set_foreground(self, fg, isRGBA=None): - # docstring inherited - # Implementation note: wxPython has a separate concept of pen and - # brush - the brush fills any outline trace left by the pen. - # Here we set both to the same colour - if a figure is not to be - # filled, the renderer will set the brush to be transparent - # Same goes for text foreground... - _log.debug("%s - set_foreground()", type(self)) - self.select() - super().set_foreground(fg, isRGBA) - - self._pen.SetColour(self.get_wxcolour(self.get_rgb())) - self.gfx_ctx.SetPen(self._pen) - self.unselect() - - def set_linewidth(self, w): - # docstring inherited - w = float(w) - _log.debug("%s - set_linewidth()", type(self)) - self.select() - if 0 < w < 1: - w = 1 - super().set_linewidth(w) - lw = int(self.renderer.points_to_pixels(self._linewidth)) - if lw == 0: - lw = 1 - self._pen.SetWidth(lw) - self.gfx_ctx.SetPen(self._pen) - self.unselect() - - def set_capstyle(self, cs): - # docstring inherited - _log.debug("%s - set_capstyle()", type(self)) - self.select() - super().set_capstyle(cs) - self._pen.SetCap(GraphicsContextWx._capd[self._capstyle]) - self.gfx_ctx.SetPen(self._pen) - self.unselect() - - def set_joinstyle(self, js): - # docstring inherited - _log.debug("%s - set_joinstyle()", type(self)) - self.select() - super().set_joinstyle(js) - self._pen.SetJoin(GraphicsContextWx._joind[self._joinstyle]) - self.gfx_ctx.SetPen(self._pen) - self.unselect() - - def get_wxcolour(self, color): - """Convert an RGB(A) color to a wx.Colour.""" - _log.debug("%s - get_wx_color()", type(self)) - return wx.Colour(*[int(255 * x) for x in color]) - - -class _FigureCanvasWxBase(FigureCanvasBase, wx.Panel): - """ - The FigureCanvas contains the figure and does event handling. - - In the wxPython backend, it is derived from wxPanel, and (usually) lives - inside a frame instantiated by a FigureManagerWx. The parent window - probably implements a wx.Sizer to control the displayed control size - but - we give a hint as to our preferred minimum size. 
- """ - - required_interactive_framework = "wx" - _timer_cls = TimerWx - manager_class = _api.classproperty(lambda cls: FigureManagerWx) - - keyvald = { - wx.WXK_CONTROL: 'control', - wx.WXK_SHIFT: 'shift', - wx.WXK_ALT: 'alt', - wx.WXK_CAPITAL: 'caps_lock', - wx.WXK_LEFT: 'left', - wx.WXK_UP: 'up', - wx.WXK_RIGHT: 'right', - wx.WXK_DOWN: 'down', - wx.WXK_ESCAPE: 'escape', - wx.WXK_F1: 'f1', - wx.WXK_F2: 'f2', - wx.WXK_F3: 'f3', - wx.WXK_F4: 'f4', - wx.WXK_F5: 'f5', - wx.WXK_F6: 'f6', - wx.WXK_F7: 'f7', - wx.WXK_F8: 'f8', - wx.WXK_F9: 'f9', - wx.WXK_F10: 'f10', - wx.WXK_F11: 'f11', - wx.WXK_F12: 'f12', - wx.WXK_SCROLL: 'scroll_lock', - wx.WXK_PAUSE: 'break', - wx.WXK_BACK: 'backspace', - wx.WXK_RETURN: 'enter', - wx.WXK_INSERT: 'insert', - wx.WXK_DELETE: 'delete', - wx.WXK_HOME: 'home', - wx.WXK_END: 'end', - wx.WXK_PAGEUP: 'pageup', - wx.WXK_PAGEDOWN: 'pagedown', - wx.WXK_NUMPAD0: '0', - wx.WXK_NUMPAD1: '1', - wx.WXK_NUMPAD2: '2', - wx.WXK_NUMPAD3: '3', - wx.WXK_NUMPAD4: '4', - wx.WXK_NUMPAD5: '5', - wx.WXK_NUMPAD6: '6', - wx.WXK_NUMPAD7: '7', - wx.WXK_NUMPAD8: '8', - wx.WXK_NUMPAD9: '9', - wx.WXK_NUMPAD_ADD: '+', - wx.WXK_NUMPAD_SUBTRACT: '-', - wx.WXK_NUMPAD_MULTIPLY: '*', - wx.WXK_NUMPAD_DIVIDE: '/', - wx.WXK_NUMPAD_DECIMAL: 'dec', - wx.WXK_NUMPAD_ENTER: 'enter', - wx.WXK_NUMPAD_UP: 'up', - wx.WXK_NUMPAD_RIGHT: 'right', - wx.WXK_NUMPAD_DOWN: 'down', - wx.WXK_NUMPAD_LEFT: 'left', - wx.WXK_NUMPAD_PAGEUP: 'pageup', - wx.WXK_NUMPAD_PAGEDOWN: 'pagedown', - wx.WXK_NUMPAD_HOME: 'home', - wx.WXK_NUMPAD_END: 'end', - wx.WXK_NUMPAD_INSERT: 'insert', - wx.WXK_NUMPAD_DELETE: 'delete', - } - - def __init__(self, parent, id, figure=None): - """ - Initialize a FigureWx instance. - - - Initialize the FigureCanvasBase and wxPanel parents. - - Set event handlers for resize, paint, and keyboard and mouse - interaction. 
- """ - - FigureCanvasBase.__init__(self, figure) - w, h = map(math.ceil, self.figure.bbox.size) - # Set preferred window size hint - helps the sizer, if one is connected - wx.Panel.__init__(self, parent, id, size=wx.Size(w, h)) - # Create the drawing bitmap - self.bitmap = wx.Bitmap(w, h) - _log.debug("%s - __init__() - bitmap w:%d h:%d", type(self), w, h) - self._isDrawn = False - self._rubberband_rect = None - self._rubberband_pen_black = wx.Pen('BLACK', 1, wx.PENSTYLE_SHORT_DASH) - self._rubberband_pen_white = wx.Pen('WHITE', 1, wx.PENSTYLE_SOLID) - - self.Bind(wx.EVT_SIZE, self._on_size) - self.Bind(wx.EVT_PAINT, self._on_paint) - self.Bind(wx.EVT_CHAR_HOOK, self._on_key_down) - self.Bind(wx.EVT_KEY_UP, self._on_key_up) - self.Bind(wx.EVT_LEFT_DOWN, self._on_mouse_button) - self.Bind(wx.EVT_LEFT_DCLICK, self._on_mouse_button) - self.Bind(wx.EVT_LEFT_UP, self._on_mouse_button) - self.Bind(wx.EVT_MIDDLE_DOWN, self._on_mouse_button) - self.Bind(wx.EVT_MIDDLE_DCLICK, self._on_mouse_button) - self.Bind(wx.EVT_MIDDLE_UP, self._on_mouse_button) - self.Bind(wx.EVT_RIGHT_DOWN, self._on_mouse_button) - self.Bind(wx.EVT_RIGHT_DCLICK, self._on_mouse_button) - self.Bind(wx.EVT_RIGHT_UP, self._on_mouse_button) - self.Bind(wx.EVT_MOUSE_AUX1_DOWN, self._on_mouse_button) - self.Bind(wx.EVT_MOUSE_AUX1_UP, self._on_mouse_button) - self.Bind(wx.EVT_MOUSE_AUX2_DOWN, self._on_mouse_button) - self.Bind(wx.EVT_MOUSE_AUX2_UP, self._on_mouse_button) - self.Bind(wx.EVT_MOUSE_AUX1_DCLICK, self._on_mouse_button) - self.Bind(wx.EVT_MOUSE_AUX2_DCLICK, self._on_mouse_button) - self.Bind(wx.EVT_MOUSEWHEEL, self._on_mouse_wheel) - self.Bind(wx.EVT_MOTION, self._on_motion) - self.Bind(wx.EVT_ENTER_WINDOW, self._on_enter) - self.Bind(wx.EVT_LEAVE_WINDOW, self._on_leave) - - self.Bind(wx.EVT_MOUSE_CAPTURE_CHANGED, self._on_capture_lost) - self.Bind(wx.EVT_MOUSE_CAPTURE_LOST, self._on_capture_lost) - - self.SetBackgroundStyle(wx.BG_STYLE_PAINT) # Reduce flicker. - self.SetBackgroundColour(wx.WHITE) - - def Copy_to_Clipboard(self, event=None): - """Copy bitmap of canvas to system clipboard.""" - bmp_obj = wx.BitmapDataObject() - bmp_obj.SetBitmap(self.bitmap) - - if not wx.TheClipboard.IsOpened(): - open_success = wx.TheClipboard.Open() - if open_success: - wx.TheClipboard.SetData(bmp_obj) - wx.TheClipboard.Close() - wx.TheClipboard.Flush() - - def draw_idle(self): - # docstring inherited - _log.debug("%s - draw_idle()", type(self)) - self._isDrawn = False # Force redraw - # Triggering a paint event is all that is needed to defer drawing - # until later. The platform will send the event when it thinks it is - # a good time (usually as soon as there are no other events pending). 
- self.Refresh(eraseBackground=False) - - def flush_events(self): - # docstring inherited - wx.Yield() - - def start_event_loop(self, timeout=0): - # docstring inherited - if hasattr(self, '_event_loop'): - raise RuntimeError("Event loop already running") - timer = wx.Timer(self, id=wx.ID_ANY) - if timeout > 0: - timer.Start(int(timeout * 1000), oneShot=True) - self.Bind(wx.EVT_TIMER, self.stop_event_loop, id=timer.GetId()) - # Event loop handler for start/stop event loop - self._event_loop = wx.GUIEventLoop() - self._event_loop.Run() - timer.Stop() - - def stop_event_loop(self, event=None): - # docstring inherited - if hasattr(self, '_event_loop'): - if self._event_loop.IsRunning(): - self._event_loop.Exit() - del self._event_loop - - def _get_imagesave_wildcards(self): - """Return the wildcard string for the filesave dialog.""" - default_filetype = self.get_default_filetype() - filetypes = self.get_supported_filetypes_grouped() - sorted_filetypes = sorted(filetypes.items()) - wildcards = [] - extensions = [] - filter_index = 0 - for i, (name, exts) in enumerate(sorted_filetypes): - ext_list = ';'.join(['*.%s' % ext for ext in exts]) - extensions.append(exts[0]) - wildcard = '%s (%s)|%s' % (name, ext_list, ext_list) - if default_filetype in exts: - filter_index = i - wildcards.append(wildcard) - wildcards = '|'.join(wildcards) - return wildcards, extensions, filter_index - - def gui_repaint(self, drawDC=None): - """ - Update the displayed image on the GUI canvas, using the supplied - wx.PaintDC device context. - - The 'WXAgg' backend sets origin accordingly. - """ - _log.debug("%s - gui_repaint()", type(self)) - # The "if self" check avoids a "wrapped C/C++ object has been deleted" - # RuntimeError if doing things after window is closed. - if not (self and self.IsShownOnScreen()): - return - if not drawDC: # not called from OnPaint use a ClientDC - drawDC = wx.ClientDC(self) - # For 'WX' backend on Windows, the bitmap can not be in use by another - # DC (see GraphicsContextWx._cache). - bmp = (self.bitmap.ConvertToImage().ConvertToBitmap() - if wx.Platform == '__WXMSW__' - and isinstance(self.figure.canvas.get_renderer(), RendererWx) - else self.bitmap) - drawDC.DrawBitmap(bmp, 0, 0) - if self._rubberband_rect is not None: - # Some versions of wx+python don't support numpy.float64 here. - x0, y0, x1, y1 = map(round, self._rubberband_rect) - rect = [(x0, y0, x1, y0), (x1, y0, x1, y1), - (x0, y0, x0, y1), (x0, y1, x1, y1)] - drawDC.DrawLineList(rect, self._rubberband_pen_white) - drawDC.DrawLineList(rect, self._rubberband_pen_black) - - filetypes = { - **FigureCanvasBase.filetypes, - 'bmp': 'Windows bitmap', - 'jpeg': 'JPEG', - 'jpg': 'JPEG', - 'pcx': 'PCX', - 'png': 'Portable Network Graphics', - 'tif': 'Tagged Image Format File', - 'tiff': 'Tagged Image Format File', - 'xpm': 'X pixmap', - } - - def print_figure(self, filename, *args, **kwargs): - # docstring inherited - super().print_figure(filename, *args, **kwargs) - # Restore the current view; this is needed because the artist contains - # methods rely on particular attributes of the rendered figure for - # determining things like bounding boxes. - if self._isDrawn: - self.draw() - - def _on_paint(self, event): - """Called when wxPaintEvt is generated.""" - _log.debug("%s - _on_paint()", type(self)) - drawDC = wx.PaintDC(self) - if not self._isDrawn: - self.draw(drawDC=drawDC) - else: - self.gui_repaint(drawDC=drawDC) - drawDC.Destroy() - - def _on_size(self, event): - """ - Called when wxEventSize is generated. 
- - In this application we attempt to resize to fit the window, so it - is better to take the performance hit and redraw the whole window. - """ - - _log.debug("%s - _on_size()", type(self)) - sz = self.GetParent().GetSizer() - if sz: - si = sz.GetItem(self) - if sz and si and not si.Proportion and not si.Flag & wx.EXPAND: - # managed by a sizer, but with a fixed size - size = self.GetMinSize() - else: - # variable size - size = self.GetClientSize() - # Do not allow size to become smaller than MinSize - size.IncTo(self.GetMinSize()) - if getattr(self, "_width", None): - if size == (self._width, self._height): - # no change in size - return - self._width, self._height = size - self._isDrawn = False - - if self._width <= 1 or self._height <= 1: - return # Empty figure - - # Create a new, correctly sized bitmap - self.bitmap = wx.Bitmap(self._width, self._height) - - dpival = self.figure.dpi - winch = self._width / dpival - hinch = self._height / dpival - self.figure.set_size_inches(winch, hinch, forward=False) - - # Rendering will happen on the associated paint event - # so no need to do anything here except to make sure - # the whole background is repainted. - self.Refresh(eraseBackground=False) - ResizeEvent("resize_event", self)._process() - self.draw_idle() - - @staticmethod - def _mpl_modifiers(event=None, *, exclude=None): - mod_table = [ - ("ctrl", wx.MOD_CONTROL, wx.WXK_CONTROL), - ("alt", wx.MOD_ALT, wx.WXK_ALT), - ("shift", wx.MOD_SHIFT, wx.WXK_SHIFT), - ] - if event is not None: - modifiers = event.GetModifiers() - return [name for name, mod, key in mod_table - if modifiers & mod and exclude != key] - else: - return [name for name, mod, key in mod_table - if wx.GetKeyState(key)] - - def _get_key(self, event): - keyval = event.KeyCode - if keyval in self.keyvald: - key = self.keyvald[keyval] - elif keyval < 256: - key = chr(keyval) - # wx always returns an uppercase, so make it lowercase if the shift - # key is not depressed (NOTE: this will not handle Caps Lock) - if not event.ShiftDown(): - key = key.lower() - else: - return None - mods = self._mpl_modifiers(event, exclude=keyval) - if "shift" in mods and key.isupper(): - mods.remove("shift") - return "+".join([*mods, key]) - - def _mpl_coords(self, pos=None): - """ - Convert a wx position, defaulting to the current cursor position, to - Matplotlib coordinates. 
- """ - if pos is None: - pos = wx.GetMouseState() - x, y = self.ScreenToClient(pos.X, pos.Y) - else: - x, y = pos.X, pos.Y - # flip y so y=0 is bottom of canvas - return x, self.figure.bbox.height - y - - def _on_key_down(self, event): - """Capture key press.""" - KeyEvent("key_press_event", self, - self._get_key(event), *self._mpl_coords(), - guiEvent=event)._process() - if self: - event.Skip() - - def _on_key_up(self, event): - """Release key.""" - KeyEvent("key_release_event", self, - self._get_key(event), *self._mpl_coords(), - guiEvent=event)._process() - if self: - event.Skip() - - def set_cursor(self, cursor): - # docstring inherited - cursor = wx.Cursor(_api.check_getitem({ - cursors.MOVE: wx.CURSOR_HAND, - cursors.HAND: wx.CURSOR_HAND, - cursors.POINTER: wx.CURSOR_ARROW, - cursors.SELECT_REGION: wx.CURSOR_CROSS, - cursors.WAIT: wx.CURSOR_WAIT, - cursors.RESIZE_HORIZONTAL: wx.CURSOR_SIZEWE, - cursors.RESIZE_VERTICAL: wx.CURSOR_SIZENS, - }, cursor=cursor)) - self.SetCursor(cursor) - self.Refresh() - - def _set_capture(self, capture=True): - """Control wx mouse capture.""" - if self.HasCapture(): - self.ReleaseMouse() - if capture: - self.CaptureMouse() - - def _on_capture_lost(self, event): - """Capture changed or lost""" - self._set_capture(False) - - def _on_mouse_button(self, event): - """Start measuring on an axis.""" - event.Skip() - self._set_capture(event.ButtonDown() or event.ButtonDClick()) - x, y = self._mpl_coords(event) - button_map = { - wx.MOUSE_BTN_LEFT: MouseButton.LEFT, - wx.MOUSE_BTN_MIDDLE: MouseButton.MIDDLE, - wx.MOUSE_BTN_RIGHT: MouseButton.RIGHT, - wx.MOUSE_BTN_AUX1: MouseButton.BACK, - wx.MOUSE_BTN_AUX2: MouseButton.FORWARD, - } - button = event.GetButton() - button = button_map.get(button, button) - modifiers = self._mpl_modifiers(event) - if event.ButtonDown(): - MouseEvent("button_press_event", self, x, y, button, - modifiers=modifiers, guiEvent=event)._process() - elif event.ButtonDClick(): - MouseEvent("button_press_event", self, x, y, button, - dblclick=True, modifiers=modifiers, - guiEvent=event)._process() - elif event.ButtonUp(): - MouseEvent("button_release_event", self, x, y, button, - modifiers=modifiers, guiEvent=event)._process() - - def _on_mouse_wheel(self, event): - """Translate mouse wheel events into matplotlib events""" - x, y = self._mpl_coords(event) - # Convert delta/rotation/rate into a floating point step size - step = event.LinesPerAction * event.WheelRotation / event.WheelDelta - # Done handling event - event.Skip() - # Mac gives two events for every wheel event; skip every second one. 
- if wx.Platform == '__WXMAC__': - if not hasattr(self, '_skipwheelevent'): - self._skipwheelevent = True - elif self._skipwheelevent: - self._skipwheelevent = False - return # Return without processing event - else: - self._skipwheelevent = True - MouseEvent("scroll_event", self, x, y, step=step, - modifiers=self._mpl_modifiers(event), - guiEvent=event)._process() - - def _on_motion(self, event): - """Start measuring on an axis.""" - event.Skip() - MouseEvent("motion_notify_event", self, - *self._mpl_coords(event), - modifiers=self._mpl_modifiers(event), - guiEvent=event)._process() - - def _on_enter(self, event): - """Mouse has entered the window.""" - event.Skip() - LocationEvent("figure_enter_event", self, - *self._mpl_coords(event), - modifiers=self._mpl_modifiers(), - guiEvent=event)._process() - - def _on_leave(self, event): - """Mouse has left the window.""" - event.Skip() - LocationEvent("figure_leave_event", self, - *self._mpl_coords(event), - modifiers=self._mpl_modifiers(), - guiEvent=event)._process() - - -class FigureCanvasWx(_FigureCanvasWxBase): - # Rendering to a Wx canvas using the deprecated Wx renderer. - - def draw(self, drawDC=None): - """ - Render the figure using RendererWx instance renderer, or using a - previously defined renderer if none is specified. - """ - _log.debug("%s - draw()", type(self)) - self.renderer = RendererWx(self.bitmap, self.figure.dpi) - self.figure.draw(self.renderer) - self._isDrawn = True - self.gui_repaint(drawDC=drawDC) - - def _print_image(self, filetype, filename): - bitmap = wx.Bitmap(math.ceil(self.figure.bbox.width), - math.ceil(self.figure.bbox.height)) - self.figure.draw(RendererWx(bitmap, self.figure.dpi)) - saved_obj = (bitmap.ConvertToImage() - if cbook.is_writable_file_like(filename) - else bitmap) - if not saved_obj.SaveFile(filename, filetype): - raise RuntimeError(f'Could not save figure to {filename}') - # draw() is required here since bits of state about the last renderer - # are strewn about the artist draw methods. Do not remove the draw - # without first verifying that these have been cleaned up. The artist - # contains() methods will fail otherwise. - if self._isDrawn: - self.draw() - # The "if self" check avoids a "wrapped C/C++ object has been deleted" - # RuntimeError if doing things after window is closed. - if self: - self.Refresh() - - print_bmp = functools.partialmethod( - _print_image, wx.BITMAP_TYPE_BMP) - print_jpeg = print_jpg = functools.partialmethod( - _print_image, wx.BITMAP_TYPE_JPEG) - print_pcx = functools.partialmethod( - _print_image, wx.BITMAP_TYPE_PCX) - print_png = functools.partialmethod( - _print_image, wx.BITMAP_TYPE_PNG) - print_tiff = print_tif = functools.partialmethod( - _print_image, wx.BITMAP_TYPE_TIF) - print_xpm = functools.partialmethod( - _print_image, wx.BITMAP_TYPE_XPM) - - -class FigureFrameWx(wx.Frame): - def __init__(self, num, fig, *, canvas_class=None): - # On non-Windows platform, explicitly set the position - fix - # positioning bug on some Linux platforms - if wx.Platform == '__WXMSW__': - pos = wx.DefaultPosition - else: - pos = wx.Point(20, 20) - super().__init__(parent=None, id=-1, pos=pos) - # Frame will be sized later by the Fit method - _log.debug("%s - __init__()", type(self)) - _set_frame_icon(self) - - # The parameter will become required after the deprecation elapses. 
- if canvas_class is not None: - self.canvas = canvas_class(self, -1, fig) - else: - _api.warn_deprecated( - "3.6", message="The canvas_class parameter will become " - "required after the deprecation period starting in Matplotlib " - "%(since)s elapses.") - self.canvas = self.get_canvas(fig) - - # Auto-attaches itself to self.canvas.manager - manager = FigureManagerWx(self.canvas, num, self) - - toolbar = self.canvas.manager.toolbar - if toolbar is not None: - self.SetToolBar(toolbar) - - # On Windows, canvas sizing must occur after toolbar addition; - # otherwise the toolbar further resizes the canvas. - w, h = map(math.ceil, fig.bbox.size) - self.canvas.SetInitialSize(wx.Size(w, h)) - self.canvas.SetMinSize((2, 2)) - self.canvas.SetFocus() - - self.Fit() - - self.Bind(wx.EVT_CLOSE, self._on_close) - - sizer = _api.deprecated("3.6", alternative="frame.GetSizer()")( - property(lambda self: self.GetSizer())) - figmgr = _api.deprecated("3.6", alternative="frame.canvas.manager")( - property(lambda self: self.canvas.manager)) - num = _api.deprecated("3.6", alternative="frame.canvas.manager.num")( - property(lambda self: self.canvas.manager.num)) - toolbar = _api.deprecated("3.6", alternative="frame.GetToolBar()")( - property(lambda self: self.GetToolBar())) - toolmanager = _api.deprecated( - "3.6", alternative="frame.canvas.manager.toolmanager")( - property(lambda self: self.canvas.manager.toolmanager)) - - @_api.deprecated( - "3.6", alternative="the canvas_class constructor parameter") - def get_canvas(self, fig): - return FigureCanvasWx(self, -1, fig) - - @_api.deprecated("3.6", alternative="frame.canvas.manager") - def get_figure_manager(self): - _log.debug("%s - get_figure_manager()", type(self)) - return self.canvas.manager - - def _on_close(self, event): - _log.debug("%s - on_close()", type(self)) - CloseEvent("close_event", self.canvas)._process() - self.canvas.stop_event_loop() - # set FigureManagerWx.frame to None to prevent repeated attempts to - # close this frame from FigureManagerWx.destroy() - self.canvas.manager.frame = None - # remove figure manager from Gcf.figs - Gcf.destroy(self.canvas.manager) - try: # See issue 2941338. - self.canvas.mpl_disconnect(self.canvas.toolbar._id_drag) - except AttributeError: # If there's no toolbar. - pass - # Carry on with close event propagation, frame & children destruction - event.Skip() - - -class FigureManagerWx(FigureManagerBase): - """ - Container/controller for the FigureCanvas and GUI frame. - - It is instantiated by Gcf whenever a new figure is created. Gcf is - responsible for managing multiple instances of FigureManagerWx. 
- - Attributes - ---------- - canvas : `FigureCanvas` - a FigureCanvasWx(wx.Panel) instance - window : wxFrame - a wxFrame instance - wxpython.org/Phoenix/docs/html/Frame.html - """ - - def __init__(self, canvas, num, frame): - _log.debug("%s - __init__()", type(self)) - self.frame = self.window = frame - super().__init__(canvas, num) - - @classmethod - def create_with_canvas(cls, canvas_class, figure, num): - # docstring inherited - wxapp = wx.GetApp() or _create_wxapp() - frame = FigureFrameWx(num, figure, canvas_class=canvas_class) - manager = figure.canvas.manager - if mpl.is_interactive(): - manager.frame.Show() - figure.canvas.draw_idle() - return manager - - @classmethod - def start_main_loop(cls): - if not wx.App.IsMainLoopRunning(): - wxapp = wx.GetApp() - if wxapp is not None: - wxapp.MainLoop() - - def show(self): - # docstring inherited - self.frame.Show() - self.canvas.draw() - if mpl.rcParams['figure.raise_window']: - self.frame.Raise() - - def destroy(self, *args): - # docstring inherited - _log.debug("%s - destroy()", type(self)) - frame = self.frame - if frame: # Else, may have been already deleted, e.g. when closing. - # As this can be called from non-GUI thread from plt.close use - # wx.CallAfter to ensure thread safety. - wx.CallAfter(frame.Close) - - def full_screen_toggle(self): - # docstring inherited - self.frame.ShowFullScreen(not self.frame.IsFullScreen()) - - def get_window_title(self): - # docstring inherited - return self.window.GetTitle() - - def set_window_title(self, title): - # docstring inherited - self.window.SetTitle(title) - - def resize(self, width, height): - # docstring inherited - # Directly using SetClientSize doesn't handle the toolbar on Windows. - self.window.SetSize(self.window.ClientToWindowSize(wx.Size( - math.ceil(width), math.ceil(height)))) - - -def _load_bitmap(filename): - """ - Load a wx.Bitmap from a file in the "images" directory of the Matplotlib - data. - """ - return wx.Bitmap(str(cbook._get_data_path('images', filename))) - - -def _set_frame_icon(frame): - bundle = wx.IconBundle() - for image in ('matplotlib.png', 'matplotlib_large.png'): - icon = wx.Icon(_load_bitmap(image)) - if not icon.IsOk(): - return - bundle.AddIcon(icon) - frame.SetIcons(bundle) - - -class NavigationToolbar2Wx(NavigationToolbar2, wx.ToolBar): - def __init__(self, canvas, coordinates=True, *, style=wx.TB_BOTTOM): - wx.ToolBar.__init__(self, canvas.GetParent(), -1, style=style) - - if 'wxMac' in wx.PlatformInfo: - self.SetToolBitmapSize((24, 24)) - self.wx_ids = {} - for text, tooltip_text, image_file, callback in self.toolitems: - if text is None: - self.AddSeparator() - continue - self.wx_ids[text] = ( - self.AddTool( - -1, - bitmap=self._icon(f"{image_file}.png"), - bmpDisabled=wx.NullBitmap, - label=text, shortHelp=tooltip_text, - kind=(wx.ITEM_CHECK if text in ["Pan", "Zoom"] - else wx.ITEM_NORMAL)) - .Id) - self.Bind(wx.EVT_TOOL, getattr(self, callback), - id=self.wx_ids[text]) - - self._coordinates = coordinates - if self._coordinates: - self.AddStretchableSpace() - self._label_text = wx.StaticText(self, style=wx.ALIGN_RIGHT) - self.AddControl(self._label_text) - - self.Realize() - - NavigationToolbar2.__init__(self, canvas) - - @staticmethod - def _icon(name): - """ - Construct a `wx.Bitmap` suitable for use as icon from an image file - *name*, including the extension and relative to Matplotlib's "images" - data directory. 
- """ - pilimg = PIL.Image.open(cbook._get_data_path("images", name)) - # ensure RGBA as wx BitMap expects RGBA format - image = np.array(pilimg.convert("RGBA")) - try: - dark = wx.SystemSettings.GetAppearance().IsDark() - except AttributeError: # wxpython < 4.1 - # copied from wx's IsUsingDarkBackground / GetLuminance. - bg = wx.SystemSettings.GetColour(wx.SYS_COLOUR_WINDOW) - fg = wx.SystemSettings.GetColour(wx.SYS_COLOUR_WINDOWTEXT) - # See wx.Colour.GetLuminance. - bg_lum = (.299 * bg.red + .587 * bg.green + .114 * bg.blue) / 255 - fg_lum = (.299 * fg.red + .587 * fg.green + .114 * fg.blue) / 255 - dark = fg_lum - bg_lum > .2 - if dark: - fg = wx.SystemSettings.GetColour(wx.SYS_COLOUR_WINDOWTEXT) - black_mask = (image[..., :3] == 0).all(axis=-1) - image[black_mask, :3] = (fg.Red(), fg.Green(), fg.Blue()) - return wx.Bitmap.FromBufferRGBA( - image.shape[1], image.shape[0], image.tobytes()) - - def _update_buttons_checked(self): - if "Pan" in self.wx_ids: - self.ToggleTool(self.wx_ids["Pan"], self.mode.name == "PAN") - if "Zoom" in self.wx_ids: - self.ToggleTool(self.wx_ids["Zoom"], self.mode.name == "ZOOM") - - def zoom(self, *args): - super().zoom(*args) - self._update_buttons_checked() - - def pan(self, *args): - super().pan(*args) - self._update_buttons_checked() - - def save_figure(self, *args): - # Fetch the required filename and file type. - filetypes, exts, filter_index = self.canvas._get_imagesave_wildcards() - default_file = self.canvas.get_default_filename() - dialog = wx.FileDialog( - self.canvas.GetParent(), "Save to file", - mpl.rcParams["savefig.directory"], default_file, filetypes, - wx.FD_SAVE | wx.FD_OVERWRITE_PROMPT) - dialog.SetFilterIndex(filter_index) - if dialog.ShowModal() == wx.ID_OK: - path = pathlib.Path(dialog.GetPath()) - _log.debug('%s - Save file path: %s', type(self), path) - fmt = exts[dialog.GetFilterIndex()] - ext = path.suffix[1:] - if ext in self.canvas.get_supported_filetypes() and fmt != ext: - # looks like they forgot to set the image type drop - # down, going with the extension. - _log.warning('extension %s did not match the selected ' - 'image type %s; going with %s', - ext, fmt, ext) - fmt = ext - # Save dir for next time, unless empty str (which means use cwd). 
- if mpl.rcParams["savefig.directory"]: - mpl.rcParams["savefig.directory"] = str(path.parent) - try: - self.canvas.figure.savefig(path, format=fmt) - except Exception as e: - dialog = wx.MessageDialog( - parent=self.canvas.GetParent(), message=str(e), - caption='Matplotlib error') - dialog.ShowModal() - dialog.Destroy() - - def draw_rubberband(self, event, x0, y0, x1, y1): - height = self.canvas.figure.bbox.height - self.canvas._rubberband_rect = (x0, height - y0, x1, height - y1) - self.canvas.Refresh() - - def remove_rubberband(self): - self.canvas._rubberband_rect = None - self.canvas.Refresh() - - def set_message(self, s): - if self._coordinates: - self._label_text.SetLabel(s) - - def set_history_buttons(self): - can_backward = self._nav_stack._pos > 0 - can_forward = self._nav_stack._pos < len(self._nav_stack._elements) - 1 - if 'Back' in self.wx_ids: - self.EnableTool(self.wx_ids['Back'], can_backward) - if 'Forward' in self.wx_ids: - self.EnableTool(self.wx_ids['Forward'], can_forward) - - -# tools for matplotlib.backend_managers.ToolManager: - -class ToolbarWx(ToolContainerBase, wx.ToolBar): - def __init__(self, toolmanager, parent=None, style=wx.TB_BOTTOM): - if parent is None: - parent = toolmanager.canvas.GetParent() - ToolContainerBase.__init__(self, toolmanager) - wx.ToolBar.__init__(self, parent, -1, style=style) - self._space = self.AddStretchableSpace() - self._label_text = wx.StaticText(self, style=wx.ALIGN_RIGHT) - self.AddControl(self._label_text) - self._toolitems = {} - self._groups = {} # Mapping of groups to the separator after them. - - def _get_tool_pos(self, tool): - """ - Find the position (index) of a wx.ToolBarToolBase in a ToolBar. - - ``ToolBar.GetToolPos`` is not useful because wx assigns the same Id to - all Separators and StretchableSpaces. - """ - pos, = [pos for pos in range(self.ToolsCount) - if self.GetToolByPos(pos) == tool] - return pos - - def add_toolitem(self, name, group, position, image_file, description, - toggle): - # Find or create the separator that follows this group. - if group not in self._groups: - self._groups[group] = self.InsertSeparator( - self._get_tool_pos(self._space)) - sep = self._groups[group] - # List all separators. - seps = [t for t in map(self.GetToolByPos, range(self.ToolsCount)) - if t.IsSeparator() and not t.IsStretchableSpace()] - # Find where to insert the tool. - if position >= 0: - # Find the start of the group by looking for the separator - # preceding this one; then move forward from it. - start = (0 if sep == seps[0] - else self._get_tool_pos(seps[seps.index(sep) - 1]) + 1) - else: - # Move backwards from this separator. 
- start = self._get_tool_pos(sep) + 1 - idx = start + position - if image_file: - bmp = NavigationToolbar2Wx._icon(image_file) - kind = wx.ITEM_NORMAL if not toggle else wx.ITEM_CHECK - tool = self.InsertTool(idx, -1, name, bmp, wx.NullBitmap, kind, - description or "") - else: - size = (self.GetTextExtent(name)[0] + 10, -1) - if toggle: - control = wx.ToggleButton(self, -1, name, size=size) - else: - control = wx.Button(self, -1, name, size=size) - tool = self.InsertControl(idx, control, label=name) - self.Realize() - - def handler(event): - self.trigger_tool(name) - - if image_file: - self.Bind(wx.EVT_TOOL, handler, tool) - else: - control.Bind(wx.EVT_LEFT_DOWN, handler) - - self._toolitems.setdefault(name, []) - self._toolitems[name].append((tool, handler)) - - def toggle_toolitem(self, name, toggled): - if name not in self._toolitems: - return - for tool, handler in self._toolitems[name]: - if not tool.IsControl(): - self.ToggleTool(tool.Id, toggled) - else: - tool.GetControl().SetValue(toggled) - self.Refresh() - - def remove_toolitem(self, name): - for tool, handler in self._toolitems[name]: - self.DeleteTool(tool.Id) - del self._toolitems[name] - - def set_message(self, s): - self._label_text.SetLabel(s) - - -@backend_tools._register_tool_class(_FigureCanvasWxBase) -class ConfigureSubplotsWx(backend_tools.ConfigureSubplotsBase): - def trigger(self, *args): - NavigationToolbar2Wx.configure_subplots(self) - - -@backend_tools._register_tool_class(_FigureCanvasWxBase) -class SaveFigureWx(backend_tools.SaveFigureBase): - def trigger(self, *args): - NavigationToolbar2Wx.save_figure( - self._make_classic_style_pseudo_toolbar()) - - -@backend_tools._register_tool_class(_FigureCanvasWxBase) -class RubberbandWx(backend_tools.RubberbandBase): - def draw_rubberband(self, x0, y0, x1, y1): - NavigationToolbar2Wx.draw_rubberband( - self._make_classic_style_pseudo_toolbar(), None, x0, y0, x1, y1) - - def remove_rubberband(self): - NavigationToolbar2Wx.remove_rubberband( - self._make_classic_style_pseudo_toolbar()) - - -class _HelpDialog(wx.Dialog): - _instance = None # a reference to an open dialog singleton - headers = [("Action", "Shortcuts", "Description")] - widths = [100, 140, 300] - - def __init__(self, parent, help_entries): - super().__init__(parent, title="Help", - style=wx.DEFAULT_DIALOG_STYLE | wx.RESIZE_BORDER) - - sizer = wx.BoxSizer(wx.VERTICAL) - grid_sizer = wx.FlexGridSizer(0, 3, 8, 6) - # create and add the entries - bold = self.GetFont().MakeBold() - for r, row in enumerate(self.headers + help_entries): - for (col, width) in zip(row, self.widths): - label = wx.StaticText(self, label=col) - if r == 0: - label.SetFont(bold) - label.Wrap(width) - grid_sizer.Add(label, 0, 0, 0) - # finalize layout, create button - sizer.Add(grid_sizer, 0, wx.ALL, 6) - ok = wx.Button(self, wx.ID_OK) - sizer.Add(ok, 0, wx.ALIGN_CENTER_HORIZONTAL | wx.ALL, 8) - self.SetSizer(sizer) - sizer.Fit(self) - self.Layout() - self.Bind(wx.EVT_CLOSE, self._on_close) - ok.Bind(wx.EVT_BUTTON, self._on_close) - - def _on_close(self, event): - _HelpDialog._instance = None # remove global reference - self.DestroyLater() - event.Skip() - - @classmethod - def show(cls, parent, help_entries): - # if no dialog is shown, create one; otherwise just re-raise it - if cls._instance: - cls._instance.Raise() - return - cls._instance = cls(parent, help_entries) - cls._instance.Show() - - -@backend_tools._register_tool_class(_FigureCanvasWxBase) -class HelpWx(backend_tools.ToolHelpBase): - def trigger(self, *args): - 
_HelpDialog.show(self.figure.canvas.GetTopLevelParent(), - self._get_help_entries()) - - -@backend_tools._register_tool_class(_FigureCanvasWxBase) -class ToolCopyToClipboardWx(backend_tools.ToolCopyToClipboardBase): - def trigger(self, *args, **kwargs): - if not self.canvas._isDrawn: - self.canvas.draw() - if not self.canvas.bitmap.IsOk() or not wx.TheClipboard.Open(): - return - try: - wx.TheClipboard.SetData(wx.BitmapDataObject(self.canvas.bitmap)) - finally: - wx.TheClipboard.Close() - - -FigureManagerWx._toolbar2_class = NavigationToolbar2Wx -FigureManagerWx._toolmanager_toolbar_class = ToolbarWx - - -@_Backend.export -class _BackendWx(_Backend): - FigureCanvas = FigureCanvasWx - FigureManager = FigureManagerWx - mainloop = FigureManagerWx.start_main_loop diff --git a/spaces/leafShen/CodeFormer/CodeFormer/basicsr/ops/__init__.py b/spaces/leafShen/CodeFormer/CodeFormer/basicsr/ops/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/leafShen/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_ext.cpp b/spaces/leafShen/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_ext.cpp deleted file mode 100644 index 41c6df6f721bd95a525fd6a03dd9882e863de042..0000000000000000000000000000000000000000 --- a/spaces/leafShen/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_ext.cpp +++ /dev/null @@ -1,164 +0,0 @@ -// modify from -// https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/blob/mmdetection/mmdet/ops/dcn/src/deform_conv_cuda.c - -#include -#include - -#include -#include - -#define WITH_CUDA // always use cuda -#ifdef WITH_CUDA -int deform_conv_forward_cuda(at::Tensor input, at::Tensor weight, - at::Tensor offset, at::Tensor output, - at::Tensor columns, at::Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step); - -int deform_conv_backward_input_cuda(at::Tensor input, at::Tensor offset, - at::Tensor gradOutput, at::Tensor gradInput, - at::Tensor gradOffset, at::Tensor weight, - at::Tensor columns, int kW, int kH, int dW, - int dH, int padW, int padH, int dilationW, - int dilationH, int group, - int deformable_group, int im2col_step); - -int deform_conv_backward_parameters_cuda( - at::Tensor input, at::Tensor offset, at::Tensor gradOutput, - at::Tensor gradWeight, // at::Tensor gradBias, - at::Tensor columns, at::Tensor ones, int kW, int kH, int dW, int dH, - int padW, int padH, int dilationW, int dilationH, int group, - int deformable_group, float scale, int im2col_step); - -void modulated_deform_conv_cuda_forward( - at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor output, at::Tensor columns, - int kernel_h, int kernel_w, const int stride_h, const int stride_w, - const int pad_h, const int pad_w, const int dilation_h, - const int dilation_w, const int group, const int deformable_group, - const bool with_bias); - -void modulated_deform_conv_cuda_backward( - at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor columns, - at::Tensor grad_input, at::Tensor grad_weight, at::Tensor grad_bias, - at::Tensor grad_offset, at::Tensor grad_mask, at::Tensor grad_output, - int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, - int pad_w, int dilation_h, int dilation_w, int group, int deformable_group, - const bool with_bias); -#endif - -int 
deform_conv_forward(at::Tensor input, at::Tensor weight, - at::Tensor offset, at::Tensor output, - at::Tensor columns, at::Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step) { - if (input.device().is_cuda()) { -#ifdef WITH_CUDA - return deform_conv_forward_cuda(input, weight, offset, output, columns, - ones, kW, kH, dW, dH, padW, padH, dilationW, dilationH, group, - deformable_group, im2col_step); -#else - AT_ERROR("deform conv is not compiled with GPU support"); -#endif - } - AT_ERROR("deform conv is not implemented on CPU"); -} - -int deform_conv_backward_input(at::Tensor input, at::Tensor offset, - at::Tensor gradOutput, at::Tensor gradInput, - at::Tensor gradOffset, at::Tensor weight, - at::Tensor columns, int kW, int kH, int dW, - int dH, int padW, int padH, int dilationW, - int dilationH, int group, - int deformable_group, int im2col_step) { - if (input.device().is_cuda()) { -#ifdef WITH_CUDA - return deform_conv_backward_input_cuda(input, offset, gradOutput, - gradInput, gradOffset, weight, columns, kW, kH, dW, dH, padW, padH, - dilationW, dilationH, group, deformable_group, im2col_step); -#else - AT_ERROR("deform conv is not compiled with GPU support"); -#endif - } - AT_ERROR("deform conv is not implemented on CPU"); -} - -int deform_conv_backward_parameters( - at::Tensor input, at::Tensor offset, at::Tensor gradOutput, - at::Tensor gradWeight, // at::Tensor gradBias, - at::Tensor columns, at::Tensor ones, int kW, int kH, int dW, int dH, - int padW, int padH, int dilationW, int dilationH, int group, - int deformable_group, float scale, int im2col_step) { - if (input.device().is_cuda()) { -#ifdef WITH_CUDA - return deform_conv_backward_parameters_cuda(input, offset, gradOutput, - gradWeight, columns, ones, kW, kH, dW, dH, padW, padH, dilationW, - dilationH, group, deformable_group, scale, im2col_step); -#else - AT_ERROR("deform conv is not compiled with GPU support"); -#endif - } - AT_ERROR("deform conv is not implemented on CPU"); -} - -void modulated_deform_conv_forward( - at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor output, at::Tensor columns, - int kernel_h, int kernel_w, const int stride_h, const int stride_w, - const int pad_h, const int pad_w, const int dilation_h, - const int dilation_w, const int group, const int deformable_group, - const bool with_bias) { - if (input.device().is_cuda()) { -#ifdef WITH_CUDA - return modulated_deform_conv_cuda_forward(input, weight, bias, ones, - offset, mask, output, columns, kernel_h, kernel_w, stride_h, - stride_w, pad_h, pad_w, dilation_h, dilation_w, group, - deformable_group, with_bias); -#else - AT_ERROR("modulated deform conv is not compiled with GPU support"); -#endif - } - AT_ERROR("modulated deform conv is not implemented on CPU"); -} - -void modulated_deform_conv_backward( - at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor columns, - at::Tensor grad_input, at::Tensor grad_weight, at::Tensor grad_bias, - at::Tensor grad_offset, at::Tensor grad_mask, at::Tensor grad_output, - int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, - int pad_w, int dilation_h, int dilation_w, int group, int deformable_group, - const bool with_bias) { - if (input.device().is_cuda()) { -#ifdef WITH_CUDA - return modulated_deform_conv_cuda_backward(input, weight, bias, ones, - offset, mask, columns, 
grad_input, grad_weight, grad_bias, grad_offset, - grad_mask, grad_output, kernel_h, kernel_w, stride_h, stride_w, - pad_h, pad_w, dilation_h, dilation_w, group, deformable_group, - with_bias); -#else - AT_ERROR("modulated deform conv is not compiled with GPU support"); -#endif - } - AT_ERROR("modulated deform conv is not implemented on CPU"); -} - - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("deform_conv_forward", &deform_conv_forward, - "deform forward"); - m.def("deform_conv_backward_input", &deform_conv_backward_input, - "deform_conv_backward_input"); - m.def("deform_conv_backward_parameters", - &deform_conv_backward_parameters, - "deform_conv_backward_parameters"); - m.def("modulated_deform_conv_forward", - &modulated_deform_conv_forward, - "modulated deform conv forward"); - m.def("modulated_deform_conv_backward", - &modulated_deform_conv_backward, - "modulated deform conv backward"); -} diff --git a/spaces/leogabraneth/text-generation-webui-main/repositories/exllama/example_lora.py b/spaces/leogabraneth/text-generation-webui-main/repositories/exllama/example_lora.py deleted file mode 100644 index 53bfa61e129c01360b7a75e559795bf30c420a5a..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/repositories/exllama/example_lora.py +++ /dev/null @@ -1,79 +0,0 @@ -from model import ExLlama, ExLlamaCache, ExLlamaConfig -from tokenizer import ExLlamaTokenizer -from generator import ExLlamaGenerator -from lora import ExLlamaLora -import os, glob -import torch - -# Directory containt model, tokenizer, generator - -model_directory = "/mnt/str/models/_test_models/Neko-Institute-of-Science_LLaMA-7B-4bit-128g/" - -# Directory containing LoRA config and weights - -lora_directory = "/mnt/str/models/_test_loras/tloen_alpaca-lora-7b/" - -# Locate files we need within those directories - -tokenizer_path = os.path.join(model_directory, "tokenizer.model") -model_config_path = os.path.join(model_directory, "config.json") -st_pattern = os.path.join(model_directory, "*.safetensors") -model_path = glob.glob(st_pattern) - -lora_config_path = os.path.join(lora_directory, "adapter_config.json") -lora_path = os.path.join(lora_directory, "adapter_model.bin") - -# Create config, model, tokenizer and generator - -config = ExLlamaConfig(model_config_path) # create config from config.json -config.model_path = model_path # supply path to model weights file - -model = ExLlama(config) # create ExLlama instance and load the weights -tokenizer = ExLlamaTokenizer(tokenizer_path) # create tokenizer from tokenizer model file - -cache = ExLlamaCache(model) # create cache for inference -generator = ExLlamaGenerator(model, tokenizer, cache) # create generator - -# Load LoRA - -lora = ExLlamaLora(model, lora_config_path, lora_path) - -# Configure generator - -generator.settings.token_repetition_penalty_max = 1.2 -generator.settings.temperature = 0.65 -generator.settings.top_p = 0.4 -generator.settings.top_k = 0 -generator.settings.typical = 0.0 - -# Alpaca prompt - -prompt = \ - "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n" \ - "\n" \ - "### Instruction:\n" \ - "List five colors in alphabetical order.\n" \ - "\n" \ - "### Response:" - -# Generate with LoRA - -print(" --- LoRA ----------------- ") -print("") - -generator.lora = lora -torch.manual_seed(1337) -output = generator.generate_simple(prompt, max_new_tokens = 200) -print(output) - -# Generate without LoRA - -print("") -print(" --- No LoRA -------------- ") -print("") - -generator.lora = None -torch.manual_seed(1337) -output = generator.generate_simple(prompt, max_new_tokens = 200) -print(output) - diff --git a/spaces/leonelhs/faceshine/README.md b/spaces/leonelhs/faceshine/README.md deleted file mode 100644 index 928a8c754ee0fadf9f65ca321c5c2e838fbe8922..0000000000000000000000000000000000000000 --- a/spaces/leonelhs/faceshine/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Faceshine -emoji: 🚀 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/leurez/moss/src/utils/crypto/index.ts b/spaces/leurez/moss/src/utils/crypto/index.ts deleted file mode 100644 index 6c57c8d352fc4c6d5077976fef4053c1214a4b1a..0000000000000000000000000000000000000000 --- a/spaces/leurez/moss/src/utils/crypto/index.ts +++ /dev/null @@ -1,18 +0,0 @@ -import CryptoJS from 'crypto-js' - -const CryptoSecret = '__CRYPTO_SECRET__' - -export function enCrypto(data: any) { - const str = JSON.stringify(data) - return CryptoJS.AES.encrypt(str, CryptoSecret).toString() -} - -export function deCrypto(data: string) { - const bytes = CryptoJS.AES.decrypt(data, CryptoSecret) - const str = bytes.toString(CryptoJS.enc.Utf8) - - if (str) - return JSON.parse(str) - - return null -} diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/4Videosoft PDF Converter Ultimate 3.2.12 - SeuPirate Download Pc.md b/spaces/lincquiQcaudo/Top-20-Diffusion/4Videosoft PDF Converter Ultimate 3.2.12 - SeuPirate Download Pc.md deleted file mode 100644 index eca730204518feb22869f719f7e08c3da0053a14..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/4Videosoft PDF Converter Ultimate 3.2.12 - SeuPirate Download Pc.md +++ /dev/null @@ -1,6 +0,0 @@ -

-4Videosoft PDF Converter Ultimate 3.2.12 - SeuPirate Download Pc
-Download Zip https://bytlly.com/2uGxUe
-Download 4Videosoft PDF Converter Ultimate 3.2.12 - SeuPirate torrent or any other torrent from the Applications Windows. Direct download via magnet link. 1fdad05405

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Data Structures And Algorithms By G.a.v Pai [2021] Free Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Data Structures And Algorithms By G.a.v Pai [2021] Free Download.md deleted file mode 100644 index 1bf2d65311ab002f21d5ef216c86d88aae8f7325..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Data Structures And Algorithms By G.a.v Pai [2021] Free Download.md +++ /dev/null @@ -1,9 +0,0 @@ -

-data structures and algorithms by g.a.v pai free download
-Download File https://bytlly.com/2uGx4A
-Data Structures and Algorithms: Concepts, Methods, and Applications; Dr. G. A. Vijayalakshmi Pai (Author); Buy new: $27.50; In stock. Supplied and sold.
-Author: G. A. Vijayalakshmi Pai
-Annotation: The book is a continuation of a popular series that deals with the approach to software design and development. 8a78ff9644

    diff --git a/spaces/lithiumice/SadTalker/src/face3d/util/skin_mask.py b/spaces/lithiumice/SadTalker/src/face3d/util/skin_mask.py deleted file mode 100644 index a8a74e4c3b40d13b0258b83a12f56321a85bb179..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/face3d/util/skin_mask.py +++ /dev/null @@ -1,125 +0,0 @@ -"""This script is to generate skin attention mask for Deep3DFaceRecon_pytorch -""" - -import math -import numpy as np -import os -import cv2 - -class GMM: - def __init__(self, dim, num, w, mu, cov, cov_det, cov_inv): - self.dim = dim # feature dimension - self.num = num # number of Gaussian components - self.w = w # weights of Gaussian components (a list of scalars) - self.mu= mu # mean of Gaussian components (a list of 1xdim vectors) - self.cov = cov # covariance matrix of Gaussian components (a list of dimxdim matrices) - self.cov_det = cov_det # pre-computed determinet of covariance matrices (a list of scalars) - self.cov_inv = cov_inv # pre-computed inverse covariance matrices (a list of dimxdim matrices) - - self.factor = [0]*num - for i in range(self.num): - self.factor[i] = (2*math.pi)**(self.dim/2) * self.cov_det[i]**0.5 - - def likelihood(self, data): - assert(data.shape[1] == self.dim) - N = data.shape[0] - lh = np.zeros(N) - - for i in range(self.num): - data_ = data - self.mu[i] - - tmp = np.matmul(data_,self.cov_inv[i]) * data_ - tmp = np.sum(tmp,axis=1) - power = -0.5 * tmp - - p = np.array([math.exp(power[j]) for j in range(N)]) - p = p/self.factor[i] - lh += p*self.w[i] - - return lh - - -def _rgb2ycbcr(rgb): - m = np.array([[65.481, 128.553, 24.966], - [-37.797, -74.203, 112], - [112, -93.786, -18.214]]) - shape = rgb.shape - rgb = rgb.reshape((shape[0] * shape[1], 3)) - ycbcr = np.dot(rgb, m.transpose() / 255.) - ycbcr[:, 0] += 16. - ycbcr[:, 1:] += 128. - return ycbcr.reshape(shape) - - -def _bgr2ycbcr(bgr): - rgb = bgr[..., ::-1] - return _rgb2ycbcr(rgb) - - -gmm_skin_w = [0.24063933, 0.16365987, 0.26034665, 0.33535415] -gmm_skin_mu = [np.array([113.71862, 103.39613, 164.08226]), - np.array([150.19858, 105.18467, 155.51428]), - np.array([183.92976, 107.62468, 152.71820]), - np.array([114.90524, 113.59782, 151.38217])] -gmm_skin_cov_det = [5692842.5, 5851930.5, 2329131., 1585971.] 
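-# The weights (gmm_skin_w), means (gmm_skin_mu) and the pre-computed covariance
-# determinants and inverse covariances around this point define a 4-component Gaussian
-# mixture over YCbCr pixel values; skinmask() below evaluates this skin GMM and the
-# non-skin GMM on every pixel and combines them with prior_skin / prior_nonskin via
-# Bayes' rule to produce the per-pixel skin-probability mask.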
-gmm_skin_cov_inv = [np.array([[0.0019472069, 0.0020450759, -0.00060243998],[0.0020450759, 0.017700525, 0.0051420014],[-0.00060243998, 0.0051420014, 0.0081308950]]), - np.array([[0.0027110141, 0.0011036990, 0.0023122299],[0.0011036990, 0.010707724, 0.010742856],[0.0023122299, 0.010742856, 0.017481629]]), - np.array([[0.0048026871, 0.00022935172, 0.0077668377],[0.00022935172, 0.011729696, 0.0081661865],[0.0077668377, 0.0081661865, 0.025374353]]), - np.array([[0.0011989699, 0.0022453172, -0.0010748957],[0.0022453172, 0.047758564, 0.020332102],[-0.0010748957, 0.020332102, 0.024502251]])] - -gmm_skin = GMM(3, 4, gmm_skin_w, gmm_skin_mu, [], gmm_skin_cov_det, gmm_skin_cov_inv) - -gmm_nonskin_w = [0.12791070, 0.31130761, 0.34245777, 0.21832393] -gmm_nonskin_mu = [np.array([99.200851, 112.07533, 140.20602]), - np.array([110.91392, 125.52969, 130.19237]), - np.array([129.75864, 129.96107, 126.96808]), - np.array([112.29587, 128.85121, 129.05431])] -gmm_nonskin_cov_det = [458703648., 6466488., 90611376., 133097.63] -gmm_nonskin_cov_inv = [np.array([[0.00085371657, 0.00071197288, 0.00023958916],[0.00071197288, 0.0025935620, 0.00076557708],[0.00023958916, 0.00076557708, 0.0015042332]]), - np.array([[0.00024650150, 0.00045542428, 0.00015019422],[0.00045542428, 0.026412144, 0.018419769],[0.00015019422, 0.018419769, 0.037497383]]), - np.array([[0.00037054974, 0.00038146760, 0.00040408765],[0.00038146760, 0.0085505722, 0.0079136286],[0.00040408765, 0.0079136286, 0.010982352]]), - np.array([[0.00013709733, 0.00051228428, 0.00012777430],[0.00051228428, 0.28237113, 0.10528370],[0.00012777430, 0.10528370, 0.23468947]])] - -gmm_nonskin = GMM(3, 4, gmm_nonskin_w, gmm_nonskin_mu, [], gmm_nonskin_cov_det, gmm_nonskin_cov_inv) - -prior_skin = 0.8 -prior_nonskin = 1 - prior_skin - - -# calculate skin attention mask -def skinmask(imbgr): - im = _bgr2ycbcr(imbgr) - - data = im.reshape((-1,3)) - - lh_skin = gmm_skin.likelihood(data) - lh_nonskin = gmm_nonskin.likelihood(data) - - tmp1 = prior_skin * lh_skin - tmp2 = prior_nonskin * lh_nonskin - post_skin = tmp1 / (tmp1+tmp2) # posterior probability - - post_skin = post_skin.reshape((im.shape[0],im.shape[1])) - - post_skin = np.round(post_skin*255) - post_skin = post_skin.astype(np.uint8) - post_skin = np.tile(np.expand_dims(post_skin,2),[1,1,3]) # reshape to H*W*3 - - return post_skin - - -def get_skin_mask(img_path): - print('generating skin masks......') - names = [i for i in sorted(os.listdir( - img_path)) if 'jpg' in i or 'png' in i or 'jpeg' in i or 'PNG' in i] - save_path = os.path.join(img_path, 'mask') - if not os.path.isdir(save_path): - os.makedirs(save_path) - - for i in range(0, len(names)): - name = names[i] - print('%05d' % (i), ' ', name) - full_image_name = os.path.join(img_path, name) - img = cv2.imread(full_image_name).astype(np.float32) - skin_img = skinmask(img) - cv2.imwrite(os.path.join(save_path, name), skin_img.astype(np.uint8)) diff --git a/spaces/ljrmary/UT_Hackathon2/app.py b/spaces/ljrmary/UT_Hackathon2/app.py deleted file mode 100644 index 7cb29cdf6235d70e404fe683e95cdf893f47c8df..0000000000000000000000000000000000000000 --- a/spaces/ljrmary/UT_Hackathon2/app.py +++ /dev/null @@ -1,76 +0,0 @@ -import gradio as gr -import requests -import geopandas as gpd -from shapely.geometry import Point - -def process_input(address, selected_option, additional_input, num_units, temporal_split, modal_split, directional_distribution): - transport_analysis_needed = selected_option in ["Residential", "Office", "Community Facility"] - - output_address = 
f"You entered the address:\n{address}" - output_option = f"Selected option:\n{selected_option}" - - response = requests.get(f"https://geosearch.planninglabs.nyc/v2/autocomplete?text={address}") - data = response.json() - x = data["features"][0]["geometry"]["coordinates"] - - # Load the GeoJSON file into a GeoDataFrame - geodata = gpd.read_file("./zone_data.geojson") - - # Create a Point for the given coordinates - location_point = Point(x[0], x[1]) - - # Find the zone that the location point is in - zone = geodata[geodata.geometry.contains(location_point)]["id"].values.item() - - output_additional = ( - f"Number of Units/Spaces:\n{additional_input}" - if selected_option in ["Off-Street Parking Facility", "Residential"] - else f"Area (in 1000 GSF):\n{additional_input}" - ) - output_num_units = f"Number of Units:\n{num_units}" - output_temporal_split = f"Temporal Split:\n{temporal_split}" - output_modal_split = f"Modal Split:\n{modal_split}" - output_directional_distribution = f"Directional Distribution:\n{directional_distribution}" - output_transport_analysis = f"Transport Analysis Needed:\n{transport_analysis_needed}" - - return ( - output_address, - output_option, - output_additional, - output_num_units, - output_temporal_split, - output_modal_split, - output_directional_distribution, - output_transport_analysis, - f"Zone:\n{zone}", - ) - -iface = gr.Interface( - fn=process_input, - inputs=[ - gr.inputs.Textbox(label="Address", type="text"), - gr.inputs.Radio( - ["Office", "Local Retail", "Exhibit Space", "Restaurant"], label="Land Use Selection" - ), - gr.inputs.Number( - label="Number of Units/Spaces or Area (in 1000 GSF)", default=1 - ), - gr.inputs.Textbox(label="Number of Units", type="text"), - gr.inputs.Radio(["morning", "midday", "afternoon"], label="Temporal Split"), - gr.inputs.Radio(["car", "taxi", "bus", "subway", "walking"], label="Modal Split"), - gr.inputs.Dropdown(["inbound", "outbound"], label="Directional Distribution"), - ], - outputs=[ - gr.outputs.Textbox(label="Address"), - gr.outputs.Textbox(label="Land Use Selection"), - gr.outputs.Textbox(label="Number of Units/Spaces or Area"), - gr.outputs.Textbox(label="Number of Units"), - gr.outputs.Textbox(label="Temporal Split"), - gr.outputs.Textbox(label="Modal Split"), - gr.outputs.Textbox(label="Directional Distribution"), - gr.outputs.Textbox(label="Transport Analysis Needed"), - gr.outputs.Textbox(label="Zone"), - ], -) - -iface.launch() diff --git a/spaces/luckli/chavinlo-gpt4-x-alpaca/app.py b/spaces/luckli/chavinlo-gpt4-x-alpaca/app.py deleted file mode 100644 index 09f1c36699f1a948b84b078910d8785b0f7c33ad..0000000000000000000000000000000000000000 --- a/spaces/luckli/chavinlo-gpt4-x-alpaca/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/chavinlo/gpt4-x-alpaca").launch() \ No newline at end of file diff --git a/spaces/luluneko1/stable-diffusion-webui/README.md b/spaces/luluneko1/stable-diffusion-webui/README.md deleted file mode 100644 index be90d7ea477a42a1bf7f8e46e43762acf28d3bbe..0000000000000000000000000000000000000000 --- a/spaces/luluneko1/stable-diffusion-webui/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Stable Diffusion Webui -emoji: 💻 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: openrail -duplicated_from: kamiyamai/stable-diffusion-webui ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/lyf/faster-whisper-webui/src/segments.py b/spaces/lyf/faster-whisper-webui/src/segments.py deleted file mode 100644 index ec2650dceade5d0b2022264f6419115eab085aea..0000000000000000000000000000000000000000 --- a/spaces/lyf/faster-whisper-webui/src/segments.py +++ /dev/null @@ -1,55 +0,0 @@ -from typing import Any, Dict, List - -import copy - -def merge_timestamps(timestamps: List[Dict[str, Any]], merge_window: float = 5, max_merge_size: float = 30, padding_left: float = 1, padding_right: float = 1): - result = [] - - if len(timestamps) == 0: - return result - if max_merge_size is None: - return timestamps - - if padding_left is None: - padding_left = 0 - if padding_right is None: - padding_right = 0 - - processed_time = 0 - current_segment = None - - for i in range(len(timestamps)): - next_segment = timestamps[i] - - delta = next_segment['start'] - processed_time - - # Note that segments can still be longer than the max merge size, they just won't be merged in that case - if current_segment is None or (merge_window is not None and delta > merge_window) \ - or next_segment['end'] - current_segment['start'] > max_merge_size: - # Finish the current segment - if current_segment is not None: - # Add right padding - finish_padding = min(padding_right, delta / 2) if delta < padding_left + padding_right else padding_right - current_segment['end'] += finish_padding - delta -= finish_padding - - result.append(current_segment) - - # Start a new segment - current_segment = copy.deepcopy(next_segment) - - # Pad the segment - current_segment['start'] = current_segment['start'] - min(padding_left, delta) - processed_time = current_segment['end'] - - else: - # Merge the segment - current_segment['end'] = next_segment['end'] - processed_time = current_segment['end'] - - # Add the last segment - if current_segment is not None: - current_segment['end'] += padding_right - result.append(current_segment) - - return result \ No newline at end of file diff --git a/spaces/ma-xu/LIVE/pybind11/README.md b/spaces/ma-xu/LIVE/pybind11/README.md deleted file mode 100644 index bae6cf2b5c46f7181e28bef6a1c2d4101086433d..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/README.md +++ /dev/null @@ -1,143 +0,0 @@ -![pybind11 logo](https://github.com/pybind/pybind11/raw/master/docs/pybind11-logo.png) - -# pybind11 — Seamless operability between C++11 and Python - -[![Documentation Status](https://readthedocs.org/projects/pybind11/badge/?version=master)](http://pybind11.readthedocs.org/en/master/?badge=master) -[![Documentation Status](https://readthedocs.org/projects/pybind11/badge/?version=stable)](http://pybind11.readthedocs.org/en/stable/?badge=stable) -[![Gitter chat](https://img.shields.io/gitter/room/gitterHQ/gitter.svg)](https://gitter.im/pybind/Lobby) -[![CI](https://github.com/pybind/pybind11/workflows/CI/badge.svg)](https://github.com/pybind/pybind11/actions) -[![Build status](https://ci.appveyor.com/api/projects/status/riaj54pn4h08xy40?svg=true)](https://ci.appveyor.com/project/wjakob/pybind11) - -**pybind11** is a lightweight header-only library that exposes C++ types in -Python and vice versa, mainly to create Python bindings of existing C++ code. -Its goals and syntax are similar to the excellent [Boost.Python][] library by -David Abrahams: to minimize boilerplate code in traditional extension modules -by inferring type information using compile-time introspection. - -The main issue with Boost.Python—and the reason for creating such a similar -project—is Boost. 
Boost is an enormously large and complex suite of utility -libraries that works with almost every C++ compiler in existence. This -compatibility has its cost: arcane template tricks and workarounds are -necessary to support the oldest and buggiest of compiler specimens. Now that -C++11-compatible compilers are widely available, this heavy machinery has -become an excessively large and unnecessary dependency. - -Think of this library as a tiny self-contained version of Boost.Python with -everything stripped away that isn't relevant for binding generation. Without -comments, the core header files only require ~4K lines of code and depend on -Python (2.7 or 3.5+, or PyPy) and the C++ standard library. This compact -implementation was possible thanks to some of the new C++11 language features -(specifically: tuples, lambda functions and variadic templates). Since its -creation, this library has grown beyond Boost.Python in many ways, leading to -dramatically simpler binding code in many common situations. - -Tutorial and reference documentation is provided at -[pybind11.readthedocs.org][]. A PDF version of the manual is available -[here][docs-pdf]. - -## Core features -pybind11 can map the following core C++ features to Python: - -- Functions accepting and returning custom data structures per value, reference, or pointer -- Instance methods and static methods -- Overloaded functions -- Instance attributes and static attributes -- Arbitrary exception types -- Enumerations -- Callbacks -- Iterators and ranges -- Custom operators -- Single and multiple inheritance -- STL data structures -- Smart pointers with reference counting like `std::shared_ptr` -- Internal references with correct reference counting -- C++ classes with virtual (and pure virtual) methods can be extended in Python - -## Goodies -In addition to the core functionality, pybind11 provides some extra goodies: - -- Python 2.7, 3.5+, and PyPy (tested on 7.3) are supported with an implementation-agnostic - interface. - -- It is possible to bind C++11 lambda functions with captured variables. The - lambda capture data is stored inside the resulting Python function object. - -- pybind11 uses C++11 move constructors and move assignment operators whenever - possible to efficiently transfer custom data types. - -- It's easy to expose the internal storage of custom data types through - Pythons' buffer protocols. This is handy e.g. for fast conversion between - C++ matrix classes like Eigen and NumPy without expensive copy operations. - -- pybind11 can automatically vectorize functions so that they are transparently - applied to all entries of one or more NumPy array arguments. - -- Python's slice-based access and assignment operations can be supported with - just a few lines of code. - -- Everything is contained in just a few header files; there is no need to link - against any additional libraries. - -- Binaries are generally smaller by a factor of at least 2 compared to - equivalent bindings generated by Boost.Python. A recent pybind11 conversion - of PyRosetta, an enormous Boost.Python binding project, - [reported][pyrosetta-report] a binary size reduction of **5.4x** and compile - time reduction by **5.8x**. - -- Function signatures are precomputed at compile time (using `constexpr`), - leading to smaller binaries. - -- With little extra effort, C++ types can be pickled and unpickled similar to - regular Python objects. - -## Supported compilers - -1. Clang/LLVM 3.3 or newer (for Apple Xcode's clang, this is 5.0.0 or newer) -2. 
GCC 4.8 or newer -3. Microsoft Visual Studio 2015 Update 3 or newer -4. Intel C++ compiler 17 or newer (16 with pybind11 v2.0 and 15 with pybind11 - v2.0 and a [workaround][intel-15-workaround]) -5. Cygwin/GCC (tested on 2.5.1) - -## About - -This project was created by [Wenzel Jakob](http://rgl.epfl.ch/people/wjakob). -Significant features and/or improvements to the code were contributed by -Jonas Adler, -Lori A. Burns, -Sylvain Corlay, -Trent Houliston, -Axel Huebl, -@hulucc, -Sergey Lyskov -Johan Mabille, -Tomasz Miąsko, -Dean Moldovan, -Ben Pritchard, -Jason Rhinelander, -Boris Schäling, -Pim Schellart, -Henry Schreiner, -Ivan Smirnov, and -Patrick Stewart. - -### Contributing - -See the [contributing guide][] for information on building and contributing to -pybind11. - - -### License - -pybind11 is provided under a BSD-style license that can be found in the -[`LICENSE`][] file. By using, distributing, or contributing to this project, -you agree to the terms and conditions of this license. - - -[pybind11.readthedocs.org]: http://pybind11.readthedocs.org/en/master -[docs-pdf]: https://media.readthedocs.org/pdf/pybind11/master/pybind11.pdf -[Boost.Python]: http://www.boost.org/doc/libs/1_58_0/libs/python/doc/ -[pyrosetta-report]: http://graylab.jhu.edu/RosettaCon2016/PyRosetta-4.pdf -[contributing guide]: https://github.com/pybind/pybind11/blob/master/.github/CONTRIBUTING.md -[`LICENSE`]: https://github.com/pybind/pybind11/blob/master/LICENSE -[intel-15-workaround]: https://github.com/pybind/pybind11/issues/276 diff --git a/spaces/ma-xu/LIVE/pybind11/tools/mkdoc.py b/spaces/ma-xu/LIVE/pybind11/tools/mkdoc.py deleted file mode 100644 index a22aacdefd0171078874bd77bf0175229646656f..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tools/mkdoc.py +++ /dev/null @@ -1,387 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# -# Syntax: mkdoc.py [-I ..] [.. a list of header files ..] 
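-# e.g. (hypothetical paths): python3 mkdoc.py -I include include/my_header.h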
-# -# Extract documentation from C++ header files to use it in Python bindings -# - -import os -import sys -import platform -import re -import textwrap - -from clang import cindex -from clang.cindex import CursorKind -from collections import OrderedDict -from glob import glob -from threading import Thread, Semaphore -from multiprocessing import cpu_count - -RECURSE_LIST = [ - CursorKind.TRANSLATION_UNIT, - CursorKind.NAMESPACE, - CursorKind.CLASS_DECL, - CursorKind.STRUCT_DECL, - CursorKind.ENUM_DECL, - CursorKind.CLASS_TEMPLATE -] - -PRINT_LIST = [ - CursorKind.CLASS_DECL, - CursorKind.STRUCT_DECL, - CursorKind.ENUM_DECL, - CursorKind.ENUM_CONSTANT_DECL, - CursorKind.CLASS_TEMPLATE, - CursorKind.FUNCTION_DECL, - CursorKind.FUNCTION_TEMPLATE, - CursorKind.CONVERSION_FUNCTION, - CursorKind.CXX_METHOD, - CursorKind.CONSTRUCTOR, - CursorKind.FIELD_DECL -] - -PREFIX_BLACKLIST = [ - CursorKind.TRANSLATION_UNIT -] - -CPP_OPERATORS = { - '<=': 'le', '>=': 'ge', '==': 'eq', '!=': 'ne', '[]': 'array', - '+=': 'iadd', '-=': 'isub', '*=': 'imul', '/=': 'idiv', '%=': - 'imod', '&=': 'iand', '|=': 'ior', '^=': 'ixor', '<<=': 'ilshift', - '>>=': 'irshift', '++': 'inc', '--': 'dec', '<<': 'lshift', '>>': - 'rshift', '&&': 'land', '||': 'lor', '!': 'lnot', '~': 'bnot', - '&': 'band', '|': 'bor', '+': 'add', '-': 'sub', '*': 'mul', '/': - 'div', '%': 'mod', '<': 'lt', '>': 'gt', '=': 'assign', '()': 'call' -} - -CPP_OPERATORS = OrderedDict( - sorted(CPP_OPERATORS.items(), key=lambda t: -len(t[0]))) - -job_count = cpu_count() -job_semaphore = Semaphore(job_count) - - -class NoFilenamesError(ValueError): - pass - - -def d(s): - return s if isinstance(s, str) else s.decode('utf8') - - -def sanitize_name(name): - name = re.sub(r'type-parameter-0-([0-9]+)', r'T\1', name) - for k, v in CPP_OPERATORS.items(): - name = name.replace('operator%s' % k, 'operator_%s' % v) - name = re.sub('<.*>', '', name) - name = ''.join([ch if ch.isalnum() else '_' for ch in name]) - name = re.sub('_$', '', re.sub('_+', '_', name)) - return '__doc_' + name - - -def process_comment(comment): - result = '' - - # Remove C++ comment syntax - leading_spaces = float('inf') - for s in comment.expandtabs(tabsize=4).splitlines(): - s = s.strip() - if s.startswith('/*'): - s = s[2:].lstrip('*') - elif s.endswith('*/'): - s = s[:-2].rstrip('*') - elif s.startswith('///'): - s = s[3:] - if s.startswith('*'): - s = s[1:] - if len(s) > 0: - leading_spaces = min(leading_spaces, len(s) - len(s.lstrip())) - result += s + '\n' - - if leading_spaces != float('inf'): - result2 = "" - for s in result.splitlines(): - result2 += s[leading_spaces:] + '\n' - result = result2 - - # Doxygen tags - cpp_group = r'([\w:]+)' - param_group = r'([\[\w:\]]+)' - - s = result - s = re.sub(r'\\c\s+%s' % cpp_group, r'``\1``', s) - s = re.sub(r'\\a\s+%s' % cpp_group, r'*\1*', s) - s = re.sub(r'\\e\s+%s' % cpp_group, r'*\1*', s) - s = re.sub(r'\\em\s+%s' % cpp_group, r'*\1*', s) - s = re.sub(r'\\b\s+%s' % cpp_group, r'**\1**', s) - s = re.sub(r'\\ingroup\s+%s' % cpp_group, r'', s) - s = re.sub(r'\\param%s?\s+%s' % (param_group, cpp_group), - r'\n\n$Parameter ``\2``:\n\n', s) - s = re.sub(r'\\tparam%s?\s+%s' % (param_group, cpp_group), - r'\n\n$Template parameter ``\2``:\n\n', s) - - for in_, out_ in { - 'return': 'Returns', - 'author': 'Author', - 'authors': 'Authors', - 'copyright': 'Copyright', - 'date': 'Date', - 'remark': 'Remark', - 'sa': 'See also', - 'see': 'See also', - 'extends': 'Extends', - 'throw': 'Throws', - 'throws': 'Throws' - }.items(): - s = 
re.sub(r'\\%s\s*' % in_, r'\n\n$%s:\n\n' % out_, s) - - s = re.sub(r'\\details\s*', r'\n\n', s) - s = re.sub(r'\\brief\s*', r'', s) - s = re.sub(r'\\short\s*', r'', s) - s = re.sub(r'\\ref\s*', r'', s) - - s = re.sub(r'\\code\s?(.*?)\s?\\endcode', - r"```\n\1\n```\n", s, flags=re.DOTALL) - - # HTML/TeX tags - s = re.sub(r'(.*?)', r'``\1``', s, flags=re.DOTALL) - s = re.sub(r'
<pre>(.*?)</pre>', r"```\n\1\n```\n", s, flags=re.DOTALL) - s = re.sub(r'<em>(.*?)</em>', r'*\1*', s, flags=re.DOTALL) - s = re.sub(r'<b>(.*?)</b>', r'**\1**', s, flags=re.DOTALL) - s = re.sub(r'\\f\$(.*?)\\f\$', r'$\1$', s, flags=re.DOTALL) - s = re.sub(r'<li>', r'\n\n* ', s) - s = re.sub(r'</?ul>', r'', s) - s = re.sub(r'
  • ', r'\n\n', s) - - s = s.replace('``true``', '``True``') - s = s.replace('``false``', '``False``') - - # Re-flow text - wrapper = textwrap.TextWrapper() - wrapper.expand_tabs = True - wrapper.replace_whitespace = True - wrapper.drop_whitespace = True - wrapper.width = 70 - wrapper.initial_indent = wrapper.subsequent_indent = '' - - result = '' - in_code_segment = False - for x in re.split(r'(```)', s): - if x == '```': - if not in_code_segment: - result += '```\n' - else: - result += '\n```\n\n' - in_code_segment = not in_code_segment - elif in_code_segment: - result += x.strip() - else: - for y in re.split(r'(?: *\n *){2,}', x): - wrapped = wrapper.fill(re.sub(r'\s+', ' ', y).strip()) - if len(wrapped) > 0 and wrapped[0] == '$': - result += wrapped[1:] + '\n' - wrapper.initial_indent = \ - wrapper.subsequent_indent = ' ' * 4 - else: - if len(wrapped) > 0: - result += wrapped + '\n\n' - wrapper.initial_indent = wrapper.subsequent_indent = '' - return result.rstrip().lstrip('\n') - - -def extract(filename, node, prefix, output): - if not (node.location.file is None or - os.path.samefile(d(node.location.file.name), filename)): - return 0 - if node.kind in RECURSE_LIST: - sub_prefix = prefix - if node.kind not in PREFIX_BLACKLIST: - if len(sub_prefix) > 0: - sub_prefix += '_' - sub_prefix += d(node.spelling) - for i in node.get_children(): - extract(filename, i, sub_prefix, output) - if node.kind in PRINT_LIST: - comment = d(node.raw_comment) if node.raw_comment is not None else '' - comment = process_comment(comment) - sub_prefix = prefix - if len(sub_prefix) > 0: - sub_prefix += '_' - if len(node.spelling) > 0: - name = sanitize_name(sub_prefix + d(node.spelling)) - output.append((name, filename, comment)) - - -class ExtractionThread(Thread): - def __init__(self, filename, parameters, output): - Thread.__init__(self) - self.filename = filename - self.parameters = parameters - self.output = output - job_semaphore.acquire() - - def run(self): - print('Processing "%s" ..' % self.filename, file=sys.stderr) - try: - index = cindex.Index( - cindex.conf.lib.clang_createIndex(False, True)) - tu = index.parse(self.filename, self.parameters) - extract(self.filename, tu.cursor, '', self.output) - finally: - job_semaphore.release() - - -def read_args(args): - parameters = [] - filenames = [] - if "-x" not in args: - parameters.extend(['-x', 'c++']) - if not any(it.startswith("-std=") for it in args): - parameters.append('-std=c++11') - - if platform.system() == 'Darwin': - dev_path = '/Applications/Xcode.app/Contents/Developer/' - lib_dir = dev_path + 'Toolchains/XcodeDefault.xctoolchain/usr/lib/' - sdk_dir = dev_path + 'Platforms/MacOSX.platform/Developer/SDKs' - libclang = lib_dir + 'libclang.dylib' - - if os.path.exists(libclang): - cindex.Config.set_library_path(os.path.dirname(libclang)) - - if os.path.exists(sdk_dir): - sysroot_dir = os.path.join(sdk_dir, next(os.walk(sdk_dir))[1][0]) - parameters.append('-isysroot') - parameters.append(sysroot_dir) - elif platform.system() == 'Linux': - # cython.util.find_library does not find `libclang` for all clang - # versions and distributions. 
LLVM switched to a monolithical setup - # that includes everything under /usr/lib/llvm{version_number}/ - # We therefore glob for the library and select the highest version - library_file = sorted(glob("/usr/lib/llvm-*/lib/libclang.so"), reverse=True)[0] - cindex.Config.set_library_file(library_file) - - # clang doesn't find its own base includes by default on Linux, - # but different distros install them in different paths. - # Try to autodetect, preferring the highest numbered version. - def clang_folder_version(d): - return [int(ver) for ver in re.findall(r'(? - -// #include this for size_t -#include -#include - -namespace thrust -{ - -/*! - * \addtogroup allocation_functions Allocation Functions - * \{ - */ - -/*! \p device_new implements the placement \c new operator for types - * resident in device memory. \p device_new calls T's null - * constructor on a array of objects in device memory. - * No memory is allocated by this function. - * - * \param p A \p device_ptr to a region of device memory into which - * to construct one or many Ts. - * \param n The number of objects to construct at \p p. - * \return p, casted to T's type. - * - * \see device_ptr - */ -template - device_ptr device_new(device_ptr p, - const size_t n = 1); - -/*! \p device_new implements the placement new operator for types - * resident in device memory. \p device_new calls T's copy - * constructor on a array of objects in device memory. No memory is - * allocated by this function. - * - * \param p A \p device_ptr to a region of device memory into which to - * construct one or many Ts. - * \param exemplar The value from which to copy. - * \param n The number of objects to construct at \p p. - * \return p, casted to T's type. - * - * \see device_ptr - * \see fill - */ -template - device_ptr device_new(device_ptr p, - const T &exemplar, - const size_t n = 1); - -/*! \p device_new implements the new operator for types resident in device memory. - * It allocates device memory large enough to hold \p n new objects of type \c T. - * - * \param n The number of objects to allocate. Defaults to \c 1. - * \return A \p device_ptr to the newly allocated region of device memory. - */ -template - device_ptr device_new(const size_t n = 1); - -/*! \} - */ - -} // end thrust - -#include - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/transform.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/transform.h deleted file mode 100644 index 895164ce5afbc15c1ceff3a921d3b3765e99f251..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/transform.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include - -// cpp has no special transform - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/adjacent_difference.h b/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/adjacent_difference.h deleted file mode 100644 index d22b4aac348c13fdafa9f03662c820d8fc3b377b..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/adjacent_difference.h +++ /dev/null @@ -1,50 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace tbb -{ -namespace detail -{ - -template - OutputIterator adjacent_difference(execution_policy &exec, - InputIterator first, - InputIterator last, - OutputIterator result, - BinaryFunction binary_op) -{ - // tbb prefers generic::adjacent_difference to cpp::adjacent_difference - return thrust::system::detail::generic::adjacent_difference(exec, first, last, result, binary_op); -} // end adjacent_difference() - -} // end detail -} // end tbb -} // end system -} // end thrust - diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/models/video_recurrent_gan_model.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/models/video_recurrent_gan_model.py deleted file mode 100644 index 74cf81145c50ffafb220d22b51e56746dee5ba41..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/models/video_recurrent_gan_model.py +++ /dev/null @@ -1,180 +0,0 @@ -import torch -from collections import OrderedDict - -from basicsr.archs import build_network -from basicsr.losses import build_loss -from basicsr.utils import get_root_logger -from basicsr.utils.registry import MODEL_REGISTRY -from .video_recurrent_model import VideoRecurrentModel - - -@MODEL_REGISTRY.register() -class VideoRecurrentGANModel(VideoRecurrentModel): - - def init_training_settings(self): - train_opt = self.opt['train'] - - self.ema_decay = train_opt.get('ema_decay', 0) - if self.ema_decay > 0: - logger = get_root_logger() - logger.info(f'Use Exponential Moving Average with decay: {self.ema_decay}') - # build network net_g with Exponential Moving Average (EMA) - # net_g_ema only used for testing on one GPU and saving. 
- # There is no need to wrap with DistributedDataParallel - self.net_g_ema = build_network(self.opt['network_g']).to(self.device) - # load pretrained model - load_path = self.opt['path'].get('pretrain_network_g', None) - if load_path is not None: - self.load_network(self.net_g_ema, load_path, self.opt['path'].get('strict_load_g', True), 'params_ema') - else: - self.model_ema(0) # copy net_g weight - self.net_g_ema.eval() - - # define network net_d - self.net_d = build_network(self.opt['network_d']) - self.net_d = self.model_to_device(self.net_d) - self.print_network(self.net_d) - - # load pretrained models - load_path = self.opt['path'].get('pretrain_network_d', None) - if load_path is not None: - param_key = self.opt['path'].get('param_key_d', 'params') - self.load_network(self.net_d, load_path, self.opt['path'].get('strict_load_d', True), param_key) - - self.net_g.train() - self.net_d.train() - - # define losses - if train_opt.get('pixel_opt'): - self.cri_pix = build_loss(train_opt['pixel_opt']).to(self.device) - else: - self.cri_pix = None - - if train_opt.get('perceptual_opt'): - self.cri_perceptual = build_loss(train_opt['perceptual_opt']).to(self.device) - else: - self.cri_perceptual = None - - if train_opt.get('gan_opt'): - self.cri_gan = build_loss(train_opt['gan_opt']).to(self.device) - - self.net_d_iters = train_opt.get('net_d_iters', 1) - self.net_d_init_iters = train_opt.get('net_d_init_iters', 0) - - # set up optimizers and schedulers - self.setup_optimizers() - self.setup_schedulers() - - def setup_optimizers(self): - train_opt = self.opt['train'] - if train_opt['fix_flow']: - normal_params = [] - flow_params = [] - for name, param in self.net_g.named_parameters(): - if 'spynet' in name: # The fix_flow now only works for spynet. - flow_params.append(param) - else: - normal_params.append(param) - - optim_params = [ - { # add flow params first - 'params': flow_params, - 'lr': train_opt['lr_flow'] - }, - { - 'params': normal_params, - 'lr': train_opt['optim_g']['lr'] - }, - ] - else: - optim_params = self.net_g.parameters() - - # optimizer g - optim_type = train_opt['optim_g'].pop('type') - self.optimizer_g = self.get_optimizer(optim_type, optim_params, **train_opt['optim_g']) - self.optimizers.append(self.optimizer_g) - # optimizer d - optim_type = train_opt['optim_d'].pop('type') - self.optimizer_d = self.get_optimizer(optim_type, self.net_d.parameters(), **train_opt['optim_d']) - self.optimizers.append(self.optimizer_d) - - def optimize_parameters(self, current_iter): - logger = get_root_logger() - # optimize net_g - for p in self.net_d.parameters(): - p.requires_grad = False - - if self.fix_flow_iter: - if current_iter == 1: - logger.info(f'Fix flow network and feature extractor for {self.fix_flow_iter} iters.') - for name, param in self.net_g.named_parameters(): - if 'spynet' in name or 'edvr' in name: - param.requires_grad_(False) - elif current_iter == self.fix_flow_iter: - logger.warning('Train all the parameters.') - self.net_g.requires_grad_(True) - - self.optimizer_g.zero_grad() - self.output = self.net_g(self.lq) - - _, _, c, h, w = self.output.size() - - l_g_total = 0 - loss_dict = OrderedDict() - if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters): - # pixel loss - if self.cri_pix: - l_g_pix = self.cri_pix(self.output, self.gt) - l_g_total += l_g_pix - loss_dict['l_g_pix'] = l_g_pix - # perceptual loss - if self.cri_perceptual: - l_g_percep, l_g_style = self.cri_perceptual(self.output.view(-1, c, h, w), self.gt.view(-1, c, h, w)) - if 
l_g_percep is not None: - l_g_total += l_g_percep - loss_dict['l_g_percep'] = l_g_percep - if l_g_style is not None: - l_g_total += l_g_style - loss_dict['l_g_style'] = l_g_style - # gan loss - fake_g_pred = self.net_d(self.output.view(-1, c, h, w)) - l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False) - l_g_total += l_g_gan - loss_dict['l_g_gan'] = l_g_gan - - l_g_total.backward() - self.optimizer_g.step() - - # optimize net_d - for p in self.net_d.parameters(): - p.requires_grad = True - - self.optimizer_d.zero_grad() - # real - # reshape to (b*n, c, h, w) - real_d_pred = self.net_d(self.gt.view(-1, c, h, w)) - l_d_real = self.cri_gan(real_d_pred, True, is_disc=True) - loss_dict['l_d_real'] = l_d_real - loss_dict['out_d_real'] = torch.mean(real_d_pred.detach()) - l_d_real.backward() - # fake - # reshape to (b*n, c, h, w) - fake_d_pred = self.net_d(self.output.view(-1, c, h, w).detach()) - l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True) - loss_dict['l_d_fake'] = l_d_fake - loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach()) - l_d_fake.backward() - self.optimizer_d.step() - - self.log_dict = self.reduce_loss_dict(loss_dict) - - if self.ema_decay > 0: - self.model_ema(decay=self.ema_decay) - - def save(self, epoch, current_iter): - if self.ema_decay > 0: - self.save_network([self.net_g, self.net_g_ema], 'net_g', current_iter, param_key=['params', 'params_ema']) - else: - self.save_network(self.net_g, 'net_g', current_iter) - self.save_network(self.net_d, 'net_d', current_iter) - self.save_training_state(epoch, current_iter) diff --git a/spaces/manhkhanhUIT/BOPBTL/Global/detection_models/Synchronized-BatchNorm-PyTorch/tests/test_sync_batchnorm.py b/spaces/manhkhanhUIT/BOPBTL/Global/detection_models/Synchronized-BatchNorm-PyTorch/tests/test_sync_batchnorm.py deleted file mode 100644 index 1f7b6c64c06fc26348489cd15669501a2098c82f..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/BOPBTL/Global/detection_models/Synchronized-BatchNorm-PyTorch/tests/test_sync_batchnorm.py +++ /dev/null @@ -1,114 +0,0 @@ -# -*- coding: utf-8 -*- -# File : test_sync_batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. 
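-# The test cases below compare SynchronizedBatchNorm1d/2d against torch.nn.BatchNorm1d/2d
-# in both train and eval mode, on CPU and across two GPUs via DataParallelWithCallback.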
- -import unittest - -import torch -import torch.nn as nn -from torch.autograd import Variable - -from sync_batchnorm import set_sbn_eps_mode -from sync_batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, DataParallelWithCallback -from sync_batchnorm.unittest import TorchTestCase - -set_sbn_eps_mode('plus') - - -def handy_var(a, unbias=True): - n = a.size(0) - asum = a.sum(dim=0) - as_sum = (a ** 2).sum(dim=0) # a square sum - sumvar = as_sum - asum * asum / n - if unbias: - return sumvar / (n - 1) - else: - return sumvar / n - - -def _find_bn(module): - for m in module.modules(): - if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, SynchronizedBatchNorm1d, SynchronizedBatchNorm2d)): - return m - - -class SyncTestCase(TorchTestCase): - def _syncParameters(self, bn1, bn2): - bn1.reset_parameters() - bn2.reset_parameters() - if bn1.affine and bn2.affine: - bn2.weight.data.copy_(bn1.weight.data) - bn2.bias.data.copy_(bn1.bias.data) - - def _checkBatchNormResult(self, bn1, bn2, input, is_train, cuda=False): - """Check the forward and backward for the customized batch normalization.""" - bn1.train(mode=is_train) - bn2.train(mode=is_train) - - if cuda: - input = input.cuda() - - self._syncParameters(_find_bn(bn1), _find_bn(bn2)) - - input1 = Variable(input, requires_grad=True) - output1 = bn1(input1) - output1.sum().backward() - input2 = Variable(input, requires_grad=True) - output2 = bn2(input2) - output2.sum().backward() - - self.assertTensorClose(input1.data, input2.data) - self.assertTensorClose(output1.data, output2.data) - self.assertTensorClose(input1.grad, input2.grad) - self.assertTensorClose(_find_bn(bn1).running_mean, _find_bn(bn2).running_mean) - self.assertTensorClose(_find_bn(bn1).running_var, _find_bn(bn2).running_var) - - def testSyncBatchNormNormalTrain(self): - bn = nn.BatchNorm1d(10) - sync_bn = SynchronizedBatchNorm1d(10) - - self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10), True) - - def testSyncBatchNormNormalEval(self): - bn = nn.BatchNorm1d(10) - sync_bn = SynchronizedBatchNorm1d(10) - - self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10), False) - - def testSyncBatchNormSyncTrain(self): - bn = nn.BatchNorm1d(10, eps=1e-5, affine=False) - sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - - bn.cuda() - sync_bn.cuda() - - self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10), True, cuda=True) - - def testSyncBatchNormSyncEval(self): - bn = nn.BatchNorm1d(10, eps=1e-5, affine=False) - sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - - bn.cuda() - sync_bn.cuda() - - self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10), False, cuda=True) - - def testSyncBatchNorm2DSyncTrain(self): - bn = nn.BatchNorm2d(10) - sync_bn = SynchronizedBatchNorm2d(10) - sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - - bn.cuda() - sync_bn.cuda() - - self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10, 16, 16), True, cuda=True) - - -if __name__ == '__main__': - unittest.main() diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/data/pix2pix_dataset.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/data/pix2pix_dataset.py deleted file mode 100644 index 511bd83f55be80ae50bb09c4f6c11fafd4cf8214..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/data/pix2pix_dataset.py 
+++ /dev/null @@ -1,108 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -from data.base_dataset import BaseDataset, get_params, get_transform -from PIL import Image -import util.util as util -import os - - -class Pix2pixDataset(BaseDataset): - @staticmethod - def modify_commandline_options(parser, is_train): - parser.add_argument( - "--no_pairing_check", - action="store_true", - help="If specified, skip sanity check of correct label-image file pairing", - ) - return parser - - def initialize(self, opt): - self.opt = opt - - label_paths, image_paths, instance_paths = self.get_paths(opt) - - util.natural_sort(label_paths) - util.natural_sort(image_paths) - if not opt.no_instance: - util.natural_sort(instance_paths) - - label_paths = label_paths[: opt.max_dataset_size] - image_paths = image_paths[: opt.max_dataset_size] - instance_paths = instance_paths[: opt.max_dataset_size] - - if not opt.no_pairing_check: - for path1, path2 in zip(label_paths, image_paths): - assert self.paths_match(path1, path2), ( - "The label-image pair (%s, %s) do not look like the right pair because the filenames are quite different. Are you sure about the pairing? Please see data/pix2pix_dataset.py to see what is going on, and use --no_pairing_check to bypass this." - % (path1, path2) - ) - - self.label_paths = label_paths - self.image_paths = image_paths - self.instance_paths = instance_paths - - size = len(self.label_paths) - self.dataset_size = size - - def get_paths(self, opt): - label_paths = [] - image_paths = [] - instance_paths = [] - assert False, "A subclass of Pix2pixDataset must override self.get_paths(self, opt)" - return label_paths, image_paths, instance_paths - - def paths_match(self, path1, path2): - filename1_without_ext = os.path.splitext(os.path.basename(path1))[0] - filename2_without_ext = os.path.splitext(os.path.basename(path2))[0] - return filename1_without_ext == filename2_without_ext - - def __getitem__(self, index): - # Label Image - label_path = self.label_paths[index] - label = Image.open(label_path) - params = get_params(self.opt, label.size) - transform_label = get_transform(self.opt, params, method=Image.NEAREST, normalize=False) - label_tensor = transform_label(label) * 255.0 - label_tensor[label_tensor == 255] = self.opt.label_nc # 'unknown' is opt.label_nc - - # input image (real images) - image_path = self.image_paths[index] - assert self.paths_match( - label_path, image_path - ), "The label_path %s and image_path %s don't match." 
% (label_path, image_path) - image = Image.open(image_path) - image = image.convert("RGB") - - transform_image = get_transform(self.opt, params) - image_tensor = transform_image(image) - - # if using instance maps - if self.opt.no_instance: - instance_tensor = 0 - else: - instance_path = self.instance_paths[index] - instance = Image.open(instance_path) - if instance.mode == "L": - instance_tensor = transform_label(instance) * 255 - instance_tensor = instance_tensor.long() - else: - instance_tensor = transform_label(instance) - - input_dict = { - "label": label_tensor, - "instance": instance_tensor, - "image": image_tensor, - "path": image_path, - } - - # Give subclasses a chance to modify the final output - self.postprocess(input_dict) - - return input_dict - - def postprocess(self, input_dict): - return input_dict - - def __len__(self): - return self.dataset_size diff --git a/spaces/masdar/MedImage_Processing/app.py b/spaces/masdar/MedImage_Processing/app.py deleted file mode 100644 index 103238cca2dd355ac3d04475324a6653ed7c04ff..0000000000000000000000000000000000000000 --- a/spaces/masdar/MedImage_Processing/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import gradio as gr - -import kornia as K -from kornia.core import Tensor - -def edge_detection(filepath, detector): - - img: Tensor = K.io.load_image(filepath, K.io.ImageLoadType.RGB32) - img = img[None] - - x_gray = K.color.rgb_to_grayscale(img) - - - if detector == '1st order derivates in x': - grads: Tensor = K.filters.spatial_gradient(x_gray, order=1) - grads_x = grads[:, :, 0] - grads_y = grads[:, :, 1] - - output = K.utils.tensor_to_image(1. - grads_x.clamp(0., 1.)) - - elif detector == '1st order derivates in y': - grads: Tensor = K.filters.spatial_gradient(x_gray, order=1) - grads_x = grads[:, :, 0] - grads_y = grads[:, :, 1] - - output = K.utils.tensor_to_image(1. - grads_y.clamp(0., 1.)) - - elif detector == '2nd order derivatives in x': - grads: Tensor = K.filters.spatial_gradient(x_gray, order=2) - grads_x = grads[:, :, 0] - grads_y = grads[:, :, 1] - - output = K.utils.tensor_to_image(1. - grads_x.clamp(0., 1.)) - - elif detector == '2nd order derivatives in y': - grads: Tensor = K.filters.spatial_gradient(x_gray, order=2) - grads_x = grads[:, :, 0] - grads_y = grads[:, :, 1] - - output = K.utils.tensor_to_image(1. - grads_y.clamp(0., 1.)) - - elif detector == 'Sobel': - x_sobel: Tensor = K.filters.sobel(x_gray) - output = K.utils.tensor_to_image(1. - x_sobel) - - elif detector == 'Laplacian': - x_laplacian: Tensor = K.filters.laplacian(x_gray, kernel_size=5) - output = K.utils.tensor_to_image(1. - x_laplacian.clamp(0., 1.)) - - else: - x_canny: Tensor = K.filters.canny(x_gray)[0] - output = K.utils.tensor_to_image(1. - x_canny.clamp(0., 1.0)) - - return output - - - -title = "Basic Image Processing for Medical Imaging" -description = "

Ini adalah contoh Image Processing dasar yang dapat diterapkan pada citra medis. Untuk menggunakannya, cukup upload citra yang akan diolah atau pilih citra contoh di bawah, kemudian tentukan metode pengolahan citra yang ingin diterapkan.
" -article = "

    Created by Muhammad Masdar Mahasin | MahaseenLab" - -iface = gr.Interface(edge_detection, - [ - gr.Image(type="filepath"), - gr.Dropdown(choices=["1st order derivates in x", "1st order derivates in y", "2nd order derivatives in x", "2nd order derivatives in y", "Sobel", "Laplacian", "Canny"]) - ], - "image", - title=title, - description=description, - article=article - -) - -iface.launch() \ No newline at end of file diff --git a/spaces/matthoffner/chatbot-mini/components/Promptbar/components/Prompts.tsx b/spaces/matthoffner/chatbot-mini/components/Promptbar/components/Prompts.tsx deleted file mode 100644 index b84f250c9ad5ac79a8f3ae893ff6f050e2846ebe..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot-mini/components/Promptbar/components/Prompts.tsx +++ /dev/null @@ -1,22 +0,0 @@ -import { FC } from 'react'; - -import { Prompt } from '@/types/prompt'; - -import { PromptComponent } from './Prompt'; - -interface Props { - prompts: Prompt[]; -} - -export const Prompts: FC = ({ prompts }) => { - return ( -

- {prompts - .slice() - .reverse() - .map((prompt, index) => ( - <PromptComponent key={index} prompt={prompt} /> - ))} - </div>
    - ); -}; diff --git a/spaces/mehdidc/text_to_image_ddgan/readme.md b/spaces/mehdidc/text_to_image_ddgan/readme.md deleted file mode 100644 index 96293176294a3502150b320419753facc48af98d..0000000000000000000000000000000000000000 --- a/spaces/mehdidc/text_to_image_ddgan/readme.md +++ /dev/null @@ -1,113 +0,0 @@ -# Official PyTorch implementation of "Tackling the Generative Learning Trilemma with Denoising Diffusion GANs" [(ICLR 2022 Spotlight Paper)](https://arxiv.org/abs/2112.07804) # - -
-Zhisheng Xiao · Karsten Kreis · Arash Vahdat
-Project Page
    -
    -
    - -
-[teaser image]
    - -Generative denoising diffusion models typically assume that the denoising distribution can be modeled by a Gaussian distribution. This assumption holds only for small denoising steps, which in practice translates to thousands of denoising steps in the synthesis process. In our denoising diffusion GANs, we represent the denoising model using multimodal and complex conditional GANs, enabling us to efficiently generate data in as few as two steps. - -## Set up datasets ## -We trained on several datasets, including CIFAR10, LSUN Church Outdoor 256 and CelebA HQ 256. -For large datasets, we store the data in LMDB datasets for I/O efficiency. Check [here](https://github.com/NVlabs/NVAE#set-up-file-paths-and-data) for information regarding dataset preparation. - - -## Training Denoising Diffusion GANs ## -We use the following commands on each dataset for training denoising diffusion GANs. - -#### CIFAR-10 #### - -We train Denoising Diffusion GANs on CIFAR-10 using 4 32-GB V100 GPU. -``` -python3 train_ddgan.py --dataset cifar10 --exp ddgan_cifar10_exp1 --num_channels 3 --num_channels_dae 128 --num_timesteps 4 \ ---num_res_blocks 2 --batch_size 64 --num_epoch 1800 --ngf 64 --nz 100 --z_emb_dim 256 --n_mlp 4 --embedding_type positional \ ---use_ema --ema_decay 0.9999 --r1_gamma 0.02 --lr_d 1.25e-4 --lr_g 1.6e-4 --lazy_reg 15 --num_process_per_node 4 \ ---ch_mult 1 2 2 2 --save_content -``` - -#### LSUN Church Outdoor 256 #### - -We train Denoising Diffusion GANs on LSUN Church Outdoor 256 using 8 32-GB V100 GPU. -``` -python3 train_ddgan.py --dataset lsun --image_size 256 --exp ddgan_lsun_exp1 --num_channels 3 --num_channels_dae 64 --ch_mult 1 1 2 2 4 4 --num_timesteps 4 \ ---num_res_blocks 2 --batch_size 8 --num_epoch 500 --ngf 64 --embedding_type positional --use_ema --ema_decay 0.999 --r1_gamma 1. \ ---z_emb_dim 256 --lr_d 1e-4 --lr_g 1.6e-4 --lazy_reg 10 --num_process_per_node 8 --save_content -``` - -#### CelebA HQ 256 #### - -We train Denoising Diffusion GANs on CelebA HQ 256 using 8 32-GB V100 GPUs. -``` -python3 train_ddgan.py --dataset celeba_256 --image_size 256 --exp ddgan_celebahq_exp1 --num_channels 3 --num_channels_dae 64 --ch_mult 1 1 2 2 4 4 --num_timesteps 2 \ ---num_res_blocks 2 --batch_size 4 --num_epoch 800 --ngf 64 --embedding_type positional --use_ema --r1_gamma 2. \ ---z_emb_dim 256 --lr_d 1e-4 --lr_g 2e-4 --lazy_reg 10 --num_process_per_node 8 --save_content -``` - -## Pretrained Checkpoints ## -We have released pretrained checkpoints on CIFAR-10 and CelebA HQ 256 at this -[Google drive directory](https://drive.google.com/drive/folders/1UkzsI0SwBRstMYysRdR76C1XdSv5rQNz?usp=sharing). -Simply download the `saved_info` directory to the code directory. Use `--epoch_id 1200` for CIFAR-10 and `--epoch_id 550` -for CelebA HQ 256 in the commands below. - -## Evaluation ## -After training, samples can be generated by calling ```test_ddgan.py```. We evaluate the models with single V100 GPU. -Below, we use `--epoch_id` to specify the checkpoint saved at a particular epoch. 
-Specifically, for models trained by above commands, the scripts for generating samples on CIFAR-10 is -``` -python3 test_ddgan.py --dataset cifar10 --exp ddgan_cifar10_exp1 --num_channels 3 --num_channels_dae 128 --num_timesteps 4 \ ---num_res_blocks 2 --nz 100 --z_emb_dim 256 --n_mlp 4 --ch_mult 1 2 2 2 --epoch_id $EPOCH -``` -The scripts for generating samples on CelebA HQ is -``` -python3 test_ddgan.py --dataset celeba_256 --image_size 256 --exp ddgan_celebahq_exp1 --num_channels 3 --num_channels_dae 64 \ ---ch_mult 1 1 2 2 4 4 --num_timesteps 2 --num_res_blocks 2 --epoch_id $EPOCH -``` -The scripts for generating samples on LSUN Church Outdoor is -``` -python3 test_ddgan.py --dataset lsun --image_size 256 --exp ddgan_lsun_exp1 --num_channels 3 --num_channels_dae 64 \ ---ch_mult 1 1 2 2 4 4 --num_timesteps 4 --num_res_blocks 2 --epoch_id $EPOCH -``` - -We use the [PyTorch](https://github.com/mseitzer/pytorch-fid) implementation to compute the FID scores, and in particular, codes for computing the FID are adapted from [FastDPM](https://github.com/FengNiMa/FastDPM_pytorch). - -To compute FID, run the same scripts above for sampling, with additional arguments ```--compute_fid``` and ```--real_img_dir /path/to/real/images```. - -For Inception Score, save samples in a single numpy array with pixel values in range [0, 255] and simply run -``` -python ./pytorch_fid/inception_score.py --sample_dir /path/to/sampled_images -``` -where the code for computing Inception Score is adapted from [here](https://github.com/tsc2017/Inception-Score). - -For Improved Precision and Recall, follow the instruction [here](https://github.com/kynkaat/improved-precision-and-recall-metric). - - -## License ## -Please check the LICENSE file. Denoising diffusion GAN may be used non-commercially, meaning for research or -evaluation purposes only. For business inquiries, please contact -[researchinquiries@nvidia.com](mailto:researchinquiries@nvidia.com). - -## Bibtex ## -Cite our paper using the following bibtex item: - -``` -@inproceedings{ -xiao2022tackling, -title={Tackling the Generative Learning Trilemma with Denoising Diffusion GANs}, -author={Zhisheng Xiao and Karsten Kreis and Arash Vahdat}, -booktitle={International Conference on Learning Representations}, -year={2022} -} -``` - -## Contributors ## -Denoising Diffusion GAN was built primarily by [Zhisheng Xiao](https://xavierxiao.github.io/) during a summer -internship at NVIDIA research. 
\ No newline at end of file diff --git a/spaces/merve/data-leak/public/fill-in-the-blank/data/cachekey2filename.js b/spaces/merve/data-leak/public/fill-in-the-blank/data/cachekey2filename.js deleted file mode 100644 index 85df2a5b1806c3853f4e12ab05b430af77c800f9..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/public/fill-in-the-blank/data/cachekey2filename.js +++ /dev/null @@ -1,19 +0,0 @@ -window.cacheKey2filename = { - "{\"tokens\":[101,2000,2022,2030,2025,2000,2022,29623,2008,2003,1996,3160,29628,102]}embed_group_top": "tokens-101-2000-2022-2030-2025-2000-2022-29623-2008-2003-1996-3160-29628-102-embed-group-top.json", - "{\"sentence\":\"In New York, they like to buy [MASK].\"}embed": "sentence-in-new-york-they-like-to-buy-mask-embed.json", - "{\"sentence\":\"Elsie was born in the year of [MASK].\"}embed": "sentence-elsie-was-born-in-the-year-of-mask-embed.json", - "{\"sentence\":\"Jim worked as a [MASK].\"}embed": "sentence-jim-worked-as-a-mask-embed.json", - "{\"sentence\":\"The new nurse was named [MASK].\"}embed": "sentence-the-new-nurse-was-named-mask-embed.json", - "{\"sentence\":\"The doctor performed CPR even though [MASK] knew it was too late.\"}embed_zari_cda": "sentence-the-doctor-performed-cpr-even-though-mask-knew-it-was-too-late-embed-zari-cda.json", - "{\"sentence\":\"In 1908, he was employed as a [MASK].\"}embed": "sentence-in-1908-he-was-employed-as-a-mask-embed.json", - "{\"sentence\":\"Jane worked as a [MASK].\"}embed": "sentence-jane-worked-as-a-mask-embed.json", - "{\"sentence\":\"In Texas, they like to buy [MASK].\"}embed": "sentence-in-texas-they-like-to-buy-mask-embed.json", - "{\"sentence\":\"Lauren was born in the year of [MASK].\"}embed": "sentence-lauren-was-born-in-the-year-of-mask-embed.json", - "{\"sentence\":\"The new doctor was named [MASK].\"}embed": "sentence-the-new-doctor-was-named-mask-embed.json", - "{\"sentence\":\"The nurse performed CPR even though [MASK] knew it was too late.\"}embed_zari_cda": "sentence-the-nurse-performed-cpr-even-though-mask-knew-it-was-too-late-embed-zari-cda.json", - "{\"sentence\":\"In 1908, she was employed as a [MASK].\"}embed": "sentence-in-1908-she-was-employed-as-a-mask-embed.json", - "{\"sentence\":\"In 2018, he was employed as a [MASK].\"}embed": "sentence-in-2018-he-was-employed-as-a-mask-embed.json", - "{\"sentence\":\"In 2018, she was employed as a [MASK].\"}embed": "sentence-in-2018-she-was-employed-as-a-mask-embed.json", - "{\"tokens\":[101,1999,2047,2259,29623,2027,2066,2000,4965,2477,29625,102]}embed_group_top": "tokens-101-1999-2047-2259-29623-2027-2066-2000-4965-2477-29625-102-embed-group-top.json", - "{\"tokens\":[101,1999,3146,29623,2027,2066,2000,4965,2477,29625,102]}embed_group_top": "tokens-101-1999-3146-29623-2027-2066-2000-4965-2477-29625-102-embed-group-top.json" -} \ No newline at end of file diff --git a/spaces/merve/hidden-bias/source/measuring-fairness/graph-scroll.css b/spaces/merve/hidden-bias/source/measuring-fairness/graph-scroll.css deleted file mode 100644 index e3757d99ca305478165c6f7e4781ec0ce95b6291..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/source/measuring-fairness/graph-scroll.css +++ /dev/null @@ -1,119 +0,0 @@ -#container{ - position: relative; - width: auto; -} - -#sections{ - width: 340px; -} - -#sections > div{ - background: white; - opacity: .2; - margin-bottom: 400px; - line-height: 1.4em; - transition: opacity .2s; -} -#sections > div:first-child{ - opacity: 1; -} -#sections > div:last-child{ - /*padding-bottom: 80vh;*/ - 
padding-bottom: 80px; - margin-bottom: 0px; -} -#sections > div:first-child > h1{ - padding-top: 40px; -} - -#sections > div.graph-scroll-active{ - opacity: 1; -} - -#graph{ - margin-left: 40px; - width: 500px; - position: -webkit-sticky; - position: sticky; - top: 0px; - float: right; - height: 580px; - font-family: 'Google Sans', sans-serif; - -} - -.slider{ - font-family: 'Google Sans', sans-serif; -} - -#sections h1{ - text-align: left !important; -} - -@media (max-width: 1000px) and (min-width: 926px){ - #sections{ - margin-left: 20px; - } -} - -@media (max-width: 925px) { - #container{ - margin-left: 0px; - } - - #graph{ - width: 100%; - margin-left: 10px; - float: none; - max-width: 500px; - margin: 0px auto; - } - - #graph > div{ - position: relative; - top: 0px; - } - #sections{ - width: auto; - position: relative; - margin: 0px auto; - } - - #sections > div{ - background: rgba(255,255,255,.8); - padding: 10px; - border-top: 1px solid; - border-bottom: 1px solid; - margin-bottom: 80vh; - width: calc(100vw - 20px); - margin-left: -5px; - } - - #sections > div > *{ - max-width: 750px; - } - .mini, .slider, i, .gated{ - margin: 0px auto; - } - - #sections > div:first-child{ - opacity: 1; - margin-top: -140px; - } - - #sections > div:last-child{ - padding-bottom: 0px; - margin-bottom: 0px; - } - - - #sections h1{ - margin: 10px; - padding-top: 0px !important; - } - - #sections h3{ - margin-top: .5em; - } - -} diff --git a/spaces/merve/measuring-fairness/source/third_party/umap.js b/spaces/merve/measuring-fairness/source/third_party/umap.js deleted file mode 100644 index 13bb989b285114e7a79d0a213422997c19a3c2f0..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/source/third_party/umap.js +++ /dev/null @@ -1,6864 +0,0 @@ -// https://github.com/pair-code/umap-js Copyright 2019 Google -(function webpackUniversalModuleDefinition(root, factory) { - if(typeof exports === 'object' && typeof module === 'object') - module.exports = factory(); - else if(typeof define === 'function' && define.amd) - define([], factory); - else { - var a = factory(); - for(var i in a) (typeof exports === 'object' ? 
exports : root)[i] = a[i]; - } -})(window, function() { -return /******/ (function(modules) { // webpackBootstrap -/******/ // The module cache -/******/ var installedModules = {}; -/******/ -/******/ // The require function -/******/ function __webpack_require__(moduleId) { -/******/ -/******/ // Check if module is in cache -/******/ if(installedModules[moduleId]) { -/******/ return installedModules[moduleId].exports; -/******/ } -/******/ // Create a new module (and put it into the cache) -/******/ var module = installedModules[moduleId] = { -/******/ i: moduleId, -/******/ l: false, -/******/ exports: {} -/******/ }; -/******/ -/******/ // Execute the module function -/******/ modules[moduleId].call(module.exports, module, module.exports, __webpack_require__); -/******/ -/******/ // Flag the module as loaded -/******/ module.l = true; -/******/ -/******/ // Return the exports of the module -/******/ return module.exports; -/******/ } -/******/ -/******/ -/******/ // expose the modules object (__webpack_modules__) -/******/ __webpack_require__.m = modules; -/******/ -/******/ // expose the module cache -/******/ __webpack_require__.c = installedModules; -/******/ -/******/ // define getter function for harmony exports -/******/ __webpack_require__.d = function(exports, name, getter) { -/******/ if(!__webpack_require__.o(exports, name)) { -/******/ Object.defineProperty(exports, name, { enumerable: true, get: getter }); -/******/ } -/******/ }; -/******/ -/******/ // define __esModule on exports -/******/ __webpack_require__.r = function(exports) { -/******/ if(typeof Symbol !== 'undefined' && Symbol.toStringTag) { -/******/ Object.defineProperty(exports, Symbol.toStringTag, { value: 'Module' }); -/******/ } -/******/ Object.defineProperty(exports, '__esModule', { value: true }); -/******/ }; -/******/ -/******/ // create a fake namespace object -/******/ // mode & 1: value is a module id, require it -/******/ // mode & 2: merge all properties of value into the ns -/******/ // mode & 4: return value when already ns object -/******/ // mode & 8|1: behave like require -/******/ __webpack_require__.t = function(value, mode) { -/******/ if(mode & 1) value = __webpack_require__(value); -/******/ if(mode & 8) return value; -/******/ if((mode & 4) && typeof value === 'object' && value && value.__esModule) return value; -/******/ var ns = Object.create(null); -/******/ __webpack_require__.r(ns); -/******/ Object.defineProperty(ns, 'default', { enumerable: true, value: value }); -/******/ if(mode & 2 && typeof value != 'string') for(var key in value) __webpack_require__.d(ns, key, function(key) { return value[key]; }.bind(null, key)); -/******/ return ns; -/******/ }; -/******/ -/******/ // getDefaultExport function for compatibility with non-harmony modules -/******/ __webpack_require__.n = function(module) { -/******/ var getter = module && module.__esModule ? 
-/******/ function getDefault() { return module['default']; } : -/******/ function getModuleExports() { return module; }; -/******/ __webpack_require__.d(getter, 'a', getter); -/******/ return getter; -/******/ }; -/******/ -/******/ // Object.prototype.hasOwnProperty.call -/******/ __webpack_require__.o = function(object, property) { return Object.prototype.hasOwnProperty.call(object, property); }; -/******/ -/******/ // __webpack_public_path__ -/******/ __webpack_require__.p = ""; -/******/ -/******/ -/******/ // Load entry module and return exports -/******/ return __webpack_require__(__webpack_require__.s = 5); -/******/ }) -/************************************************************************/ -/******/ ([ -/* 0 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - - -const toString = Object.prototype.toString; - -function isAnyArray(object) { - return toString.call(object).endsWith('Array]'); -} - -module.exports = isAnyArray; - - -/***/ }), -/* 1 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __values = (this && this.__values) || function (o) { - var m = typeof Symbol === "function" && o[Symbol.iterator], i = 0; - if (m) return m.call(o); - return { - next: function () { - if (o && i >= o.length) o = void 0; - return { value: o && o[i++], done: !o }; - } - }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -function tauRandInt(n, random) { - return Math.floor(random() * n); -} -exports.tauRandInt = tauRandInt; -function tauRand(random) { - return random(); -} -exports.tauRand = tauRand; -function norm(vec) { - var e_1, _a; - var result = 0; - try { - for (var vec_1 = __values(vec), vec_1_1 = vec_1.next(); !vec_1_1.done; vec_1_1 = vec_1.next()) { - var item = vec_1_1.value; - result += Math.pow(item, 2); - } - } - catch (e_1_1) { e_1 = { error: e_1_1 }; } - finally { - try { - if (vec_1_1 && !vec_1_1.done && (_a = vec_1.return)) _a.call(vec_1); - } - finally { if (e_1) throw e_1.error; } - } - return Math.sqrt(result); -} -exports.norm = norm; -function empty(n) { - var output = []; - for (var i = 0; i < n; i++) { - output.push(undefined); - } - return output; -} -exports.empty = empty; -function range(n) { - return empty(n).map(function (_, i) { return i; }); -} -exports.range = range; -function filled(n, v) { - return empty(n).map(function () { return v; }); -} -exports.filled = filled; -function zeros(n) { - return filled(n, 0); -} -exports.zeros = zeros; -function ones(n) { - return filled(n, 1); -} -exports.ones = ones; -function linear(a, b, len) { - return empty(len).map(function (_, i) { - return a + i * ((b - a) / (len - 1)); - }); -} -exports.linear = linear; -function sum(input) { - return input.reduce(function (sum, val) { return sum + val; }); -} -exports.sum = sum; -function mean(input) { - return sum(input) / input.length; -} -exports.mean = mean; -function max(input) { - var max = 0; - for (var i = 0; i < input.length; i++) { - max = input[i] > max ? input[i] : max; - } - return max; -} -exports.max = max; -function max2d(input) { - var max = 0; - for (var i = 0; i < input.length; i++) { - for (var j = 0; j < input[i].length; j++) { - max = input[i][j] > max ? 
input[i][j] : max; - } - } - return max; -} -exports.max2d = max2d; -function rejectionSample(nSamples, poolSize, random) { - var result = zeros(nSamples); - for (var i = 0; i < nSamples; i++) { - var rejectSample = true; - while (rejectSample) { - var j = tauRandInt(poolSize, random); - var broken = false; - for (var k = 0; k < i; k++) { - if (j === result[k]) { - broken = true; - break; - } - } - if (!broken) { - rejectSample = false; - } - result[i] = j; - } - } - return result; -} -exports.rejectionSample = rejectionSample; -function reshape2d(x, a, b) { - var rows = []; - var count = 0; - var index = 0; - if (x.length !== a * b) { - throw new Error('Array dimensions must match input length.'); - } - for (var i = 0; i < a; i++) { - var col = []; - for (var j = 0; j < b; j++) { - col.push(x[index]); - index += 1; - } - rows.push(col); - count += 1; - } - return rows; -} -exports.reshape2d = reshape2d; - - -/***/ }), -/* 2 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = mod; - return result; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var utils = __importStar(__webpack_require__(1)); -function makeHeap(nPoints, size) { - var makeArrays = function (fillValue) { - return utils.empty(nPoints).map(function () { - return utils.filled(size, fillValue); - }); - }; - var heap = []; - heap.push(makeArrays(-1)); - heap.push(makeArrays(Infinity)); - heap.push(makeArrays(0)); - return heap; -} -exports.makeHeap = makeHeap; -function rejectionSample(nSamples, poolSize, random) { - var result = utils.zeros(nSamples); - for (var i = 0; i < nSamples; i++) { - var rejectSample = true; - var j = 0; - while (rejectSample) { - j = utils.tauRandInt(poolSize, random); - var broken = false; - for (var k = 0; k < i; k++) { - if (j === result[k]) { - broken = true; - break; - } - } - if (!broken) - rejectSample = false; - } - result[i] = j; - } - return result; -} -exports.rejectionSample = rejectionSample; -function heapPush(heap, row, weight, index, flag) { - row = Math.floor(row); - var indices = heap[0][row]; - var weights = heap[1][row]; - var isNew = heap[2][row]; - if (weight >= weights[0]) { - return 0; - } - for (var i = 0; i < indices.length; i++) { - if (index === indices[i]) { - return 0; - } - } - return uncheckedHeapPush(heap, row, weight, index, flag); -} -exports.heapPush = heapPush; -function uncheckedHeapPush(heap, row, weight, index, flag) { - var indices = heap[0][row]; - var weights = heap[1][row]; - var isNew = heap[2][row]; - if (weight >= weights[0]) { - return 0; - } - weights[0] = weight; - indices[0] = index; - isNew[0] = flag; - var i = 0; - var iSwap = 0; - while (true) { - var ic1 = 2 * i + 1; - var ic2 = ic1 + 1; - var heapShape2 = heap[0][0].length; - if (ic1 >= heapShape2) { - break; - } - else if (ic2 >= heapShape2) { - if (weights[ic1] > weight) { - iSwap = ic1; - } - else { - break; - } - } - else if (weights[ic1] >= weights[ic2]) { - if (weight < weights[ic1]) { - iSwap = ic1; - } - else { - break; - } - } - else { - if (weight < weights[ic2]) { - iSwap = ic2; - } - else { - break; - } - } - weights[i] = weights[iSwap]; - indices[i] = indices[iSwap]; - isNew[i] = isNew[iSwap]; - i = iSwap; - } - weights[i] = weight; - indices[i] = index; - isNew[i] = flag; - return 1; -} 
-exports.uncheckedHeapPush = uncheckedHeapPush; -function buildCandidates(currentGraph, nVertices, nNeighbors, maxCandidates, random) { - var candidateNeighbors = makeHeap(nVertices, maxCandidates); - for (var i = 0; i < nVertices; i++) { - for (var j = 0; j < nNeighbors; j++) { - if (currentGraph[0][i][j] < 0) { - continue; - } - var idx = currentGraph[0][i][j]; - var isn = currentGraph[2][i][j]; - var d = utils.tauRand(random); - heapPush(candidateNeighbors, i, d, idx, isn); - heapPush(candidateNeighbors, idx, d, i, isn); - currentGraph[2][i][j] = 0; - } - } - return candidateNeighbors; -} -exports.buildCandidates = buildCandidates; -function deheapSort(heap) { - var indices = heap[0]; - var weights = heap[1]; - for (var i = 0; i < indices.length; i++) { - var indHeap = indices[i]; - var distHeap = weights[i]; - for (var j = 0; j < indHeap.length - 1; j++) { - var indHeapIndex = indHeap.length - j - 1; - var distHeapIndex = distHeap.length - j - 1; - var temp1 = indHeap[0]; - indHeap[0] = indHeap[indHeapIndex]; - indHeap[indHeapIndex] = temp1; - var temp2 = distHeap[0]; - distHeap[0] = distHeap[distHeapIndex]; - distHeap[distHeapIndex] = temp2; - siftDown(distHeap, indHeap, distHeapIndex, 0); - } - } - return { indices: indices, weights: weights }; -} -exports.deheapSort = deheapSort; -function siftDown(heap1, heap2, ceiling, elt) { - while (elt * 2 + 1 < ceiling) { - var leftChild = elt * 2 + 1; - var rightChild = leftChild + 1; - var swap = elt; - if (heap1[swap] < heap1[leftChild]) { - swap = leftChild; - } - if (rightChild < ceiling && heap1[swap] < heap1[rightChild]) { - swap = rightChild; - } - if (swap === elt) { - break; - } - else { - var temp1 = heap1[elt]; - heap1[elt] = heap1[swap]; - heap1[swap] = temp1; - var temp2 = heap2[elt]; - heap2[elt] = heap2[swap]; - heap2[swap] = temp2; - elt = swap; - } - } -} -function smallestFlagged(heap, row) { - var ind = heap[0][row]; - var dist = heap[1][row]; - var flag = heap[2][row]; - var minDist = Infinity; - var resultIndex = -1; - for (var i = 0; i > ind.length; i++) { - if (flag[i] === 1 && dist[i] < minDist) { - minDist = dist[i]; - resultIndex = i; - } - } - if (resultIndex >= 0) { - flag[resultIndex] = 0; - return Math.floor(ind[resultIndex]); - } - else { - return -1; - } -} -exports.smallestFlagged = smallestFlagged; - - -/***/ }), -/* 3 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __read = (this && this.__read) || function (o, n) { - var m = typeof Symbol === "function" && o[Symbol.iterator]; - if (!m) return o; - var i = m.call(o), r, ar = [], e; - try { - while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value); - } - catch (error) { e = { error: error }; } - finally { - try { - if (r && !r.done && (m = i["return"])) m.call(i); - } - finally { if (e) throw e.error; } - } - return ar; -}; -var __spread = (this && this.__spread) || function () { - for (var ar = [], i = 0; i < arguments.length; i++) ar = ar.concat(__read(arguments[i])); - return ar; -}; -var __values = (this && this.__values) || function (o) { - var m = typeof Symbol === "function" && o[Symbol.iterator], i = 0; - if (m) return m.call(o); - return { - next: function () { - if (o && i >= o.length) o = void 0; - return { value: o && o[i++], done: !o }; - } - }; -}; -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = 
mod; - return result; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var _a; -var utils = __importStar(__webpack_require__(1)); -var SparseMatrix = (function () { - function SparseMatrix(rows, cols, values, dims) { - this.entries = new Map(); - this.nRows = 0; - this.nCols = 0; - this.rows = __spread(rows); - this.cols = __spread(cols); - this.values = __spread(values); - for (var i = 0; i < values.length; i++) { - var key = this.makeKey(this.rows[i], this.cols[i]); - this.entries.set(key, i); - } - this.nRows = dims[0]; - this.nCols = dims[1]; - } - SparseMatrix.prototype.makeKey = function (row, col) { - return row + ":" + col; - }; - SparseMatrix.prototype.checkDims = function (row, col) { - var withinBounds = row < this.nRows && col < this.nCols; - if (!withinBounds) { - throw new Error('array index out of bounds'); - } - }; - SparseMatrix.prototype.set = function (row, col, value) { - this.checkDims(row, col); - var key = this.makeKey(row, col); - if (!this.entries.has(key)) { - this.rows.push(row); - this.cols.push(col); - this.values.push(value); - this.entries.set(key, this.values.length - 1); - } - else { - var index = this.entries.get(key); - this.values[index] = value; - } - }; - SparseMatrix.prototype.get = function (row, col, defaultValue) { - if (defaultValue === void 0) { defaultValue = 0; } - this.checkDims(row, col); - var key = this.makeKey(row, col); - if (this.entries.has(key)) { - var index = this.entries.get(key); - return this.values[index]; - } - else { - return defaultValue; - } - }; - SparseMatrix.prototype.getDims = function () { - return [this.nRows, this.nCols]; - }; - SparseMatrix.prototype.getRows = function () { - return __spread(this.rows); - }; - SparseMatrix.prototype.getCols = function () { - return __spread(this.cols); - }; - SparseMatrix.prototype.getValues = function () { - return __spread(this.values); - }; - SparseMatrix.prototype.forEach = function (fn) { - for (var i = 0; i < this.values.length; i++) { - fn(this.values[i], this.rows[i], this.cols[i]); - } - }; - SparseMatrix.prototype.map = function (fn) { - var vals = []; - for (var i = 0; i < this.values.length; i++) { - vals.push(fn(this.values[i], this.rows[i], this.cols[i])); - } - var dims = [this.nRows, this.nCols]; - return new SparseMatrix(this.rows, this.cols, vals, dims); - }; - SparseMatrix.prototype.toArray = function () { - var _this = this; - var rows = utils.empty(this.nRows); - var output = rows.map(function () { - return utils.zeros(_this.nCols); - }); - for (var i = 0; i < this.values.length; i++) { - output[this.rows[i]][this.cols[i]] = this.values[i]; - } - return output; - }; - return SparseMatrix; -}()); -exports.SparseMatrix = SparseMatrix; -function transpose(matrix) { - var cols = []; - var rows = []; - var vals = []; - matrix.forEach(function (value, row, col) { - cols.push(row); - rows.push(col); - vals.push(value); - }); - var dims = [matrix.nCols, matrix.nRows]; - return new SparseMatrix(rows, cols, vals, dims); -} -exports.transpose = transpose; -function identity(size) { - var _a = __read(size, 1), rows = _a[0]; - var matrix = new SparseMatrix([], [], [], size); - for (var i = 0; i < rows; i++) { - matrix.set(i, i, 1); - } - return matrix; -} -exports.identity = identity; -function pairwiseMultiply(a, b) { - return elementWise(a, b, function (x, y) { return x * y; }); -} -exports.pairwiseMultiply = pairwiseMultiply; -function add(a, b) { - return elementWise(a, b, function (x, y) { return x + y; }); -} -exports.add = add; -function subtract(a, 
b) { - return elementWise(a, b, function (x, y) { return x - y; }); -} -exports.subtract = subtract; -function maximum(a, b) { - return elementWise(a, b, function (x, y) { return (x > y ? x : y); }); -} -exports.maximum = maximum; -function multiplyScalar(a, scalar) { - return a.map(function (value) { - return value * scalar; - }); -} -exports.multiplyScalar = multiplyScalar; -function eliminateZeros(m) { - var zeroIndices = new Set(); - var values = m.getValues(); - var rows = m.getRows(); - var cols = m.getCols(); - for (var i = 0; i < values.length; i++) { - if (values[i] === 0) { - zeroIndices.add(i); - } - } - var removeByZeroIndex = function (_, index) { return !zeroIndices.has(index); }; - var nextValues = values.filter(removeByZeroIndex); - var nextRows = rows.filter(removeByZeroIndex); - var nextCols = cols.filter(removeByZeroIndex); - return new SparseMatrix(nextRows, nextCols, nextValues, m.getDims()); -} -exports.eliminateZeros = eliminateZeros; -function normalize(m, normType) { - if (normType === void 0) { normType = "l2"; } - var e_1, _a; - var normFn = normFns[normType]; - var colsByRow = new Map(); - m.forEach(function (_, row, col) { - var cols = colsByRow.get(row) || []; - cols.push(col); - colsByRow.set(row, cols); - }); - var nextMatrix = new SparseMatrix([], [], [], m.getDims()); - var _loop_1 = function (row) { - var cols = colsByRow.get(row).sort(); - var vals = cols.map(function (col) { return m.get(row, col); }); - var norm = normFn(vals); - for (var i = 0; i < norm.length; i++) { - nextMatrix.set(row, cols[i], norm[i]); - } - }; - try { - for (var _b = __values(colsByRow.keys()), _c = _b.next(); !_c.done; _c = _b.next()) { - var row = _c.value; - _loop_1(row); - } - } - catch (e_1_1) { e_1 = { error: e_1_1 }; } - finally { - try { - if (_c && !_c.done && (_a = _b.return)) _a.call(_b); - } - finally { if (e_1) throw e_1.error; } - } - return nextMatrix; -} -exports.normalize = normalize; -var normFns = (_a = {}, - _a["max"] = function (xs) { - var max = -Infinity; - for (var i = 0; i < xs.length; i++) { - max = xs[i] > max ? 
xs[i] : max; - } - return xs.map(function (x) { return x / max; }); - }, - _a["l1"] = function (xs) { - var sum = 0; - for (var i = 0; i < xs.length; i++) { - sum += xs[i]; - } - return xs.map(function (x) { return x / sum; }); - }, - _a["l2"] = function (xs) { - var sum = 0; - for (var i = 0; i < xs.length; i++) { - sum += Math.pow(xs[i], 2); - } - return xs.map(function (x) { return Math.sqrt(Math.pow(x, 2) / sum); }); - }, - _a); -function elementWise(a, b, op) { - var visited = new Set(); - var rows = []; - var cols = []; - var vals = []; - var operate = function (row, col) { - rows.push(row); - cols.push(col); - var nextValue = op(a.get(row, col), b.get(row, col)); - vals.push(nextValue); - }; - var valuesA = a.getValues(); - var rowsA = a.getRows(); - var colsA = a.getCols(); - for (var i = 0; i < valuesA.length; i++) { - var row = rowsA[i]; - var col = colsA[i]; - var key = row + ":" + col; - visited.add(key); - operate(row, col); - } - var valuesB = b.getValues(); - var rowsB = b.getRows(); - var colsB = b.getCols(); - for (var i = 0; i < valuesB.length; i++) { - var row = rowsB[i]; - var col = colsB[i]; - var key = row + ":" + col; - if (visited.has(key)) - continue; - operate(row, col); - } - var dims = [a.nRows, a.nCols]; - return new SparseMatrix(rows, cols, vals, dims); -} -function getCSR(x) { - var entries = []; - x.forEach(function (value, row, col) { - entries.push({ value: value, row: row, col: col }); - }); - entries.sort(function (a, b) { - if (a.row === b.row) { - return a.col - b.col; - } - else { - return a.row - b.col; - } - }); - var indices = []; - var values = []; - var indptr = []; - var currentRow = -1; - for (var i = 0; i < entries.length; i++) { - var _a = entries[i], row = _a.row, col = _a.col, value = _a.value; - if (row !== currentRow) { - currentRow = row; - indptr.push(i); - } - indices.push(col); - values.push(value); - } - return { indices: indices, values: values, indptr: indptr }; -} -exports.getCSR = getCSR; - - -/***/ }), -/* 4 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __read = (this && this.__read) || function (o, n) { - var m = typeof Symbol === "function" && o[Symbol.iterator]; - if (!m) return o; - var i = m.call(o), r, ar = [], e; - try { - while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value); - } - catch (error) { e = { error: error }; } - finally { - try { - if (r && !r.done && (m = i["return"])) m.call(i); - } - finally { if (e) throw e.error; } - } - return ar; -}; -var __spread = (this && this.__spread) || function () { - for (var ar = [], i = 0; i < arguments.length; i++) ar = ar.concat(__read(arguments[i])); - return ar; -}; -var __values = (this && this.__values) || function (o) { - var m = typeof Symbol === "function" && o[Symbol.iterator], i = 0; - if (m) return m.call(o); - return { - next: function () { - if (o && i >= o.length) o = void 0; - return { value: o && o[i++], done: !o }; - } - }; -}; -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = mod; - return result; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var utils = __importStar(__webpack_require__(1)); -var FlatTree = (function () { - function FlatTree(hyperplanes, offsets, children, indices) { - this.hyperplanes = hyperplanes; - this.offsets = offsets; - this.children = children; - 
this.indices = indices; - } - return FlatTree; -}()); -exports.FlatTree = FlatTree; -function makeForest(data, nNeighbors, nTrees, random) { - var leafSize = Math.max(10, nNeighbors); - var trees = utils - .range(nTrees) - .map(function (_, i) { return makeTree(data, leafSize, i, random); }); - var forest = trees.map(function (tree) { return flattenTree(tree, leafSize); }); - return forest; -} -exports.makeForest = makeForest; -function makeTree(data, leafSize, n, random) { - if (leafSize === void 0) { leafSize = 30; } - var indices = utils.range(data.length); - var tree = makeEuclideanTree(data, indices, leafSize, n, random); - return tree; -} -function makeEuclideanTree(data, indices, leafSize, q, random) { - if (leafSize === void 0) { leafSize = 30; } - if (indices.length > leafSize) { - var splitResults = euclideanRandomProjectionSplit(data, indices, random); - var indicesLeft = splitResults.indicesLeft, indicesRight = splitResults.indicesRight, hyperplane = splitResults.hyperplane, offset = splitResults.offset; - var leftChild = makeEuclideanTree(data, indicesLeft, leafSize, q + 1, random); - var rightChild = makeEuclideanTree(data, indicesRight, leafSize, q + 1, random); - var node = { leftChild: leftChild, rightChild: rightChild, isLeaf: false, hyperplane: hyperplane, offset: offset }; - return node; - } - else { - var node = { indices: indices, isLeaf: true }; - return node; - } -} -function euclideanRandomProjectionSplit(data, indices, random) { - var dim = data[0].length; - var leftIndex = utils.tauRandInt(indices.length, random); - var rightIndex = utils.tauRandInt(indices.length, random); - rightIndex += leftIndex === rightIndex ? 1 : 0; - rightIndex = rightIndex % indices.length; - var left = indices[leftIndex]; - var right = indices[rightIndex]; - var hyperplaneOffset = 0; - var hyperplaneVector = utils.zeros(dim); - for (var i = 0; i < hyperplaneVector.length; i++) { - hyperplaneVector[i] = data[left][i] - data[right][i]; - hyperplaneOffset -= - (hyperplaneVector[i] * (data[left][i] + data[right][i])) / 2.0; - } - var nLeft = 0; - var nRight = 0; - var side = utils.zeros(indices.length); - for (var i = 0; i < indices.length; i++) { - var margin = hyperplaneOffset; - for (var d = 0; d < dim; d++) { - margin += hyperplaneVector[d] * data[indices[i]][d]; - } - if (margin === 0) { - side[i] = utils.tauRandInt(2, random); - if (side[i] === 0) { - nLeft += 1; - } - else { - nRight += 1; - } - } - else if (margin > 0) { - side[i] = 0; - nLeft += 1; - } - else { - side[i] = 1; - nRight += 1; - } - } - var indicesLeft = utils.zeros(nLeft); - var indicesRight = utils.zeros(nRight); - nLeft = 0; - nRight = 0; - for (var i in utils.range(side.length)) { - if (side[i] === 0) { - indicesLeft[nLeft] = indices[i]; - nLeft += 1; - } - else { - indicesRight[nRight] = indices[i]; - nRight += 1; - } - } - return { - indicesLeft: indicesLeft, - indicesRight: indicesRight, - hyperplane: hyperplaneVector, - offset: hyperplaneOffset, - }; -} -function flattenTree(tree, leafSize) { - var nNodes = numNodes(tree); - var nLeaves = numLeaves(tree); - var hyperplanes = utils - .range(nNodes) - .map(function () { return utils.zeros(tree.hyperplane.length); }); - var offsets = utils.zeros(nNodes); - var children = utils.range(nNodes).map(function () { return [-1, -1]; }); - var indices = utils - .range(nLeaves) - .map(function () { return utils.range(leafSize).map(function () { return -1; }); }); - recursiveFlatten(tree, hyperplanes, offsets, children, indices, 0, 0); - return new FlatTree(hyperplanes, 
offsets, children, indices); -} -function recursiveFlatten(tree, hyperplanes, offsets, children, indices, nodeNum, leafNum) { - var _a; - if (tree.isLeaf) { - children[nodeNum][0] = -leafNum; - (_a = indices[leafNum]).splice.apply(_a, __spread([0, tree.indices.length], tree.indices)); - leafNum += 1; - return { nodeNum: nodeNum, leafNum: leafNum }; - } - else { - hyperplanes[nodeNum] = tree.hyperplane; - offsets[nodeNum] = tree.offset; - children[nodeNum][0] = nodeNum + 1; - var oldNodeNum = nodeNum; - var res = recursiveFlatten(tree.leftChild, hyperplanes, offsets, children, indices, nodeNum + 1, leafNum); - nodeNum = res.nodeNum; - leafNum = res.leafNum; - children[oldNodeNum][1] = nodeNum + 1; - res = recursiveFlatten(tree.rightChild, hyperplanes, offsets, children, indices, nodeNum + 1, leafNum); - return { nodeNum: res.nodeNum, leafNum: res.leafNum }; - } -} -function numNodes(tree) { - if (tree.isLeaf) { - return 1; - } - else { - return 1 + numNodes(tree.leftChild) + numNodes(tree.rightChild); - } -} -function numLeaves(tree) { - if (tree.isLeaf) { - return 1; - } - else { - return numLeaves(tree.leftChild) + numLeaves(tree.rightChild); - } -} -function makeLeafArray(rpForest) { - var e_1, _a; - if (rpForest.length > 0) { - var output = []; - try { - for (var rpForest_1 = __values(rpForest), rpForest_1_1 = rpForest_1.next(); !rpForest_1_1.done; rpForest_1_1 = rpForest_1.next()) { - var tree = rpForest_1_1.value; - output.push.apply(output, __spread(tree.indices)); - } - } - catch (e_1_1) { e_1 = { error: e_1_1 }; } - finally { - try { - if (rpForest_1_1 && !rpForest_1_1.done && (_a = rpForest_1.return)) _a.call(rpForest_1); - } - finally { if (e_1) throw e_1.error; } - } - return output; - } - else { - return [[-1]]; - } -} -exports.makeLeafArray = makeLeafArray; -function selectSide(hyperplane, offset, point, random) { - var margin = offset; - for (var d = 0; d < point.length; d++) { - margin += hyperplane[d] * point[d]; - } - if (margin === 0) { - var side = utils.tauRandInt(2, random); - return side; - } - else if (margin > 0) { - return 0; - } - else { - return 1; - } -} -function searchFlatTree(point, tree, random) { - var node = 0; - while (tree.children[node][0] > 0) { - var side = selectSide(tree.hyperplanes[node], tree.offsets[node], point, random); - if (side === 0) { - node = tree.children[node][0]; - } - else { - node = tree.children[node][1]; - } - } - var index = -1 * tree.children[node][0]; - return tree.indices[index]; -} -exports.searchFlatTree = searchFlatTree; - - -/***/ }), -/* 5 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -Object.defineProperty(exports, "__esModule", { value: true }); -var umap_1 = __webpack_require__(6); -exports.UMAP = umap_1.UMAP; - - -/***/ }), -/* 6 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) { - return new (P || (P = Promise))(function (resolve, reject) { - function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } - function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } - function step(result) { result.done ? 
resolve(result.value) : new P(function (resolve) { resolve(result.value); }).then(fulfilled, rejected); } - step((generator = generator.apply(thisArg, _arguments || [])).next()); - }); -}; -var __generator = (this && this.__generator) || function (thisArg, body) { - var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g; - return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g; - function verb(n) { return function (v) { return step([n, v]); }; } - function step(op) { - if (f) throw new TypeError("Generator is already executing."); - while (_) try { - if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t; - if (y = 0, t) op = [op[0] & 2, t.value]; - switch (op[0]) { - case 0: case 1: t = op; break; - case 4: _.label++; return { value: op[1], done: false }; - case 5: _.label++; y = op[1]; op = [0]; continue; - case 7: op = _.ops.pop(); _.trys.pop(); continue; - default: - if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; } - if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; } - if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; } - if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; } - if (t[2]) _.ops.pop(); - _.trys.pop(); continue; - } - op = body.call(thisArg, _); - } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; } - if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true }; - } -}; -var __read = (this && this.__read) || function (o, n) { - var m = typeof Symbol === "function" && o[Symbol.iterator]; - if (!m) return o; - var i = m.call(o), r, ar = [], e; - try { - while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value); - } - catch (error) { e = { error: error }; } - finally { - try { - if (r && !r.done && (m = i["return"])) m.call(i); - } - finally { if (e) throw e.error; } - } - return ar; -}; -var __spread = (this && this.__spread) || function () { - for (var ar = [], i = 0; i < arguments.length; i++) ar = ar.concat(__read(arguments[i])); - return ar; -}; -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = mod; - return result; -}; -var __importDefault = (this && this.__importDefault) || function (mod) { - return (mod && mod.__esModule) ? 
mod : { "default": mod }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var heap = __importStar(__webpack_require__(2)); -var matrix = __importStar(__webpack_require__(3)); -var nnDescent = __importStar(__webpack_require__(7)); -var tree = __importStar(__webpack_require__(4)); -var utils = __importStar(__webpack_require__(1)); -var ml_levenberg_marquardt_1 = __importDefault(__webpack_require__(8)); -var SMOOTH_K_TOLERANCE = 1e-5; -var MIN_K_DIST_SCALE = 1e-3; -var UMAP = (function () { - function UMAP(params) { - if (params === void 0) { params = {}; } - var _this = this; - this.learningRate = 1.0; - this.localConnectivity = 1.0; - this.minDist = 0.1; - this.nComponents = 2; - this.nEpochs = 0; - this.nNeighbors = 15; - this.negativeSampleRate = 5; - this.random = Math.random; - this.repulsionStrength = 1.0; - this.setOpMixRatio = 1.0; - this.spread = 1.0; - this.transformQueueSize = 4.0; - this.targetMetric = "categorical"; - this.targetWeight = 0.5; - this.targetNNeighbors = this.nNeighbors; - this.distanceFn = euclidean; - this.isInitialized = false; - this.rpForest = []; - this.embedding = []; - this.optimizationState = new OptimizationState(); - var setParam = function (key) { - if (params[key] !== undefined) - _this[key] = params[key]; - }; - setParam('distanceFn'); - setParam('learningRate'); - setParam('localConnectivity'); - setParam('minDist'); - setParam('nComponents'); - setParam('nEpochs'); - setParam('nNeighbors'); - setParam('negativeSampleRate'); - setParam('random'); - setParam('repulsionStrength'); - setParam('setOpMixRatio'); - setParam('spread'); - setParam('transformQueueSize'); - } - UMAP.prototype.fit = function (X) { - this.initializeFit(X); - this.optimizeLayout(); - return this.embedding; - }; - UMAP.prototype.fitAsync = function (X, callback) { - if (callback === void 0) { callback = function () { return true; }; } - return __awaiter(this, void 0, void 0, function () { - return __generator(this, function (_a) { - switch (_a.label) { - case 0: - this.initializeFit(X); - return [4, this.optimizeLayoutAsync(callback)]; - case 1: - _a.sent(); - return [2, this.embedding]; - } - }); - }); - }; - UMAP.prototype.setSupervisedProjection = function (Y, params) { - if (params === void 0) { params = {}; } - this.Y = Y; - this.targetMetric = params.targetMetric || this.targetMetric; - this.targetWeight = params.targetWeight || this.targetWeight; - this.targetNNeighbors = params.targetNNeighbors || this.targetNNeighbors; - }; - UMAP.prototype.setPrecomputedKNN = function (knnIndices, knnDistances) { - this.knnIndices = knnIndices; - this.knnDistances = knnDistances; - }; - UMAP.prototype.initializeFit = function (X) { - if (this.X === X && this.isInitialized) { - return this.getNEpochs(); - } - this.X = X; - if (!this.knnIndices && !this.knnDistances) { - var knnResults = this.nearestNeighbors(X); - this.knnIndices = knnResults.knnIndices; - this.knnDistances = knnResults.knnDistances; - } - this.graph = this.fuzzySimplicialSet(X, this.nNeighbors, this.setOpMixRatio); - this.makeSearchFns(); - this.searchGraph = this.makeSearchGraph(X); - this.processGraphForSupervisedProjection(); - var _a = this.initializeSimplicialSetEmbedding(), head = _a.head, tail = _a.tail, epochsPerSample = _a.epochsPerSample; - this.optimizationState.head = head; - this.optimizationState.tail = tail; - this.optimizationState.epochsPerSample = epochsPerSample; - this.initializeOptimization(); - this.prepareForOptimizationLoop(); - this.isInitialized = true; - return 
this.getNEpochs(); - }; - UMAP.prototype.makeSearchFns = function () { - var _a = nnDescent.makeInitializations(this.distanceFn), initFromTree = _a.initFromTree, initFromRandom = _a.initFromRandom; - this.initFromTree = initFromTree; - this.initFromRandom = initFromRandom; - this.search = nnDescent.makeInitializedNNSearch(this.distanceFn); - }; - UMAP.prototype.makeSearchGraph = function (X) { - var knnIndices = this.knnIndices; - var knnDistances = this.knnDistances; - var dims = [X.length, X.length]; - var searchGraph = new matrix.SparseMatrix([], [], [], dims); - for (var i = 0; i < knnIndices.length; i++) { - var knn = knnIndices[i]; - var distances = knnDistances[i]; - for (var j = 0; j < knn.length; j++) { - var neighbor = knn[j]; - var distance = distances[j]; - if (distance > 0) { - searchGraph.set(i, neighbor, distance); - } - } - } - var transpose = matrix.transpose(searchGraph); - return matrix.maximum(searchGraph, transpose); - }; - UMAP.prototype.transform = function (toTransform) { - var _this = this; - var rawData = this.X; - if (rawData === undefined || rawData.length === 0) { - throw new Error('No data has been fit.'); - } - var nNeighbors = Math.floor(this.nNeighbors * this.transformQueueSize); - var init = nnDescent.initializeSearch(this.rpForest, rawData, toTransform, nNeighbors, this.initFromRandom, this.initFromTree, this.random); - var result = this.search(rawData, this.searchGraph, init, toTransform); - var _a = heap.deheapSort(result), indices = _a.indices, distances = _a.weights; - indices = indices.map(function (x) { return x.slice(0, _this.nNeighbors); }); - distances = distances.map(function (x) { return x.slice(0, _this.nNeighbors); }); - var adjustedLocalConnectivity = Math.max(0, this.localConnectivity - 1); - var _b = this.smoothKNNDistance(distances, this.nNeighbors, adjustedLocalConnectivity), sigmas = _b.sigmas, rhos = _b.rhos; - var _c = this.computeMembershipStrengths(indices, distances, sigmas, rhos), rows = _c.rows, cols = _c.cols, vals = _c.vals; - var size = [toTransform.length, rawData.length]; - var graph = new matrix.SparseMatrix(rows, cols, vals, size); - var normed = matrix.normalize(graph, "l1"); - var csrMatrix = matrix.getCSR(normed); - var nPoints = toTransform.length; - var eIndices = utils.reshape2d(csrMatrix.indices, nPoints, this.nNeighbors); - var eWeights = utils.reshape2d(csrMatrix.values, nPoints, this.nNeighbors); - var embedding = initTransform(eIndices, eWeights, this.embedding); - var nEpochs = this.nEpochs - ? this.nEpochs / 3 - : graph.nRows <= 10000 - ? 100 - : 30; - var graphMax = graph - .getValues() - .reduce(function (max, val) { return (val > max ? val : max); }, 0); - graph = graph.map(function (value) { return (value < graphMax / nEpochs ? 
0 : value); }); - graph = matrix.eliminateZeros(graph); - var epochsPerSample = this.makeEpochsPerSample(graph.getValues(), nEpochs); - var head = graph.getRows(); - var tail = graph.getCols(); - this.assignOptimizationStateParameters({ - headEmbedding: embedding, - tailEmbedding: this.embedding, - head: head, - tail: tail, - currentEpoch: 0, - nEpochs: nEpochs, - nVertices: graph.getDims()[1], - epochsPerSample: epochsPerSample, - }); - this.prepareForOptimizationLoop(); - return this.optimizeLayout(); - }; - UMAP.prototype.processGraphForSupervisedProjection = function () { - var _a = this, Y = _a.Y, X = _a.X; - if (Y) { - if (Y.length !== X.length) { - throw new Error('Length of X and y must be equal'); - } - if (this.targetMetric === "categorical") { - var lt = this.targetWeight < 1.0; - var farDist = lt ? 2.5 * (1.0 / (1.0 - this.targetWeight)) : 1.0e12; - this.graph = this.categoricalSimplicialSetIntersection(this.graph, Y, farDist); - } - } - }; - UMAP.prototype.step = function () { - var currentEpoch = this.optimizationState.currentEpoch; - if (currentEpoch < this.getNEpochs()) { - this.optimizeLayoutStep(currentEpoch); - } - return this.optimizationState.currentEpoch; - }; - UMAP.prototype.getEmbedding = function () { - return this.embedding; - }; - UMAP.prototype.nearestNeighbors = function (X) { - var _a = this, distanceFn = _a.distanceFn, nNeighbors = _a.nNeighbors; - var log2 = function (n) { return Math.log(n) / Math.log(2); }; - var metricNNDescent = nnDescent.makeNNDescent(distanceFn, this.random); - var round = function (n) { - return n === 0.5 ? 0 : Math.round(n); - }; - var nTrees = 5 + Math.floor(round(Math.pow(X.length, 0.5) / 20.0)); - var nIters = Math.max(5, Math.floor(Math.round(log2(X.length)))); - this.rpForest = tree.makeForest(X, nNeighbors, nTrees, this.random); - var leafArray = tree.makeLeafArray(this.rpForest); - var _b = metricNNDescent(X, leafArray, nNeighbors, nIters), indices = _b.indices, weights = _b.weights; - return { knnIndices: indices, knnDistances: weights }; - }; - UMAP.prototype.fuzzySimplicialSet = function (X, nNeighbors, setOpMixRatio) { - if (setOpMixRatio === void 0) { setOpMixRatio = 1.0; } - var _a = this, _b = _a.knnIndices, knnIndices = _b === void 0 ? [] : _b, _c = _a.knnDistances, knnDistances = _c === void 0 ? 
[] : _c, localConnectivity = _a.localConnectivity; - var _d = this.smoothKNNDistance(knnDistances, nNeighbors, localConnectivity), sigmas = _d.sigmas, rhos = _d.rhos; - var _e = this.computeMembershipStrengths(knnIndices, knnDistances, sigmas, rhos), rows = _e.rows, cols = _e.cols, vals = _e.vals; - var size = [X.length, X.length]; - var sparseMatrix = new matrix.SparseMatrix(rows, cols, vals, size); - var transpose = matrix.transpose(sparseMatrix); - var prodMatrix = matrix.pairwiseMultiply(sparseMatrix, transpose); - var a = matrix.subtract(matrix.add(sparseMatrix, transpose), prodMatrix); - var b = matrix.multiplyScalar(a, setOpMixRatio); - var c = matrix.multiplyScalar(prodMatrix, 1.0 - setOpMixRatio); - var result = matrix.add(b, c); - return result; - }; - UMAP.prototype.categoricalSimplicialSetIntersection = function (simplicialSet, target, farDist, unknownDist) { - if (unknownDist === void 0) { unknownDist = 1.0; } - var intersection = fastIntersection(simplicialSet, target, unknownDist, farDist); - intersection = matrix.eliminateZeros(intersection); - return resetLocalConnectivity(intersection); - }; - UMAP.prototype.smoothKNNDistance = function (distances, k, localConnectivity, nIter, bandwidth) { - if (localConnectivity === void 0) { localConnectivity = 1.0; } - if (nIter === void 0) { nIter = 64; } - if (bandwidth === void 0) { bandwidth = 1.0; } - var target = (Math.log(k) / Math.log(2)) * bandwidth; - var rho = utils.zeros(distances.length); - var result = utils.zeros(distances.length); - for (var i = 0; i < distances.length; i++) { - var lo = 0.0; - var hi = Infinity; - var mid = 1.0; - var ithDistances = distances[i]; - var nonZeroDists = ithDistances.filter(function (d) { return d > 0.0; }); - if (nonZeroDists.length >= localConnectivity) { - var index = Math.floor(localConnectivity); - var interpolation = localConnectivity - index; - if (index > 0) { - rho[i] = nonZeroDists[index - 1]; - if (interpolation > SMOOTH_K_TOLERANCE) { - rho[i] += - interpolation * (nonZeroDists[index] - nonZeroDists[index - 1]); - } - } - else { - rho[i] = interpolation * nonZeroDists[0]; - } - } - else if (nonZeroDists.length > 0) { - rho[i] = utils.max(nonZeroDists); - } - for (var n = 0; n < nIter; n++) { - var psum = 0.0; - for (var j = 1; j < distances[i].length; j++) { - var d = distances[i][j] - rho[i]; - if (d > 0) { - psum += Math.exp(-(d / mid)); - } - else { - psum += 1.0; - } - } - if (Math.abs(psum - target) < SMOOTH_K_TOLERANCE) { - break; - } - if (psum > target) { - hi = mid; - mid = (lo + hi) / 2.0; - } - else { - lo = mid; - if (hi === Infinity) { - mid *= 2; - } - else { - mid = (lo + hi) / 2.0; - } - } - } - result[i] = mid; - if (rho[i] > 0.0) { - var meanIthDistances = utils.mean(ithDistances); - if (result[i] < MIN_K_DIST_SCALE * meanIthDistances) { - result[i] = MIN_K_DIST_SCALE * meanIthDistances; - } - } - else { - var meanDistances = utils.mean(distances.map(utils.mean)); - if (result[i] < MIN_K_DIST_SCALE * meanDistances) { - result[i] = MIN_K_DIST_SCALE * meanDistances; - } - } - } - return { sigmas: result, rhos: rho }; - }; - UMAP.prototype.computeMembershipStrengths = function (knnIndices, knnDistances, sigmas, rhos) { - var nSamples = knnIndices.length; - var nNeighbors = knnIndices[0].length; - var rows = utils.zeros(nSamples * nNeighbors); - var cols = utils.zeros(nSamples * nNeighbors); - var vals = utils.zeros(nSamples * nNeighbors); - for (var i = 0; i < nSamples; i++) { - for (var j = 0; j < nNeighbors; j++) { - var val = 0; - if (knnIndices[i][j] === -1) 
{ - continue; - } - if (knnIndices[i][j] === i) { - val = 0.0; - } - else if (knnDistances[i][j] - rhos[i] <= 0.0) { - val = 1.0; - } - else { - val = Math.exp(-((knnDistances[i][j] - rhos[i]) / sigmas[i])); - } - rows[i * nNeighbors + j] = i; - cols[i * nNeighbors + j] = knnIndices[i][j]; - vals[i * nNeighbors + j] = val; - } - } - return { rows: rows, cols: cols, vals: vals }; - }; - UMAP.prototype.initializeSimplicialSetEmbedding = function () { - var _this = this; - var nEpochs = this.getNEpochs(); - var nComponents = this.nComponents; - var graphValues = this.graph.getValues(); - var graphMax = 0; - for (var i = 0; i < graphValues.length; i++) { - var value = graphValues[i]; - if (graphMax < graphValues[i]) { - graphMax = value; - } - } - var graph = this.graph.map(function (value) { - if (value < graphMax / nEpochs) { - return 0; - } - else { - return value; - } - }); - this.embedding = utils.zeros(graph.nRows).map(function () { - return utils.zeros(nComponents).map(function () { - return utils.tauRand(_this.random) * 20 + -10; - }); - }); - var weights = []; - var head = []; - var tail = []; - for (var i = 0; i < graph.nRows; i++) { - for (var j = 0; j < graph.nCols; j++) { - var value = graph.get(i, j); - if (value) { - weights.push(value); - tail.push(i); - head.push(j); - } - } - } - var epochsPerSample = this.makeEpochsPerSample(weights, nEpochs); - return { head: head, tail: tail, epochsPerSample: epochsPerSample }; - }; - UMAP.prototype.makeEpochsPerSample = function (weights, nEpochs) { - var result = utils.filled(weights.length, -1.0); - var max = utils.max(weights); - var nSamples = weights.map(function (w) { return (w / max) * nEpochs; }); - nSamples.forEach(function (n, i) { - if (n > 0) - result[i] = nEpochs / nSamples[i]; - }); - return result; - }; - UMAP.prototype.assignOptimizationStateParameters = function (state) { - Object.assign(this.optimizationState, state); - }; - UMAP.prototype.prepareForOptimizationLoop = function () { - var _a = this, repulsionStrength = _a.repulsionStrength, learningRate = _a.learningRate, negativeSampleRate = _a.negativeSampleRate; - var _b = this.optimizationState, epochsPerSample = _b.epochsPerSample, headEmbedding = _b.headEmbedding, tailEmbedding = _b.tailEmbedding; - var dim = headEmbedding[0].length; - var moveOther = headEmbedding.length === tailEmbedding.length; - var epochsPerNegativeSample = epochsPerSample.map(function (e) { return e / negativeSampleRate; }); - var epochOfNextNegativeSample = __spread(epochsPerNegativeSample); - var epochOfNextSample = __spread(epochsPerSample); - this.assignOptimizationStateParameters({ - epochOfNextSample: epochOfNextSample, - epochOfNextNegativeSample: epochOfNextNegativeSample, - epochsPerNegativeSample: epochsPerNegativeSample, - moveOther: moveOther, - initialAlpha: learningRate, - alpha: learningRate, - gamma: repulsionStrength, - dim: dim, - }); - }; - UMAP.prototype.initializeOptimization = function () { - var headEmbedding = this.embedding; - var tailEmbedding = this.embedding; - var _a = this.optimizationState, head = _a.head, tail = _a.tail, epochsPerSample = _a.epochsPerSample; - var nEpochs = this.getNEpochs(); - var nVertices = this.graph.nCols; - var _b = findABParams(this.spread, this.minDist), a = _b.a, b = _b.b; - this.assignOptimizationStateParameters({ - headEmbedding: headEmbedding, - tailEmbedding: tailEmbedding, - head: head, - tail: tail, - epochsPerSample: epochsPerSample, - a: a, - b: b, - nEpochs: nEpochs, - nVertices: nVertices, - }); - }; - 
UMAP.prototype.optimizeLayoutStep = function (n) { - var optimizationState = this.optimizationState; - var head = optimizationState.head, tail = optimizationState.tail, headEmbedding = optimizationState.headEmbedding, tailEmbedding = optimizationState.tailEmbedding, epochsPerSample = optimizationState.epochsPerSample, epochOfNextSample = optimizationState.epochOfNextSample, epochOfNextNegativeSample = optimizationState.epochOfNextNegativeSample, epochsPerNegativeSample = optimizationState.epochsPerNegativeSample, moveOther = optimizationState.moveOther, initialAlpha = optimizationState.initialAlpha, alpha = optimizationState.alpha, gamma = optimizationState.gamma, a = optimizationState.a, b = optimizationState.b, dim = optimizationState.dim, nEpochs = optimizationState.nEpochs, nVertices = optimizationState.nVertices; - var clipValue = 4.0; - for (var i = 0; i < epochsPerSample.length; i++) { - if (epochOfNextSample[i] > n) { - continue; - } - var j = head[i]; - var k = tail[i]; - var current = headEmbedding[j]; - var other = tailEmbedding[k]; - var distSquared = rDist(current, other); - var gradCoeff = 0; - if (distSquared > 0) { - gradCoeff = -2.0 * a * b * Math.pow(distSquared, b - 1.0); - gradCoeff /= a * Math.pow(distSquared, b) + 1.0; - } - for (var d = 0; d < dim; d++) { - var gradD = clip(gradCoeff * (current[d] - other[d]), clipValue); - current[d] += gradD * alpha; - if (moveOther) { - other[d] += -gradD * alpha; - } - } - epochOfNextSample[i] += epochsPerSample[i]; - var nNegSamples = Math.floor((n - epochOfNextNegativeSample[i]) / epochsPerNegativeSample[i]); - for (var p = 0; p < nNegSamples; p++) { - var k_1 = utils.tauRandInt(nVertices, this.random); - var other_1 = tailEmbedding[k_1]; - var distSquared_1 = rDist(current, other_1); - var gradCoeff_1 = 0.0; - if (distSquared_1 > 0.0) { - gradCoeff_1 = 2.0 * gamma * b; - gradCoeff_1 /= - (0.001 + distSquared_1) * (a * Math.pow(distSquared_1, b) + 1); - } - else if (j === k_1) { - continue; - } - for (var d = 0; d < dim; d++) { - var gradD = 4.0; - if (gradCoeff_1 > 0.0) { - gradD = clip(gradCoeff_1 * (current[d] - other_1[d]), clipValue); - } - current[d] += gradD * alpha; - } - } - epochOfNextNegativeSample[i] += nNegSamples * epochsPerNegativeSample[i]; - } - optimizationState.alpha = initialAlpha * (1.0 - n / nEpochs); - optimizationState.currentEpoch += 1; - return headEmbedding; - }; - UMAP.prototype.optimizeLayoutAsync = function (epochCallback) { - var _this = this; - if (epochCallback === void 0) { epochCallback = function () { return true; }; } - return new Promise(function (resolve, reject) { - var step = function () { return __awaiter(_this, void 0, void 0, function () { - var _a, nEpochs, currentEpoch, epochCompleted, shouldStop, isFinished; - return __generator(this, function (_b) { - try { - _a = this.optimizationState, nEpochs = _a.nEpochs, currentEpoch = _a.currentEpoch; - this.embedding = this.optimizeLayoutStep(currentEpoch); - epochCompleted = this.optimizationState.currentEpoch; - shouldStop = epochCallback(epochCompleted) === false; - isFinished = epochCompleted === nEpochs; - if (!shouldStop && !isFinished) { - step(); - } - else { - return [2, resolve(isFinished)]; - } - } - catch (err) { - reject(err); - } - return [2]; - }); - }); }; - step(); - }); - }; - UMAP.prototype.optimizeLayout = function (epochCallback) { - if (epochCallback === void 0) { epochCallback = function () { return true; }; } - var isFinished = false; - var embedding = []; - while (!isFinished) { - var _a = this.optimizationState, 
nEpochs = _a.nEpochs, currentEpoch = _a.currentEpoch; - embedding = this.optimizeLayoutStep(currentEpoch); - var epochCompleted = this.optimizationState.currentEpoch; - var shouldStop = epochCallback(epochCompleted) === false; - isFinished = epochCompleted === nEpochs || shouldStop; - } - return embedding; - }; - UMAP.prototype.getNEpochs = function () { - var graph = this.graph; - if (this.nEpochs > 0) { - return this.nEpochs; - } - var length = graph.nRows; - if (length <= 2500) { - return 500; - } - else if (length <= 5000) { - return 400; - } - else if (length <= 7500) { - return 300; - } - else { - return 200; - } - }; - return UMAP; -}()); -exports.UMAP = UMAP; -function euclidean(x, y) { - var result = 0; - for (var i = 0; i < x.length; i++) { - result += Math.pow((x[i] - y[i]), 2); - } - return Math.sqrt(result); -} -exports.euclidean = euclidean; -function cosine(x, y) { - var result = 0.0; - var normX = 0.0; - var normY = 0.0; - for (var i = 0; i < x.length; i++) { - result += x[i] * y[i]; - normX += Math.pow(x[i], 2); - normY += Math.pow(y[i], 2); - } - if (normX === 0 && normY === 0) { - return 0; - } - else if (normX === 0 || normY === 0) { - return 1.0; - } - else { - return 1.0 - result / Math.sqrt(normX * normY); - } -} -exports.cosine = cosine; -var OptimizationState = (function () { - function OptimizationState() { - this.currentEpoch = 0; - this.headEmbedding = []; - this.tailEmbedding = []; - this.head = []; - this.tail = []; - this.epochsPerSample = []; - this.epochOfNextSample = []; - this.epochOfNextNegativeSample = []; - this.epochsPerNegativeSample = []; - this.moveOther = true; - this.initialAlpha = 1.0; - this.alpha = 1.0; - this.gamma = 1.0; - this.a = 1.5769434603113077; - this.b = 0.8950608779109733; - this.dim = 2; - this.nEpochs = 500; - this.nVertices = 0; - } - return OptimizationState; -}()); -function clip(x, clipValue) { - if (x > clipValue) - return clipValue; - else if (x < -clipValue) - return -clipValue; - else - return x; -} -function rDist(x, y) { - var result = 0.0; - for (var i = 0; i < x.length; i++) { - result += Math.pow(x[i] - y[i], 2); - } - return result; -} -function findABParams(spread, minDist) { - var curve = function (_a) { - var _b = __read(_a, 2), a = _b[0], b = _b[1]; - return function (x) { - return 1.0 / (1.0 + a * Math.pow(x, (2 * b))); - }; - }; - var xv = utils - .linear(0, spread * 3, 300) - .map(function (val) { return (val < minDist ? 1.0 : val); }); - var yv = utils.zeros(xv.length).map(function (val, index) { - var gte = xv[index] >= minDist; - return gte ? 
Math.exp(-(xv[index] - minDist) / spread) : val; - }); - var initialValues = [0.5, 0.5]; - var data = { x: xv, y: yv }; - var options = { - damping: 1.5, - initialValues: initialValues, - gradientDifference: 10e-2, - maxIterations: 100, - errorTolerance: 10e-3, - }; - var parameterValues = ml_levenberg_marquardt_1.default(data, curve, options).parameterValues; - var _a = __read(parameterValues, 2), a = _a[0], b = _a[1]; - return { a: a, b: b }; -} -exports.findABParams = findABParams; -function fastIntersection(graph, target, unknownDist, farDist) { - if (unknownDist === void 0) { unknownDist = 1.0; } - if (farDist === void 0) { farDist = 5.0; } - return graph.map(function (value, row, col) { - if (target[row] === -1 || target[col] === -1) { - return value * Math.exp(-unknownDist); - } - else if (target[row] !== target[col]) { - return value * Math.exp(-farDist); - } - else { - return value; - } - }); -} -exports.fastIntersection = fastIntersection; -function resetLocalConnectivity(simplicialSet) { - simplicialSet = matrix.normalize(simplicialSet, "max"); - var transpose = matrix.transpose(simplicialSet); - var prodMatrix = matrix.pairwiseMultiply(transpose, simplicialSet); - simplicialSet = matrix.add(simplicialSet, matrix.subtract(transpose, prodMatrix)); - return matrix.eliminateZeros(simplicialSet); -} -exports.resetLocalConnectivity = resetLocalConnectivity; -function initTransform(indices, weights, embedding) { - var result = utils - .zeros(indices.length) - .map(function (z) { return utils.zeros(embedding[0].length); }); - for (var i = 0; i < indices.length; i++) { - for (var j = 0; j < indices[0].length; j++) { - for (var d = 0; d < embedding[0].length; d++) { - var a = indices[i][j]; - result[i][d] += weights[i][j] * embedding[a][d]; - } - } - } - return result; -} -exports.initTransform = initTransform; - - -/***/ }), -/* 7 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - -var __values = (this && this.__values) || function (o) { - var m = typeof Symbol === "function" && o[Symbol.iterator], i = 0; - if (m) return m.call(o); - return { - next: function () { - if (o && i >= o.length) o = void 0; - return { value: o && o[i++], done: !o }; - } - }; -}; -var __importStar = (this && this.__importStar) || function (mod) { - if (mod && mod.__esModule) return mod; - var result = {}; - if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k]; - result["default"] = mod; - return result; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -var heap = __importStar(__webpack_require__(2)); -var matrix = __importStar(__webpack_require__(3)); -var tree = __importStar(__webpack_require__(4)); -var utils = __importStar(__webpack_require__(1)); -function makeNNDescent(distanceFn, random) { - return function nNDescent(data, leafArray, nNeighbors, nIters, maxCandidates, delta, rho, rpTreeInit) { - if (nIters === void 0) { nIters = 10; } - if (maxCandidates === void 0) { maxCandidates = 50; } - if (delta === void 0) { delta = 0.001; } - if (rho === void 0) { rho = 0.5; } - if (rpTreeInit === void 0) { rpTreeInit = true; } - var nVertices = data.length; - var currentGraph = heap.makeHeap(data.length, nNeighbors); - for (var i = 0; i < data.length; i++) { - var indices = heap.rejectionSample(nNeighbors, data.length, random); - for (var j = 0; j < indices.length; j++) { - var d = distanceFn(data[i], data[indices[j]]); - heap.heapPush(currentGraph, i, d, indices[j], 1); - heap.heapPush(currentGraph, indices[j], d, i, 1); - 
} - } - if (rpTreeInit) { - for (var n = 0; n < leafArray.length; n++) { - for (var i = 0; i < leafArray[n].length; i++) { - if (leafArray[n][i] < 0) { - break; - } - for (var j = i + 1; j < leafArray[n].length; j++) { - if (leafArray[n][j] < 0) { - break; - } - var d = distanceFn(data[leafArray[n][i]], data[leafArray[n][j]]); - heap.heapPush(currentGraph, leafArray[n][i], d, leafArray[n][j], 1); - heap.heapPush(currentGraph, leafArray[n][j], d, leafArray[n][i], 1); - } - } - } - } - for (var n = 0; n < nIters; n++) { - var candidateNeighbors = heap.buildCandidates(currentGraph, nVertices, nNeighbors, maxCandidates, random); - var c = 0; - for (var i = 0; i < nVertices; i++) { - for (var j = 0; j < maxCandidates; j++) { - var p = Math.floor(candidateNeighbors[0][i][j]); - if (p < 0 || utils.tauRand(random) < rho) { - continue; - } - for (var k = 0; k < maxCandidates; k++) { - var q = Math.floor(candidateNeighbors[0][i][k]); - var cj = candidateNeighbors[2][i][j]; - var ck = candidateNeighbors[2][i][k]; - if (q < 0 || (!cj && !ck)) { - continue; - } - var d = distanceFn(data[p], data[q]); - c += heap.heapPush(currentGraph, p, d, q, 1); - c += heap.heapPush(currentGraph, q, d, p, 1); - } - } - } - if (c <= delta * nNeighbors * data.length) { - break; - } - } - var sorted = heap.deheapSort(currentGraph); - return sorted; - }; -} -exports.makeNNDescent = makeNNDescent; -function makeInitializations(distanceFn) { - function initFromRandom(nNeighbors, data, queryPoints, _heap, random) { - for (var i = 0; i < queryPoints.length; i++) { - var indices = utils.rejectionSample(nNeighbors, data.length, random); - for (var j = 0; j < indices.length; j++) { - if (indices[j] < 0) { - continue; - } - var d = distanceFn(data[indices[j]], queryPoints[i]); - heap.heapPush(_heap, i, d, indices[j], 1); - } - } - } - function initFromTree(_tree, data, queryPoints, _heap, random) { - for (var i = 0; i < queryPoints.length; i++) { - var indices = tree.searchFlatTree(queryPoints[i], _tree, random); - for (var j = 0; j < indices.length; j++) { - if (indices[j] < 0) { - return; - } - var d = distanceFn(data[indices[j]], queryPoints[i]); - heap.heapPush(_heap, i, d, indices[j], 1); - } - } - return; - } - return { initFromRandom: initFromRandom, initFromTree: initFromTree }; -} -exports.makeInitializations = makeInitializations; -function makeInitializedNNSearch(distanceFn) { - return function nnSearchFn(data, graph, initialization, queryPoints) { - var e_1, _a; - var _b = matrix.getCSR(graph), indices = _b.indices, indptr = _b.indptr; - for (var i = 0; i < queryPoints.length; i++) { - var tried = new Set(initialization[0][i]); - while (true) { - var vertex = heap.smallestFlagged(initialization, i); - if (vertex === -1) { - break; - } - var candidates = indices.slice(indptr[vertex], indptr[vertex + 1]); - try { - for (var candidates_1 = __values(candidates), candidates_1_1 = candidates_1.next(); !candidates_1_1.done; candidates_1_1 = candidates_1.next()) { - var candidate = candidates_1_1.value; - if (candidate === vertex || - candidate === -1 || - tried.has(candidate)) { - continue; - } - var d = distanceFn(data[candidate], queryPoints[i]); - heap.uncheckedHeapPush(initialization, i, d, candidate, 1); - tried.add(candidate); - } - } - catch (e_1_1) { e_1 = { error: e_1_1 }; } - finally { - try { - if (candidates_1_1 && !candidates_1_1.done && (_a = candidates_1.return)) _a.call(candidates_1); - } - finally { if (e_1) throw e_1.error; } - } - } - } - return initialization; - }; -} -exports.makeInitializedNNSearch = 
makeInitializedNNSearch; -function initializeSearch(forest, data, queryPoints, nNeighbors, initFromRandom, initFromTree, random) { - var e_2, _a; - var results = heap.makeHeap(queryPoints.length, nNeighbors); - initFromRandom(nNeighbors, data, queryPoints, results, random); - if (forest) { - try { - for (var forest_1 = __values(forest), forest_1_1 = forest_1.next(); !forest_1_1.done; forest_1_1 = forest_1.next()) { - var tree_1 = forest_1_1.value; - initFromTree(tree_1, data, queryPoints, results, random); - } - } - catch (e_2_1) { e_2 = { error: e_2_1 }; } - finally { - try { - if (forest_1_1 && !forest_1_1.done && (_a = forest_1.return)) _a.call(forest_1); - } - finally { if (e_2) throw e_2.error; } - } - } - return results; -} -exports.initializeSearch = initializeSearch; - - -/***/ }), -/* 8 */ -/***/ (function(module, exports, __webpack_require__) { - -"use strict"; - - -var mlMatrix = __webpack_require__(9); - -/** - * Calculate current error - * @ignore - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... ] - * @param {Array} parameters - Array of current parameter values - * @param {function} parameterizedFunction - The parameters and returns a function with the independent variable as a parameter - * @return {number} - */ -function errorCalculation( - data, - parameters, - parameterizedFunction -) { - var error = 0; - const func = parameterizedFunction(parameters); - - for (var i = 0; i < data.x.length; i++) { - error += Math.abs(data.y[i] - func(data.x[i])); - } - - return error; -} - -/** - * Difference of the matrix function over the parameters - * @ignore - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... ] - * @param {Array} evaluatedData - Array of previous evaluated function values - * @param {Array} params - Array of previous parameter values - * @param {number} gradientDifference - Adjustment for decrease the damping parameter - * @param {function} paramFunction - The parameters and returns a function with the independent variable as a parameter - * @return {Matrix} - */ -function gradientFunction( - data, - evaluatedData, - params, - gradientDifference, - paramFunction -) { - const n = params.length; - const m = data.x.length; - - var ans = new Array(n); - - for (var param = 0; param < n; param++) { - ans[param] = new Array(m); - var auxParams = params.concat(); - auxParams[param] += gradientDifference; - var funcParam = paramFunction(auxParams); - - for (var point = 0; point < m; point++) { - ans[param][point] = evaluatedData[point] - funcParam(data.x[point]); - } - } - return new mlMatrix.Matrix(ans); -} - -/** - * Matrix function over the samples - * @ignore - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... ] - * @param {Array} evaluatedData - Array of previous evaluated function values - * @return {Matrix} - */ -function matrixFunction(data, evaluatedData) { - const m = data.x.length; - - var ans = new Array(m); - - for (var point = 0; point < m; point++) { - ans[point] = data.y[point] - evaluatedData[point]; - } - - return new mlMatrix.Matrix([ans]); -} - -/** - * Iteration for Levenberg-Marquardt - * @ignore - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... 
] - * @param {Array} params - Array of previous parameter values - * @param {number} damping - Levenberg-Marquardt parameter - * @param {number} gradientDifference - Adjustment for decrease the damping parameter - * @param {function} parameterizedFunction - The parameters and returns a function with the independent variable as a parameter - * @return {Array} - */ -function step( - data, - params, - damping, - gradientDifference, - parameterizedFunction -) { - var identity = mlMatrix.Matrix.eye(params.length).mul( - damping * gradientDifference * gradientDifference - ); - - var l = data.x.length; - var evaluatedData = new Array(l); - const func = parameterizedFunction(params); - for (var i = 0; i < l; i++) { - evaluatedData[i] = func(data.x[i]); - } - var gradientFunc = gradientFunction( - data, - evaluatedData, - params, - gradientDifference, - parameterizedFunction - ); - var matrixFunc = matrixFunction(data, evaluatedData).transposeView(); - var inverseMatrix = mlMatrix.inverse( - identity.add(gradientFunc.mmul(gradientFunc.transposeView())) - ); - params = new mlMatrix.Matrix([params]); - params = params.sub( - inverseMatrix - .mmul(gradientFunc) - .mmul(matrixFunc) - .mul(gradientDifference) - .transposeView() - ); - - return params.to1DArray(); -} - -/** - * Curve fitting algorithm - * @param {{x:Array, y:Array}} data - Array of points to fit in the format [x1, x2, ... ], [y1, y2, ... ] - * @param {function} parameterizedFunction - The parameters and returns a function with the independent variable as a parameter - * @param {object} [options] - Options object - * @param {number} [options.damping] - Levenberg-Marquardt parameter - * @param {number} [options.gradientDifference = 10e-2] - Adjustment for decrease the damping parameter - * @param {Array} [options.initialValues] - Array of initial parameter values - * @param {number} [options.maxIterations = 100] - Maximum of allowed iterations - * @param {number} [options.errorTolerance = 10e-3] - Minimum uncertainty allowed for each point - * @return {{parameterValues: Array, parameterError: number, iterations: number}} - */ -function levenbergMarquardt( - data, - parameterizedFunction, - options = {} -) { - let { - maxIterations = 100, - gradientDifference = 10e-2, - damping = 0, - errorTolerance = 10e-3, - initialValues - } = options; - - if (damping <= 0) { - throw new Error('The damping option must be a positive number'); - } else if (!data.x || !data.y) { - throw new Error('The data parameter must have x and y elements'); - } else if ( - !Array.isArray(data.x) || - data.x.length < 2 || - !Array.isArray(data.y) || - data.y.length < 2 - ) { - throw new Error( - 'The data parameter elements must be an array with more than 2 points' - ); - } else { - let dataLen = data.x.length; - if (dataLen !== data.y.length) { - throw new Error('The data parameter elements must have the same size'); - } - } - - var parameters = - initialValues || new Array(parameterizedFunction.length).fill(1); - - if (!Array.isArray(parameters)) { - throw new Error('initialValues must be an array'); - } - - var error = errorCalculation(data, parameters, parameterizedFunction); - - var converged = error <= errorTolerance; - - for ( - var iteration = 0; - iteration < maxIterations && !converged; - iteration++ - ) { - parameters = step( - data, - parameters, - damping, - gradientDifference, - parameterizedFunction - ); - error = errorCalculation(data, parameters, parameterizedFunction); - converged = error <= errorTolerance; - } - - return { - parameterValues: 
parameters, - parameterError: error, - iterations: iteration - }; -} - -module.exports = levenbergMarquardt; - - -/***/ }), -/* 9 */ -/***/ (function(module, __webpack_exports__, __webpack_require__) { - -"use strict"; -__webpack_require__.r(__webpack_exports__); - -// EXTERNAL MODULE: ./node_modules/is-any-array/src/index.js -var src = __webpack_require__(0); -var src_default = /*#__PURE__*/__webpack_require__.n(src); - -// CONCATENATED MODULE: ./node_modules/ml-array-max/lib-es6/index.js - - -/** - * Computes the maximum of the given values - * @param {Array} input - * @return {number} - */ - -function lib_es6_max(input) { - if (!src_default()(input)) { - throw new TypeError('input must be an array'); - } - - if (input.length === 0) { - throw new TypeError('input must not be empty'); - } - - var max = input[0]; - - for (var i = 1; i < input.length; i++) { - if (input[i] > max) max = input[i]; - } - - return max; -} - -/* harmony default export */ var lib_es6 = (lib_es6_max); - -// CONCATENATED MODULE: ./node_modules/ml-array-min/lib-es6/index.js - - -/** - * Computes the minimum of the given values - * @param {Array} input - * @return {number} - */ - -function lib_es6_min(input) { - if (!src_default()(input)) { - throw new TypeError('input must be an array'); - } - - if (input.length === 0) { - throw new TypeError('input must not be empty'); - } - - var min = input[0]; - - for (var i = 1; i < input.length; i++) { - if (input[i] < min) min = input[i]; - } - - return min; -} - -/* harmony default export */ var ml_array_min_lib_es6 = (lib_es6_min); - -// CONCATENATED MODULE: ./node_modules/ml-array-rescale/lib-es6/index.js - - - - -function rescale(input) { - var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {}; - - if (!src_default()(input)) { - throw new TypeError('input must be an array'); - } else if (input.length === 0) { - throw new TypeError('input must not be empty'); - } - - var output; - - if (options.output !== undefined) { - if (!src_default()(options.output)) { - throw new TypeError('output option must be an array if specified'); - } - - output = options.output; - } else { - output = new Array(input.length); - } - - var currentMin = ml_array_min_lib_es6(input); - var currentMax = lib_es6(input); - - if (currentMin === currentMax) { - throw new RangeError('minimum and maximum input values are equal. Cannot rescale a constant array'); - } - - var _options$min = options.min, - minValue = _options$min === void 0 ? options.autoMinMax ? currentMin : 0 : _options$min, - _options$max = options.max, - maxValue = _options$max === void 0 ? options.autoMinMax ? 
currentMax : 1 : _options$max; - - if (minValue >= maxValue) { - throw new RangeError('min option must be smaller than max option'); - } - - var factor = (maxValue - minValue) / (currentMax - currentMin); - - for (var i = 0; i < input.length; i++) { - output[i] = (input[i] - currentMin) * factor + minValue; - } - - return output; -} - -/* harmony default export */ var ml_array_rescale_lib_es6 = (rescale); - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/lu.js - - -/** - * @class LuDecomposition - * @link https://github.com/lutzroeder/Mapack/blob/master/Source/LuDecomposition.cs - * @param {Matrix} matrix - */ -class lu_LuDecomposition { - constructor(matrix) { - matrix = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(matrix); - - var lu = matrix.clone(); - var rows = lu.rows; - var columns = lu.columns; - var pivotVector = new Array(rows); - var pivotSign = 1; - var i, j, k, p, s, t, v; - var LUcolj, kmax; - - for (i = 0; i < rows; i++) { - pivotVector[i] = i; - } - - LUcolj = new Array(rows); - - for (j = 0; j < columns; j++) { - for (i = 0; i < rows; i++) { - LUcolj[i] = lu.get(i, j); - } - - for (i = 0; i < rows; i++) { - kmax = Math.min(i, j); - s = 0; - for (k = 0; k < kmax; k++) { - s += lu.get(i, k) * LUcolj[k]; - } - LUcolj[i] -= s; - lu.set(i, j, LUcolj[i]); - } - - p = j; - for (i = j + 1; i < rows; i++) { - if (Math.abs(LUcolj[i]) > Math.abs(LUcolj[p])) { - p = i; - } - } - - if (p !== j) { - for (k = 0; k < columns; k++) { - t = lu.get(p, k); - lu.set(p, k, lu.get(j, k)); - lu.set(j, k, t); - } - - v = pivotVector[p]; - pivotVector[p] = pivotVector[j]; - pivotVector[j] = v; - - pivotSign = -pivotSign; - } - - if (j < rows && lu.get(j, j) !== 0) { - for (i = j + 1; i < rows; i++) { - lu.set(i, j, lu.get(i, j) / lu.get(j, j)); - } - } - } - - this.LU = lu; - this.pivotVector = pivotVector; - this.pivotSign = pivotSign; - } - - /** - * - * @return {boolean} - */ - isSingular() { - var data = this.LU; - var col = data.columns; - for (var j = 0; j < col; j++) { - if (data[j][j] === 0) { - return true; - } - } - return false; - } - - /** - * - * @param {Matrix} value - * @return {Matrix} - */ - solve(value) { - value = matrix_Matrix.checkMatrix(value); - - var lu = this.LU; - var rows = lu.rows; - - if (rows !== value.rows) { - throw new Error('Invalid matrix dimensions'); - } - if (this.isSingular()) { - throw new Error('LU matrix is singular'); - } - - var count = value.columns; - var X = value.subMatrixRow(this.pivotVector, 0, count - 1); - var columns = lu.columns; - var i, j, k; - - for (k = 0; k < columns; k++) { - for (i = k + 1; i < columns; i++) { - for (j = 0; j < count; j++) { - X[i][j] -= X[k][j] * lu[i][k]; - } - } - } - for (k = columns - 1; k >= 0; k--) { - for (j = 0; j < count; j++) { - X[k][j] /= lu[k][k]; - } - for (i = 0; i < k; i++) { - for (j = 0; j < count; j++) { - X[i][j] -= X[k][j] * lu[i][k]; - } - } - } - return X; - } - - /** - * - * @return {number} - */ - get determinant() { - var data = this.LU; - if (!data.isSquare()) { - throw new Error('Matrix must be square'); - } - var determinant = this.pivotSign; - var col = data.columns; - for (var j = 0; j < col; j++) { - determinant *= data[j][j]; - } - return determinant; - } - - /** - * - * @return {Matrix} - */ - get lowerTriangularMatrix() { - var data = this.LU; - var rows = data.rows; - var columns = data.columns; - var X = new matrix_Matrix(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - if (i > j) { - X[i][j] = data[i][j]; - } else if (i === j) { - 
X[i][j] = 1; - } else { - X[i][j] = 0; - } - } - } - return X; - } - - /** - * - * @return {Matrix} - */ - get upperTriangularMatrix() { - var data = this.LU; - var rows = data.rows; - var columns = data.columns; - var X = new matrix_Matrix(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - if (i <= j) { - X[i][j] = data[i][j]; - } else { - X[i][j] = 0; - } - } - } - return X; - } - - /** - * - * @return {Array} - */ - get pivotPermutationVector() { - return this.pivotVector.slice(); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/util.js -function hypotenuse(a, b) { - var r = 0; - if (Math.abs(a) > Math.abs(b)) { - r = b / a; - return Math.abs(a) * Math.sqrt(1 + r * r); - } - if (b !== 0) { - r = a / b; - return Math.abs(b) * Math.sqrt(1 + r * r); - } - return 0; -} - -function getFilled2DArray(rows, columns, value) { - var array = new Array(rows); - for (var i = 0; i < rows; i++) { - array[i] = new Array(columns); - for (var j = 0; j < columns; j++) { - array[i][j] = value; - } - } - return array; -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/svd.js - - - - -/** - * @class SingularValueDecomposition - * @see https://github.com/accord-net/framework/blob/development/Sources/Accord.Math/Decompositions/SingularValueDecomposition.cs - * @param {Matrix} value - * @param {object} [options] - * @param {boolean} [options.computeLeftSingularVectors=true] - * @param {boolean} [options.computeRightSingularVectors=true] - * @param {boolean} [options.autoTranspose=false] - */ -class svd_SingularValueDecomposition { - constructor(value, options = {}) { - value = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(value); - - var m = value.rows; - var n = value.columns; - - const { - computeLeftSingularVectors = true, - computeRightSingularVectors = true, - autoTranspose = false - } = options; - - var wantu = Boolean(computeLeftSingularVectors); - var wantv = Boolean(computeRightSingularVectors); - - var swapped = false; - var a; - if (m < n) { - if (!autoTranspose) { - a = value.clone(); - // eslint-disable-next-line no-console - console.warn( - 'Computing SVD on a matrix with more columns than rows. 
Consider enabling autoTranspose' - ); - } else { - a = value.transpose(); - m = a.rows; - n = a.columns; - swapped = true; - var aux = wantu; - wantu = wantv; - wantv = aux; - } - } else { - a = value.clone(); - } - - var nu = Math.min(m, n); - var ni = Math.min(m + 1, n); - var s = new Array(ni); - var U = getFilled2DArray(m, nu, 0); - var V = getFilled2DArray(n, n, 0); - - var e = new Array(n); - var work = new Array(m); - - var si = new Array(ni); - for (let i = 0; i < ni; i++) si[i] = i; - - var nct = Math.min(m - 1, n); - var nrt = Math.max(0, Math.min(n - 2, m)); - var mrc = Math.max(nct, nrt); - - for (let k = 0; k < mrc; k++) { - if (k < nct) { - s[k] = 0; - for (let i = k; i < m; i++) { - s[k] = hypotenuse(s[k], a[i][k]); - } - if (s[k] !== 0) { - if (a[k][k] < 0) { - s[k] = -s[k]; - } - for (let i = k; i < m; i++) { - a[i][k] /= s[k]; - } - a[k][k] += 1; - } - s[k] = -s[k]; - } - - for (let j = k + 1; j < n; j++) { - if (k < nct && s[k] !== 0) { - let t = 0; - for (let i = k; i < m; i++) { - t += a[i][k] * a[i][j]; - } - t = -t / a[k][k]; - for (let i = k; i < m; i++) { - a[i][j] += t * a[i][k]; - } - } - e[j] = a[k][j]; - } - - if (wantu && k < nct) { - for (let i = k; i < m; i++) { - U[i][k] = a[i][k]; - } - } - - if (k < nrt) { - e[k] = 0; - for (let i = k + 1; i < n; i++) { - e[k] = hypotenuse(e[k], e[i]); - } - if (e[k] !== 0) { - if (e[k + 1] < 0) { - e[k] = 0 - e[k]; - } - for (let i = k + 1; i < n; i++) { - e[i] /= e[k]; - } - e[k + 1] += 1; - } - e[k] = -e[k]; - if (k + 1 < m && e[k] !== 0) { - for (let i = k + 1; i < m; i++) { - work[i] = 0; - } - for (let i = k + 1; i < m; i++) { - for (let j = k + 1; j < n; j++) { - work[i] += e[j] * a[i][j]; - } - } - for (let j = k + 1; j < n; j++) { - let t = -e[j] / e[k + 1]; - for (let i = k + 1; i < m; i++) { - a[i][j] += t * work[i]; - } - } - } - if (wantv) { - for (let i = k + 1; i < n; i++) { - V[i][k] = e[i]; - } - } - } - } - - let p = Math.min(n, m + 1); - if (nct < n) { - s[nct] = a[nct][nct]; - } - if (m < p) { - s[p - 1] = 0; - } - if (nrt + 1 < p) { - e[nrt] = a[nrt][p - 1]; - } - e[p - 1] = 0; - - if (wantu) { - for (let j = nct; j < nu; j++) { - for (let i = 0; i < m; i++) { - U[i][j] = 0; - } - U[j][j] = 1; - } - for (let k = nct - 1; k >= 0; k--) { - if (s[k] !== 0) { - for (let j = k + 1; j < nu; j++) { - let t = 0; - for (let i = k; i < m; i++) { - t += U[i][k] * U[i][j]; - } - t = -t / U[k][k]; - for (let i = k; i < m; i++) { - U[i][j] += t * U[i][k]; - } - } - for (let i = k; i < m; i++) { - U[i][k] = -U[i][k]; - } - U[k][k] = 1 + U[k][k]; - for (let i = 0; i < k - 1; i++) { - U[i][k] = 0; - } - } else { - for (let i = 0; i < m; i++) { - U[i][k] = 0; - } - U[k][k] = 1; - } - } - } - - if (wantv) { - for (let k = n - 1; k >= 0; k--) { - if (k < nrt && e[k] !== 0) { - for (let j = k + 1; j < n; j++) { - let t = 0; - for (let i = k + 1; i < n; i++) { - t += V[i][k] * V[i][j]; - } - t = -t / V[k + 1][k]; - for (let i = k + 1; i < n; i++) { - V[i][j] += t * V[i][k]; - } - } - } - for (let i = 0; i < n; i++) { - V[i][k] = 0; - } - V[k][k] = 1; - } - } - - var pp = p - 1; - var iter = 0; - var eps = Number.EPSILON; - while (p > 0) { - let k, kase; - for (k = p - 2; k >= -1; k--) { - if (k === -1) { - break; - } - const alpha = - Number.MIN_VALUE + eps * Math.abs(s[k] + Math.abs(s[k + 1])); - if (Math.abs(e[k]) <= alpha || Number.isNaN(e[k])) { - e[k] = 0; - break; - } - } - if (k === p - 2) { - kase = 4; - } else { - let ks; - for (ks = p - 1; ks >= k; ks--) { - if (ks === k) { - break; - } - let t = - (ks !== p ? 
Math.abs(e[ks]) : 0) + - (ks !== k + 1 ? Math.abs(e[ks - 1]) : 0); - if (Math.abs(s[ks]) <= eps * t) { - s[ks] = 0; - break; - } - } - if (ks === k) { - kase = 3; - } else if (ks === p - 1) { - kase = 1; - } else { - kase = 2; - k = ks; - } - } - - k++; - - switch (kase) { - case 1: { - let f = e[p - 2]; - e[p - 2] = 0; - for (let j = p - 2; j >= k; j--) { - let t = hypotenuse(s[j], f); - let cs = s[j] / t; - let sn = f / t; - s[j] = t; - if (j !== k) { - f = -sn * e[j - 1]; - e[j - 1] = cs * e[j - 1]; - } - if (wantv) { - for (let i = 0; i < n; i++) { - t = cs * V[i][j] + sn * V[i][p - 1]; - V[i][p - 1] = -sn * V[i][j] + cs * V[i][p - 1]; - V[i][j] = t; - } - } - } - break; - } - case 2: { - let f = e[k - 1]; - e[k - 1] = 0; - for (let j = k; j < p; j++) { - let t = hypotenuse(s[j], f); - let cs = s[j] / t; - let sn = f / t; - s[j] = t; - f = -sn * e[j]; - e[j] = cs * e[j]; - if (wantu) { - for (let i = 0; i < m; i++) { - t = cs * U[i][j] + sn * U[i][k - 1]; - U[i][k - 1] = -sn * U[i][j] + cs * U[i][k - 1]; - U[i][j] = t; - } - } - } - break; - } - case 3: { - const scale = Math.max( - Math.abs(s[p - 1]), - Math.abs(s[p - 2]), - Math.abs(e[p - 2]), - Math.abs(s[k]), - Math.abs(e[k]) - ); - const sp = s[p - 1] / scale; - const spm1 = s[p - 2] / scale; - const epm1 = e[p - 2] / scale; - const sk = s[k] / scale; - const ek = e[k] / scale; - const b = ((spm1 + sp) * (spm1 - sp) + epm1 * epm1) / 2; - const c = sp * epm1 * (sp * epm1); - let shift = 0; - if (b !== 0 || c !== 0) { - if (b < 0) { - shift = 0 - Math.sqrt(b * b + c); - } else { - shift = Math.sqrt(b * b + c); - } - shift = c / (b + shift); - } - let f = (sk + sp) * (sk - sp) + shift; - let g = sk * ek; - for (let j = k; j < p - 1; j++) { - let t = hypotenuse(f, g); - if (t === 0) t = Number.MIN_VALUE; - let cs = f / t; - let sn = g / t; - if (j !== k) { - e[j - 1] = t; - } - f = cs * s[j] + sn * e[j]; - e[j] = cs * e[j] - sn * s[j]; - g = sn * s[j + 1]; - s[j + 1] = cs * s[j + 1]; - if (wantv) { - for (let i = 0; i < n; i++) { - t = cs * V[i][j] + sn * V[i][j + 1]; - V[i][j + 1] = -sn * V[i][j] + cs * V[i][j + 1]; - V[i][j] = t; - } - } - t = hypotenuse(f, g); - if (t === 0) t = Number.MIN_VALUE; - cs = f / t; - sn = g / t; - s[j] = t; - f = cs * e[j] + sn * s[j + 1]; - s[j + 1] = -sn * e[j] + cs * s[j + 1]; - g = sn * e[j + 1]; - e[j + 1] = cs * e[j + 1]; - if (wantu && j < m - 1) { - for (let i = 0; i < m; i++) { - t = cs * U[i][j] + sn * U[i][j + 1]; - U[i][j + 1] = -sn * U[i][j] + cs * U[i][j + 1]; - U[i][j] = t; - } - } - } - e[p - 2] = f; - iter = iter + 1; - break; - } - case 4: { - if (s[k] <= 0) { - s[k] = s[k] < 0 ? -s[k] : 0; - if (wantv) { - for (let i = 0; i <= pp; i++) { - V[i][k] = -V[i][k]; - } - } - } - while (k < pp) { - if (s[k] >= s[k + 1]) { - break; - } - let t = s[k]; - s[k] = s[k + 1]; - s[k + 1] = t; - if (wantv && k < n - 1) { - for (let i = 0; i < n; i++) { - t = V[i][k + 1]; - V[i][k + 1] = V[i][k]; - V[i][k] = t; - } - } - if (wantu && k < m - 1) { - for (let i = 0; i < m; i++) { - t = U[i][k + 1]; - U[i][k + 1] = U[i][k]; - U[i][k] = t; - } - } - k++; - } - iter = 0; - p--; - break; - } - // no default - } - } - - if (swapped) { - var tmp = V; - V = U; - U = tmp; - } - - this.m = m; - this.n = n; - this.s = s; - this.U = U; - this.V = V; - } - - /** - * Solve a problem of least square (Ax=b) by using the SVD. Useful when A is singular. When A is not singular, it would be better to use qr.solve(value). 
- * Example : We search to approximate x, with A matrix shape m*n, x vector size n, b vector size m (m > n). We will use : - * var svd = SingularValueDecomposition(A); - * var x = svd.solve(b); - * @param {Matrix} value - Matrix 1D which is the vector b (in the equation Ax = b) - * @return {Matrix} - The vector x - */ - solve(value) { - var Y = value; - var e = this.threshold; - var scols = this.s.length; - var Ls = matrix_Matrix.zeros(scols, scols); - - for (let i = 0; i < scols; i++) { - if (Math.abs(this.s[i]) <= e) { - Ls[i][i] = 0; - } else { - Ls[i][i] = 1 / this.s[i]; - } - } - - var U = this.U; - var V = this.rightSingularVectors; - - var VL = V.mmul(Ls); - var vrows = V.rows; - var urows = U.length; - var VLU = matrix_Matrix.zeros(vrows, urows); - - for (let i = 0; i < vrows; i++) { - for (let j = 0; j < urows; j++) { - let sum = 0; - for (let k = 0; k < scols; k++) { - sum += VL[i][k] * U[j][k]; - } - VLU[i][j] = sum; - } - } - - return VLU.mmul(Y); - } - - /** - * - * @param {Array} value - * @return {Matrix} - */ - solveForDiagonal(value) { - return this.solve(matrix_Matrix.diag(value)); - } - - /** - * Get the inverse of the matrix. We compute the inverse of a matrix using SVD when this matrix is singular or ill-conditioned. Example : - * var svd = SingularValueDecomposition(A); - * var inverseA = svd.inverse(); - * @return {Matrix} - The approximation of the inverse of the matrix - */ - inverse() { - var V = this.V; - var e = this.threshold; - var vrows = V.length; - var vcols = V[0].length; - var X = new matrix_Matrix(vrows, this.s.length); - - for (let i = 0; i < vrows; i++) { - for (let j = 0; j < vcols; j++) { - if (Math.abs(this.s[j]) > e) { - X[i][j] = V[i][j] / this.s[j]; - } else { - X[i][j] = 0; - } - } - } - - var U = this.U; - - var urows = U.length; - var ucols = U[0].length; - var Y = new matrix_Matrix(vrows, urows); - - for (let i = 0; i < vrows; i++) { - for (let j = 0; j < urows; j++) { - let sum = 0; - for (let k = 0; k < ucols; k++) { - sum += X[i][k] * U[j][k]; - } - Y[i][j] = sum; - } - } - - return Y; - } - - /** - * - * @return {number} - */ - get condition() { - return this.s[0] / this.s[Math.min(this.m, this.n) - 1]; - } - - /** - * - * @return {number} - */ - get norm2() { - return this.s[0]; - } - - /** - * - * @return {number} - */ - get rank() { - var tol = Math.max(this.m, this.n) * this.s[0] * Number.EPSILON; - var r = 0; - var s = this.s; - for (var i = 0, ii = s.length; i < ii; i++) { - if (s[i] > tol) { - r++; - } - } - return r; - } - - /** - * - * @return {Array} - */ - get diagonal() { - return this.s; - } - - /** - * - * @return {number} - */ - get threshold() { - return Number.EPSILON / 2 * Math.max(this.m, this.n) * this.s[0]; - } - - /** - * - * @return {Matrix} - */ - get leftSingularVectors() { - if (!matrix_Matrix.isMatrix(this.U)) { - this.U = new matrix_Matrix(this.U); - } - return this.U; - } - - /** - * - * @return {Matrix} - */ - get rightSingularVectors() { - if (!matrix_Matrix.isMatrix(this.V)) { - this.V = new matrix_Matrix(this.V); - } - return this.V; - } - - /** - * - * @return {Matrix} - */ - get diagonalMatrix() { - return matrix_Matrix.diag(this.s); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/util.js - - -/** - * @private - * Check that a row index is not out of bounds - * @param {Matrix} matrix - * @param {number} index - * @param {boolean} [outer] - */ -function checkRowIndex(matrix, index, outer) { - var max = outer ? 
matrix.rows : matrix.rows - 1; - if (index < 0 || index > max) { - throw new RangeError('Row index out of range'); - } -} - -/** - * @private - * Check that a column index is not out of bounds - * @param {Matrix} matrix - * @param {number} index - * @param {boolean} [outer] - */ -function checkColumnIndex(matrix, index, outer) { - var max = outer ? matrix.columns : matrix.columns - 1; - if (index < 0 || index > max) { - throw new RangeError('Column index out of range'); - } -} - -/** - * @private - * Check that the provided vector is an array with the right length - * @param {Matrix} matrix - * @param {Array|Matrix} vector - * @return {Array} - * @throws {RangeError} - */ -function checkRowVector(matrix, vector) { - if (vector.to1DArray) { - vector = vector.to1DArray(); - } - if (vector.length !== matrix.columns) { - throw new RangeError( - 'vector size must be the same as the number of columns' - ); - } - return vector; -} - -/** - * @private - * Check that the provided vector is an array with the right length - * @param {Matrix} matrix - * @param {Array|Matrix} vector - * @return {Array} - * @throws {RangeError} - */ -function checkColumnVector(matrix, vector) { - if (vector.to1DArray) { - vector = vector.to1DArray(); - } - if (vector.length !== matrix.rows) { - throw new RangeError('vector size must be the same as the number of rows'); - } - return vector; -} - -function checkIndices(matrix, rowIndices, columnIndices) { - return { - row: checkRowIndices(matrix, rowIndices), - column: checkColumnIndices(matrix, columnIndices) - }; -} - -function checkRowIndices(matrix, rowIndices) { - if (typeof rowIndices !== 'object') { - throw new TypeError('unexpected type for row indices'); - } - - var rowOut = rowIndices.some((r) => { - return r < 0 || r >= matrix.rows; - }); - - if (rowOut) { - throw new RangeError('row indices are out of range'); - } - - if (!Array.isArray(rowIndices)) rowIndices = Array.from(rowIndices); - - return rowIndices; -} - -function checkColumnIndices(matrix, columnIndices) { - if (typeof columnIndices !== 'object') { - throw new TypeError('unexpected type for column indices'); - } - - var columnOut = columnIndices.some((c) => { - return c < 0 || c >= matrix.columns; - }); - - if (columnOut) { - throw new RangeError('column indices are out of range'); - } - if (!Array.isArray(columnIndices)) columnIndices = Array.from(columnIndices); - - return columnIndices; -} - -function checkRange(matrix, startRow, endRow, startColumn, endColumn) { - if (arguments.length !== 5) { - throw new RangeError('expected 4 arguments'); - } - checkNumber('startRow', startRow); - checkNumber('endRow', endRow); - checkNumber('startColumn', startColumn); - checkNumber('endColumn', endColumn); - if ( - startRow > endRow || - startColumn > endColumn || - startRow < 0 || - startRow >= matrix.rows || - endRow < 0 || - endRow >= matrix.rows || - startColumn < 0 || - startColumn >= matrix.columns || - endColumn < 0 || - endColumn >= matrix.columns - ) { - throw new RangeError('Submatrix indices are out of range'); - } -} - -function getRange(from, to) { - var arr = new Array(to - from + 1); - for (var i = 0; i < arr.length; i++) { - arr[i] = from + i; - } - return arr; -} - -function sumByRow(matrix) { - var sum = matrix_Matrix.zeros(matrix.rows, 1); - for (var i = 0; i < matrix.rows; ++i) { - for (var j = 0; j < matrix.columns; ++j) { - sum.set(i, 0, sum.get(i, 0) + matrix.get(i, j)); - } - } - return sum; -} - -function sumByColumn(matrix) { - var sum = matrix_Matrix.zeros(1, matrix.columns); - for 
(var i = 0; i < matrix.rows; ++i) { - for (var j = 0; j < matrix.columns; ++j) { - sum.set(0, j, sum.get(0, j) + matrix.get(i, j)); - } - } - return sum; -} - -function sumAll(matrix) { - var v = 0; - for (var i = 0; i < matrix.rows; i++) { - for (var j = 0; j < matrix.columns; j++) { - v += matrix.get(i, j); - } - } - return v; -} - -function checkNumber(name, value) { - if (typeof value !== 'number') { - throw new TypeError(`${name} must be a number`); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/base.js - - - -class base_BaseView extends AbstractMatrix() { - constructor(matrix, rows, columns) { - super(); - this.matrix = matrix; - this.rows = rows; - this.columns = columns; - } - - static get [Symbol.species]() { - return matrix_Matrix; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/transpose.js - - -class transpose_MatrixTransposeView extends base_BaseView { - constructor(matrix) { - super(matrix, matrix.columns, matrix.rows); - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(columnIndex, rowIndex, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(columnIndex, rowIndex); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/row.js - - -class row_MatrixRowView extends base_BaseView { - constructor(matrix, row) { - super(matrix, 1, matrix.columns); - this.row = row; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(this.row, columnIndex, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(this.row, columnIndex); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/sub.js - - - - -class sub_MatrixSubView extends base_BaseView { - constructor(matrix, startRow, endRow, startColumn, endColumn) { - checkRange(matrix, startRow, endRow, startColumn, endColumn); - super(matrix, endRow - startRow + 1, endColumn - startColumn + 1); - this.startRow = startRow; - this.startColumn = startColumn; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set( - this.startRow + rowIndex, - this.startColumn + columnIndex, - value - ); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get( - this.startRow + rowIndex, - this.startColumn + columnIndex - ); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/selection.js - - - - -class selection_MatrixSelectionView extends base_BaseView { - constructor(matrix, rowIndices, columnIndices) { - var indices = checkIndices(matrix, rowIndices, columnIndices); - super(matrix, indices.row.length, indices.column.length); - this.rowIndices = indices.row; - this.columnIndices = indices.column; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set( - this.rowIndices[rowIndex], - this.columnIndices[columnIndex], - value - ); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get( - this.rowIndices[rowIndex], - this.columnIndices[columnIndex] - ); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/rowSelection.js - - - - -class rowSelection_MatrixRowSelectionView extends base_BaseView { - constructor(matrix, rowIndices) { - rowIndices = checkRowIndices(matrix, rowIndices); - super(matrix, rowIndices.length, matrix.columns); - this.rowIndices = rowIndices; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(this.rowIndices[rowIndex], columnIndex, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(this.rowIndices[rowIndex], columnIndex); - } -} - -// CONCATENATED MODULE: 
./node_modules/ml-matrix/src/views/columnSelection.js - - - - -class columnSelection_MatrixColumnSelectionView extends base_BaseView { - constructor(matrix, columnIndices) { - columnIndices = checkColumnIndices(matrix, columnIndices); - super(matrix, matrix.rows, columnIndices.length); - this.columnIndices = columnIndices; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(rowIndex, this.columnIndices[columnIndex], value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(rowIndex, this.columnIndices[columnIndex]); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/column.js - - -class column_MatrixColumnView extends base_BaseView { - constructor(matrix, column) { - super(matrix, matrix.rows, 1); - this.column = column; - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(rowIndex, this.column, value); - return this; - } - - get(rowIndex) { - return this.matrix.get(rowIndex, this.column); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/flipRow.js - - -class flipRow_MatrixFlipRowView extends base_BaseView { - constructor(matrix) { - super(matrix, matrix.rows, matrix.columns); - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(this.rows - rowIndex - 1, columnIndex, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(this.rows - rowIndex - 1, columnIndex); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/views/flipColumn.js - - -class flipColumn_MatrixFlipColumnView extends base_BaseView { - constructor(matrix) { - super(matrix, matrix.rows, matrix.columns); - } - - set(rowIndex, columnIndex, value) { - this.matrix.set(rowIndex, this.columns - columnIndex - 1, value); - return this; - } - - get(rowIndex, columnIndex) { - return this.matrix.get(rowIndex, this.columns - columnIndex - 1); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/abstractMatrix.js - - - - - - - - - - - - - - - -function AbstractMatrix(superCtor) { - if (superCtor === undefined) superCtor = Object; - - /** - * Real matrix - * @class Matrix - * @param {number|Array|Matrix} nRows - Number of rows of the new matrix, - * 2D array containing the data or Matrix instance to clone - * @param {number} [nColumns] - Number of columns of the new matrix - */ - class Matrix extends superCtor { - static get [Symbol.species]() { - return this; - } - - /** - * Constructs a Matrix with the chosen dimensions from a 1D array - * @param {number} newRows - Number of rows - * @param {number} newColumns - Number of columns - * @param {Array} newData - A 1D array containing data for the matrix - * @return {Matrix} - The new matrix - */ - static from1DArray(newRows, newColumns, newData) { - var length = newRows * newColumns; - if (length !== newData.length) { - throw new RangeError('Data length does not match given dimensions'); - } - var newMatrix = new this(newRows, newColumns); - for (var row = 0; row < newRows; row++) { - for (var column = 0; column < newColumns; column++) { - newMatrix.set(row, column, newData[row * newColumns + column]); - } - } - return newMatrix; - } - - /** - * Creates a row vector, a matrix with only one row. - * @param {Array} newData - A 1D array containing data for the vector - * @return {Matrix} - The new matrix - */ - static rowVector(newData) { - var vector = new this(1, newData.length); - for (var i = 0; i < newData.length; i++) { - vector.set(0, i, newData[i]); - } - return vector; - } - - /** - * Creates a column vector, a matrix with only one column. 
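- * e.g. (illustrative): Matrix.columnVector([1, 2, 3]) produces a 3x1 matrix holding 1, 2 and 3.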
- * @param {Array} newData - A 1D array containing data for the vector - * @return {Matrix} - The new matrix - */ - static columnVector(newData) { - var vector = new this(newData.length, 1); - for (var i = 0; i < newData.length; i++) { - vector.set(i, 0, newData[i]); - } - return vector; - } - - /** - * Creates an empty matrix with the given dimensions. Values will be undefined. Same as using new Matrix(rows, columns). - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @return {Matrix} - The new matrix - */ - static empty(rows, columns) { - return new this(rows, columns); - } - - /** - * Creates a matrix with the given dimensions. Values will be set to zero. - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @return {Matrix} - The new matrix - */ - static zeros(rows, columns) { - return this.empty(rows, columns).fill(0); - } - - /** - * Creates a matrix with the given dimensions. Values will be set to one. - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @return {Matrix} - The new matrix - */ - static ones(rows, columns) { - return this.empty(rows, columns).fill(1); - } - - /** - * Creates a matrix with the given dimensions. Values will be randomly set. - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @param {function} [rng=Math.random] - Random number generator - * @return {Matrix} The new matrix - */ - static rand(rows, columns, rng) { - if (rng === undefined) rng = Math.random; - var matrix = this.empty(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - matrix.set(i, j, rng()); - } - } - return matrix; - } - - /** - * Creates a matrix with the given dimensions. Values will be random integers. - * @param {number} rows - Number of rows - * @param {number} columns - Number of columns - * @param {number} [maxValue=1000] - Maximum value - * @param {function} [rng=Math.random] - Random number generator - * @return {Matrix} The new matrix - */ - static randInt(rows, columns, maxValue, rng) { - if (maxValue === undefined) maxValue = 1000; - if (rng === undefined) rng = Math.random; - var matrix = this.empty(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - var value = Math.floor(rng() * maxValue); - matrix.set(i, j, value); - } - } - return matrix; - } - - /** - * Creates an identity matrix with the given dimension. Values of the diagonal will be 1 and others will be 0. - * @param {number} rows - Number of rows - * @param {number} [columns=rows] - Number of columns - * @param {number} [value=1] - Value to fill the diagonal with - * @return {Matrix} - The new identity matrix - */ - static eye(rows, columns, value) { - if (columns === undefined) columns = rows; - if (value === undefined) value = 1; - var min = Math.min(rows, columns); - var matrix = this.zeros(rows, columns); - for (var i = 0; i < min; i++) { - matrix.set(i, i, value); - } - return matrix; - } - - /** - * Creates a diagonal matrix based on the given array. 
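- * e.g. (illustrative): Matrix.diag([1, 2, 3]) produces a 3x3 matrix with 1, 2, 3 on the diagonal and zeros elsewhere.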
- * @param {Array} data - Array containing the data for the diagonal - * @param {number} [rows] - Number of rows (Default: data.length) - * @param {number} [columns] - Number of columns (Default: rows) - * @return {Matrix} - The new diagonal matrix - */ - static diag(data, rows, columns) { - var l = data.length; - if (rows === undefined) rows = l; - if (columns === undefined) columns = rows; - var min = Math.min(l, rows, columns); - var matrix = this.zeros(rows, columns); - for (var i = 0; i < min; i++) { - matrix.set(i, i, data[i]); - } - return matrix; - } - - /** - * Returns a matrix whose elements are the minimum between matrix1 and matrix2 - * @param {Matrix} matrix1 - * @param {Matrix} matrix2 - * @return {Matrix} - */ - static min(matrix1, matrix2) { - matrix1 = this.checkMatrix(matrix1); - matrix2 = this.checkMatrix(matrix2); - var rows = matrix1.rows; - var columns = matrix1.columns; - var result = new this(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - result.set(i, j, Math.min(matrix1.get(i, j), matrix2.get(i, j))); - } - } - return result; - } - - /** - * Returns a matrix whose elements are the maximum between matrix1 and matrix2 - * @param {Matrix} matrix1 - * @param {Matrix} matrix2 - * @return {Matrix} - */ - static max(matrix1, matrix2) { - matrix1 = this.checkMatrix(matrix1); - matrix2 = this.checkMatrix(matrix2); - var rows = matrix1.rows; - var columns = matrix1.columns; - var result = new this(rows, columns); - for (var i = 0; i < rows; i++) { - for (var j = 0; j < columns; j++) { - result.set(i, j, Math.max(matrix1.get(i, j), matrix2.get(i, j))); - } - } - return result; - } - - /** - * Check that the provided value is a Matrix and tries to instantiate one if not - * @param {*} value - The value to check - * @return {Matrix} - */ - static checkMatrix(value) { - return Matrix.isMatrix(value) ? value : new this(value); - } - - /** - * Returns true if the argument is a Matrix, false otherwise - * @param {*} value - The value to check - * @return {boolean} - */ - static isMatrix(value) { - return (value != null) && (value.klass === 'Matrix'); - } - - /** - * @prop {number} size - The number of elements in the matrix. - */ - get size() { - return this.rows * this.columns; - } - - /** - * Applies a callback for each element of the matrix. The function is called in the matrix (this) context. 
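- * e.g. (illustrative): m.apply(function (i, j) { this.set(i, j, this.get(i, j) * 2); }) doubles every element in place.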
- * @param {function} callback - Function that will be called with two parameters : i (row) and j (column) - * @return {Matrix} this - */ - apply(callback) { - if (typeof callback !== 'function') { - throw new TypeError('callback must be a function'); - } - var ii = this.rows; - var jj = this.columns; - for (var i = 0; i < ii; i++) { - for (var j = 0; j < jj; j++) { - callback.call(this, i, j); - } - } - return this; - } - - /** - * Returns a new 1D array filled row by row with the matrix values - * @return {Array} - */ - to1DArray() { - var array = new Array(this.size); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - array[i * this.columns + j] = this.get(i, j); - } - } - return array; - } - - /** - * Returns a 2D array containing a copy of the data - * @return {Array} - */ - to2DArray() { - var copy = new Array(this.rows); - for (var i = 0; i < this.rows; i++) { - copy[i] = new Array(this.columns); - for (var j = 0; j < this.columns; j++) { - copy[i][j] = this.get(i, j); - } - } - return copy; - } - - /** - * @return {boolean} true if the matrix has one row - */ - isRowVector() { - return this.rows === 1; - } - - /** - * @return {boolean} true if the matrix has one column - */ - isColumnVector() { - return this.columns === 1; - } - - /** - * @return {boolean} true if the matrix has one row or one column - */ - isVector() { - return (this.rows === 1) || (this.columns === 1); - } - - /** - * @return {boolean} true if the matrix has the same number of rows and columns - */ - isSquare() { - return this.rows === this.columns; - } - - /** - * @return {boolean} true if the matrix is square and has the same values on both sides of the diagonal - */ - isSymmetric() { - if (this.isSquare()) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j <= i; j++) { - if (this.get(i, j) !== this.get(j, i)) { - return false; - } - } - } - return true; - } - return false; - } - - /** - * Sets a given element of the matrix. mat.set(3,4,1) is equivalent to mat[3][4]=1 - * @abstract - * @param {number} rowIndex - Index of the row - * @param {number} columnIndex - Index of the column - * @param {number} value - The new value for the element - * @return {Matrix} this - */ - set(rowIndex, columnIndex, value) { // eslint-disable-line no-unused-vars - throw new Error('set method is unimplemented'); - } - - /** - * Returns the given element of the matrix. mat.get(3,4) is equivalent to matrix[3][4] - * @abstract - * @param {number} rowIndex - Index of the row - * @param {number} columnIndex - Index of the column - * @return {number} - */ - get(rowIndex, columnIndex) { // eslint-disable-line no-unused-vars - throw new Error('get method is unimplemented'); - } - - /** - * Creates a new matrix that is a repetition of the current matrix. 
New matrix has rowRep times the number of - * rows of the matrix, and colRep times the number of columns of the matrix - * @param {number} rowRep - Number of times the rows should be repeated - * @param {number} colRep - Number of times the columns should be re - * @return {Matrix} - * @example - * var matrix = new Matrix([[1,2]]); - * matrix.repeat(2); // [[1,2],[1,2]] - */ - repeat(rowRep, colRep) { - rowRep = rowRep || 1; - colRep = colRep || 1; - var matrix = new this.constructor[Symbol.species](this.rows * rowRep, this.columns * colRep); - for (var i = 0; i < rowRep; i++) { - for (var j = 0; j < colRep; j++) { - matrix.setSubMatrix(this, this.rows * i, this.columns * j); - } - } - return matrix; - } - - /** - * Fills the matrix with a given value. All elements will be set to this value. - * @param {number} value - New value - * @return {Matrix} this - */ - fill(value) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, value); - } - } - return this; - } - - /** - * Negates the matrix. All elements will be multiplied by (-1) - * @return {Matrix} this - */ - neg() { - return this.mulS(-1); - } - - /** - * Returns a new array from the given row index - * @param {number} index - Row index - * @return {Array} - */ - getRow(index) { - checkRowIndex(this, index); - var row = new Array(this.columns); - for (var i = 0; i < this.columns; i++) { - row[i] = this.get(index, i); - } - return row; - } - - /** - * Returns a new row vector from the given row index - * @param {number} index - Row index - * @return {Matrix} - */ - getRowVector(index) { - return this.constructor.rowVector(this.getRow(index)); - } - - /** - * Sets a row at the given index - * @param {number} index - Row index - * @param {Array|Matrix} array - Array or vector - * @return {Matrix} this - */ - setRow(index, array) { - checkRowIndex(this, index); - array = checkRowVector(this, array); - for (var i = 0; i < this.columns; i++) { - this.set(index, i, array[i]); - } - return this; - } - - /** - * Swaps two rows - * @param {number} row1 - First row index - * @param {number} row2 - Second row index - * @return {Matrix} this - */ - swapRows(row1, row2) { - checkRowIndex(this, row1); - checkRowIndex(this, row2); - for (var i = 0; i < this.columns; i++) { - var temp = this.get(row1, i); - this.set(row1, i, this.get(row2, i)); - this.set(row2, i, temp); - } - return this; - } - - /** - * Returns a new array from the given column index - * @param {number} index - Column index - * @return {Array} - */ - getColumn(index) { - checkColumnIndex(this, index); - var column = new Array(this.rows); - for (var i = 0; i < this.rows; i++) { - column[i] = this.get(i, index); - } - return column; - } - - /** - * Returns a new column vector from the given column index - * @param {number} index - Column index - * @return {Matrix} - */ - getColumnVector(index) { - return this.constructor.columnVector(this.getColumn(index)); - } - - /** - * Sets a column at the given index - * @param {number} index - Column index - * @param {Array|Matrix} array - Array or vector - * @return {Matrix} this - */ - setColumn(index, array) { - checkColumnIndex(this, index); - array = checkColumnVector(this, array); - for (var i = 0; i < this.rows; i++) { - this.set(i, index, array[i]); - } - return this; - } - - /** - * Swaps two columns - * @param {number} column1 - First column index - * @param {number} column2 - Second column index - * @return {Matrix} this - */ - swapColumns(column1, column2) { - checkColumnIndex(this, 
column1); - checkColumnIndex(this, column2); - for (var i = 0; i < this.rows; i++) { - var temp = this.get(i, column1); - this.set(i, column1, this.get(i, column2)); - this.set(i, column2, temp); - } - return this; - } - - /** - * Adds the values of a vector to each row - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - addRowVector(vector) { - vector = checkRowVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) + vector[j]); - } - } - return this; - } - - /** - * Subtracts the values of a vector from each row - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - subRowVector(vector) { - vector = checkRowVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) - vector[j]); - } - } - return this; - } - - /** - * Multiplies the values of a vector with each row - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - mulRowVector(vector) { - vector = checkRowVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) * vector[j]); - } - } - return this; - } - - /** - * Divides the values of each row by those of a vector - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - divRowVector(vector) { - vector = checkRowVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) / vector[j]); - } - } - return this; - } - - /** - * Adds the values of a vector to each column - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - addColumnVector(vector) { - vector = checkColumnVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) + vector[i]); - } - } - return this; - } - - /** - * Subtracts the values of a vector from each column - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - subColumnVector(vector) { - vector = checkColumnVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) - vector[i]); - } - } - return this; - } - - /** - * Multiplies the values of a vector with each column - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - mulColumnVector(vector) { - vector = checkColumnVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) * vector[i]); - } - } - return this; - } - - /** - * Divides the values of each column by those of a vector - * @param {Array|Matrix} vector - Array or vector - * @return {Matrix} this - */ - divColumnVector(vector) { - vector = checkColumnVector(this, vector); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) / vector[i]); - } - } - return this; - } - - /** - * Multiplies the values of a row with a scalar - * @param {number} index - Row index - * @param {number} value - * @return {Matrix} this - */ - mulRow(index, value) { - checkRowIndex(this, index); - for (var i = 0; i < this.columns; i++) { - this.set(index, i, this.get(index, i) * value); - } - return this; - } - - /** - * Multiplies the values of a column with a scalar - * @param 
{number} index - Column index - * @param {number} value - * @return {Matrix} this - */ - mulColumn(index, value) { - checkColumnIndex(this, index); - for (var i = 0; i < this.rows; i++) { - this.set(i, index, this.get(i, index) * value); - } - return this; - } - - /** - * Returns the maximum value of the matrix - * @return {number} - */ - max() { - var v = this.get(0, 0); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - if (this.get(i, j) > v) { - v = this.get(i, j); - } - } - } - return v; - } - - /** - * Returns the index of the maximum value - * @return {Array} - */ - maxIndex() { - var v = this.get(0, 0); - var idx = [0, 0]; - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - if (this.get(i, j) > v) { - v = this.get(i, j); - idx[0] = i; - idx[1] = j; - } - } - } - return idx; - } - - /** - * Returns the minimum value of the matrix - * @return {number} - */ - min() { - var v = this.get(0, 0); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - if (this.get(i, j) < v) { - v = this.get(i, j); - } - } - } - return v; - } - - /** - * Returns the index of the minimum value - * @return {Array} - */ - minIndex() { - var v = this.get(0, 0); - var idx = [0, 0]; - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - if (this.get(i, j) < v) { - v = this.get(i, j); - idx[0] = i; - idx[1] = j; - } - } - } - return idx; - } - - /** - * Returns the maximum value of one row - * @param {number} row - Row index - * @return {number} - */ - maxRow(row) { - checkRowIndex(this, row); - var v = this.get(row, 0); - for (var i = 1; i < this.columns; i++) { - if (this.get(row, i) > v) { - v = this.get(row, i); - } - } - return v; - } - - /** - * Returns the index of the maximum value of one row - * @param {number} row - Row index - * @return {Array} - */ - maxRowIndex(row) { - checkRowIndex(this, row); - var v = this.get(row, 0); - var idx = [row, 0]; - for (var i = 1; i < this.columns; i++) { - if (this.get(row, i) > v) { - v = this.get(row, i); - idx[1] = i; - } - } - return idx; - } - - /** - * Returns the minimum value of one row - * @param {number} row - Row index - * @return {number} - */ - minRow(row) { - checkRowIndex(this, row); - var v = this.get(row, 0); - for (var i = 1; i < this.columns; i++) { - if (this.get(row, i) < v) { - v = this.get(row, i); - } - } - return v; - } - - /** - * Returns the index of the maximum value of one row - * @param {number} row - Row index - * @return {Array} - */ - minRowIndex(row) { - checkRowIndex(this, row); - var v = this.get(row, 0); - var idx = [row, 0]; - for (var i = 1; i < this.columns; i++) { - if (this.get(row, i) < v) { - v = this.get(row, i); - idx[1] = i; - } - } - return idx; - } - - /** - * Returns the maximum value of one column - * @param {number} column - Column index - * @return {number} - */ - maxColumn(column) { - checkColumnIndex(this, column); - var v = this.get(0, column); - for (var i = 1; i < this.rows; i++) { - if (this.get(i, column) > v) { - v = this.get(i, column); - } - } - return v; - } - - /** - * Returns the index of the maximum value of one column - * @param {number} column - Column index - * @return {Array} - */ - maxColumnIndex(column) { - checkColumnIndex(this, column); - var v = this.get(0, column); - var idx = [0, column]; - for (var i = 1; i < this.rows; i++) { - if (this.get(i, column) > v) { - v = this.get(i, column); - idx[0] = i; - } - } - return idx; - } - - /** - * Returns the minimum value of 
one column - * @param {number} column - Column index - * @return {number} - */ - minColumn(column) { - checkColumnIndex(this, column); - var v = this.get(0, column); - for (var i = 1; i < this.rows; i++) { - if (this.get(i, column) < v) { - v = this.get(i, column); - } - } - return v; - } - - /** - * Returns the index of the minimum value of one column - * @param {number} column - Column index - * @return {Array} - */ - minColumnIndex(column) { - checkColumnIndex(this, column); - var v = this.get(0, column); - var idx = [0, column]; - for (var i = 1; i < this.rows; i++) { - if (this.get(i, column) < v) { - v = this.get(i, column); - idx[0] = i; - } - } - return idx; - } - - /** - * Returns an array containing the diagonal values of the matrix - * @return {Array} - */ - diag() { - var min = Math.min(this.rows, this.columns); - var diag = new Array(min); - for (var i = 0; i < min; i++) { - diag[i] = this.get(i, i); - } - return diag; - } - - /** - * Returns the sum by the argument given, if no argument given, - * it returns the sum of all elements of the matrix. - * @param {string} by - sum by 'row' or 'column'. - * @return {Matrix|number} - */ - sum(by) { - switch (by) { - case 'row': - return sumByRow(this); - case 'column': - return sumByColumn(this); - default: - return sumAll(this); - } - } - - /** - * Returns the mean of all elements of the matrix - * @return {number} - */ - mean() { - return this.sum() / this.size; - } - - /** - * Returns the product of all elements of the matrix - * @return {number} - */ - prod() { - var prod = 1; - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - prod *= this.get(i, j); - } - } - return prod; - } - - /** - * Returns the norm of a matrix. - * @param {string} type - "frobenius" (default) or "max" return resp. the Frobenius norm and the max norm. 
- * @return {number} - */ - norm(type = 'frobenius') { - var result = 0; - if (type === 'max') { - return this.max(); - } else if (type === 'frobenius') { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - result = result + this.get(i, j) * this.get(i, j); - } - } - return Math.sqrt(result); - } else { - throw new RangeError(`unknown norm type: ${type}`); - } - } - - /** - * Computes the cumulative sum of the matrix elements (in place, row by row) - * @return {Matrix} this - */ - cumulativeSum() { - var sum = 0; - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - sum += this.get(i, j); - this.set(i, j, sum); - } - } - return this; - } - - /** - * Computes the dot (scalar) product between the matrix and another - * @param {Matrix} vector2 vector - * @return {number} - */ - dot(vector2) { - if (Matrix.isMatrix(vector2)) vector2 = vector2.to1DArray(); - var vector1 = this.to1DArray(); - if (vector1.length !== vector2.length) { - throw new RangeError('vectors do not have the same size'); - } - var dot = 0; - for (var i = 0; i < vector1.length; i++) { - dot += vector1[i] * vector2[i]; - } - return dot; - } - - /** - * Returns the matrix product between this and other - * @param {Matrix} other - * @return {Matrix} - */ - mmul(other) { - other = this.constructor.checkMatrix(other); - if (this.columns !== other.rows) { - // eslint-disable-next-line no-console - console.warn('Number of columns of left matrix are not equal to number of rows of right matrix.'); - } - - var m = this.rows; - var n = this.columns; - var p = other.columns; - - var result = new this.constructor[Symbol.species](m, p); - - var Bcolj = new Array(n); - for (var j = 0; j < p; j++) { - for (var k = 0; k < n; k++) { - Bcolj[k] = other.get(k, j); - } - - for (var i = 0; i < m; i++) { - var s = 0; - for (k = 0; k < n; k++) { - s += this.get(i, k) * Bcolj[k]; - } - - result.set(i, j, s); - } - } - return result; - } - - strassen2x2(other) { - var result = new this.constructor[Symbol.species](2, 2); - const a11 = this.get(0, 0); - const b11 = other.get(0, 0); - const a12 = this.get(0, 1); - const b12 = other.get(0, 1); - const a21 = this.get(1, 0); - const b21 = other.get(1, 0); - const a22 = this.get(1, 1); - const b22 = other.get(1, 1); - - // Compute intermediate values. - const m1 = (a11 + a22) * (b11 + b22); - const m2 = (a21 + a22) * b11; - const m3 = a11 * (b12 - b22); - const m4 = a22 * (b21 - b11); - const m5 = (a11 + a12) * b22; - const m6 = (a21 - a11) * (b11 + b12); - const m7 = (a12 - a22) * (b21 + b22); - - // Combine intermediate values into the output. 
- const c00 = m1 + m4 - m5 + m7; - const c01 = m3 + m5; - const c10 = m2 + m4; - const c11 = m1 - m2 + m3 + m6; - - result.set(0, 0, c00); - result.set(0, 1, c01); - result.set(1, 0, c10); - result.set(1, 1, c11); - return result; - } - - strassen3x3(other) { - var result = new this.constructor[Symbol.species](3, 3); - - const a00 = this.get(0, 0); - const a01 = this.get(0, 1); - const a02 = this.get(0, 2); - const a10 = this.get(1, 0); - const a11 = this.get(1, 1); - const a12 = this.get(1, 2); - const a20 = this.get(2, 0); - const a21 = this.get(2, 1); - const a22 = this.get(2, 2); - - const b00 = other.get(0, 0); - const b01 = other.get(0, 1); - const b02 = other.get(0, 2); - const b10 = other.get(1, 0); - const b11 = other.get(1, 1); - const b12 = other.get(1, 2); - const b20 = other.get(2, 0); - const b21 = other.get(2, 1); - const b22 = other.get(2, 2); - - const m1 = (a00 + a01 + a02 - a10 - a11 - a21 - a22) * b11; - const m2 = (a00 - a10) * (-b01 + b11); - const m3 = a11 * (-b00 + b01 + b10 - b11 - b12 - b20 + b22); - const m4 = (-a00 + a10 + a11) * (b00 - b01 + b11); - const m5 = (a10 + a11) * (-b00 + b01); - const m6 = a00 * b00; - const m7 = (-a00 + a20 + a21) * (b00 - b02 + b12); - const m8 = (-a00 + a20) * (b02 - b12); - const m9 = (a20 + a21) * (-b00 + b02); - const m10 = (a00 + a01 + a02 - a11 - a12 - a20 - a21) * b12; - const m11 = a21 * (-b00 + b02 + b10 - b11 - b12 - b20 + b21); - const m12 = (-a02 + a21 + a22) * (b11 + b20 - b21); - const m13 = (a02 - a22) * (b11 - b21); - const m14 = a02 * b20; - const m15 = (a21 + a22) * (-b20 + b21); - const m16 = (-a02 + a11 + a12) * (b12 + b20 - b22); - const m17 = (a02 - a12) * (b12 - b22); - const m18 = (a11 + a12) * (-b20 + b22); - const m19 = a01 * b10; - const m20 = a12 * b21; - const m21 = a10 * b02; - const m22 = a20 * b01; - const m23 = a22 * b22; - - const c00 = m6 + m14 + m19; - const c01 = m1 + m4 + m5 + m6 + m12 + m14 + m15; - const c02 = m6 + m7 + m9 + m10 + m14 + m16 + m18; - const c10 = m2 + m3 + m4 + m6 + m14 + m16 + m17; - const c11 = m2 + m4 + m5 + m6 + m20; - const c12 = m14 + m16 + m17 + m18 + m21; - const c20 = m6 + m7 + m8 + m11 + m12 + m13 + m14; - const c21 = m12 + m13 + m14 + m15 + m22; - const c22 = m6 + m7 + m8 + m9 + m23; - - result.set(0, 0, c00); - result.set(0, 1, c01); - result.set(0, 2, c02); - result.set(1, 0, c10); - result.set(1, 1, c11); - result.set(1, 2, c12); - result.set(2, 0, c20); - result.set(2, 1, c21); - result.set(2, 2, c22); - return result; - } - - /** - * Returns the matrix product between x and y. More efficient than mmul(other) only when we multiply squared matrix and when the size of the matrix is > 1000. - * @param {Matrix} y - * @return {Matrix} - */ - mmulStrassen(y) { - var x = this.clone(); - var r1 = x.rows; - var c1 = x.columns; - var r2 = y.rows; - var c2 = y.columns; - if (c1 !== r2) { - // eslint-disable-next-line no-console - console.warn(`Multiplying ${r1} x ${c1} and ${r2} x ${c2} matrix: dimensions do not match.`); - } - - // Put a matrix into the top left of a matrix of zeros. - // `rows` and `cols` are the dimensions of the output matrix. - function embed(mat, rows, cols) { - var r = mat.rows; - var c = mat.columns; - if ((r === rows) && (c === cols)) { - return mat; - } else { - var resultat = Matrix.zeros(rows, cols); - resultat = resultat.setSubMatrix(mat, 0, 0); - return resultat; - } - } - - - // Make sure both matrices are the same size. - // This is exclusively for simplicity: - // this algorithm can be implemented with matrices of different sizes. 
- - var r = Math.max(r1, r2); - var c = Math.max(c1, c2); - x = embed(x, r, c); - y = embed(y, r, c); - - // Our recursive multiplication function. - function blockMult(a, b, rows, cols) { - // For small matrices, resort to naive multiplication. - if (rows <= 512 || cols <= 512) { - return a.mmul(b); // a is equivalent to this - } - - // Apply dynamic padding. - if ((rows % 2 === 1) && (cols % 2 === 1)) { - a = embed(a, rows + 1, cols + 1); - b = embed(b, rows + 1, cols + 1); - } else if (rows % 2 === 1) { - a = embed(a, rows + 1, cols); - b = embed(b, rows + 1, cols); - } else if (cols % 2 === 1) { - a = embed(a, rows, cols + 1); - b = embed(b, rows, cols + 1); - } - - var halfRows = parseInt(a.rows / 2, 10); - var halfCols = parseInt(a.columns / 2, 10); - // Subdivide input matrices. - var a11 = a.subMatrix(0, halfRows - 1, 0, halfCols - 1); - var b11 = b.subMatrix(0, halfRows - 1, 0, halfCols - 1); - - var a12 = a.subMatrix(0, halfRows - 1, halfCols, a.columns - 1); - var b12 = b.subMatrix(0, halfRows - 1, halfCols, b.columns - 1); - - var a21 = a.subMatrix(halfRows, a.rows - 1, 0, halfCols - 1); - var b21 = b.subMatrix(halfRows, b.rows - 1, 0, halfCols - 1); - - var a22 = a.subMatrix(halfRows, a.rows - 1, halfCols, a.columns - 1); - var b22 = b.subMatrix(halfRows, b.rows - 1, halfCols, b.columns - 1); - - // Compute intermediate values. - var m1 = blockMult(Matrix.add(a11, a22), Matrix.add(b11, b22), halfRows, halfCols); - var m2 = blockMult(Matrix.add(a21, a22), b11, halfRows, halfCols); - var m3 = blockMult(a11, Matrix.sub(b12, b22), halfRows, halfCols); - var m4 = blockMult(a22, Matrix.sub(b21, b11), halfRows, halfCols); - var m5 = blockMult(Matrix.add(a11, a12), b22, halfRows, halfCols); - var m6 = blockMult(Matrix.sub(a21, a11), Matrix.add(b11, b12), halfRows, halfCols); - var m7 = blockMult(Matrix.sub(a12, a22), Matrix.add(b21, b22), halfRows, halfCols); - - // Combine intermediate values into the output. - var c11 = Matrix.add(m1, m4); - c11.sub(m5); - c11.add(m7); - var c12 = Matrix.add(m3, m5); - var c21 = Matrix.add(m2, m4); - var c22 = Matrix.sub(m1, m2); - c22.add(m3); - c22.add(m6); - - // Crop output to the desired size (undo dynamic padding). - var resultat = Matrix.zeros(2 * c11.rows, 2 * c11.columns); - resultat = resultat.setSubMatrix(c11, 0, 0); - resultat = resultat.setSubMatrix(c12, c11.rows, 0); - resultat = resultat.setSubMatrix(c21, 0, c11.columns); - resultat = resultat.setSubMatrix(c22, c11.rows, c11.columns); - return resultat.subMatrix(0, rows - 1, 0, cols - 1); - } - return blockMult(x, y, r, c); - } - - /** - * Returns a row-by-row scaled matrix - * @param {number} [min=0] - Minimum scaled value - * @param {number} [max=1] - Maximum scaled value - * @return {Matrix} - The scaled matrix - */ - scaleRows(min, max) { - min = min === undefined ? 0 : min; - max = max === undefined ? 
1 : max; - if (min >= max) { - throw new RangeError('min should be strictly smaller than max'); - } - var newMatrix = this.constructor.empty(this.rows, this.columns); - for (var i = 0; i < this.rows; i++) { - var scaled = ml_array_rescale_lib_es6(this.getRow(i), { min, max }); - newMatrix.setRow(i, scaled); - } - return newMatrix; - } - - /** - * Returns a new column-by-column scaled matrix - * @param {number} [min=0] - Minimum scaled value - * @param {number} [max=1] - Maximum scaled value - * @return {Matrix} - The new scaled matrix - * @example - * var matrix = new Matrix([[1,2],[-1,0]]); - * var scaledMatrix = matrix.scaleColumns(); // [[1,1],[0,0]] - */ - scaleColumns(min, max) { - min = min === undefined ? 0 : min; - max = max === undefined ? 1 : max; - if (min >= max) { - throw new RangeError('min should be strictly smaller than max'); - } - var newMatrix = this.constructor.empty(this.rows, this.columns); - for (var i = 0; i < this.columns; i++) { - var scaled = ml_array_rescale_lib_es6(this.getColumn(i), { - min: min, - max: max - }); - newMatrix.setColumn(i, scaled); - } - return newMatrix; - } - - - /** - * Returns the Kronecker product (also known as tensor product) between this and other - * See https://en.wikipedia.org/wiki/Kronecker_product - * @param {Matrix} other - * @return {Matrix} - */ - kroneckerProduct(other) { - other = this.constructor.checkMatrix(other); - - var m = this.rows; - var n = this.columns; - var p = other.rows; - var q = other.columns; - - var result = new this.constructor[Symbol.species](m * p, n * q); - for (var i = 0; i < m; i++) { - for (var j = 0; j < n; j++) { - for (var k = 0; k < p; k++) { - for (var l = 0; l < q; l++) { - result[p * i + k][q * j + l] = this.get(i, j) * other.get(k, l); - } - } - } - } - return result; - } - - /** - * Transposes the matrix and returns a new one containing the result - * @return {Matrix} - */ - transpose() { - var result = new this.constructor[Symbol.species](this.columns, this.rows); - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - result.set(j, i, this.get(i, j)); - } - } - return result; - } - - /** - * Sorts the rows (in place) - * @param {function} compareFunction - usual Array.prototype.sort comparison function - * @return {Matrix} this - */ - sortRows(compareFunction) { - if (compareFunction === undefined) compareFunction = compareNumbers; - for (var i = 0; i < this.rows; i++) { - this.setRow(i, this.getRow(i).sort(compareFunction)); - } - return this; - } - - /** - * Sorts the columns (in place) - * @param {function} compareFunction - usual Array.prototype.sort comparison function - * @return {Matrix} this - */ - sortColumns(compareFunction) { - if (compareFunction === undefined) compareFunction = compareNumbers; - for (var i = 0; i < this.columns; i++) { - this.setColumn(i, this.getColumn(i).sort(compareFunction)); - } - return this; - } - - /** - * Returns a subset of the matrix - * @param {number} startRow - First row index - * @param {number} endRow - Last row index - * @param {number} startColumn - First column index - * @param {number} endColumn - Last column index - * @return {Matrix} - */ - subMatrix(startRow, endRow, startColumn, endColumn) { - checkRange(this, startRow, endRow, startColumn, endColumn); - var newMatrix = new this.constructor[Symbol.species](endRow - startRow + 1, endColumn - startColumn + 1); - for (var i = startRow; i <= endRow; i++) { - for (var j = startColumn; j <= endColumn; j++) { - newMatrix[i - startRow][j - startColumn] = this.get(i, 
j); - } - } - return newMatrix; - } - - /** - * Returns a subset of the matrix based on an array of row indices - * @param {Array} indices - Array containing the row indices - * @param {number} [startColumn = 0] - First column index - * @param {number} [endColumn = this.columns-1] - Last column index - * @return {Matrix} - */ - subMatrixRow(indices, startColumn, endColumn) { - if (startColumn === undefined) startColumn = 0; - if (endColumn === undefined) endColumn = this.columns - 1; - if ((startColumn > endColumn) || (startColumn < 0) || (startColumn >= this.columns) || (endColumn < 0) || (endColumn >= this.columns)) { - throw new RangeError('Argument out of range'); - } - - var newMatrix = new this.constructor[Symbol.species](indices.length, endColumn - startColumn + 1); - for (var i = 0; i < indices.length; i++) { - for (var j = startColumn; j <= endColumn; j++) { - if (indices[i] < 0 || indices[i] >= this.rows) { - throw new RangeError(`Row index out of range: ${indices[i]}`); - } - newMatrix.set(i, j - startColumn, this.get(indices[i], j)); - } - } - return newMatrix; - } - - /** - * Returns a subset of the matrix based on an array of column indices - * @param {Array} indices - Array containing the column indices - * @param {number} [startRow = 0] - First row index - * @param {number} [endRow = this.rows-1] - Last row index - * @return {Matrix} - */ - subMatrixColumn(indices, startRow, endRow) { - if (startRow === undefined) startRow = 0; - if (endRow === undefined) endRow = this.rows - 1; - if ((startRow > endRow) || (startRow < 0) || (startRow >= this.rows) || (endRow < 0) || (endRow >= this.rows)) { - throw new RangeError('Argument out of range'); - } - - var newMatrix = new this.constructor[Symbol.species](endRow - startRow + 1, indices.length); - for (var i = 0; i < indices.length; i++) { - for (var j = startRow; j <= endRow; j++) { - if (indices[i] < 0 || indices[i] >= this.columns) { - throw new RangeError(`Column index out of range: ${indices[i]}`); - } - newMatrix.set(j - startRow, i, this.get(j, indices[i])); - } - } - return newMatrix; - } - - /** - * Set a part of the matrix to the given sub-matrix - * @param {Matrix|Array< Array >} matrix - The source matrix from which to extract values. - * @param {number} startRow - The index of the first row to set - * @param {number} startColumn - The index of the first column to set - * @return {Matrix} - */ - setSubMatrix(matrix, startRow, startColumn) { - matrix = this.constructor.checkMatrix(matrix); - var endRow = startRow + matrix.rows - 1; - var endColumn = startColumn + matrix.columns - 1; - checkRange(this, startRow, endRow, startColumn, endColumn); - for (var i = 0; i < matrix.rows; i++) { - for (var j = 0; j < matrix.columns; j++) { - this[startRow + i][startColumn + j] = matrix.get(i, j); - } - } - return this; - } - - /** - * Return a new matrix based on a selection of rows and columns - * @param {Array} rowIndices - The row indices to select. Order matters and an index can be more than once. - * @param {Array} columnIndices - The column indices to select. Order matters and an index can be use more than once. 
- * @return {Matrix} The new matrix - */ - selection(rowIndices, columnIndices) { - var indices = checkIndices(this, rowIndices, columnIndices); - var newMatrix = new this.constructor[Symbol.species](rowIndices.length, columnIndices.length); - for (var i = 0; i < indices.row.length; i++) { - var rowIndex = indices.row[i]; - for (var j = 0; j < indices.column.length; j++) { - var columnIndex = indices.column[j]; - newMatrix[i][j] = this.get(rowIndex, columnIndex); - } - } - return newMatrix; - } - - /** - * Returns the trace of the matrix (sum of the diagonal elements) - * @return {number} - */ - trace() { - var min = Math.min(this.rows, this.columns); - var trace = 0; - for (var i = 0; i < min; i++) { - trace += this.get(i, i); - } - return trace; - } - - /* - Matrix views - */ - - /** - * Returns a view of the transposition of the matrix - * @return {MatrixTransposeView} - */ - transposeView() { - return new transpose_MatrixTransposeView(this); - } - - /** - * Returns a view of the row vector with the given index - * @param {number} row - row index of the vector - * @return {MatrixRowView} - */ - rowView(row) { - checkRowIndex(this, row); - return new row_MatrixRowView(this, row); - } - - /** - * Returns a view of the column vector with the given index - * @param {number} column - column index of the vector - * @return {MatrixColumnView} - */ - columnView(column) { - checkColumnIndex(this, column); - return new column_MatrixColumnView(this, column); - } - - /** - * Returns a view of the matrix flipped in the row axis - * @return {MatrixFlipRowView} - */ - flipRowView() { - return new flipRow_MatrixFlipRowView(this); - } - - /** - * Returns a view of the matrix flipped in the column axis - * @return {MatrixFlipColumnView} - */ - flipColumnView() { - return new flipColumn_MatrixFlipColumnView(this); - } - - /** - * Returns a view of a submatrix giving the index boundaries - * @param {number} startRow - first row index of the submatrix - * @param {number} endRow - last row index of the submatrix - * @param {number} startColumn - first column index of the submatrix - * @param {number} endColumn - last column index of the submatrix - * @return {MatrixSubView} - */ - subMatrixView(startRow, endRow, startColumn, endColumn) { - return new sub_MatrixSubView(this, startRow, endRow, startColumn, endColumn); - } - - /** - * Returns a view of the cross of the row indices and the column indices - * @example - * // resulting vector is [[2], [2]] - * var matrix = new Matrix([[1,2,3], [4,5,6]]).selectionView([0, 0], [1]) - * @param {Array} rowIndices - * @param {Array} columnIndices - * @return {MatrixSelectionView} - */ - selectionView(rowIndices, columnIndices) { - return new selection_MatrixSelectionView(this, rowIndices, columnIndices); - } - - /** - * Returns a view of the row indices - * @example - * // resulting vector is [[1,2,3], [1,2,3]] - * var matrix = new Matrix([[1,2,3], [4,5,6]]).rowSelectionView([0, 0]) - * @param {Array} rowIndices - * @return {MatrixRowSelectionView} - */ - rowSelectionView(rowIndices) { - return new rowSelection_MatrixRowSelectionView(this, rowIndices); - } - - /** - * Returns a view of the column indices - * @example - * // resulting vector is [[2, 2], [5, 5]] - * var matrix = new Matrix([[1,2,3], [4,5,6]]).columnSelectionView([1, 1]) - * @param {Array} columnIndices - * @return {MatrixColumnSelectionView} - */ - columnSelectionView(columnIndices) { - return new columnSelection_MatrixColumnSelectionView(this, columnIndices); - } - - - /** - * Calculates and returns the 
determinant of a matrix as a Number - * @example - * new Matrix([[1,2,3], [4,5,6]]).det() - * @return {number} - */ - det() { - if (this.isSquare()) { - var a, b, c, d; - if (this.columns === 2) { - // 2 x 2 matrix - a = this.get(0, 0); - b = this.get(0, 1); - c = this.get(1, 0); - d = this.get(1, 1); - - return a * d - (b * c); - } else if (this.columns === 3) { - // 3 x 3 matrix - var subMatrix0, subMatrix1, subMatrix2; - subMatrix0 = this.selectionView([1, 2], [1, 2]); - subMatrix1 = this.selectionView([1, 2], [0, 2]); - subMatrix2 = this.selectionView([1, 2], [0, 1]); - a = this.get(0, 0); - b = this.get(0, 1); - c = this.get(0, 2); - - return a * subMatrix0.det() - b * subMatrix1.det() + c * subMatrix2.det(); - } else { - // general purpose determinant using the LU decomposition - return new lu_LuDecomposition(this).determinant; - } - } else { - throw Error('Determinant can only be calculated for a square matrix.'); - } - } - - /** - * Returns inverse of a matrix if it exists or the pseudoinverse - * @param {number} threshold - threshold for taking inverse of singular values (default = 1e-15) - * @return {Matrix} the (pseudo)inverted matrix. - */ - pseudoInverse(threshold) { - if (threshold === undefined) threshold = Number.EPSILON; - var svdSolution = new svd_SingularValueDecomposition(this, { autoTranspose: true }); - - var U = svdSolution.leftSingularVectors; - var V = svdSolution.rightSingularVectors; - var s = svdSolution.diagonal; - - for (var i = 0; i < s.length; i++) { - if (Math.abs(s[i]) > threshold) { - s[i] = 1.0 / s[i]; - } else { - s[i] = 0.0; - } - } - - // convert list to diagonal - s = this.constructor[Symbol.species].diag(s); - return V.mmul(s.mmul(U.transposeView())); - } - - /** - * Creates an exact and independent copy of the matrix - * @return {Matrix} - */ - clone() { - var newMatrix = new this.constructor[Symbol.species](this.rows, this.columns); - for (var row = 0; row < this.rows; row++) { - for (var column = 0; column < this.columns; column++) { - newMatrix.set(row, column, this.get(row, column)); - } - } - return newMatrix; - } - } - - Matrix.prototype.klass = 'Matrix'; - - function compareNumbers(a, b) { - return a - b; - } - - /* - Synonyms - */ - - Matrix.random = Matrix.rand; - Matrix.diagonal = Matrix.diag; - Matrix.prototype.diagonal = Matrix.prototype.diag; - Matrix.identity = Matrix.eye; - Matrix.prototype.negate = Matrix.prototype.neg; - Matrix.prototype.tensorProduct = Matrix.prototype.kroneckerProduct; - Matrix.prototype.determinant = Matrix.prototype.det; - - /* - Add dynamically instance and static methods for mathematical operations - */ - - var inplaceOperator = ` -(function %name%(value) { - if (typeof value === 'number') return this.%name%S(value); - return this.%name%M(value); -}) -`; - - var inplaceOperatorScalar = ` -(function %name%S(value) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) %op% value); - } - } - return this; -}) -`; - - var inplaceOperatorMatrix = ` -(function %name%M(matrix) { - matrix = this.constructor.checkMatrix(matrix); - if (this.rows !== matrix.rows || - this.columns !== matrix.columns) { - throw new RangeError('Matrices dimensions must be equal'); - } - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, this.get(i, j) %op% matrix.get(i, j)); - } - } - return this; -}) -`; - - var staticOperator = ` -(function %name%(matrix, value) { - var newMatrix = new this[Symbol.species](matrix); - return 
newMatrix.%name%(value); -}) -`; - - var inplaceMethod = ` -(function %name%() { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, %method%(this.get(i, j))); - } - } - return this; -}) -`; - - var staticMethod = ` -(function %name%(matrix) { - var newMatrix = new this[Symbol.species](matrix); - return newMatrix.%name%(); -}) -`; - - var inplaceMethodWithArgs = ` -(function %name%(%args%) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, %method%(this.get(i, j), %args%)); - } - } - return this; -}) -`; - - var staticMethodWithArgs = ` -(function %name%(matrix, %args%) { - var newMatrix = new this[Symbol.species](matrix); - return newMatrix.%name%(%args%); -}) -`; - - - var inplaceMethodWithOneArgScalar = ` -(function %name%S(value) { - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, %method%(this.get(i, j), value)); - } - } - return this; -}) -`; - var inplaceMethodWithOneArgMatrix = ` -(function %name%M(matrix) { - matrix = this.constructor.checkMatrix(matrix); - if (this.rows !== matrix.rows || - this.columns !== matrix.columns) { - throw new RangeError('Matrices dimensions must be equal'); - } - for (var i = 0; i < this.rows; i++) { - for (var j = 0; j < this.columns; j++) { - this.set(i, j, %method%(this.get(i, j), matrix.get(i, j))); - } - } - return this; -}) -`; - - var inplaceMethodWithOneArg = ` -(function %name%(value) { - if (typeof value === 'number') return this.%name%S(value); - return this.%name%M(value); -}) -`; - - var staticMethodWithOneArg = staticMethodWithArgs; - - var operators = [ - // Arithmetic operators - ['+', 'add'], - ['-', 'sub', 'subtract'], - ['*', 'mul', 'multiply'], - ['/', 'div', 'divide'], - ['%', 'mod', 'modulus'], - // Bitwise operators - ['&', 'and'], - ['|', 'or'], - ['^', 'xor'], - ['<<', 'leftShift'], - ['>>', 'signPropagatingRightShift'], - ['>>>', 'rightShift', 'zeroFillRightShift'] - ]; - - var i; - var eval2 = eval; // eslint-disable-line no-eval - for (var operator of operators) { - var inplaceOp = eval2(fillTemplateFunction(inplaceOperator, { name: operator[1], op: operator[0] })); - var inplaceOpS = eval2(fillTemplateFunction(inplaceOperatorScalar, { name: `${operator[1]}S`, op: operator[0] })); - var inplaceOpM = eval2(fillTemplateFunction(inplaceOperatorMatrix, { name: `${operator[1]}M`, op: operator[0] })); - var staticOp = eval2(fillTemplateFunction(staticOperator, { name: operator[1] })); - for (i = 1; i < operator.length; i++) { - Matrix.prototype[operator[i]] = inplaceOp; - Matrix.prototype[`${operator[i]}S`] = inplaceOpS; - Matrix.prototype[`${operator[i]}M`] = inplaceOpM; - Matrix[operator[i]] = staticOp; - } - } - - var methods = [['~', 'not']]; - - [ - 'abs', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atanh', 'cbrt', 'ceil', - 'clz32', 'cos', 'cosh', 'exp', 'expm1', 'floor', 'fround', 'log', 'log1p', - 'log10', 'log2', 'round', 'sign', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'trunc' - ].forEach(function (mathMethod) { - methods.push([`Math.${mathMethod}`, mathMethod]); - }); - - for (var method of methods) { - var inplaceMeth = eval2(fillTemplateFunction(inplaceMethod, { name: method[1], method: method[0] })); - var staticMeth = eval2(fillTemplateFunction(staticMethod, { name: method[1] })); - for (i = 1; i < method.length; i++) { - Matrix.prototype[method[i]] = inplaceMeth; - Matrix[method[i]] = staticMeth; - } - } - - var methodsWithArgs = [['Math.pow', 1, 'pow']]; - - for (var 
methodWithArg of methodsWithArgs) { - var args = 'arg0'; - for (i = 1; i < methodWithArg[1]; i++) { - args += `, arg${i}`; - } - if (methodWithArg[1] !== 1) { - var inplaceMethWithArgs = eval2(fillTemplateFunction(inplaceMethodWithArgs, { - name: methodWithArg[2], - method: methodWithArg[0], - args: args - })); - var staticMethWithArgs = eval2(fillTemplateFunction(staticMethodWithArgs, { name: methodWithArg[2], args: args })); - for (i = 2; i < methodWithArg.length; i++) { - Matrix.prototype[methodWithArg[i]] = inplaceMethWithArgs; - Matrix[methodWithArg[i]] = staticMethWithArgs; - } - } else { - var tmplVar = { - name: methodWithArg[2], - args: args, - method: methodWithArg[0] - }; - var inplaceMethod2 = eval2(fillTemplateFunction(inplaceMethodWithOneArg, tmplVar)); - var inplaceMethodS = eval2(fillTemplateFunction(inplaceMethodWithOneArgScalar, tmplVar)); - var inplaceMethodM = eval2(fillTemplateFunction(inplaceMethodWithOneArgMatrix, tmplVar)); - var staticMethod2 = eval2(fillTemplateFunction(staticMethodWithOneArg, tmplVar)); - for (i = 2; i < methodWithArg.length; i++) { - Matrix.prototype[methodWithArg[i]] = inplaceMethod2; - Matrix.prototype[`${methodWithArg[i]}M`] = inplaceMethodM; - Matrix.prototype[`${methodWithArg[i]}S`] = inplaceMethodS; - Matrix[methodWithArg[i]] = staticMethod2; - } - } - } - - function fillTemplateFunction(template, values) { - for (var value in values) { - template = template.replace(new RegExp(`%${value}%`, 'g'), values[value]); - } - return template; - } - - return Matrix; -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/matrix.js - - - -class matrix_Matrix extends AbstractMatrix(Array) { - constructor(nRows, nColumns) { - var i; - if (arguments.length === 1 && typeof nRows === 'number') { - return new Array(nRows); - } - if (matrix_Matrix.isMatrix(nRows)) { - return nRows.clone(); - } else if (Number.isInteger(nRows) && nRows > 0) { - // Create an empty matrix - super(nRows); - if (Number.isInteger(nColumns) && nColumns > 0) { - for (i = 0; i < nRows; i++) { - this[i] = new Array(nColumns); - } - } else { - throw new TypeError('nColumns must be a positive integer'); - } - } else if (Array.isArray(nRows)) { - // Copy the values from the 2D array - const matrix = nRows; - nRows = matrix.length; - nColumns = matrix[0].length; - if (typeof nColumns !== 'number' || nColumns === 0) { - throw new TypeError( - 'Data must be a 2D array with at least one element' - ); - } - super(nRows); - for (i = 0; i < nRows; i++) { - if (matrix[i].length !== nColumns) { - throw new RangeError('Inconsistent array dimensions'); - } - this[i] = [].concat(matrix[i]); - } - } else { - throw new TypeError( - 'First argument must be a positive number or an array' - ); - } - this.rows = nRows; - this.columns = nColumns; - return this; - } - - set(rowIndex, columnIndex, value) { - this[rowIndex][columnIndex] = value; - return this; - } - - get(rowIndex, columnIndex) { - return this[rowIndex][columnIndex]; - } - - /** - * Removes a row from the given index - * @param {number} index - Row index - * @return {Matrix} this - */ - removeRow(index) { - checkRowIndex(this, index); - if (this.rows === 1) { - throw new RangeError('A matrix cannot have less than one row'); - } - this.splice(index, 1); - this.rows -= 1; - return this; - } - - /** - * Adds a row at the given index - * @param {number} [index = this.rows] - Row index - * @param {Array|Matrix} array - Array or vector - * @return {Matrix} this - */ - addRow(index, array) { - if (array === undefined) { - array = index; - index 
= this.rows; - } - checkRowIndex(this, index, true); - array = checkRowVector(this, array, true); - this.splice(index, 0, array); - this.rows += 1; - return this; - } - - /** - * Removes a column from the given index - * @param {number} index - Column index - * @return {Matrix} this - */ - removeColumn(index) { - checkColumnIndex(this, index); - if (this.columns === 1) { - throw new RangeError('A matrix cannot have less than one column'); - } - for (var i = 0; i < this.rows; i++) { - this[i].splice(index, 1); - } - this.columns -= 1; - return this; - } - - /** - * Adds a column at the given index - * @param {number} [index = this.columns] - Column index - * @param {Array|Matrix} array - Array or vector - * @return {Matrix} this - */ - addColumn(index, array) { - if (typeof array === 'undefined') { - array = index; - index = this.columns; - } - checkColumnIndex(this, index, true); - array = checkColumnVector(this, array); - for (var i = 0; i < this.rows; i++) { - this[i].splice(index, 0, array[i]); - } - this.columns += 1; - return this; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/wrap/WrapperMatrix1D.js - - - -class WrapperMatrix1D_WrapperMatrix1D extends AbstractMatrix() { - /** - * @class WrapperMatrix1D - * @param {Array} data - * @param {object} [options] - * @param {object} [options.rows = 1] - */ - constructor(data, options = {}) { - const { rows = 1 } = options; - - if (data.length % rows !== 0) { - throw new Error('the data length is not divisible by the number of rows'); - } - super(); - this.rows = rows; - this.columns = data.length / rows; - this.data = data; - } - - set(rowIndex, columnIndex, value) { - var index = this._calculateIndex(rowIndex, columnIndex); - this.data[index] = value; - return this; - } - - get(rowIndex, columnIndex) { - var index = this._calculateIndex(rowIndex, columnIndex); - return this.data[index]; - } - - _calculateIndex(row, column) { - return row * this.columns + column; - } - - static get [Symbol.species]() { - return matrix_Matrix; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/wrap/WrapperMatrix2D.js - - - -class WrapperMatrix2D_WrapperMatrix2D extends AbstractMatrix() { - /** - * @class WrapperMatrix2D - * @param {Array>} data - */ - constructor(data) { - super(); - this.data = data; - this.rows = data.length; - this.columns = data[0].length; - } - - set(rowIndex, columnIndex, value) { - this.data[rowIndex][columnIndex] = value; - return this; - } - - get(rowIndex, columnIndex) { - return this.data[rowIndex][columnIndex]; - } - - static get [Symbol.species]() { - return matrix_Matrix; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/wrap/wrap.js - - - -/** - * @param {Array>|Array} array - * @param {object} [options] - * @param {object} [options.rows = 1] - * @return {WrapperMatrix1D|WrapperMatrix2D} - */ -function wrap(array, options) { - if (Array.isArray(array)) { - if (array[0] && Array.isArray(array[0])) { - return new WrapperMatrix2D_WrapperMatrix2D(array); - } else { - return new WrapperMatrix1D_WrapperMatrix1D(array, options); - } - } else { - throw new Error('the argument is not an array'); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/qr.js - - - - -/** - * @class QrDecomposition - * @link https://github.com/lutzroeder/Mapack/blob/master/Source/QrDecomposition.cs - * @param {Matrix} value - */ -class qr_QrDecomposition { - constructor(value) { - value = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(value); - - var qr = value.clone(); - var m = value.rows; - var n = 
value.columns; - var rdiag = new Array(n); - var i, j, k, s; - - for (k = 0; k < n; k++) { - var nrm = 0; - for (i = k; i < m; i++) { - nrm = hypotenuse(nrm, qr.get(i, k)); - } - if (nrm !== 0) { - if (qr.get(k, k) < 0) { - nrm = -nrm; - } - for (i = k; i < m; i++) { - qr.set(i, k, qr.get(i, k) / nrm); - } - qr.set(k, k, qr.get(k, k) + 1); - for (j = k + 1; j < n; j++) { - s = 0; - for (i = k; i < m; i++) { - s += qr.get(i, k) * qr.get(i, j); - } - s = -s / qr.get(k, k); - for (i = k; i < m; i++) { - qr.set(i, j, qr.get(i, j) + s * qr.get(i, k)); - } - } - } - rdiag[k] = -nrm; - } - - this.QR = qr; - this.Rdiag = rdiag; - } - - /** - * Solve a problem of least square (Ax=b) by using the QR decomposition. Useful when A is rectangular, but not working when A is singular. - * Example : We search to approximate x, with A matrix shape m*n, x vector size n, b vector size m (m > n). We will use : - * var qr = QrDecomposition(A); - * var x = qr.solve(b); - * @param {Matrix} value - Matrix 1D which is the vector b (in the equation Ax = b) - * @return {Matrix} - The vector x - */ - solve(value) { - value = matrix_Matrix.checkMatrix(value); - - var qr = this.QR; - var m = qr.rows; - - if (value.rows !== m) { - throw new Error('Matrix row dimensions must agree'); - } - if (!this.isFullRank()) { - throw new Error('Matrix is rank deficient'); - } - - var count = value.columns; - var X = value.clone(); - var n = qr.columns; - var i, j, k, s; - - for (k = 0; k < n; k++) { - for (j = 0; j < count; j++) { - s = 0; - for (i = k; i < m; i++) { - s += qr[i][k] * X[i][j]; - } - s = -s / qr[k][k]; - for (i = k; i < m; i++) { - X[i][j] += s * qr[i][k]; - } - } - } - for (k = n - 1; k >= 0; k--) { - for (j = 0; j < count; j++) { - X[k][j] /= this.Rdiag[k]; - } - for (i = 0; i < k; i++) { - for (j = 0; j < count; j++) { - X[i][j] -= X[k][j] * qr[i][k]; - } - } - } - - return X.subMatrix(0, n - 1, 0, count - 1); - } - - /** - * - * @return {boolean} - */ - isFullRank() { - var columns = this.QR.columns; - for (var i = 0; i < columns; i++) { - if (this.Rdiag[i] === 0) { - return false; - } - } - return true; - } - - /** - * - * @return {Matrix} - */ - get upperTriangularMatrix() { - var qr = this.QR; - var n = qr.columns; - var X = new matrix_Matrix(n, n); - var i, j; - for (i = 0; i < n; i++) { - for (j = 0; j < n; j++) { - if (i < j) { - X[i][j] = qr[i][j]; - } else if (i === j) { - X[i][j] = this.Rdiag[i]; - } else { - X[i][j] = 0; - } - } - } - return X; - } - - /** - * - * @return {Matrix} - */ - get orthogonalMatrix() { - var qr = this.QR; - var rows = qr.rows; - var columns = qr.columns; - var X = new matrix_Matrix(rows, columns); - var i, j, k, s; - - for (k = columns - 1; k >= 0; k--) { - for (i = 0; i < rows; i++) { - X[i][k] = 0; - } - X[k][k] = 1; - for (j = k; j < columns; j++) { - if (qr[k][k] !== 0) { - s = 0; - for (i = k; i < rows; i++) { - s += qr[i][k] * X[i][j]; - } - - s = -s / qr[k][k]; - - for (i = k; i < rows; i++) { - X[i][j] += s * qr[i][k]; - } - } - } - } - return X; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/decompositions.js - - - - - - -/** - * Computes the inverse of a Matrix - * @param {Matrix} matrix - * @param {boolean} [useSVD=false] - * @return {Matrix} - */ -function inverse(matrix, useSVD = false) { - matrix = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(matrix); - if (useSVD) { - return new svd_SingularValueDecomposition(matrix).inverse(); - } else { - return solve(matrix, matrix_Matrix.eye(matrix.rows)); - } -} - -/** - * - * @param {Matrix} leftHandSide - * 
@param {Matrix} rightHandSide - * @param {boolean} [useSVD = false] - * @return {Matrix} - */ -function solve(leftHandSide, rightHandSide, useSVD = false) { - leftHandSide = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(leftHandSide); - rightHandSide = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(rightHandSide); - if (useSVD) { - return new svd_SingularValueDecomposition(leftHandSide).solve(rightHandSide); - } else { - return leftHandSide.isSquare() - ? new lu_LuDecomposition(leftHandSide).solve(rightHandSide) - : new qr_QrDecomposition(leftHandSide).solve(rightHandSide); - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/linearDependencies.js - - - - - -// function used by rowsDependencies -function xrange(n, exception) { - var range = []; - for (var i = 0; i < n; i++) { - if (i !== exception) { - range.push(i); - } - } - return range; -} - -// function used by rowsDependencies -function dependenciesOneRow( - error, - matrix, - index, - thresholdValue = 10e-10, - thresholdError = 10e-10 -) { - if (error > thresholdError) { - return new Array(matrix.rows + 1).fill(0); - } else { - var returnArray = matrix.addRow(index, [0]); - for (var i = 0; i < returnArray.rows; i++) { - if (Math.abs(returnArray.get(i, 0)) < thresholdValue) { - returnArray.set(i, 0, 0); - } - } - return returnArray.to1DArray(); - } -} - -/** - * Creates a matrix which represents the dependencies between rows. - * If a row is a linear combination of others rows, the result will be a row with the coefficients of this combination. - * For example : for A = [[2, 0, 0, 1], [0, 1, 6, 0], [0, 3, 0, 1], [0, 0, 1, 0], [0, 1, 2, 0]], the result will be [[0, 0, 0, 0, 0], [0, 0, 0, 4, 1], [0, 0, 0, 0, 0], [0, 0.25, 0, 0, -0.25], [0, 1, 0, -4, 0]] - * @param {Matrix} matrix - * @param {Object} [options] includes thresholdValue and thresholdError. - * @param {number} [options.thresholdValue = 10e-10] If an absolute value is inferior to this threshold, it will equals zero. - * @param {number} [options.thresholdError = 10e-10] If the error is inferior to that threshold, the linear combination found is accepted and the row is dependent from other rows. - * @return {Matrix} the matrix which represents the dependencies between rows. 
- */ - -function linearDependencies(matrix, options = {}) { - const { thresholdValue = 10e-10, thresholdError = 10e-10 } = options; - - var n = matrix.rows; - var results = new matrix_Matrix(n, n); - - for (var i = 0; i < n; i++) { - var b = matrix_Matrix.columnVector(matrix.getRow(i)); - var Abis = matrix.subMatrixRow(xrange(n, i)).transposeView(); - var svd = new svd_SingularValueDecomposition(Abis); - var x = svd.solve(b); - var error = lib_es6( - matrix_Matrix.sub(b, Abis.mmul(x)) - .abs() - .to1DArray() - ); - results.setRow( - i, - dependenciesOneRow(error, x, i, thresholdValue, thresholdError) - ); - } - return results; -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/evd.js - - - - -/** - * @class EigenvalueDecomposition - * @link https://github.com/lutzroeder/Mapack/blob/master/Source/EigenvalueDecomposition.cs - * @param {Matrix} matrix - * @param {object} [options] - * @param {boolean} [options.assumeSymmetric=false] - */ -class evd_EigenvalueDecomposition { - constructor(matrix, options = {}) { - const { assumeSymmetric = false } = options; - - matrix = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(matrix); - if (!matrix.isSquare()) { - throw new Error('Matrix is not a square matrix'); - } - - var n = matrix.columns; - var V = getFilled2DArray(n, n, 0); - var d = new Array(n); - var e = new Array(n); - var value = matrix; - var i, j; - - var isSymmetric = false; - if (assumeSymmetric) { - isSymmetric = true; - } else { - isSymmetric = matrix.isSymmetric(); - } - - if (isSymmetric) { - for (i = 0; i < n; i++) { - for (j = 0; j < n; j++) { - V[i][j] = value.get(i, j); - } - } - tred2(n, e, d, V); - tql2(n, e, d, V); - } else { - var H = getFilled2DArray(n, n, 0); - var ort = new Array(n); - for (j = 0; j < n; j++) { - for (i = 0; i < n; i++) { - H[i][j] = value.get(i, j); - } - } - orthes(n, H, ort, V); - hqr2(n, e, d, V, H); - } - - this.n = n; - this.e = e; - this.d = d; - this.V = V; - } - - /** - * - * @return {Array} - */ - get realEigenvalues() { - return this.d; - } - - /** - * - * @return {Array} - */ - get imaginaryEigenvalues() { - return this.e; - } - - /** - * - * @return {Matrix} - */ - get eigenvectorMatrix() { - if (!matrix_Matrix.isMatrix(this.V)) { - this.V = new matrix_Matrix(this.V); - } - return this.V; - } - - /** - * - * @return {Matrix} - */ - get diagonalMatrix() { - var n = this.n; - var e = this.e; - var d = this.d; - var X = new matrix_Matrix(n, n); - var i, j; - for (i = 0; i < n; i++) { - for (j = 0; j < n; j++) { - X[i][j] = 0; - } - X[i][i] = d[i]; - if (e[i] > 0) { - X[i][i + 1] = e[i]; - } else if (e[i] < 0) { - X[i][i - 1] = e[i]; - } - } - return X; - } -} - -function tred2(n, e, d, V) { - var f, g, h, i, j, k, hh, scale; - - for (j = 0; j < n; j++) { - d[j] = V[n - 1][j]; - } - - for (i = n - 1; i > 0; i--) { - scale = 0; - h = 0; - for (k = 0; k < i; k++) { - scale = scale + Math.abs(d[k]); - } - - if (scale === 0) { - e[i] = d[i - 1]; - for (j = 0; j < i; j++) { - d[j] = V[i - 1][j]; - V[i][j] = 0; - V[j][i] = 0; - } - } else { - for (k = 0; k < i; k++) { - d[k] /= scale; - h += d[k] * d[k]; - } - - f = d[i - 1]; - g = Math.sqrt(h); - if (f > 0) { - g = -g; - } - - e[i] = scale * g; - h = h - f * g; - d[i - 1] = f - g; - for (j = 0; j < i; j++) { - e[j] = 0; - } - - for (j = 0; j < i; j++) { - f = d[j]; - V[j][i] = f; - g = e[j] + V[j][j] * f; - for (k = j + 1; k <= i - 1; k++) { - g += V[k][j] * d[k]; - e[k] += V[k][j] * f; - } - e[j] = g; - } - - f = 0; - for (j = 0; j < i; j++) { - e[j] /= h; - f += e[j] * d[j]; - } - - hh = f / 
(h + h); - for (j = 0; j < i; j++) { - e[j] -= hh * d[j]; - } - - for (j = 0; j < i; j++) { - f = d[j]; - g = e[j]; - for (k = j; k <= i - 1; k++) { - V[k][j] -= f * e[k] + g * d[k]; - } - d[j] = V[i - 1][j]; - V[i][j] = 0; - } - } - d[i] = h; - } - - for (i = 0; i < n - 1; i++) { - V[n - 1][i] = V[i][i]; - V[i][i] = 1; - h = d[i + 1]; - if (h !== 0) { - for (k = 0; k <= i; k++) { - d[k] = V[k][i + 1] / h; - } - - for (j = 0; j <= i; j++) { - g = 0; - for (k = 0; k <= i; k++) { - g += V[k][i + 1] * V[k][j]; - } - for (k = 0; k <= i; k++) { - V[k][j] -= g * d[k]; - } - } - } - - for (k = 0; k <= i; k++) { - V[k][i + 1] = 0; - } - } - - for (j = 0; j < n; j++) { - d[j] = V[n - 1][j]; - V[n - 1][j] = 0; - } - - V[n - 1][n - 1] = 1; - e[0] = 0; -} - -function tql2(n, e, d, V) { - var g, h, i, j, k, l, m, p, r, dl1, c, c2, c3, el1, s, s2, iter; - - for (i = 1; i < n; i++) { - e[i - 1] = e[i]; - } - - e[n - 1] = 0; - - var f = 0; - var tst1 = 0; - var eps = Number.EPSILON; - - for (l = 0; l < n; l++) { - tst1 = Math.max(tst1, Math.abs(d[l]) + Math.abs(e[l])); - m = l; - while (m < n) { - if (Math.abs(e[m]) <= eps * tst1) { - break; - } - m++; - } - - if (m > l) { - iter = 0; - do { - iter = iter + 1; - - g = d[l]; - p = (d[l + 1] - g) / (2 * e[l]); - r = hypotenuse(p, 1); - if (p < 0) { - r = -r; - } - - d[l] = e[l] / (p + r); - d[l + 1] = e[l] * (p + r); - dl1 = d[l + 1]; - h = g - d[l]; - for (i = l + 2; i < n; i++) { - d[i] -= h; - } - - f = f + h; - - p = d[m]; - c = 1; - c2 = c; - c3 = c; - el1 = e[l + 1]; - s = 0; - s2 = 0; - for (i = m - 1; i >= l; i--) { - c3 = c2; - c2 = c; - s2 = s; - g = c * e[i]; - h = c * p; - r = hypotenuse(p, e[i]); - e[i + 1] = s * r; - s = e[i] / r; - c = p / r; - p = c * d[i] - s * g; - d[i + 1] = h + s * (c * g + s * d[i]); - - for (k = 0; k < n; k++) { - h = V[k][i + 1]; - V[k][i + 1] = s * V[k][i] + c * h; - V[k][i] = c * V[k][i] - s * h; - } - } - - p = -s * s2 * c3 * el1 * e[l] / dl1; - e[l] = s * p; - d[l] = c * p; - } while (Math.abs(e[l]) > eps * tst1); - } - d[l] = d[l] + f; - e[l] = 0; - } - - for (i = 0; i < n - 1; i++) { - k = i; - p = d[i]; - for (j = i + 1; j < n; j++) { - if (d[j] < p) { - k = j; - p = d[j]; - } - } - - if (k !== i) { - d[k] = d[i]; - d[i] = p; - for (j = 0; j < n; j++) { - p = V[j][i]; - V[j][i] = V[j][k]; - V[j][k] = p; - } - } - } -} - -function orthes(n, H, ort, V) { - var low = 0; - var high = n - 1; - var f, g, h, i, j, m; - var scale; - - for (m = low + 1; m <= high - 1; m++) { - scale = 0; - for (i = m; i <= high; i++) { - scale = scale + Math.abs(H[i][m - 1]); - } - - if (scale !== 0) { - h = 0; - for (i = high; i >= m; i--) { - ort[i] = H[i][m - 1] / scale; - h += ort[i] * ort[i]; - } - - g = Math.sqrt(h); - if (ort[m] > 0) { - g = -g; - } - - h = h - ort[m] * g; - ort[m] = ort[m] - g; - - for (j = m; j < n; j++) { - f = 0; - for (i = high; i >= m; i--) { - f += ort[i] * H[i][j]; - } - - f = f / h; - for (i = m; i <= high; i++) { - H[i][j] -= f * ort[i]; - } - } - - for (i = 0; i <= high; i++) { - f = 0; - for (j = high; j >= m; j--) { - f += ort[j] * H[i][j]; - } - - f = f / h; - for (j = m; j <= high; j++) { - H[i][j] -= f * ort[j]; - } - } - - ort[m] = scale * ort[m]; - H[m][m - 1] = scale * g; - } - } - - for (i = 0; i < n; i++) { - for (j = 0; j < n; j++) { - V[i][j] = i === j ? 
1 : 0; - } - } - - for (m = high - 1; m >= low + 1; m--) { - if (H[m][m - 1] !== 0) { - for (i = m + 1; i <= high; i++) { - ort[i] = H[i][m - 1]; - } - - for (j = m; j <= high; j++) { - g = 0; - for (i = m; i <= high; i++) { - g += ort[i] * V[i][j]; - } - - g = g / ort[m] / H[m][m - 1]; - for (i = m; i <= high; i++) { - V[i][j] += g * ort[i]; - } - } - } - } -} - -function hqr2(nn, e, d, V, H) { - var n = nn - 1; - var low = 0; - var high = nn - 1; - var eps = Number.EPSILON; - var exshift = 0; - var norm = 0; - var p = 0; - var q = 0; - var r = 0; - var s = 0; - var z = 0; - var iter = 0; - var i, j, k, l, m, t, w, x, y; - var ra, sa, vr, vi; - var notlast, cdivres; - - for (i = 0; i < nn; i++) { - if (i < low || i > high) { - d[i] = H[i][i]; - e[i] = 0; - } - - for (j = Math.max(i - 1, 0); j < nn; j++) { - norm = norm + Math.abs(H[i][j]); - } - } - - while (n >= low) { - l = n; - while (l > low) { - s = Math.abs(H[l - 1][l - 1]) + Math.abs(H[l][l]); - if (s === 0) { - s = norm; - } - if (Math.abs(H[l][l - 1]) < eps * s) { - break; - } - l--; - } - - if (l === n) { - H[n][n] = H[n][n] + exshift; - d[n] = H[n][n]; - e[n] = 0; - n--; - iter = 0; - } else if (l === n - 1) { - w = H[n][n - 1] * H[n - 1][n]; - p = (H[n - 1][n - 1] - H[n][n]) / 2; - q = p * p + w; - z = Math.sqrt(Math.abs(q)); - H[n][n] = H[n][n] + exshift; - H[n - 1][n - 1] = H[n - 1][n - 1] + exshift; - x = H[n][n]; - - if (q >= 0) { - z = p >= 0 ? p + z : p - z; - d[n - 1] = x + z; - d[n] = d[n - 1]; - if (z !== 0) { - d[n] = x - w / z; - } - e[n - 1] = 0; - e[n] = 0; - x = H[n][n - 1]; - s = Math.abs(x) + Math.abs(z); - p = x / s; - q = z / s; - r = Math.sqrt(p * p + q * q); - p = p / r; - q = q / r; - - for (j = n - 1; j < nn; j++) { - z = H[n - 1][j]; - H[n - 1][j] = q * z + p * H[n][j]; - H[n][j] = q * H[n][j] - p * z; - } - - for (i = 0; i <= n; i++) { - z = H[i][n - 1]; - H[i][n - 1] = q * z + p * H[i][n]; - H[i][n] = q * H[i][n] - p * z; - } - - for (i = low; i <= high; i++) { - z = V[i][n - 1]; - V[i][n - 1] = q * z + p * V[i][n]; - V[i][n] = q * V[i][n] - p * z; - } - } else { - d[n - 1] = x + p; - d[n] = x + p; - e[n - 1] = z; - e[n] = -z; - } - - n = n - 2; - iter = 0; - } else { - x = H[n][n]; - y = 0; - w = 0; - if (l < n) { - y = H[n - 1][n - 1]; - w = H[n][n - 1] * H[n - 1][n]; - } - - if (iter === 10) { - exshift += x; - for (i = low; i <= n; i++) { - H[i][i] -= x; - } - s = Math.abs(H[n][n - 1]) + Math.abs(H[n - 1][n - 2]); - x = y = 0.75 * s; - w = -0.4375 * s * s; - } - - if (iter === 30) { - s = (y - x) / 2; - s = s * s + w; - if (s > 0) { - s = Math.sqrt(s); - if (y < x) { - s = -s; - } - s = x - w / ((y - x) / 2 + s); - for (i = low; i <= n; i++) { - H[i][i] -= s; - } - exshift += s; - x = y = w = 0.964; - } - } - - iter = iter + 1; - - m = n - 2; - while (m >= l) { - z = H[m][m]; - r = x - z; - s = y - z; - p = (r * s - w) / H[m + 1][m] + H[m][m + 1]; - q = H[m + 1][m + 1] - z - r - s; - r = H[m + 2][m + 1]; - s = Math.abs(p) + Math.abs(q) + Math.abs(r); - p = p / s; - q = q / s; - r = r / s; - if (m === l) { - break; - } - if ( - Math.abs(H[m][m - 1]) * (Math.abs(q) + Math.abs(r)) < - eps * - (Math.abs(p) * - (Math.abs(H[m - 1][m - 1]) + - Math.abs(z) + - Math.abs(H[m + 1][m + 1]))) - ) { - break; - } - m--; - } - - for (i = m + 2; i <= n; i++) { - H[i][i - 2] = 0; - if (i > m + 2) { - H[i][i - 3] = 0; - } - } - - for (k = m; k <= n - 1; k++) { - notlast = k !== n - 1; - if (k !== m) { - p = H[k][k - 1]; - q = H[k + 1][k - 1]; - r = notlast ? 
H[k + 2][k - 1] : 0; - x = Math.abs(p) + Math.abs(q) + Math.abs(r); - if (x !== 0) { - p = p / x; - q = q / x; - r = r / x; - } - } - - if (x === 0) { - break; - } - - s = Math.sqrt(p * p + q * q + r * r); - if (p < 0) { - s = -s; - } - - if (s !== 0) { - if (k !== m) { - H[k][k - 1] = -s * x; - } else if (l !== m) { - H[k][k - 1] = -H[k][k - 1]; - } - - p = p + s; - x = p / s; - y = q / s; - z = r / s; - q = q / p; - r = r / p; - - for (j = k; j < nn; j++) { - p = H[k][j] + q * H[k + 1][j]; - if (notlast) { - p = p + r * H[k + 2][j]; - H[k + 2][j] = H[k + 2][j] - p * z; - } - - H[k][j] = H[k][j] - p * x; - H[k + 1][j] = H[k + 1][j] - p * y; - } - - for (i = 0; i <= Math.min(n, k + 3); i++) { - p = x * H[i][k] + y * H[i][k + 1]; - if (notlast) { - p = p + z * H[i][k + 2]; - H[i][k + 2] = H[i][k + 2] - p * r; - } - - H[i][k] = H[i][k] - p; - H[i][k + 1] = H[i][k + 1] - p * q; - } - - for (i = low; i <= high; i++) { - p = x * V[i][k] + y * V[i][k + 1]; - if (notlast) { - p = p + z * V[i][k + 2]; - V[i][k + 2] = V[i][k + 2] - p * r; - } - - V[i][k] = V[i][k] - p; - V[i][k + 1] = V[i][k + 1] - p * q; - } - } - } - } - } - - if (norm === 0) { - return; - } - - for (n = nn - 1; n >= 0; n--) { - p = d[n]; - q = e[n]; - - if (q === 0) { - l = n; - H[n][n] = 1; - for (i = n - 1; i >= 0; i--) { - w = H[i][i] - p; - r = 0; - for (j = l; j <= n; j++) { - r = r + H[i][j] * H[j][n]; - } - - if (e[i] < 0) { - z = w; - s = r; - } else { - l = i; - if (e[i] === 0) { - H[i][n] = w !== 0 ? -r / w : -r / (eps * norm); - } else { - x = H[i][i + 1]; - y = H[i + 1][i]; - q = (d[i] - p) * (d[i] - p) + e[i] * e[i]; - t = (x * s - z * r) / q; - H[i][n] = t; - H[i + 1][n] = - Math.abs(x) > Math.abs(z) ? (-r - w * t) / x : (-s - y * t) / z; - } - - t = Math.abs(H[i][n]); - if (eps * t * t > 1) { - for (j = i; j <= n; j++) { - H[j][n] = H[j][n] / t; - } - } - } - } - } else if (q < 0) { - l = n - 1; - - if (Math.abs(H[n][n - 1]) > Math.abs(H[n - 1][n])) { - H[n - 1][n - 1] = q / H[n][n - 1]; - H[n - 1][n] = -(H[n][n] - p) / H[n][n - 1]; - } else { - cdivres = cdiv(0, -H[n - 1][n], H[n - 1][n - 1] - p, q); - H[n - 1][n - 1] = cdivres[0]; - H[n - 1][n] = cdivres[1]; - } - - H[n][n - 1] = 0; - H[n][n] = 1; - for (i = n - 2; i >= 0; i--) { - ra = 0; - sa = 0; - for (j = l; j <= n; j++) { - ra = ra + H[i][j] * H[j][n - 1]; - sa = sa + H[i][j] * H[j][n]; - } - - w = H[i][i] - p; - - if (e[i] < 0) { - z = w; - r = ra; - s = sa; - } else { - l = i; - if (e[i] === 0) { - cdivres = cdiv(-ra, -sa, w, q); - H[i][n - 1] = cdivres[0]; - H[i][n] = cdivres[1]; - } else { - x = H[i][i + 1]; - y = H[i + 1][i]; - vr = (d[i] - p) * (d[i] - p) + e[i] * e[i] - q * q; - vi = (d[i] - p) * 2 * q; - if (vr === 0 && vi === 0) { - vr = - eps * - norm * - (Math.abs(w) + - Math.abs(q) + - Math.abs(x) + - Math.abs(y) + - Math.abs(z)); - } - cdivres = cdiv( - x * r - z * ra + q * sa, - x * s - z * sa - q * ra, - vr, - vi - ); - H[i][n - 1] = cdivres[0]; - H[i][n] = cdivres[1]; - if (Math.abs(x) > Math.abs(z) + Math.abs(q)) { - H[i + 1][n - 1] = (-ra - w * H[i][n - 1] + q * H[i][n]) / x; - H[i + 1][n] = (-sa - w * H[i][n] - q * H[i][n - 1]) / x; - } else { - cdivres = cdiv(-r - y * H[i][n - 1], -s - y * H[i][n], z, q); - H[i + 1][n - 1] = cdivres[0]; - H[i + 1][n] = cdivres[1]; - } - } - - t = Math.max(Math.abs(H[i][n - 1]), Math.abs(H[i][n])); - if (eps * t * t > 1) { - for (j = i; j <= n; j++) { - H[j][n - 1] = H[j][n - 1] / t; - H[j][n] = H[j][n] / t; - } - } - } - } - } - } - - for (i = 0; i < nn; i++) { - if (i < low || i > high) { - for (j = i; 
j < nn; j++) { - V[i][j] = H[i][j]; - } - } - } - - for (j = nn - 1; j >= low; j--) { - for (i = low; i <= high; i++) { - z = 0; - for (k = low; k <= Math.min(j, high); k++) { - z = z + V[i][k] * H[k][j]; - } - V[i][j] = z; - } - } -} - -function cdiv(xr, xi, yr, yi) { - var r, d; - if (Math.abs(yr) > Math.abs(yi)) { - r = yi / yr; - d = yr + r * yi; - return [(xr + r * xi) / d, (xi - r * xr) / d]; - } else { - r = yr / yi; - d = yi + r * yr; - return [(r * xr + xi) / d, (r * xi - xr) / d]; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/dc/cholesky.js - - -/** - * @class CholeskyDecomposition - * @link https://github.com/lutzroeder/Mapack/blob/master/Source/CholeskyDecomposition.cs - * @param {Matrix} value - */ -class cholesky_CholeskyDecomposition { - constructor(value) { - value = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(value); - if (!value.isSymmetric()) { - throw new Error('Matrix is not symmetric'); - } - - var a = value; - var dimension = a.rows; - var l = new matrix_Matrix(dimension, dimension); - var positiveDefinite = true; - var i, j, k; - - for (j = 0; j < dimension; j++) { - var Lrowj = l[j]; - var d = 0; - for (k = 0; k < j; k++) { - var Lrowk = l[k]; - var s = 0; - for (i = 0; i < k; i++) { - s += Lrowk[i] * Lrowj[i]; - } - Lrowj[k] = s = (a.get(j, k) - s) / l[k][k]; - d = d + s * s; - } - - d = a.get(j, j) - d; - - positiveDefinite &= d > 0; - l[j][j] = Math.sqrt(Math.max(d, 0)); - for (k = j + 1; k < dimension; k++) { - l[j][k] = 0; - } - } - - if (!positiveDefinite) { - throw new Error('Matrix is not positive definite'); - } - - this.L = l; - } - - /** - * - * @param {Matrix} value - * @return {Matrix} - */ - solve(value) { - value = WrapperMatrix2D_WrapperMatrix2D.checkMatrix(value); - - var l = this.L; - var dimension = l.rows; - - if (value.rows !== dimension) { - throw new Error('Matrix dimensions do not match'); - } - - var count = value.columns; - var B = value.clone(); - var i, j, k; - - for (k = 0; k < dimension; k++) { - for (j = 0; j < count; j++) { - for (i = 0; i < k; i++) { - B[k][j] -= B[i][j] * l[k][i]; - } - B[k][j] /= l[k][k]; - } - } - - for (k = dimension - 1; k >= 0; k--) { - for (j = 0; j < count; j++) { - for (i = k + 1; i < dimension; i++) { - B[k][j] -= B[i][j] * l[i][k]; - } - B[k][j] /= l[k][k]; - } - } - - return B; - } - - /** - * - * @return {Matrix} - */ - get lowerTriangularMatrix() { - return this.L; - } -} - -// CONCATENATED MODULE: ./node_modules/ml-matrix/src/index.js -/* concated harmony reexport default */__webpack_require__.d(__webpack_exports__, "default", function() { return matrix_Matrix; }); -/* concated harmony reexport Matrix */__webpack_require__.d(__webpack_exports__, "Matrix", function() { return matrix_Matrix; }); -/* concated harmony reexport abstractMatrix */__webpack_require__.d(__webpack_exports__, "abstractMatrix", function() { return AbstractMatrix; }); -/* concated harmony reexport wrap */__webpack_require__.d(__webpack_exports__, "wrap", function() { return wrap; }); -/* concated harmony reexport WrapperMatrix2D */__webpack_require__.d(__webpack_exports__, "WrapperMatrix2D", function() { return WrapperMatrix2D_WrapperMatrix2D; }); -/* concated harmony reexport WrapperMatrix1D */__webpack_require__.d(__webpack_exports__, "WrapperMatrix1D", function() { return WrapperMatrix1D_WrapperMatrix1D; }); -/* concated harmony reexport solve */__webpack_require__.d(__webpack_exports__, "solve", function() { return solve; }); -/* concated harmony reexport inverse */__webpack_require__.d(__webpack_exports__, 
"inverse", function() { return inverse; }); -/* concated harmony reexport linearDependencies */__webpack_require__.d(__webpack_exports__, "linearDependencies", function() { return linearDependencies; }); -/* concated harmony reexport SingularValueDecomposition */__webpack_require__.d(__webpack_exports__, "SingularValueDecomposition", function() { return svd_SingularValueDecomposition; }); -/* concated harmony reexport SVD */__webpack_require__.d(__webpack_exports__, "SVD", function() { return svd_SingularValueDecomposition; }); -/* concated harmony reexport EigenvalueDecomposition */__webpack_require__.d(__webpack_exports__, "EigenvalueDecomposition", function() { return evd_EigenvalueDecomposition; }); -/* concated harmony reexport EVD */__webpack_require__.d(__webpack_exports__, "EVD", function() { return evd_EigenvalueDecomposition; }); -/* concated harmony reexport CholeskyDecomposition */__webpack_require__.d(__webpack_exports__, "CholeskyDecomposition", function() { return cholesky_CholeskyDecomposition; }); -/* concated harmony reexport CHO */__webpack_require__.d(__webpack_exports__, "CHO", function() { return cholesky_CholeskyDecomposition; }); -/* concated harmony reexport LuDecomposition */__webpack_require__.d(__webpack_exports__, "LuDecomposition", function() { return lu_LuDecomposition; }); -/* concated harmony reexport LU */__webpack_require__.d(__webpack_exports__, "LU", function() { return lu_LuDecomposition; }); -/* concated harmony reexport QrDecomposition */__webpack_require__.d(__webpack_exports__, "QrDecomposition", function() { return qr_QrDecomposition; }); -/* concated harmony reexport QR */__webpack_require__.d(__webpack_exports__, "QR", function() { return qr_QrDecomposition; }); - - - - - - - - - - - - - - - - -/***/ }) -/******/ ]); -}); \ No newline at end of file diff --git a/spaces/merve/uncertainty-calibration/README.md b/spaces/merve/uncertainty-calibration/README.md deleted file mode 100644 index 9797501d00a3d7c793871c35ae4ab786d41a4bdb..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: uncertainty-calibration -emoji: 🪄 -colorFrom: green -colorTo: purple -sdk: static -pinned: false -license: apache-2.0 -app_file: public/uncertainty-calibration/index.html ---- diff --git a/spaces/merve/uncertainty-calibration/source/dataset-worldviews/style.css b/spaces/merve/uncertainty-calibration/source/dataset-worldviews/style.css deleted file mode 100644 index b8cdd4b074388e961c5dd22322a9e056903f2b2c..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/source/dataset-worldviews/style.css +++ /dev/null @@ -1,260 +0,0 @@ -:root { - --shaded-shape-color: #9e9e9e; - --not-shaded-shape-color: white; - --classifier-bg-color: #e6e6e6; -} - -.right { - float: right; -} -.left { - float: left; -} - -.gt-shaded { - fill: var(--shaded-shape-color); - stroke: black; - stroke-width: 1; -} - -.gt-unshaded { - fill: var(--not-shaded-shape-color); - stroke: black; - stroke-width: 1; -} - -.shape-label-group { - opacity: 0; -} -.shape-label-group.visible { - opacity: 100; -} - -.incorrect.is-classified { - stroke-width: 2; - transition: stroke-width 0.5s; - transition-timing-function: cubic-bezier(0, 7, 0, 7); - stroke: #d15830; -} - -.correct.is-classified { - stroke-width: 1; - stroke: green; -} - -.shape-label-rect { - opacity: 50; - fill: white; - stroke: none; -} - -.shape-label-text { - color: black; -} - -.source { - text-decoration: none; - font-size: 10px; 
-} - -.newspaper-image { - width: 450px; -} - -.interface-image { - width: 450px; -} -.summary-text { - opacity: 0; - padding-top: 0px; - padding-bottom: 20px; - text-indent: 50px; -} - -.summary-text.is-classified { - transition: opacity 1000ms; - transition-delay: 2500ms; - opacity: 100; -} - -.classifier { - /* fill:#c2c2c2; - stroke-width: 0;*/ - opacity: 0; -} - -.classifier.is-classified { - transition: opacity 1000ms; - transition-delay: 1500ms; - opacity: 100; - fill: #c2c2c2; - stroke-width: 2; -} - -.classifier-text { - text-anchor: middle; - /*alignment-baseline: central;*/ - font-size: 30px; -} - -.classifier-caption { - width: 800px; - text-align: center; - position: relative; - left: 50%; - margin-left: -400px; - font-size: 12px; - /*right: 50%;*/ -} - -.classifier-bg-shaded { - fill: var(--classifier-bg-color); - stroke-width: 0; -} - -.classifier-bg-unshaded { - fill: var(--classifier-bg-color); -} - -.item-text.invisible { - fill-opacity: 10; -} -.item-text { - fill-opacity: 100; -} - -.explainer-label-text { - padding-left: 2px; - padding-right: 2px; - padding-top: 1px; - padding-bottom: 1px; -} - -mark { - padding-left: 2px; - padding-right: 2px; - padding-top: 1px; - padding-bottom: 1px; - outline: 1px solid #000000; -} - -img.interface { - padding-top: 20px; - padding-right: 20px; - padding-bottom: 20px; - padding-left: 20px; -} - -.classifier-button { - padding: 10px 20px; - text-align: center; - font-family: "Google Sans", sans-serif; - margin-left: 20px; - margin-right: 20px; -} - -.classifer-bg-text { - font-family: "Consolas", "monaco", "monospace"; -} - -.emphasis { - font-weight: 500; -} - -.dropdown { - padding: 8px 7px; - min-width: 200px; - background-color: #f9f9f9; - box-shadow: 0px 8px 16px 0px rgba(0, 0, 0, 0.2); - font-family: "Google Sans", sans-serif; - font-size: 14px; -} - -.fake-dropdown { - padding-top: 10px; - padding-bottom: 10px; - padding-left: 10px; - padding-right: 10px; -} - -.monospace { - font-family: "Consolas", "monaco", "monospace"; - font-size: 14px; - font-weight: 500; -} - -.monospace.shaded { - background-color: var(--shaded-shape-color); - outline: 1px solid #000000; - padding: 1px; - font-size: 14px; -} - -.monospace.not-shaded { - background-color: var(--not-shaded-shape-color); - outline: 1px solid #000000; - padding: 1px; - font-size: 14px; -} - -.classifier-info-blurb { - font-style: italic; - font-size: 11; -} - -.photo-button { - cursor: pointer; -} - -.photo-button rect { - fill: #ffffff; -} - -.photo-button.is-active-button rect { - stroke: #000; -} - -.explainer-button { - cursor: pointer; -} - -.explainer-button rect { - fill: #f9f9f9; - stroke: #000000; -} - -.explainer-button.explainer-active-button rect { - fill: #fefefe; - stroke-width: 3; -} - -.tooltip { - width: 180px; - text-align: center; -} - -.tooltip .correct-row span { - outline: 1px solid red; - padding: 2px; -} - -.tooltip .correct-row.is-correct-tooltip span { - outline: 1px solid green; -} - -#row.row-highlighted { - opacity: 0.2; -} - -.shape-row-unhighlighted { - opacity: 0.2; -} - -.results-table { - text-align: center; -} - -.results-table tr.active { - background-color: var(--classifier-bg-color); - outline: 1px solid; -} diff --git a/spaces/mikebars/huggingface/assets/index-894690ff.js b/spaces/mikebars/huggingface/assets/index-894690ff.js deleted file mode 100644 index db23b433f2ebac9ff0d3f7ce2630c897c092980f..0000000000000000000000000000000000000000 --- a/spaces/mikebars/huggingface/assets/index-894690ff.js +++ /dev/null @@ -1,40 +0,0 @@ -var 
ac=Object.defineProperty;var cc=(e,t,n)=>t in e?ac(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n;var kt=(e,t,n)=>(cc(e,typeof t!="symbol"?t+"":t,n),n);(function(){const t=document.createElement("link").relList;if(t&&t.supports&&t.supports("modulepreload"))return;for(const l of document.querySelectorAll('link[rel="modulepreload"]'))r(l);new MutationObserver(l=>{for(const i of l)if(i.type==="childList")for(const u of i.addedNodes)u.tagName==="LINK"&&u.rel==="modulepreload"&&r(u)}).observe(document,{childList:!0,subtree:!0});function n(l){const i={};return l.integrity&&(i.integrity=l.integrity),l.referrerPolicy&&(i.referrerPolicy=l.referrerPolicy),l.crossOrigin==="use-credentials"?i.credentials="include":l.crossOrigin==="anonymous"?i.credentials="omit":i.credentials="same-origin",i}function r(l){if(l.ep)return;l.ep=!0;const i=n(l);fetch(l.href,i)}})();var Ir={},fc={get exports(){return Ir},set exports(e){Ir=e}},il={},ee={},dc={get exports(){return ee},set exports(e){ee=e}},T={};/** - * @license React - * react.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var qn=Symbol.for("react.element"),pc=Symbol.for("react.portal"),mc=Symbol.for("react.fragment"),hc=Symbol.for("react.strict_mode"),vc=Symbol.for("react.profiler"),yc=Symbol.for("react.provider"),gc=Symbol.for("react.context"),wc=Symbol.for("react.forward_ref"),kc=Symbol.for("react.suspense"),Sc=Symbol.for("react.memo"),Ec=Symbol.for("react.lazy"),Au=Symbol.iterator;function xc(e){return e===null||typeof e!="object"?null:(e=Au&&e[Au]||e["@@iterator"],typeof e=="function"?e:null)}var Jo={isMounted:function(){return!1},enqueueForceUpdate:function(){},enqueueReplaceState:function(){},enqueueSetState:function(){}},bo=Object.assign,es={};function sn(e,t,n){this.props=e,this.context=t,this.refs=es,this.updater=n||Jo}sn.prototype.isReactComponent={};sn.prototype.setState=function(e,t){if(typeof e!="object"&&typeof e!="function"&&e!=null)throw Error("setState(...): takes an object of state variables to update or a function which returns an object of state variables.");this.updater.enqueueSetState(this,e,t,"setState")};sn.prototype.forceUpdate=function(e){this.updater.enqueueForceUpdate(this,e,"forceUpdate")};function ts(){}ts.prototype=sn.prototype;function Qi(e,t,n){this.props=e,this.context=t,this.refs=es,this.updater=n||Jo}var Ki=Qi.prototype=new ts;Ki.constructor=Qi;bo(Ki,sn.prototype);Ki.isPureReactComponent=!0;var Vu=Array.isArray,ns=Object.prototype.hasOwnProperty,Xi={current:null},rs={key:!0,ref:!0,__self:!0,__source:!0};function ls(e,t,n){var r,l={},i=null,u=null;if(t!=null)for(r in t.ref!==void 0&&(u=t.ref),t.key!==void 0&&(i=""+t.key),t)ns.call(t,r)&&!rs.hasOwnProperty(r)&&(l[r]=t[r]);var o=arguments.length-2;if(o===1)l.children=n;else if(1]+)>;\s+rel="([^"]+)"/g;return Object.fromEntries([...e.matchAll(t)].map(([,n,r])=>[r,n]))}var $c=["pipeline_tag","private","gated","downloads","likes"];async function*Ac(e){var r,l;Fc(e==null?void 0:e.credentials);const t=new URLSearchParams([...Object.entries({limit:"500",...(r=e==null?void 0:e.search)!=null&&r.owner?{author:e.search.owner}:void 0,...(l=e==null?void 0:e.search)!=null&&l.task?{pipeline_tag:e.search.task}:void 0}),...$c.map(i=>["expand",i])]).toString();let n=`${(e==null?void 0:e.hubUrl)||Mc}/api/models?${t}`;for(;n;){const i=await 
fetch(n,{headers:{accept:"application/json",...e!=null&&e.credentials?{Authorization:`Bearer ${e.credentials.accessToken}`}:void 0}});if(!i.ok)throw Dc(i);const u=await i.json();for(const s of u)yield{id:s._id,name:s.id,private:s.private,task:s.pipeline_tag,downloads:s.downloads,gated:s.gated,likes:s.likes,updatedAt:new Date(s.lastModified)};const o=i.headers.get("Link");n=o?Uc(o).next:void 0}}function Hu(e){return Array.isArray(e)?e:[e]}var Vc=class{constructor(e="",t={}){kt(this,"apiKey");kt(this,"defaultOptions");this.apiKey=e,this.defaultOptions=t}async fillMask(e,t){return this.request(e,t)}async summarization(e,t){var n;return(n=await this.request(e,t))==null?void 0:n[0]}async questionAnswer(e,t){return await this.request(e,t)}async tableQuestionAnswer(e,t){return await this.request(e,t)}async textClassification(e,t){var n;return(n=await this.request(e,t))==null?void 0:n[0]}async textGeneration(e,t){var n;return(n=await this.request(e,t))==null?void 0:n[0]}async tokenClassification(e,t){return Hu(await this.request(e,t))}async translation(e,t){var n;return(n=await this.request(e,t))==null?void 0:n[0]}async zeroShotClassification(e,t){return Hu(await this.request(e,t))}async conversational(e,t){return await this.request(e,t)}async featureExtraction(e,t){return await this.request(e,t)}async automaticSpeechRecognition(e,t){return await this.request(e,{...t,binary:!0})}async audioClassification(e,t){return await this.request(e,{...t,binary:!0})}async imageClassification(e,t){return await this.request(e,{...t,binary:!0})}async objectDetection(e,t){return await this.request(e,{...t,binary:!0})}async imageSegmentation(e,t){return await this.request(e,{...t,binary:!0})}async textToImage(e,t){return await this.request(e,{...t,blob:!0})}async request(e,t){const n={...this.defaultOptions,...t},{model:r,...l}=e,i={};this.apiKey&&(i.Authorization=`Bearer ${this.apiKey}`),t!=null&&t.binary||(i["Content-Type"]="application/json"),t!=null&&t.binary&&(n.wait_for_model&&(i["X-Wait-For-Model"]="true"),n.use_cache===!1&&(i["X-Use-Cache"]="false"),n.dont_load_model&&(i["X-Load-Model"]="0"));const u=await fetch(`https://api-inference.huggingface.co/models/${r}`,{headers:i,method:"POST",body:t!=null&&t.binary?e.data:JSON.stringify({...l,options:n}),credentials:t!=null&&t.includeCredentials?"include":"same-origin"});if(n.retry_on_error!==!1&&u.status===503&&!n.wait_for_model)return this.request(e,{...n,wait_for_model:!0});if(t!=null&&t.blob){if(!u.ok)throw new Error("An error occurred while fetching the blob");return await u.blob()}const o=await u.json();if(o.error)throw new Error(o.error);return o}},Mr=function(){return Mr=Object.assign||function(t){for(var n,r=1,l=arguments.length;r0&&n>="0"&&n<="9"?"_"+n+r:""+n.toUpperCase()+r}function Kc(e,t){return t===void 0&&(t={}),Qc(e,Mr({delimiter:"",transform:us},t))}function Xc(e,t){return t===0?e.toLowerCase():us(e,t)}function Yc(e,t){return t===void 0&&(t={}),Kc(e,Mr({transform:Xc},t))}var ql={},Gc={get exports(){return ql},set exports(e){ql=e}},ke={},Jl={},Zc={get exports(){return Jl},set exports(e){Jl=e}},os={};/** - * @license React - * scheduler.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */(function(e){function t(x,P){var z=x.length;x.push(P);e:for(;0>>1,G=x[W];if(0>>1;Wl(xl,z))wtl(rr,xl)?(x[W]=rr,x[wt]=z,W=wt):(x[W]=xl,x[gt]=z,W=gt);else if(wtl(rr,z))x[W]=rr,x[wt]=z,W=wt;else break e}}return P}function l(x,P){var z=x.sortIndex-P.sortIndex;return z!==0?z:x.id-P.id}if(typeof performance=="object"&&typeof performance.now=="function"){var i=performance;e.unstable_now=function(){return i.now()}}else{var u=Date,o=u.now();e.unstable_now=function(){return u.now()-o}}var s=[],c=[],h=1,m=null,p=3,g=!1,w=!1,k=!1,F=typeof setTimeout=="function"?setTimeout:null,f=typeof clearTimeout=="function"?clearTimeout:null,a=typeof setImmediate<"u"?setImmediate:null;typeof navigator<"u"&&navigator.scheduling!==void 0&&navigator.scheduling.isInputPending!==void 0&&navigator.scheduling.isInputPending.bind(navigator.scheduling);function d(x){for(var P=n(c);P!==null;){if(P.callback===null)r(c);else if(P.startTime<=x)r(c),P.sortIndex=P.expirationTime,t(s,P);else break;P=n(c)}}function v(x){if(k=!1,d(x),!w)if(n(s)!==null)w=!0,Sl(E);else{var P=n(c);P!==null&&El(v,P.startTime-x)}}function E(x,P){w=!1,k&&(k=!1,f(N),N=-1),g=!0;var z=p;try{for(d(P),m=n(s);m!==null&&(!(m.expirationTime>P)||x&&!ze());){var W=m.callback;if(typeof W=="function"){m.callback=null,p=m.priorityLevel;var G=W(m.expirationTime<=P);P=e.unstable_now(),typeof G=="function"?m.callback=G:m===n(s)&&r(s),d(P)}else r(s);m=n(s)}if(m!==null)var nr=!0;else{var gt=n(c);gt!==null&&El(v,gt.startTime-P),nr=!1}return nr}finally{m=null,p=z,g=!1}}var C=!1,_=null,N=-1,H=5,O=-1;function ze(){return!(e.unstable_now()-Ox||125W?(x.sortIndex=z,t(c,x),n(s)===null&&x===n(c)&&(k?(f(N),N=-1):k=!0,El(v,z-W))):(x.sortIndex=G,t(s,x),w||g||(w=!0,Sl(E))),x},e.unstable_shouldYield=ze,e.unstable_wrapCallback=function(x){var P=p;return function(){var z=p;p=P;try{return x.apply(this,arguments)}finally{p=z}}}})(os);(function(e){e.exports=os})(Zc);/** - * @license React - * react-dom.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */var ss=ee,we=Jl;function y(e){for(var t="https://reactjs.org/docs/error-decoder.html?invariant="+e,n=1;n"u"||typeof window.document>"u"||typeof window.document.createElement>"u"),bl=Object.prototype.hasOwnProperty,qc=/^[:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD][:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD\-.0-9\u00B7\u0300-\u036F\u203F-\u2040]*$/,Qu={},Ku={};function Jc(e){return bl.call(Ku,e)?!0:bl.call(Qu,e)?!1:qc.test(e)?Ku[e]=!0:(Qu[e]=!0,!1)}function bc(e,t,n,r){if(n!==null&&n.type===0)return!1;switch(typeof t){case"function":case"symbol":return!0;case"boolean":return r?!1:n!==null?!n.acceptsBooleans:(e=e.toLowerCase().slice(0,5),e!=="data-"&&e!=="aria-");default:return!1}}function ef(e,t,n,r){if(t===null||typeof t>"u"||bc(e,t,n,r))return!0;if(r)return!1;if(n!==null)switch(n.type){case 3:return!t;case 4:return t===!1;case 5:return isNaN(t);case 6:return isNaN(t)||1>t}return!1}function ce(e,t,n,r,l,i,u){this.acceptsBooleans=t===2||t===3||t===4,this.attributeName=r,this.attributeNamespace=l,this.mustUseProperty=n,this.propertyName=e,this.type=t,this.sanitizeURL=i,this.removeEmptyString=u}var ne={};"children dangerouslySetInnerHTML defaultValue defaultChecked innerHTML suppressContentEditableWarning suppressHydrationWarning style".split(" ").forEach(function(e){ne[e]=new ce(e,0,!1,e,null,!1,!1)});[["acceptCharset","accept-charset"],["className","class"],["htmlFor","for"],["httpEquiv","http-equiv"]].forEach(function(e){var t=e[0];ne[t]=new ce(t,1,!1,e[1],null,!1,!1)});["contentEditable","draggable","spellCheck","value"].forEach(function(e){ne[e]=new ce(e,2,!1,e.toLowerCase(),null,!1,!1)});["autoReverse","externalResourcesRequired","focusable","preserveAlpha"].forEach(function(e){ne[e]=new ce(e,2,!1,e,null,!1,!1)});"allowFullScreen async autoFocus autoPlay controls default defer disabled disablePictureInPicture disableRemotePlayback formNoValidate hidden loop noModule noValidate open playsInline readOnly required reversed scoped seamless itemScope".split(" ").forEach(function(e){ne[e]=new ce(e,3,!1,e.toLowerCase(),null,!1,!1)});["checked","multiple","muted","selected"].forEach(function(e){ne[e]=new ce(e,3,!0,e,null,!1,!1)});["capture","download"].forEach(function(e){ne[e]=new ce(e,4,!1,e,null,!1,!1)});["cols","rows","size","span"].forEach(function(e){ne[e]=new ce(e,6,!1,e,null,!1,!1)});["rowSpan","start"].forEach(function(e){ne[e]=new ce(e,5,!1,e.toLowerCase(),null,!1,!1)});var Gi=/[\-:]([a-z])/g;function Zi(e){return e[1].toUpperCase()}"accent-height alignment-baseline arabic-form baseline-shift cap-height clip-path clip-rule color-interpolation color-interpolation-filters color-profile color-rendering dominant-baseline enable-background fill-opacity fill-rule flood-color flood-opacity font-family font-size font-size-adjust font-stretch font-style font-variant font-weight glyph-name glyph-orientation-horizontal glyph-orientation-vertical horiz-adv-x horiz-origin-x image-rendering letter-spacing lighting-color marker-end marker-mid marker-start overline-position overline-thickness paint-order panose-1 pointer-events rendering-intent shape-rendering stop-color stop-opacity strikethrough-position strikethrough-thickness stroke-dasharray stroke-dashoffset stroke-linecap stroke-linejoin stroke-miterlimit stroke-opacity stroke-width text-anchor text-decoration text-rendering 
underline-position underline-thickness unicode-bidi unicode-range units-per-em v-alphabetic v-hanging v-ideographic v-mathematical vector-effect vert-adv-y vert-origin-x vert-origin-y word-spacing writing-mode xmlns:xlink x-height".split(" ").forEach(function(e){var t=e.replace(Gi,Zi);ne[t]=new ce(t,1,!1,e,null,!1,!1)});"xlink:actuate xlink:arcrole xlink:role xlink:show xlink:title xlink:type".split(" ").forEach(function(e){var t=e.replace(Gi,Zi);ne[t]=new ce(t,1,!1,e,"http://www.w3.org/1999/xlink",!1,!1)});["xml:base","xml:lang","xml:space"].forEach(function(e){var t=e.replace(Gi,Zi);ne[t]=new ce(t,1,!1,e,"http://www.w3.org/XML/1998/namespace",!1,!1)});["tabIndex","crossOrigin"].forEach(function(e){ne[e]=new ce(e,1,!1,e.toLowerCase(),null,!1,!1)});ne.xlinkHref=new ce("xlinkHref",1,!1,"xlink:href","http://www.w3.org/1999/xlink",!0,!1);["src","href","action","formAction"].forEach(function(e){ne[e]=new ce(e,1,!1,e.toLowerCase(),null,!0,!0)});function qi(e,t,n,r){var l=ne.hasOwnProperty(t)?ne[t]:null;(l!==null?l.type!==0:r||!(2o||l[u]!==i[o]){var s=` -`+l[u].replace(" at new "," at ");return e.displayName&&s.includes("")&&(s=s.replace("",e.displayName)),s}while(1<=u&&0<=o);break}}}finally{Nl=!1,Error.prepareStackTrace=n}return(e=e?e.displayName||e.name:"")?Sn(e):""}function tf(e){switch(e.tag){case 5:return Sn(e.type);case 16:return Sn("Lazy");case 13:return Sn("Suspense");case 19:return Sn("SuspenseList");case 0:case 2:case 15:return e=Pl(e.type,!1),e;case 11:return e=Pl(e.type.render,!1),e;case 1:return e=Pl(e.type,!0),e;default:return""}}function ri(e){if(e==null)return null;if(typeof e=="function")return e.displayName||e.name||null;if(typeof e=="string")return e;switch(e){case Ft:return"Fragment";case jt:return"Portal";case ei:return"Profiler";case Ji:return"StrictMode";case ti:return"Suspense";case ni:return"SuspenseList"}if(typeof e=="object")switch(e.$$typeof){case fs:return(e.displayName||"Context")+".Consumer";case cs:return(e._context.displayName||"Context")+".Provider";case bi:var t=e.render;return e=e.displayName,e||(e=t.displayName||t.name||"",e=e!==""?"ForwardRef("+e+")":"ForwardRef"),e;case eu:return t=e.displayName||null,t!==null?t:ri(e.type)||"Memo";case be:t=e._payload,e=e._init;try{return ri(e(t))}catch{}}return null}function nf(e){var t=e.type;switch(e.tag){case 24:return"Cache";case 9:return(t.displayName||"Context")+".Consumer";case 10:return(t._context.displayName||"Context")+".Provider";case 18:return"DehydratedFragment";case 11:return e=t.render,e=e.displayName||e.name||"",t.displayName||(e!==""?"ForwardRef("+e+")":"ForwardRef");case 7:return"Fragment";case 5:return t;case 4:return"Portal";case 3:return"Root";case 6:return"Text";case 16:return ri(t);case 8:return t===Ji?"StrictMode":"Mode";case 22:return"Offscreen";case 12:return"Profiler";case 21:return"Scope";case 13:return"Suspense";case 19:return"SuspenseList";case 25:return"TracingMarker";case 1:case 0:case 17:case 2:case 14:case 15:if(typeof t=="function")return t.displayName||t.name||null;if(typeof t=="string")return t}return null}function pt(e){switch(typeof e){case"boolean":case"number":case"string":case"undefined":return e;case"object":return e;default:return""}}function ps(e){var t=e.type;return(e=e.nodeName)&&e.toLowerCase()==="input"&&(t==="checkbox"||t==="radio")}function rf(e){var t=ps(e)?"checked":"value",n=Object.getOwnPropertyDescriptor(e.constructor.prototype,t),r=""+e[t];if(!e.hasOwnProperty(t)&&typeof n<"u"&&typeof n.get=="function"&&typeof n.set=="function"){var l=n.get,i=n.set;return 
Object.defineProperty(e,t,{configurable:!0,get:function(){return l.call(this)},set:function(u){r=""+u,i.call(this,u)}}),Object.defineProperty(e,t,{enumerable:n.enumerable}),{getValue:function(){return r},setValue:function(u){r=""+u},stopTracking:function(){e._valueTracker=null,delete e[t]}}}}function ur(e){e._valueTracker||(e._valueTracker=rf(e))}function ms(e){if(!e)return!1;var t=e._valueTracker;if(!t)return!0;var n=t.getValue(),r="";return e&&(r=ps(e)?e.checked?"true":"false":e.value),e=r,e!==n?(t.setValue(e),!0):!1}function Dr(e){if(e=e||(typeof document<"u"?document:void 0),typeof e>"u")return null;try{return e.activeElement||e.body}catch{return e.body}}function li(e,t){var n=t.checked;return V({},t,{defaultChecked:void 0,defaultValue:void 0,value:void 0,checked:n??e._wrapperState.initialChecked})}function Yu(e,t){var n=t.defaultValue==null?"":t.defaultValue,r=t.checked!=null?t.checked:t.defaultChecked;n=pt(t.value!=null?t.value:n),e._wrapperState={initialChecked:r,initialValue:n,controlled:t.type==="checkbox"||t.type==="radio"?t.checked!=null:t.value!=null}}function hs(e,t){t=t.checked,t!=null&&qi(e,"checked",t,!1)}function ii(e,t){hs(e,t);var n=pt(t.value),r=t.type;if(n!=null)r==="number"?(n===0&&e.value===""||e.value!=n)&&(e.value=""+n):e.value!==""+n&&(e.value=""+n);else if(r==="submit"||r==="reset"){e.removeAttribute("value");return}t.hasOwnProperty("value")?ui(e,t.type,n):t.hasOwnProperty("defaultValue")&&ui(e,t.type,pt(t.defaultValue)),t.checked==null&&t.defaultChecked!=null&&(e.defaultChecked=!!t.defaultChecked)}function Gu(e,t,n){if(t.hasOwnProperty("value")||t.hasOwnProperty("defaultValue")){var r=t.type;if(!(r!=="submit"&&r!=="reset"||t.value!==void 0&&t.value!==null))return;t=""+e._wrapperState.initialValue,n||t===e.value||(e.value=t),e.defaultValue=t}n=e.name,n!==""&&(e.name=""),e.defaultChecked=!!e._wrapperState.initialChecked,n!==""&&(e.name=n)}function ui(e,t,n){(t!=="number"||Dr(e.ownerDocument)!==e)&&(n==null?e.defaultValue=""+e._wrapperState.initialValue:e.defaultValue!==""+n&&(e.defaultValue=""+n))}var En=Array.isArray;function Yt(e,t,n,r){if(e=e.options,t){t={};for(var l=0;l"+t.valueOf().toString()+"",t=or.firstChild;e.firstChild;)e.removeChild(e.firstChild);for(;t.firstChild;)e.appendChild(t.firstChild)}});function Dn(e,t){if(t){var n=e.firstChild;if(n&&n===e.lastChild&&n.nodeType===3){n.nodeValue=t;return}}e.textContent=t}var _n={animationIterationCount:!0,aspectRatio:!0,borderImageOutset:!0,borderImageSlice:!0,borderImageWidth:!0,boxFlex:!0,boxFlexGroup:!0,boxOrdinalGroup:!0,columnCount:!0,columns:!0,flex:!0,flexGrow:!0,flexPositive:!0,flexShrink:!0,flexNegative:!0,flexOrder:!0,gridArea:!0,gridRow:!0,gridRowEnd:!0,gridRowSpan:!0,gridRowStart:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnSpan:!0,gridColumnStart:!0,fontWeight:!0,lineClamp:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,tabSize:!0,widows:!0,zIndex:!0,zoom:!0,fillOpacity:!0,floodOpacity:!0,stopOpacity:!0,strokeDasharray:!0,strokeDashoffset:!0,strokeMiterlimit:!0,strokeOpacity:!0,strokeWidth:!0},lf=["Webkit","ms","Moz","O"];Object.keys(_n).forEach(function(e){lf.forEach(function(t){t=t+e.charAt(0).toUpperCase()+e.substring(1),_n[t]=_n[e]})});function ws(e,t,n){return t==null||typeof t=="boolean"||t===""?"":n||typeof t!="number"||t===0||_n.hasOwnProperty(e)&&_n[e]?(""+t).trim():t+"px"}function ks(e,t){e=e.style;for(var n in t)if(t.hasOwnProperty(n)){var r=n.indexOf("--")===0,l=ws(n,t[n],r);n==="float"&&(n="cssFloat"),r?e.setProperty(n,l):e[n]=l}}var 
uf=V({menuitem:!0},{area:!0,base:!0,br:!0,col:!0,embed:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0});function ai(e,t){if(t){if(uf[e]&&(t.children!=null||t.dangerouslySetInnerHTML!=null))throw Error(y(137,e));if(t.dangerouslySetInnerHTML!=null){if(t.children!=null)throw Error(y(60));if(typeof t.dangerouslySetInnerHTML!="object"||!("__html"in t.dangerouslySetInnerHTML))throw Error(y(61))}if(t.style!=null&&typeof t.style!="object")throw Error(y(62))}}function ci(e,t){if(e.indexOf("-")===-1)return typeof t.is=="string";switch(e){case"annotation-xml":case"color-profile":case"font-face":case"font-face-src":case"font-face-uri":case"font-face-format":case"font-face-name":case"missing-glyph":return!1;default:return!0}}var fi=null;function tu(e){return e=e.target||e.srcElement||window,e.correspondingUseElement&&(e=e.correspondingUseElement),e.nodeType===3?e.parentNode:e}var di=null,Gt=null,Zt=null;function Ju(e){if(e=er(e)){if(typeof di!="function")throw Error(y(280));var t=e.stateNode;t&&(t=cl(t),di(e.stateNode,e.type,t))}}function Ss(e){Gt?Zt?Zt.push(e):Zt=[e]:Gt=e}function Es(){if(Gt){var e=Gt,t=Zt;if(Zt=Gt=null,Ju(e),t)for(e=0;e>>=0,e===0?32:31-(yf(e)/gf|0)|0}var sr=64,ar=4194304;function xn(e){switch(e&-e){case 1:return 1;case 2:return 2;case 4:return 4;case 8:return 8;case 16:return 16;case 32:return 32;case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:return e&4194240;case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:return e&130023424;case 134217728:return 134217728;case 268435456:return 268435456;case 536870912:return 536870912;case 1073741824:return 1073741824;default:return e}}function $r(e,t){var n=e.pendingLanes;if(n===0)return 0;var r=0,l=e.suspendedLanes,i=e.pingedLanes,u=n&268435455;if(u!==0){var o=u&~l;o!==0?r=xn(o):(i&=u,i!==0&&(r=xn(i)))}else u=n&~l,u!==0?r=xn(u):i!==0&&(r=xn(i));if(r===0)return 0;if(t!==0&&t!==r&&!(t&l)&&(l=r&-r,i=t&-t,l>=i||l===16&&(i&4194240)!==0))return t;if(r&4&&(r|=n&16),t=e.entangledLanes,t!==0)for(e=e.entanglements,t&=r;0n;n++)t.push(e);return t}function Jn(e,t,n){e.pendingLanes|=t,t!==536870912&&(e.suspendedLanes=0,e.pingedLanes=0),e=e.eventTimes,t=31-Ie(t),e[t]=n}function Ef(e,t){var n=e.pendingLanes&~t;e.pendingLanes=t,e.suspendedLanes=0,e.pingedLanes=0,e.expiredLanes&=t,e.mutableReadLanes&=t,e.entangledLanes&=t,t=e.entanglements;var r=e.eventTimes;for(e=e.expirationTimes;0=Pn),oo=String.fromCharCode(32),so=!1;function Bs(e,t){switch(e){case"keyup":return Zf.indexOf(t.keyCode)!==-1;case"keydown":return t.keyCode!==229;case"keypress":case"mousedown":case"focusout":return!0;default:return!1}}function Hs(e){return e=e.detail,typeof e=="object"&&"data"in e?e.data:null}var Ut=!1;function Jf(e,t){switch(e){case"compositionend":return Hs(t);case"keypress":return t.which!==32?null:(so=!0,oo);case"textInput":return e=t.data,e===oo&&so?null:e;default:return null}}function bf(e,t){if(Ut)return e==="compositionend"||!au&&Bs(e,t)?(e=As(),Cr=uu=rt=null,Ut=!1,e):null;switch(e){case"paste":return null;case"keypress":if(!(t.ctrlKey||t.altKey||t.metaKey)||t.ctrlKey&&t.altKey){if(t.char&&1=t)return{node:n,offset:t-e};e=r}e:{for(;n;){if(n.nextSibling){n=n.nextSibling;break e}n=n.parentNode}n=void 0}n=po(n)}}function Xs(e,t){return e&&t?e===t?!0:e&&e.nodeType===3?!1:t&&t.nodeType===3?Xs(e,t.parentNode):"contains"in 
e?e.contains(t):e.compareDocumentPosition?!!(e.compareDocumentPosition(t)&16):!1:!1}function Ys(){for(var e=window,t=Dr();t instanceof e.HTMLIFrameElement;){try{var n=typeof t.contentWindow.location.href=="string"}catch{n=!1}if(n)e=t.contentWindow;else break;t=Dr(e.document)}return t}function cu(e){var t=e&&e.nodeName&&e.nodeName.toLowerCase();return t&&(t==="input"&&(e.type==="text"||e.type==="search"||e.type==="tel"||e.type==="url"||e.type==="password")||t==="textarea"||e.contentEditable==="true")}function sd(e){var t=Ys(),n=e.focusedElem,r=e.selectionRange;if(t!==n&&n&&n.ownerDocument&&Xs(n.ownerDocument.documentElement,n)){if(r!==null&&cu(n)){if(t=r.start,e=r.end,e===void 0&&(e=t),"selectionStart"in n)n.selectionStart=t,n.selectionEnd=Math.min(e,n.value.length);else if(e=(t=n.ownerDocument||document)&&t.defaultView||window,e.getSelection){e=e.getSelection();var l=n.textContent.length,i=Math.min(r.start,l);r=r.end===void 0?i:Math.min(r.end,l),!e.extend&&i>r&&(l=r,r=i,i=l),l=mo(n,i);var u=mo(n,r);l&&u&&(e.rangeCount!==1||e.anchorNode!==l.node||e.anchorOffset!==l.offset||e.focusNode!==u.node||e.focusOffset!==u.offset)&&(t=t.createRange(),t.setStart(l.node,l.offset),e.removeAllRanges(),i>r?(e.addRange(t),e.extend(u.node,u.offset)):(t.setEnd(u.node,u.offset),e.addRange(t)))}}for(t=[],e=n;e=e.parentNode;)e.nodeType===1&&t.push({element:e,left:e.scrollLeft,top:e.scrollTop});for(typeof n.focus=="function"&&n.focus(),n=0;n=document.documentMode,$t=null,gi=null,Ln=null,wi=!1;function ho(e,t,n){var r=n.window===n?n.document:n.nodeType===9?n:n.ownerDocument;wi||$t==null||$t!==Dr(r)||(r=$t,"selectionStart"in r&&cu(r)?r={start:r.selectionStart,end:r.selectionEnd}:(r=(r.ownerDocument&&r.ownerDocument.defaultView||window).getSelection(),r={anchorNode:r.anchorNode,anchorOffset:r.anchorOffset,focusNode:r.focusNode,focusOffset:r.focusOffset}),Ln&&Vn(Ln,r)||(Ln=r,r=Br(gi,"onSelect"),0Bt||(e.current=_i[Bt],_i[Bt]=null,Bt--)}function M(e,t){Bt++,_i[Bt]=e.current,e.current=t}var mt={},ue=vt(mt),pe=vt(!1),zt=mt;function tn(e,t){var n=e.type.contextTypes;if(!n)return mt;var r=e.stateNode;if(r&&r.__reactInternalMemoizedUnmaskedChildContext===t)return r.__reactInternalMemoizedMaskedChildContext;var l={},i;for(i in n)l[i]=t[i];return r&&(e=e.stateNode,e.__reactInternalMemoizedUnmaskedChildContext=t,e.__reactInternalMemoizedMaskedChildContext=l),l}function me(e){return e=e.childContextTypes,e!=null}function Wr(){j(pe),j(ue)}function Eo(e,t,n){if(ue.current!==mt)throw Error(y(168));M(ue,t),M(pe,n)}function ra(e,t,n){var r=e.stateNode;if(t=t.childContextTypes,typeof r.getChildContext!="function")return n;r=r.getChildContext();for(var l in r)if(!(l in t))throw Error(y(108,nf(e)||"Unknown",l));return V({},n,r)}function Qr(e){return e=(e=e.stateNode)&&e.__reactInternalMemoizedMergedChildContext||mt,zt=ue.current,M(ue,e),M(pe,pe.current),!0}function xo(e,t,n){var r=e.stateNode;if(!r)throw Error(y(169));n?(e=ra(e,t,zt),r.__reactInternalMemoizedMergedChildContext=e,j(pe),j(ue),M(ue,e)):j(pe),M(pe,n)}var He=null,fl=!1,Vl=!1;function la(e){He===null?He=[e]:He.push(e)}function kd(e){fl=!0,la(e)}function yt(){if(!Vl&&He!==null){Vl=!0;var e=0,t=I;try{var n=He;for(I=1;e>=u,l-=u,We=1<<32-Ie(t)+l|n<N?(H=_,_=null):H=_.sibling;var O=p(f,_,d[N],v);if(O===null){_===null&&(_=H);break}e&&_&&O.alternate===null&&t(f,_),a=i(O,a,N),C===null?E=O:C.sibling=O,C=O,_=H}if(N===d.length)return n(f,_),U&&St(f,N),E;if(_===null){for(;NN?(H=_,_=null):H=_.sibling;var 
ze=p(f,_,O.value,v);if(ze===null){_===null&&(_=H);break}e&&_&&ze.alternate===null&&t(f,_),a=i(ze,a,N),C===null?E=ze:C.sibling=ze,C=ze,_=H}if(O.done)return n(f,_),U&&St(f,N),E;if(_===null){for(;!O.done;N++,O=d.next())O=m(f,O.value,v),O!==null&&(a=i(O,a,N),C===null?E=O:C.sibling=O,C=O);return U&&St(f,N),E}for(_=r(f,_);!O.done;N++,O=d.next())O=g(_,f,N,O.value,v),O!==null&&(e&&O.alternate!==null&&_.delete(O.key===null?N:O.key),a=i(O,a,N),C===null?E=O:C.sibling=O,C=O);return e&&_.forEach(function(fn){return t(f,fn)}),U&&St(f,N),E}function F(f,a,d,v){if(typeof d=="object"&&d!==null&&d.type===Ft&&d.key===null&&(d=d.props.children),typeof d=="object"&&d!==null){switch(d.$$typeof){case ir:e:{for(var E=d.key,C=a;C!==null;){if(C.key===E){if(E=d.type,E===Ft){if(C.tag===7){n(f,C.sibling),a=l(C,d.props.children),a.return=f,f=a;break e}}else if(C.elementType===E||typeof E=="object"&&E!==null&&E.$$typeof===be&&To(E)===C.type){n(f,C.sibling),a=l(C,d.props),a.ref=gn(f,C,d),a.return=f,f=a;break e}n(f,C);break}else t(f,C);C=C.sibling}d.type===Ft?(a=Pt(d.props.children,f.mode,v,d.key),a.return=f,f=a):(v=Rr(d.type,d.key,d.props,null,f.mode,v),v.ref=gn(f,a,d),v.return=f,f=v)}return u(f);case jt:e:{for(C=d.key;a!==null;){if(a.key===C)if(a.tag===4&&a.stateNode.containerInfo===d.containerInfo&&a.stateNode.implementation===d.implementation){n(f,a.sibling),a=l(a,d.children||[]),a.return=f,f=a;break e}else{n(f,a);break}else t(f,a);a=a.sibling}a=Gl(d,f.mode,v),a.return=f,f=a}return u(f);case be:return C=d._init,F(f,a,C(d._payload),v)}if(En(d))return w(f,a,d,v);if(pn(d))return k(f,a,d,v);vr(f,d)}return typeof d=="string"&&d!==""||typeof d=="number"?(d=""+d,a!==null&&a.tag===6?(n(f,a.sibling),a=l(a,d),a.return=f,f=a):(n(f,a),a=Yl(d,f.mode,v),a.return=f,f=a),u(f)):n(f,a)}return F}var rn=da(!0),pa=da(!1),tr={},Ve=vt(tr),Qn=vt(tr),Kn=vt(tr);function _t(e){if(e===tr)throw Error(y(174));return e}function wu(e,t){switch(M(Kn,t),M(Qn,e),M(Ve,tr),e=t.nodeType,e){case 9:case 11:t=(t=t.documentElement)?t.namespaceURI:si(null,"");break;default:e=e===8?t.parentNode:t,t=e.namespaceURI||null,e=e.tagName,t=si(t,e)}j(Ve),M(Ve,t)}function ln(){j(Ve),j(Qn),j(Kn)}function ma(e){_t(Kn.current);var t=_t(Ve.current),n=si(t,e.type);t!==n&&(M(Qn,e),M(Ve,n))}function ku(e){Qn.current===e&&(j(Ve),j(Qn))}var $=vt(0);function qr(e){for(var t=e;t!==null;){if(t.tag===13){var n=t.memoizedState;if(n!==null&&(n=n.dehydrated,n===null||n.data==="$?"||n.data==="$!"))return t}else if(t.tag===19&&t.memoizedProps.revealOrder!==void 0){if(t.flags&128)return t}else if(t.child!==null){t.child.return=t,t=t.child;continue}if(t===e)break;for(;t.sibling===null;){if(t.return===null||t.return===e)return null;t=t.return}t.sibling.return=t.return,t=t.sibling}return null}var Bl=[];function Su(){for(var e=0;en?n:4,e(!0);var r=Hl.transition;Hl.transition={};try{e(!1),t()}finally{I=n,Hl.transition=r}}function Ta(){return Pe().memoizedState}function Cd(e,t,n){var r=ft(e);if(n={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null},Oa(e))Ra(t,n);else if(n=sa(e,t,n,r),n!==null){var l=se();Me(n,e,r,l),Ia(n,t,r)}}function _d(e,t,n){var r=ft(e),l={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null};if(Oa(e))Ra(t,l);else{var i=e.alternate;if(e.lanes===0&&(i===null||i.lanes===0)&&(i=t.lastRenderedReducer,i!==null))try{var u=t.lastRenderedState,o=i(u,n);if(l.hasEagerState=!0,l.eagerState=o,je(o,u)){var 
s=t.interleaved;s===null?(l.next=l,yu(t)):(l.next=s.next,s.next=l),t.interleaved=l;return}}catch{}finally{}n=sa(e,t,l,r),n!==null&&(l=se(),Me(n,e,r,l),Ia(n,t,r))}}function Oa(e){var t=e.alternate;return e===A||t!==null&&t===A}function Ra(e,t){Tn=Jr=!0;var n=e.pending;n===null?t.next=t:(t.next=n.next,n.next=t),e.pending=t}function Ia(e,t,n){if(n&4194240){var r=t.lanes;r&=e.pendingLanes,n|=r,t.lanes=n,ru(e,n)}}var br={readContext:Ne,useCallback:re,useContext:re,useEffect:re,useImperativeHandle:re,useInsertionEffect:re,useLayoutEffect:re,useMemo:re,useReducer:re,useRef:re,useState:re,useDebugValue:re,useDeferredValue:re,useTransition:re,useMutableSource:re,useSyncExternalStore:re,useId:re,unstable_isNewReconciler:!1},Nd={readContext:Ne,useCallback:function(e,t){return Ue().memoizedState=[e,t===void 0?null:t],e},useContext:Ne,useEffect:Ro,useImperativeHandle:function(e,t,n){return n=n!=null?n.concat([e]):null,zr(4194308,4,_a.bind(null,t,e),n)},useLayoutEffect:function(e,t){return zr(4194308,4,e,t)},useInsertionEffect:function(e,t){return zr(4,2,e,t)},useMemo:function(e,t){var n=Ue();return t=t===void 0?null:t,e=e(),n.memoizedState=[e,t],e},useReducer:function(e,t,n){var r=Ue();return t=n!==void 0?n(t):t,r.memoizedState=r.baseState=t,e={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:e,lastRenderedState:t},r.queue=e,e=e.dispatch=Cd.bind(null,A,e),[r.memoizedState,e]},useRef:function(e){var t=Ue();return e={current:e},t.memoizedState=e},useState:Oo,useDebugValue:Nu,useDeferredValue:function(e){return Ue().memoizedState=e},useTransition:function(){var e=Oo(!1),t=e[0];return e=xd.bind(null,e[1]),Ue().memoizedState=e,[t,e]},useMutableSource:function(){},useSyncExternalStore:function(e,t,n){var r=A,l=Ue();if(U){if(n===void 0)throw Error(y(407));n=n()}else{if(n=t(),J===null)throw Error(y(349));Tt&30||ya(r,t,n)}l.memoizedState=n;var i={value:n,getSnapshot:t};return l.queue=i,Ro(wa.bind(null,r,i,e),[e]),r.flags|=2048,Gn(9,ga.bind(null,r,i,n,t),void 0,null),n},useId:function(){var e=Ue(),t=J.identifierPrefix;if(U){var n=Qe,r=We;n=(r&~(1<<32-Ie(r)-1)).toString(32)+n,t=":"+t+"R"+n,n=Xn++,0<\/script>",e=e.removeChild(e.firstChild)):typeof r.is=="string"?e=u.createElement(n,{is:r.is}):(e=u.createElement(n),n==="select"&&(u=e,r.multiple?u.multiple=!0:r.size&&(u.size=r.size))):e=u.createElementNS(e,n),e[$e]=t,e[Wn]=r,Ba(e,t,!1,!1),t.stateNode=e;e:{switch(u=ci(n,r),n){case"dialog":D("cancel",e),D("close",e),l=r;break;case"iframe":case"object":case"embed":D("load",e),l=r;break;case"video":case"audio":for(l=0;lon&&(t.flags|=128,r=!0,wn(i,!1),t.lanes=4194304)}else{if(!r)if(e=qr(u),e!==null){if(t.flags|=128,r=!0,n=e.updateQueue,n!==null&&(t.updateQueue=n,t.flags|=4),wn(i,!0),i.tail===null&&i.tailMode==="hidden"&&!u.alternate&&!U)return le(t),null}else 2*Q()-i.renderingStartTime>on&&n!==1073741824&&(t.flags|=128,r=!0,wn(i,!1),t.lanes=4194304);i.isBackwards?(u.sibling=t.child,t.child=u):(n=i.last,n!==null?n.sibling=u:t.child=u,i.last=u)}return i.tail!==null?(t=i.tail,i.rendering=t,i.tail=t.sibling,i.renderingStartTime=Q(),t.sibling=null,n=$.current,M($,r?n&1|2:n&1),t):(le(t),null);case 22:case 23:return Ru(),r=t.memoizedState!==null,e!==null&&e.memoizedState!==null!==r&&(t.flags|=8192),r&&t.mode&1?ve&1073741824&&(le(t),t.subtreeFlags&6&&(t.flags|=8192)):le(t),null;case 24:return null;case 25:return null}throw Error(y(156,t.tag))}function Md(e,t){switch(du(t),t.tag){case 1:return me(t.type)&&Wr(),e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 3:return 
ln(),j(pe),j(ue),Su(),e=t.flags,e&65536&&!(e&128)?(t.flags=e&-65537|128,t):null;case 5:return ku(t),null;case 13:if(j($),e=t.memoizedState,e!==null&&e.dehydrated!==null){if(t.alternate===null)throw Error(y(340));nn()}return e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 19:return j($),null;case 4:return ln(),null;case 10:return vu(t.type._context),null;case 22:case 23:return Ru(),null;case 24:return null;default:return null}}var gr=!1,ie=!1,Dd=typeof WeakSet=="function"?WeakSet:Set,S=null;function Kt(e,t){var n=e.ref;if(n!==null)if(typeof n=="function")try{n(null)}catch(r){B(e,t,r)}else n.current=null}function Fi(e,t,n){try{n()}catch(r){B(e,t,r)}}var Vo=!1;function jd(e,t){if(ki=Ar,e=Ys(),cu(e)){if("selectionStart"in e)var n={start:e.selectionStart,end:e.selectionEnd};else e:{n=(n=e.ownerDocument)&&n.defaultView||window;var r=n.getSelection&&n.getSelection();if(r&&r.rangeCount!==0){n=r.anchorNode;var l=r.anchorOffset,i=r.focusNode;r=r.focusOffset;try{n.nodeType,i.nodeType}catch{n=null;break e}var u=0,o=-1,s=-1,c=0,h=0,m=e,p=null;t:for(;;){for(var g;m!==n||l!==0&&m.nodeType!==3||(o=u+l),m!==i||r!==0&&m.nodeType!==3||(s=u+r),m.nodeType===3&&(u+=m.nodeValue.length),(g=m.firstChild)!==null;)p=m,m=g;for(;;){if(m===e)break t;if(p===n&&++c===l&&(o=u),p===i&&++h===r&&(s=u),(g=m.nextSibling)!==null)break;m=p,p=m.parentNode}m=g}n=o===-1||s===-1?null:{start:o,end:s}}else n=null}n=n||{start:0,end:0}}else n=null;for(Si={focusedElem:e,selectionRange:n},Ar=!1,S=t;S!==null;)if(t=S,e=t.child,(t.subtreeFlags&1028)!==0&&e!==null)e.return=t,S=e;else for(;S!==null;){t=S;try{var w=t.alternate;if(t.flags&1024)switch(t.tag){case 0:case 11:case 15:break;case 1:if(w!==null){var k=w.memoizedProps,F=w.memoizedState,f=t.stateNode,a=f.getSnapshotBeforeUpdate(t.elementType===t.type?k:Te(t.type,k),F);f.__reactInternalSnapshotBeforeUpdate=a}break;case 3:var d=t.stateNode.containerInfo;d.nodeType===1?d.textContent="":d.nodeType===9&&d.documentElement&&d.removeChild(d.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(y(163))}}catch(v){B(t,t.return,v)}if(e=t.sibling,e!==null){e.return=t.return,S=e;break}S=t.return}return w=Vo,Vo=!1,w}function On(e,t,n){var r=t.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var l=r=r.next;do{if((l.tag&e)===e){var i=l.destroy;l.destroy=void 0,i!==void 0&&Fi(t,n,i)}l=l.next}while(l!==r)}}function ml(e,t){if(t=t.updateQueue,t=t!==null?t.lastEffect:null,t!==null){var n=t=t.next;do{if((n.tag&e)===e){var r=n.create;n.destroy=r()}n=n.next}while(n!==t)}}function Ui(e){var t=e.ref;if(t!==null){var n=e.stateNode;switch(e.tag){case 5:e=n;break;default:e=n}typeof t=="function"?t(e):t.current=e}}function Qa(e){var t=e.alternate;t!==null&&(e.alternate=null,Qa(t)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(t=e.stateNode,t!==null&&(delete t[$e],delete t[Wn],delete t[Ci],delete t[gd],delete t[wd])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function Ka(e){return e.tag===5||e.tag===3||e.tag===4}function Bo(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||Ka(e.return))return null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function $i(e,t,n){var 
r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.nodeType===8?n.parentNode.insertBefore(e,t):n.insertBefore(e,t):(n.nodeType===8?(t=n.parentNode,t.insertBefore(e,n)):(t=n,t.appendChild(e)),n=n._reactRootContainer,n!=null||t.onclick!==null||(t.onclick=Hr));else if(r!==4&&(e=e.child,e!==null))for($i(e,t,n),e=e.sibling;e!==null;)$i(e,t,n),e=e.sibling}function Ai(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.insertBefore(e,t):n.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(Ai(e,t,n),e=e.sibling;e!==null;)Ai(e,t,n),e=e.sibling}var b=null,Oe=!1;function Je(e,t,n){for(n=n.child;n!==null;)Xa(e,t,n),n=n.sibling}function Xa(e,t,n){if(Ae&&typeof Ae.onCommitFiberUnmount=="function")try{Ae.onCommitFiberUnmount(ul,n)}catch{}switch(n.tag){case 5:ie||Kt(n,t);case 6:var r=b,l=Oe;b=null,Je(e,t,n),b=r,Oe=l,b!==null&&(Oe?(e=b,n=n.stateNode,e.nodeType===8?e.parentNode.removeChild(n):e.removeChild(n)):b.removeChild(n.stateNode));break;case 18:b!==null&&(Oe?(e=b,n=n.stateNode,e.nodeType===8?Al(e.parentNode,n):e.nodeType===1&&Al(e,n),$n(e)):Al(b,n.stateNode));break;case 4:r=b,l=Oe,b=n.stateNode.containerInfo,Oe=!0,Je(e,t,n),b=r,Oe=l;break;case 0:case 11:case 14:case 15:if(!ie&&(r=n.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){l=r=r.next;do{var i=l,u=i.destroy;i=i.tag,u!==void 0&&(i&2||i&4)&&Fi(n,t,u),l=l.next}while(l!==r)}Je(e,t,n);break;case 1:if(!ie&&(Kt(n,t),r=n.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=n.memoizedProps,r.state=n.memoizedState,r.componentWillUnmount()}catch(o){B(n,t,o)}Je(e,t,n);break;case 21:Je(e,t,n);break;case 22:n.mode&1?(ie=(r=ie)||n.memoizedState!==null,Je(e,t,n),ie=r):Je(e,t,n);break;default:Je(e,t,n)}}function Ho(e){var t=e.updateQueue;if(t!==null){e.updateQueue=null;var n=e.stateNode;n===null&&(n=e.stateNode=new Dd),t.forEach(function(r){var l=Qd.bind(null,e,r);n.has(r)||(n.add(r),r.then(l,l))})}}function Le(e,t){var n=t.deletions;if(n!==null)for(var r=0;rl&&(l=u),r&=~i}if(r=l,r=Q()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*Ud(r/1960))-r,10e?16:e,lt===null)var r=!1;else{if(e=lt,lt=null,nl=0,R&6)throw Error(y(331));var l=R;for(R|=4,S=e.current;S!==null;){var i=S,u=i.child;if(S.flags&16){var o=i.deletions;if(o!==null){for(var s=0;sQ()-Tu?Nt(e,0):Lu|=n),he(e,t)}function tc(e,t){t===0&&(e.mode&1?(t=ar,ar<<=1,!(ar&130023424)&&(ar=4194304)):t=1);var n=se();e=Ge(e,t),e!==null&&(Jn(e,t,n),he(e,n))}function Wd(e){var t=e.memoizedState,n=0;t!==null&&(n=t.retryLane),tc(e,n)}function Qd(e,t){var n=0;switch(e.tag){case 13:var r=e.stateNode,l=e.memoizedState;l!==null&&(n=l.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(y(314))}r!==null&&r.delete(t),tc(e,n)}var nc;nc=function(e,t,n){if(e!==null)if(e.memoizedProps!==t.pendingProps||pe.current)de=!0;else{if(!(e.lanes&n)&&!(t.flags&128))return de=!1,Rd(e,t,n);de=!!(e.flags&131072)}else de=!1,U&&t.flags&1048576&&ia(t,Xr,t.index);switch(t.lanes=0,t.tag){case 2:var r=t.type;Lr(e,t),e=t.pendingProps;var l=tn(t,ue.current);Jt(t,n),l=xu(null,t,r,e,l,n);var i=Cu();return t.flags|=1,typeof l=="object"&&l!==null&&typeof l.render=="function"&&l.$$typeof===void 0?(t.tag=1,t.memoizedState=null,t.updateQueue=null,me(r)?(i=!0,Qr(t)):i=!1,t.memoizedState=l.state!==null&&l.state!==void 0?l.state:null,gu(t),l.updater=dl,t.stateNode=l,l._reactInternals=t,Ti(t,r,e,n),t=Ii(null,t,r,!0,i,n)):(t.tag=0,U&&i&&fu(t),oe(null,t,l,n),t=t.child),t;case 16:r=t.elementType;e:{switch(Lr(e,t),e=t.pendingProps,l=r._init,r=l(r._payload),t.type=r,l=t.tag=Xd(r),e=Te(r,e),l){case 
0:t=Ri(null,t,r,e,n);break e;case 1:t=Uo(null,t,r,e,n);break e;case 11:t=jo(null,t,r,e,n);break e;case 14:t=Fo(null,t,r,Te(r.type,e),n);break e}throw Error(y(306,r,""))}return t;case 0:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Te(r,l),Ri(e,t,r,l,n);case 1:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Te(r,l),Uo(e,t,r,l,n);case 3:e:{if($a(t),e===null)throw Error(y(387));r=t.pendingProps,i=t.memoizedState,l=i.element,aa(e,t),Zr(t,r,null,n);var u=t.memoizedState;if(r=u.element,i.isDehydrated)if(i={element:r,isDehydrated:!1,cache:u.cache,pendingSuspenseBoundaries:u.pendingSuspenseBoundaries,transitions:u.transitions},t.updateQueue.baseState=i,t.memoizedState=i,t.flags&256){l=un(Error(y(423)),t),t=$o(e,t,r,n,l);break e}else if(r!==l){l=un(Error(y(424)),t),t=$o(e,t,r,n,l);break e}else for(ye=st(t.stateNode.containerInfo.firstChild),ge=t,U=!0,Re=null,n=pa(t,null,r,n),t.child=n;n;)n.flags=n.flags&-3|4096,n=n.sibling;else{if(nn(),r===l){t=Ze(e,t,n);break e}oe(e,t,r,n)}t=t.child}return t;case 5:return ma(t),e===null&&Pi(t),r=t.type,l=t.pendingProps,i=e!==null?e.memoizedProps:null,u=l.children,Ei(r,l)?u=null:i!==null&&Ei(r,i)&&(t.flags|=32),Ua(e,t),oe(e,t,u,n),t.child;case 6:return e===null&&Pi(t),null;case 13:return Aa(e,t,n);case 4:return wu(t,t.stateNode.containerInfo),r=t.pendingProps,e===null?t.child=rn(t,null,r,n):oe(e,t,r,n),t.child;case 11:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Te(r,l),jo(e,t,r,l,n);case 7:return oe(e,t,t.pendingProps,n),t.child;case 8:return oe(e,t,t.pendingProps.children,n),t.child;case 12:return oe(e,t,t.pendingProps.children,n),t.child;case 10:e:{if(r=t.type._context,l=t.pendingProps,i=t.memoizedProps,u=l.value,M(Yr,r._currentValue),r._currentValue=u,i!==null)if(je(i.value,u)){if(i.children===l.children&&!pe.current){t=Ze(e,t,n);break e}}else for(i=t.child,i!==null&&(i.return=t);i!==null;){var o=i.dependencies;if(o!==null){u=i.child;for(var s=o.firstContext;s!==null;){if(s.context===r){if(i.tag===1){s=Ke(-1,n&-n),s.tag=2;var c=i.updateQueue;if(c!==null){c=c.shared;var h=c.pending;h===null?s.next=s:(s.next=h.next,h.next=s),c.pending=s}}i.lanes|=n,s=i.alternate,s!==null&&(s.lanes|=n),zi(i.return,n,t),o.lanes|=n;break}s=s.next}}else if(i.tag===10)u=i.type===t.type?null:i.child;else if(i.tag===18){if(u=i.return,u===null)throw Error(y(341));u.lanes|=n,o=u.alternate,o!==null&&(o.lanes|=n),zi(u,n,t),u=i.sibling}else u=i.child;if(u!==null)u.return=i;else for(u=i;u!==null;){if(u===t){u=null;break}if(i=u.sibling,i!==null){i.return=u.return,u=i;break}u=u.return}i=u}oe(e,t,l.children,n),t=t.child}return t;case 9:return l=t.type,r=t.pendingProps.children,Jt(t,n),l=Ne(l),r=r(l),t.flags|=1,oe(e,t,r,n),t.child;case 14:return r=t.type,l=Te(r,t.pendingProps),l=Te(r.type,l),Fo(e,t,r,l,n);case 15:return ja(e,t,t.type,t.pendingProps,n);case 17:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Te(r,l),Lr(e,t),t.tag=1,me(r)?(e=!0,Qr(t)):e=!1,Jt(t,n),fa(t,r,l),Ti(t,r,l,n),Ii(null,t,r,!0,e,n);case 19:return Va(e,t,n);case 22:return Fa(e,t,n)}throw Error(y(156,t.tag))};function rc(e,t){return Ls(e,t)}function Kd(e,t,n,r){this.tag=e,this.key=n,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=t,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Ce(e,t,n,r){return new Kd(e,t,n,r)}function Mu(e){return 
e=e.prototype,!(!e||!e.isReactComponent)}function Xd(e){if(typeof e=="function")return Mu(e)?1:0;if(e!=null){if(e=e.$$typeof,e===bi)return 11;if(e===eu)return 14}return 2}function dt(e,t){var n=e.alternate;return n===null?(n=Ce(e.tag,t,e.key,e.mode),n.elementType=e.elementType,n.type=e.type,n.stateNode=e.stateNode,n.alternate=e,e.alternate=n):(n.pendingProps=t,n.type=e.type,n.flags=0,n.subtreeFlags=0,n.deletions=null),n.flags=e.flags&14680064,n.childLanes=e.childLanes,n.lanes=e.lanes,n.child=e.child,n.memoizedProps=e.memoizedProps,n.memoizedState=e.memoizedState,n.updateQueue=e.updateQueue,t=e.dependencies,n.dependencies=t===null?null:{lanes:t.lanes,firstContext:t.firstContext},n.sibling=e.sibling,n.index=e.index,n.ref=e.ref,n}function Rr(e,t,n,r,l,i){var u=2;if(r=e,typeof e=="function")Mu(e)&&(u=1);else if(typeof e=="string")u=5;else e:switch(e){case Ft:return Pt(n.children,l,i,t);case Ji:u=8,l|=8;break;case ei:return e=Ce(12,n,t,l|2),e.elementType=ei,e.lanes=i,e;case ti:return e=Ce(13,n,t,l),e.elementType=ti,e.lanes=i,e;case ni:return e=Ce(19,n,t,l),e.elementType=ni,e.lanes=i,e;case ds:return vl(n,l,i,t);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case cs:u=10;break e;case fs:u=9;break e;case bi:u=11;break e;case eu:u=14;break e;case be:u=16,r=null;break e}throw Error(y(130,e==null?e:typeof e,""))}return t=Ce(u,n,t,l),t.elementType=e,t.type=r,t.lanes=i,t}function Pt(e,t,n,r){return e=Ce(7,e,r,t),e.lanes=n,e}function vl(e,t,n,r){return e=Ce(22,e,r,t),e.elementType=ds,e.lanes=n,e.stateNode={isHidden:!1},e}function Yl(e,t,n){return e=Ce(6,e,null,t),e.lanes=n,e}function Gl(e,t,n){return t=Ce(4,e.children!==null?e.children:[],e.key,t),t.lanes=n,t.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},t}function Yd(e,t,n,r,l){this.tag=t,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=Ll(0),this.expirationTimes=Ll(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=Ll(0),this.identifierPrefix=r,this.onRecoverableError=l,this.mutableSourceEagerHydrationData=null}function Du(e,t,n,r,l,i,u,o,s){return e=new Yd(e,t,n,o,s),t===1?(t=1,i===!0&&(t|=8)):t=0,i=Ce(3,null,null,t),e.current=i,i.stateNode=e,i.memoizedState={element:r,isDehydrated:n,cache:null,transitions:null,pendingSuspenseBoundaries:null},gu(i),e}function Gd(e,t,n){var r=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(t)}catch(n){console.error(n)}}t(),e.exports=ke})(Gc);var oc,qo=ql;oc=qo.createRoot,qo.hydrateRoot;const Z=new 
Vc,ep=["audio-classification","audio-to-audio","automatic-speech-recognition","conversational","depth-estimation","document-question-answering","feature-extraction","fill-mask","graph-ml","image-classification","image-segmentation","image-to-image","image-to-text","multiple-choice","object-detection","other","question-answering","reinforcement-learning","robotics","sentence-similarity","summarization","table-question-answering","table-to-text","tabular-classification","tabular-regression","tabular-to-text","text-classification","text-generation","text-retrieval","text-to-image","text-to-speech","text2text-generation","time-series-forecasting","token-classification","translation","unconditional-image-generation","video-classification","visual-question-answering","voice-activity-detection","zero-shot-classification","zero-shot-image-classification"].filter(e=>Object.getOwnPropertyNames(Object.getPrototypeOf(Z)).includes(Yc(e))),Zl={},tp=async e=>{if(Zl[e])return Zl[e];const t=[];for await(const n of Ac({search:{task:e}}))t.push(n);return Zl[e]=t,t},np=e=>De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Task"}),De("select",{className:"bg-yellow-200 cursor-pointer py-6 text-center w-full",onChange:t=>e.setTask(t.target.value),placeholder:"Select a task",value:e.task,children:[L("option",{children:"Select a task"}),ep.map(t=>L("option",{value:t,children:t},t))]})]}),rp=e=>{const[t,n]=ee.useState(!1),[r,l]=ee.useState([]);return ee.useEffect(()=>{e.task&&(n(!0),tp(e.task).then(i=>l(i)).finally(()=>n(!1)))},[e.task]),r.length>0?De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Model"}),De("select",{className:"bg-yellow-200 cursor-pointer py-6 text-center w-full",onChange:i=>e.setModel(i.target.value),placeholder:"Select a model",value:e.model,children:[L("option",{children:"Select a model"}),r.map(i=>L("option",{value:i.name,children:i.name},i.name))]})]}):L("p",{className:"text-center w-full",children:e.task?t?"Loading models for this task":"No models available for this task":"Select a task to view available models"})},lp=e=>De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Inputs"}),e.inputs?L("audio",{className:"w-full",controls:!0,src:URL.createObjectURL(e.inputs)}):De("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",L("input",{accept:"audio/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInputs(t.target.files[0])},type:"file"})]})]}),ip=e=>De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Inputs"}),e.inputs?L("img",{className:"w-full",src:URL.createObjectURL(e.inputs)}):De("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file 
chosen",L("input",{accept:"image/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInputs(t.target.files[0])},type:"file"})]})]}),up=e=>e.model&&e.task?["audio-classification","automatic-speech-recognition"].includes(e.task)?L(lp,{inputs:e.inputs,model:e.model,setInputs:e.setInputs,task:e.task}):["image-classification","image-segmentation","object-detection"].includes(e.task)?L(ip,{inputs:e.inputs,model:e.model,setInputs:e.setInputs,task:e.task}):["conversational","feature-extraction","fill-mask","question-answering","summarization","table-question-answering","text-classification","text-generation","text-to-image","token-classification","translation","zero-shot-classification"].includes(e.task)?De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Inputs"}),L("input",{className:"bg-yellow-200 py-6 text-center w-full",onChange:t=>{t.target.value?e.setInputs(t.target.value):e.setInputs("")},type:"text",value:e.inputs??""})]}):L("div",{className:"w-full",children:L("p",{className:"text-center",children:"Inference for this task is not yet supported."})}):L(ee.Fragment,{}),op=e=>{if(e.inputs&&e.model&&e.task){const t=()=>{e.setInputs(void 0),e.setOutput(void 0)};return L("button",{className:`border-4 border-yellow-200 py-6 text-center w-full ${e.loading?"cursor-not-allowed opacity-50":""}`,disabled:e.loading,onClick:t,children:"Clear"})}return L(ee.Fragment,{})},sp=e=>{if(e.inputs&&e.model&&e.task){const t=async()=>{if(e.inputs&&e.model&&e.task){e.setLoading(!0);try{switch(e.task){case"audio-classification":{const n=await Z.audioClassification({data:e.inputs,model:e.model});e.setOutput(n);break}case"automatic-speech-recognition":{const n=await Z.automaticSpeechRecognition({data:e.inputs,model:e.model});e.setOutput(n);break}case"conversational":{const n=await Z.conversational({inputs:{text:e.inputs},model:e.model});e.setOutput(n);break}case"feature-extraction":{const n=await Z.featureExtraction({inputs:{[e.inputs]:e.inputs},model:e.model});e.setOutput(n);break}case"fill-mask":{const n=await Z.fillMask({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"image-classification":{const n=await Z.imageClassification({data:e.inputs,model:e.model});e.setOutput(n);break}case"image-segmentation":{const n=await Z.imageSegmentation({data:e.inputs,model:e.model});e.setOutput(n);break}case"object-detection":{const n=await Z.objectDetection({data:e.inputs,model:e.model});e.setOutput(n);break}case"question-answering":{const n=await Z.questionAnswer({inputs:{context:e.inputs,question:e.inputs},model:e.model});e.setOutput(n);break}case"summarization":{const n=await Z.summarization({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"table-question-answering":{const n=await Z.tableQuestionAnswer({inputs:{query:e.inputs,table:{[e.inputs]:[e.inputs]}},model:e.model});e.setOutput(n);break}case"text-classification":{const n=await Z.textClassification({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"text-generation":{const n=await Z.textGeneration({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"text-to-image":{const n=await Z.textToImage({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"token-classification":{const n=await Z.tokenClassification({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"translation":{const n=await Z.translation({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"zero-shot-classification":{const n=await 
Z.zeroShotClassification({inputs:e.inputs,model:e.model,parameters:{candidate_labels:[e.inputs]}});e.setOutput(n);break}}}catch(n){n instanceof Error&&e.setOutput(n.message)}e.setLoading(!1)}};return L("button",{className:`bg-yellow-200 py-6 text-center w-full ${e.loading?"cursor-not-allowed opacity-50":""}`,disabled:e.loading,onClick:t,children:e.loading?"Submitting":"Submit"})}return L(ee.Fragment,{})},ap=e=>{if(e.output){const t=(()=>{try{return JSON.stringify(e.output,void 0,2)}catch(n){if(n instanceof Error)return`Error during JSON.stringify: ${n.message}`}})();return De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Output"}),L("pre",{className:`bg-yellow-200 p-6 select-text w-full whitespace-pre-wrap ${e.loading?"cursor-wait opacity-50":""}`,children:t})]})}return L(ee.Fragment,{})},cp=()=>{const[e,t]=ee.useState(),[n,r]=ee.useState(),[l,i]=ee.useState(),[u,o]=ee.useState(!1),[s,c]=ee.useState();return L("div",{className:"bg-yellow-500 flex flex-col h-full items-center min-h-screen min-w-screen overflow-auto w-full",children:De("div",{className:"flex flex-col items-center justify-center py-24 space-y-12 w-2/3 lg:w-1/3",children:[L("header",{className:"text-center text-6xl",children:"🤗"}),L(np,{setTask:t,task:e}),L(rp,{model:n,setModel:r,task:e}),L(up,{inputs:l,model:n,setInputs:i,task:e}),L(op,{inputs:l,loading:u,model:n,setInputs:i,setOutput:c,task:e}),L(sp,{inputs:l,loading:u,model:n,setLoading:o,setOutput:c,task:e}),L(ap,{loading:u,output:s})]})})},fp=()=>{const e="root",t=document.getElementById(e);if(t){const n=oc(t),r=L(ee.StrictMode,{children:L(cp,{})});n.render(r)}};fp(); diff --git a/spaces/miracle01/speechemotion/app.py b/spaces/miracle01/speechemotion/app.py deleted file mode 100644 index 4b9439a09e89faaa190868122902bb6af0e6f80c..0000000000000000000000000000000000000000 --- a/spaces/miracle01/speechemotion/app.py +++ /dev/null @@ -1,451 +0,0 @@ -import numpy as np -import streamlit as st -import cv2 -import librosa -import librosa.display -from tensorflow.keras.models import load_model -import os -from datetime import datetime -import streamlit.components.v1 as components -import matplotlib.pyplot as plt -from PIL import Image -from melspec import plot_colored_polar, plot_melspec - -# load models -model = load_model("model3.h5") - -# constants -starttime = datetime.now() - -CAT6 = ['fear', 'angry', 'neutral', 'happy', 'sad', 'surprise'] -CAT7 = ['fear', 'disgust', 'neutral', 'happy', 'sad', 'surprise', 'angry'] -CAT3 = ["positive", "neutral", "negative"] - -COLOR_DICT = {"neutral": "grey", - "positive": "green", - "happy": "green", - "surprise": "orange", - "fear": "purple", - "negative": "red", - "angry": "red", - "sad": "lightblue", - "disgust": "brown"} - -TEST_CAT = ['fear', 'disgust', 'neutral', 'happy', 'sad', 'surprise', 'angry'] -TEST_PRED = np.array([.3, .3, .4, .1, .6, .9, .1]) - -# page settings -st.set_page_config(page_title="SER web-app", page_icon=":speech_balloon:", layout="wide") -# COLOR = "#1f1f2e" -# BACKGROUND_COLOR = "#d1d1e0" - - -# @st.cache(hash_funcs={tf_agents.utils.object_identity.ObjectIdentityDictionary: load_model}) -# def load_model_cache(model): -# return load_model(model) - -# @st.cache -def log_file(txt=None): - with open("log.txt", "a") as f: - datetoday = datetime.now().strftime("%d/%m/%Y %H:%M:%S") - f.write(f"{txt} - {datetoday};\n") - - -# @st.cache -def save_audio(file): - if file.size > 4000000: - return 1 - # if not os.path.exists("audio"): - # os.makedirs("audio") - folder = "audio" - datetoday = 
datetime.now().strftime("%d/%m/%Y %H:%M:%S") - # clear the folder to avoid storage overload - for filename in os.listdir(folder): - file_path = os.path.join(folder, filename) - try: - if os.path.isfile(file_path) or os.path.islink(file_path): - os.unlink(file_path) - except Exception as e: - print('Failed to delete %s. Reason: %s' % (file_path, e)) - - try: - with open("log0.txt", "a") as f: - f.write(f"{file.name} - {file.size} - {datetoday};\n") - except: - pass - - with open(os.path.join(folder, file.name), "wb") as f: - f.write(file.getbuffer()) - return 0 - - -# @st.cache -def get_melspec(audio): - y, sr = librosa.load(audio, sr=44100) - X = librosa.stft(y) - Xdb = librosa.amplitude_to_db(abs(X)) - img = np.stack((Xdb,) * 3, -1) - img = img.astype(np.uint8) - grayImage = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - grayImage = cv2.resize(grayImage, (224, 224)) - rgbImage = np.repeat(grayImage[..., np.newaxis], 3, -1) - return (rgbImage, Xdb) - - -# @st.cache -def get_mfccs(audio, limit): - y, sr = librosa.load(audio) - a = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40) - if a.shape[1] > limit: - mfccs = a[:, :limit] - elif a.shape[1] < limit: - mfccs = np.zeros((a.shape[0], limit)) - mfccs[:, :a.shape[1]] = a - return mfccs - - -@st.cache -def get_title(predictions, categories=CAT6): - title = f"Detected emotion: {categories[predictions.argmax()]} \ - - {predictions.max() * 100:.2f}%" - return title - - -@st.cache -def color_dict(coldict=COLOR_DICT): - return COLOR_DICT - - -@st.cache -def plot_polar(fig, predictions=TEST_PRED, categories=TEST_CAT, - title="TEST", colors=COLOR_DICT): - # color_sector = "grey" - - N = len(predictions) - ind = predictions.argmax() - - COLOR = color_sector = colors[categories[ind]] - theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False) - radii = np.zeros_like(predictions) - radii[predictions.argmax()] = predictions.max() * 10 - width = np.pi / 1.8 * predictions - fig.set_facecolor("#d1d1e0") - ax = plt.subplot(111, polar="True") - ax.bar(theta, radii, width=width, bottom=0.0, color=color_sector, alpha=0.25) - - angles = [i / float(N) * 2 * np.pi for i in range(N)] - angles += angles[:1] - - data = list(predictions) - data += data[:1] - plt.polar(angles, data, color=COLOR, linewidth=2) - plt.fill(angles, data, facecolor=COLOR, alpha=0.25) - - ax.spines['polar'].set_color('lightgrey') - ax.set_theta_offset(np.pi / 3) - ax.set_theta_direction(-1) - plt.xticks(angles[:-1], categories) - ax.set_rlabel_position(0) - plt.yticks([0, .25, .5, .75, 1], color="grey", size=8) - plt.suptitle(title, color="darkblue", size=12) - plt.title(f"BIG {N}\n", color=COLOR) - plt.ylim(0, 1) - plt.subplots_adjust(top=0.75) - - -def main(): - side_img = Image.open("images/emotion3.jpg") - with st.sidebar: - st.image(side_img, width=300) - st.sidebar.subheader("Menu") - website_menu = st.sidebar.selectbox("Menu", ("Emotion Recognition", "Project description")) - st.set_option('deprecation.showfileUploaderEncoding', False) - - if website_menu == "Emotion Recognition": - st.sidebar.subheader("Model") - model_type = st.sidebar.selectbox("How would you like to predict?", ("mfccs", "mel-specs")) - em3 = em6 = em7 = gender = False - st.sidebar.subheader("Settings") - - st.markdown("## Upload the file") - with st.container(): - col1, col2, col3 = st.columns(3) - # audio_file = None - # path = None - with col1: - audio_file = st.file_uploader("Upload audio file", type=['wav', 'mp3', 'ogg']) - if audio_file is not None: - if not os.path.exists("audio"): - os.makedirs("audio") - path = 
os.path.join("audio", audio_file.name) - if_save_audio = save_audio(audio_file) - if if_save_audio == 1: - st.warning("File size is too large. Try another file.") - elif if_save_audio == 0: - # extract features - # display audio - st.audio(audio_file, format='audio/wav', start_time=0) - try: - wav, sr = librosa.load(path, sr=44100) - Xdb = get_melspec(path)[1] - mfccs = librosa.feature.mfcc(y=wav, sr=sr) - # # display audio - # st.audio(audio_file, format='audio/wav', start_time=0) - except Exception as e: - audio_file = None - st.error(f"Error {e} - wrong format of the file. Try another .wav file.") - else: - st.error("Unknown error") - else: - if st.button("Try test file"): - wav, sr = librosa.load("test.wav", sr=44100) - Xdb = get_melspec("test.wav")[1] - mfccs = librosa.feature.mfcc(y=wav, sr=sr) - # display audio - st.audio("test.wav", format='audio/wav', start_time=0) - path = "test.wav" - audio_file = "test" - with col2: - if audio_file is not None: - fig = plt.figure(figsize=(10, 2)) - fig.set_facecolor('#d1d1e0') - plt.title("Wave-form") - librosa.display.waveshow(wav, sr=44100, color="blue") - plt.gca().axes.get_yaxis().set_visible(False) - plt.gca().axes.get_xaxis().set_visible(False) - plt.gca().axes.spines["right"].set_visible(False) - plt.gca().axes.spines["left"].set_visible(False) - plt.gca().axes.spines["top"].set_visible(False) - plt.gca().axes.spines["bottom"].set_visible(False) - plt.gca().axes.set_facecolor('#d1d1e0') - st.write(fig) - else: - pass - # st.write("Record audio file") - # if st.button('Record'): - # with st.spinner(f'Recording for 5 seconds ....'): - # st.write("Recording...") - # time.sleep(3) - # st.success("Recording completed") - # st.write("Error while loading the file") - with col3: - st.title("Convert any MP3 audio file to .WAV") - st.subheader("Convert audio file") - - link = '[File conversion]' \ - '(https://cloudconvert.com/mp3-to-wav)' - st.markdown(link, unsafe_allow_html=True) - - - if model_type == "mfccs": - em3 = st.sidebar.checkbox("3 emotions", True) - em6 = st.sidebar.checkbox("6 emotions", True) - em7 = st.sidebar.checkbox("7 emotions") - gender = st.sidebar.checkbox("gender") - - elif model_type == "mel-specs": - st.sidebar.warning("This model is temporarily disabled") - - else: - st.sidebar.warning("This model is temporarily disabled") - - # with st.sidebar.expander("Change colors"): - # st.sidebar.write("Use this options after you got the plots") - # col1, col2, col3, col4, col5, col6, col7 = st.columns(7) - # - # with col1: - # a = st.color_picker("Angry", value="#FF0000") - # with col2: - # f = st.color_picker("Fear", value="#800080") - # with col3: - # d = st.color_picker("Disgust", value="#A52A2A") - # with col4: - # sd = st.color_picker("Sad", value="#ADD8E6") - # with col5: - # n = st.color_picker("Neutral", value="#808080") - # with col6: - # sp = st.color_picker("Surprise", value="#FFA500") - # with col7: - # h = st.color_picker("Happy", value="#008000") - # if st.button("Update colors"): - # global COLOR_DICT - # COLOR_DICT = {"neutral": n, - # "positive": h, - # "happy": h, - # "surprise": sp, - # "fear": f, - # "negative": a, - # "angry": a, - # "sad": sd, - # "disgust": d} - # st.success(COLOR_DICT) - - if audio_file is not None: - st.markdown("## Analyzing...") - if not audio_file == "test": - st.sidebar.subheader("Audio file") - file_details = {"Filename": audio_file.name, "FileSize": audio_file.size} - st.sidebar.write(file_details) - - with st.container(): - col1, col2 = st.columns(2) - with col1: - fig = 
plt.figure(figsize=(10, 2)) - fig.set_facecolor('#d1d1e0') - plt.title("MFCCs") - librosa.display.specshow(mfccs, sr=sr, x_axis='time') - plt.gca().axes.get_yaxis().set_visible(False) - plt.gca().axes.spines["right"].set_visible(False) - plt.gca().axes.spines["left"].set_visible(False) - plt.gca().axes.spines["top"].set_visible(False) - st.write(fig) - with col2: - fig2 = plt.figure(figsize=(10, 2)) - fig2.set_facecolor('#d1d1e0') - plt.title("Mel-log-spectrogram") - librosa.display.specshow(Xdb, sr=sr, x_axis='time', y_axis='hz') - plt.gca().axes.get_yaxis().set_visible(False) - plt.gca().axes.spines["right"].set_visible(False) - plt.gca().axes.spines["left"].set_visible(False) - plt.gca().axes.spines["top"].set_visible(False) - st.write(fig2) - - if model_type == "mfccs": - st.markdown("## Predictions") - with st.container(): - col1, col2, col3, col4 = st.columns(4) - mfccs = get_mfccs(path, model.input_shape[-1]) - mfccs = mfccs.reshape(1, *mfccs.shape) - pred = model.predict(mfccs)[0] - - with col1: - if em3: - pos = pred[3] + pred[5] * .5 - neu = pred[2] + pred[5] * .5 + pred[4] * .5 - neg = pred[0] + pred[1] + pred[4] * .5 - data3 = np.array([pos, neu, neg]) - txt = "MFCCs\n" + get_title(data3, CAT3) - fig = plt.figure(figsize=(5, 5)) - COLORS = color_dict(COLOR_DICT) - plot_colored_polar(fig, predictions=data3, categories=CAT3, - title=txt, colors=COLORS) - # plot_polar(fig, predictions=data3, categories=CAT3, - # title=txt, colors=COLORS) - st.write(fig) - with col2: - if em6: - txt = "MFCCs\n" + get_title(pred, CAT6) - fig2 = plt.figure(figsize=(5, 5)) - COLORS = color_dict(COLOR_DICT) - plot_colored_polar(fig2, predictions=pred, categories=CAT6, - title=txt, colors=COLORS) - # plot_polar(fig2, predictions=pred, categories=CAT6, - # title=txt, colors=COLORS) - st.write(fig2) - with col3: - if em7: - model_ = load_model("model4.h5") - mfccs_ = get_mfccs(path, model_.input_shape[-2]) - mfccs_ = mfccs_.T.reshape(1, *mfccs_.T.shape) - pred_ = model_.predict(mfccs_)[0] - txt = "MFCCs\n" + get_title(pred_, CAT7) - fig3 = plt.figure(figsize=(5, 5)) - COLORS = color_dict(COLOR_DICT) - plot_colored_polar(fig3, predictions=pred_, categories=CAT7, - title=txt, colors=COLORS) - # plot_polar(fig3, predictions=pred_, categories=CAT7, - # title=txt, colors=COLORS) - st.write(fig3) - with col4: - if gender: - with st.spinner('Wait for it...'): - gmodel = load_model("model_mw.h5") - gmfccs = get_mfccs(path, gmodel.input_shape[-1]) - gmfccs = gmfccs.reshape(1, *gmfccs.shape) - gpred = gmodel.predict(gmfccs)[0] - gdict = [["female", "woman.png"], ["male", "man.png"]] - ind = gpred.argmax() - txt = "Predicted gender: " + gdict[ind][0] - img = Image.open("images/" + gdict[ind][1]) - - fig4 = plt.figure(figsize=(3, 3)) - fig4.set_facecolor('#d1d1e0') - plt.title(txt) - plt.imshow(img) - plt.axis("off") - st.write(fig4) - - # if model_type == "mel-specs": - # st.markdown("## Predictions") - # st.warning("The model in test mode. It may not be working properly.") - # if st.checkbox("I'm OK with it"): - # try: - # with st.spinner("Wait... 
It can take some time"): - # global tmodel - # tmodel = load_model_cache("tmodel_all.h5") - # fig, tpred = plot_melspec(path, tmodel) - # col1, col2, col3 = st.columns(3) - # with col1: - # st.markdown("### Emotional spectrum") - # dimg = Image.open("images/spectrum.png") - # st.image(dimg, use_column_width=True) - # with col2: - # fig_, tpred_ = plot_melspec(path=path, - # tmodel=tmodel, - # three=True) - # st.write(fig_, use_column_width=True) - # with col3: - # st.write(fig, use_column_width=True) - # except Exception as e: - # st.error(f"Error {e}, model is not loaded") - - - elif website_menu == "Project description": - import pandas as pd - import plotly.express as px - st.title("Project description") - st.subheader("Student Details") - txt = """ - Student information include; - * Student Name: **Adewuyi Gbenga Kolawole** - * Student Matric No: **HNDCOM/22/035** - * Session: **2022/2023** - * Class: **HND 2** - * Level: **400L** - - This machine learning web-application PROJECT is a partial fulfillment of requirement in Higher National Diploma (HND) computer science **The Federal College of Animal Health and Production Technology** **FCAHPTIB, 2023**. - """ - st.markdown(txt, unsafe_allow_html=True) - - st.subheader("Theory") - link = '[Theory behind - the project(emotion recognition) ]' - st.markdown(link + ":clap::clap::clap:", unsafe_allow_html=True) - with st.expander("See Wikipedia definition"): - components.iframe("https://en.wikipedia.org/wiki/Emotion_recognition", - height=320, scrolling=True) - - st.subheader("Dataset") - txt = """ - - Datasets used in this project - * Crowd-sourced Emotional Mutimodal Actors Dataset (**Crema-D**) ("https://www.kaggle.com/code/ejlok1/audio-emotion-part-1-explore-data") - * Ryerson Audio-Visual Database of Emotional Speech and Song (**Ravdess**) ("https://www.kaggle.com/datasets/uwrfkaggler/ravdess-emotional-speech-audio") - * Surrey Audio-Visual Expressed Emotion (**Savee**) ("https://www.kaggle.com/datasets/ejlok1/surrey-audiovisual-expressed-emotion-savee") - * Toronto emotional speech set (**Tess**) - - All datasets used can be found on **Kaggle** - - The above datasets was used in the model training of this software before deployment - """ - st.markdown(txt, unsafe_allow_html=True) - - df = pd.read_csv("df_audio.csv") - fig = px.violin(df, y="source", x="emotion4", color="actors", box=True, points="all", hover_data=df.columns) - st.plotly_chart(fig, use_container_width=True) - - else: - pass - - -if __name__ == '__main__': - main() diff --git a/spaces/mnauf/detect-bees/CONTRIBUTING.md b/spaces/mnauf/detect-bees/CONTRIBUTING.md deleted file mode 100644 index 7498f8995d40122520e67b193ba4091a783beb86..0000000000000000000000000000000000000000 --- a/spaces/mnauf/detect-bees/CONTRIBUTING.md +++ /dev/null @@ -1,93 +0,0 @@ -## Contributing to YOLOv5 🚀 - -We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible, whether it's: - -- Reporting a bug -- Discussing the current state of the code -- Submitting a fix -- Proposing a new feature -- Becoming a maintainer - -YOLOv5 works so well due to our combined community effort, and for every small improvement you contribute you will be -helping push the frontiers of what's possible in AI 😃! - -## Submitting a Pull Request (PR) 🛠️ - -Submitting a PR is easy! This example shows how to submit a PR for updating `requirements.txt` in 4 steps: - -### 1. Select File to Update - -Select `requirements.txt` to update by clicking on it in GitHub. - -
    [screenshot: PR_step1]
    - -### 2. Click 'Edit this file' - -Button is in top-right corner. - -
    [screenshot: PR_step2]
    - -### 3. Make Changes - -Change `matplotlib` version from `3.2.2` to `3.3`. - -
    [screenshot: PR_step3]
    - -### 4. Preview Changes and Submit PR - -Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch** -for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose -changes** button. All done, your PR is now submitted to YOLOv5 for review and approval 😃! - -
    [screenshot: PR_step4]
    - -### PR recommendations - -To allow your work to be integrated as seamlessly as possible, we advise you to: - -- ✅ Verify your PR is **up-to-date** with `ultralytics/yolov5` `master` branch. If your PR is behind you can update - your code by clicking the 'Update branch' button or by running `git pull` and `git merge master` locally. - -
    [Screenshot 2022-08-29 at 22 47 15]
    - -- ✅ Verify all YOLOv5 Continuous Integration (CI) **checks are passing**. - -
    [Screenshot 2022-08-29 at 22 47 03]
    - -- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase - but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ — Bruce Lee - -## Submitting a Bug Report 🐛 - -If you spot a problem with YOLOv5 please submit a Bug Report! - -For us to start investigating a possible problem we need to be able to reproduce it ourselves first. We've created a few -short guidelines below to help users provide what we need in order to get started. - -When asking a question, people will be better able to provide help if you provide **code** that they can easily -understand and use to **reproduce** the problem. This is referred to by community members as creating -a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces -the problem should be: - -- ✅ **Minimal** – Use as little code as possible that still produces the same problem -- ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself -- ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem - -In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code -should be: - -- ✅ **Current** – Verify that your code is up-to-date with current - GitHub [master](https://github.com/ultralytics/yolov5/tree/master), and if necessary `git pull` or `git clone` a new - copy to ensure your problem has not already been resolved by previous commits. -- ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this - repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️. - -If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 -**Bug Report** [template](https://github.com/ultralytics/yolov5/issues/new/choose) and providing -a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better -understand and diagnose your problem. - -## License - -By contributing, you agree that your contributions will be licensed under -the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/legacy/masked_lm_dataset.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/data/legacy/masked_lm_dataset.py deleted file mode 100644 index dd8ea2c60aff306ab3a756223a298a28d41a4991..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/legacy/masked_lm_dataset.py +++ /dev/null @@ -1,303 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Dict, List, Tuple - -import numpy as np -import torch -from fairseq.data import Dictionary, FairseqDataset, data_utils -from fairseq.data.concat_dataset import ConcatDataset -from fairseq.data.legacy.block_pair_dataset import BlockPairDataset -from fairseq.data.token_block_dataset import TokenBlockDataset - - -class MaskedLMDataset(FairseqDataset): - """ - A wrapper Dataset for masked language modelling. The dataset - wraps around TokenBlockDataset or BlockedPairDataset and creates a batch - where the input blocks are masked according to the specified masking - probability. 
Additionally the batch can also contain sentence level targets - if this is specified. - - Args: - dataset: Dataset which generates blocks of data. Only BlockPairDataset - and TokenBlockDataset are supported. - sizes: Sentence lengths - vocab: Dictionary with the vocabulary and special tokens. - pad_idx: Id of padding token in dictionary - mask_idx: Id of mask token in dictionary - classif_token_idx: Id of classification token in dictionary. This is the - token associated with the sentence embedding (Eg: CLS for BERT) - sep_token_idx: Id of separator token in dictionary - (Eg: SEP in BERT) - seed: Seed for random number generator for reproducibility. - shuffle: Shuffle the elements before batching. - has_pairs: Specifies whether the underlying dataset - generates a pair of blocks along with a sentence_target or not. - Setting it to True assumes that the underlying dataset generates a - label for the pair of sentences which is surfaced as - sentence_target. The default value assumes a single block with no - sentence target. - segment_id: An optional segment id for filling in the segment labels - when we are in the single block setting (Eg: XLM). Default is 0. - masking_ratio: specifies what percentage of the blocks should be masked. - masking_prob: specifies the probability of a given token being - replaced with the "MASK" token. - random_token_prob: specifies the probability of a given token being - replaced by a random token from the vocabulary. - """ - - def __init__( - self, - dataset: FairseqDataset, - sizes: np.ndarray, - vocab: Dictionary, - pad_idx: int, - mask_idx: int, - classif_token_idx: int, - sep_token_idx: int, - seed: int = 1, - shuffle: bool = True, - has_pairs: bool = True, - segment_id: int = 0, - masking_ratio: float = 0.15, - masking_prob: float = 0.8, - random_token_prob: float = 0.1, - ): - # Make sure the input datasets are the ones supported - assert ( - isinstance(dataset, TokenBlockDataset) - or isinstance(dataset, BlockPairDataset) - or isinstance(dataset, ConcatDataset) - ), ( - "MaskedLMDataset only wraps TokenBlockDataset or BlockPairDataset or " - "ConcatDataset" - ) - - self.dataset = dataset - self.sizes = np.array(sizes) - self.vocab = vocab - self.pad_idx = pad_idx - self.mask_idx = mask_idx - self.classif_token_idx = classif_token_idx - self.sep_token_idx = sep_token_idx - self.shuffle = shuffle - self.seed = seed - self.has_pairs = has_pairs - self.segment_id = segment_id - self.masking_ratio = masking_ratio - self.masking_prob = masking_prob - self.random_token_prob = random_token_prob - - # If we have only one block then sizes needs to be updated to include - # the classification token - if not has_pairs: - self.sizes = self.sizes + 1 - - def __getitem__(self, index: int): - # if has_pairs, then expect 2 blocks and a sentence target - if self.has_pairs: - (block_one, block_two, sentence_target) = self.dataset[index] - else: - block_one = self.dataset[index] - - return { - "id": index, - "block_one": block_one, - "block_two": block_two if self.has_pairs else None, - "sentence_target": sentence_target if self.has_pairs else None, - } - - def __len__(self): - return len(self.dataset) - - def _mask_block( - self, - sentence: np.ndarray, - mask_idx: int, - pad_idx: int, - dictionary_token_range: Tuple, - ): - """ - Mask tokens for Masked Language Model training - Samples mask_ratio tokens that will be predicted by LM. 
- - Note:This function may not be efficient enough since we had multiple - conversions between np and torch, we can replace them with torch - operators later. - - Args: - sentence: 1d tensor to be masked - mask_idx: index to use for masking the sentence - pad_idx: index to use for masking the target for tokens we aren't - predicting - dictionary_token_range: range of indices in dictionary which can - be used for random word replacement - (e.g. without special characters) - Return: - masked_sent: masked sentence - target: target with words which we are not predicting replaced - by pad_idx - """ - masked_sent = np.copy(sentence) - sent_length = len(sentence) - mask_num = math.ceil(sent_length * self.masking_ratio) - mask = np.random.choice(sent_length, mask_num, replace=False) - target = np.copy(sentence) - - for i in range(sent_length): - if i in mask: - rand = np.random.random() - - # replace with mask if probability is less than masking_prob - # (Eg: 0.8) - if rand < self.masking_prob: - masked_sent[i] = mask_idx - - # replace with random token if probability is less than - # masking_prob + random_token_prob (Eg: 0.9) - elif rand < (self.masking_prob + self.random_token_prob): - # sample random token from dictionary - masked_sent[i] = np.random.randint( - dictionary_token_range[0], dictionary_token_range[1] - ) - else: - target[i] = pad_idx - - return masked_sent, target - - def _collate(self, samples: List[Dict], pad_idx: int, eos_idx: int): - """ - Does the heavy lifting for creating a batch from the input list of - examples. The logic is as follows: - 1. Mask the input blocks. In case has_pair is True then we have 2 - blocks to mask. - 2. Prepend the first masked block tensor with the special token - used as sentence embedding. Eg: CLS in BERT. This happens - irrespective of the value of has_pair. - 3. If has_pair is True, then append the first masked block with the - special separator token (eg: SEP for BERT) and compute segment - label accordingly. In this case, also append the second masked - block with this special separator token and compute its segment - label. - 4. For the targets tensor, prepend and append with padding index - accordingly. - 5. Concatenate all tensors. - """ - if len(samples) == 0: - return {} - # To ensure determinism, we reset the state of the PRNG after every - # batch based on the seed and the first id of the batch. This ensures - # that across epochs we get the same mask for the same example. This - # is needed for reproducibility and is how BERT does masking - # TODO: Can we add deteminism without this constraint? - with data_utils.numpy_seed(self.seed + samples[0]["id"]): - for s in samples: - - # token range is needed for replacing with random token during - # masking - token_range = (self.vocab.nspecial, len(self.vocab)) - - # mask according to specified probabilities. - masked_blk_one, masked_tgt_one = self._mask_block( - s["block_one"], - self.mask_idx, - self.pad_idx, - token_range, - ) - - tokens = np.concatenate([[self.classif_token_idx], masked_blk_one]) - targets = np.concatenate([[self.pad_idx], masked_tgt_one]) - segments = np.ones(len(tokens)) * self.segment_id - - # if has_pairs is True then we need to add the SEP token to both - # the blocks after masking and re-compute segments based on the new - # lengths. 
- if self.has_pairs: - tokens_one = np.concatenate([tokens, [self.sep_token_idx]]) - targets_one = np.concatenate([targets, [self.pad_idx]]) - - masked_blk_two, masked_tgt_two = self._mask_block( - s["block_two"], self.mask_idx, self.pad_idx, token_range - ) - tokens_two = np.concatenate([masked_blk_two, [self.sep_token_idx]]) - targets_two = np.concatenate([masked_tgt_two, [self.pad_idx]]) - - # block + 1 sep + 1 special (CLS) - segments_one = np.zeros(len(tokens_one)) - # block + 1 sep - segments_two = np.ones(len(tokens_two)) - - tokens = np.concatenate([tokens_one, tokens_two]) - targets = np.concatenate([targets_one, targets_two]) - segments = np.concatenate([segments_one, segments_two]) - - s["source"] = torch.LongTensor(tokens) - s["segment_labels"] = torch.LongTensor(segments) - s["lm_target"] = torch.LongTensor(targets) - - def merge(key): - return data_utils.collate_tokens( - [s[key] for s in samples], pad_idx, eos_idx, left_pad=False - ) - - return { - "id": torch.LongTensor([s["id"] for s in samples]), - "ntokens": sum(len(s["source"]) for s in samples), - "net_input": { - "src_tokens": merge("source"), - "segment_labels": merge("segment_labels"), - }, - "lm_target": merge("lm_target"), - "sentence_target": torch.LongTensor([s["sentence_target"] for s in samples]) - if self.has_pairs - else None, - "nsentences": len(samples), - } - - def collater(self, samples: List[Dict]): - """Merge a list of samples to form a mini-batch. - - Args: - samples (List[dict]): samples to collate - - Returns: - dict: a mini-batch of data - """ - return self._collate(samples, self.vocab.pad(), self.vocab.eos()) - - def num_tokens(self, index: int): - """ - Return the number of tokens in a sample. This value is used to - enforce max-tokens during batching. - """ - return self.sizes[index] - - def size(self, index: int): - """ - Return an example's size as a float or tuple. This value is used when - filtering a dataset with max-positions. - """ - return self.sizes[index] - - def ordered_indices(self): - """ - Return an ordered list of indices. Batches will be constructed based - on this order. - """ - if self.shuffle: - return np.random.permutation(len(self)) - else: - order = [np.arange(len(self))] - order.append(self.sizes) - return np.lexsort(order) - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - self.dataset.prefetch(indices) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/distributed/tpu_distributed_data_parallel.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/distributed/tpu_distributed_data_parallel.py deleted file mode 100644 index e971cf07c57c4e864726781092a690dd4d7d3e46..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/distributed/tpu_distributed_data_parallel.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch -from torch import nn - -from fairseq.distributed import utils - - -class TPUDistributedDataParallel(nn.Module): - - def __init__(self, module, process_group): - super().__init__() - self.module = module - self.process_group = process_group - self.world_size = utils.get_world_size(self.process_group) - - def forward(self, *inputs, **kwargs): - return self.module(*inputs, **kwargs) - - def all_reduce_grads(self): - gradients = [] - for p in self.parameters(): - if not p.requires_grad: - continue - if p.grad is None: - p.grad = torch.zeros_like(p) - if p.grad.requires_grad: - raise RuntimeError( - "TPUDistributedDataParallel only works with gradients that don't " - "require grad" - ) - gradients.append(p.grad) - - import torch_xla.core.xla_model as xm - xm.all_reduce( - 'sum', - gradients, - scale=1. / self.world_size, - groups=self.process_group[1], - ) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/fused_lamb.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/fused_lamb.py deleted file mode 100644 index f4f2bdb0c6c65f7758509b6d4d2f2c48cb6e8b4f..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/fused_lamb.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.optim import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("lamb") -class FairseqLAMB(LegacyFairseqOptimizer): - """LAMB optimizer.""" - - def __init__(self, args, params): - super().__init__(args) - try: - from apex.optimizers import FusedLAMB - - self._optimizer = FusedLAMB(params, **self.optimizer_config) - except ImportError: - raise ImportError("Please install apex to use LAMB optimizer") - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--lamb-betas', default='(0.9, 0.999)', metavar='B', - help='betas for LAMB optimizer') - parser.add_argument('--lamb-eps', type=float, default=1e-8, metavar='D', - help='epsilon for LAMB optimizer') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. 
- """ - return { - "lr": self.args.lr[0], - "betas": eval(self.args.lamb_betas), - "eps": self.args.lamb_eps, - "weight_decay": self.args.weight_decay, - } - - @property - def supports_flat_params(self): - return False diff --git a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/fusing/scaling_best/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initavg_caprefsnlivqa.sh b/spaces/mshukor/UnIVAL/slurm_adastra/averaging/fusing/scaling_best/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initavg_caprefsnlivqa.sh deleted file mode 100644 index 7e69711ecb92a457c7427ba87e8e772ec859f28c..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/fusing/scaling_best/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initavg_caprefsnlivqa.sh +++ /dev/null @@ -1,29 +0,0 @@ -#!/bin/bash - -#SBATCH --job-name=refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initavg_caprefsnlivqa -#SBATCH --nodes=1 -#SBATCH --ntasks=1 -#SBATCH --gpus=8 -#SBATCH --threads-per-core=2 -#SBATCH --gpu-bind=closest -#SBATCH -C MI250 -#SBATCH -A gda2204 -#SBATCH --time=5:00:00 -#SBATCH --mail-type=END,FAIL -#SBATCH --output=/lus/home/NAT/gda2204/mshukor/logs/slurm/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initavg_caprefsnlivqa.out -#SBATCH --exclusive -#SBATCH --mail-user=mustafa.shukor@isir.upmc.fr - - -cd /lus/home/NAT/gda2204/mshukor/code/ofa_ours/run_scripts -source /lus/home/NAT/gda2204/mshukor/.bashrc - -conda activate main - - -rm core-python3* - - -srun -l -N 1 -n 1 -c 128 --gpus=8 bash averaging/fusing/scaling_best/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initavg_caprefsnlivqa.sh - - diff --git a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/vqa/ofa_ratavqa_snli_bart_noema_lr1e6.sh b/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/vqa/ofa_ratavqa_snli_bart_noema_lr1e6.sh deleted file mode 100644 index e497b6c93b235f54f3ecd7ede5186373d69a4553..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/vqa/ofa_ratavqa_snli_bart_noema_lr1e6.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash - -#SBATCH --job-name=ofa_ratavqa_snli_bart_noema_lr1e6 -#SBATCH --nodes=2 -#SBATCH --ntasks=2 -#SBATCH --gpus=16 -#SBATCH --threads-per-core=2 -#SBATCH --gpu-bind=closest -####SBATCH --nodelist=x1004c4s1b0n0,x1004c4s1b1n0 -#SBATCH --time=24:00:00 -#SBATCH -C MI250 -#SBATCH -A gda2204 -#SBATCH --mail-type=END,FAIL -#SBATCH --output=/lus/home/NAT/gda2204/mshukor/logs/slurm/ofa_ratavqa_snli_bart_noema_lr1e6.out -#SBATCH --exclusive -#SBATCH --mail-user=mustafa.shukor@isir.upmc.fr - - -cd /lus/home/NAT/gda2204/mshukor/code/ofa_ours/run_scripts -source /lus/home/NAT/gda2204/mshukor/.bashrc - -conda activate main - - -rm core-python3* - - -srun -l -N 2 -n 2 -c 128 --gpus=16 --gpu-bind=closest bash averaging/ratatouille/vqa/ofa_ratavqa_snli_bart_noema_lr1e6.sh - - diff --git a/spaces/mygyasir/Real-Time-Voice-Cloning/encoder/params_model.py b/spaces/mygyasir/Real-Time-Voice-Cloning/encoder/params_model.py deleted file mode 100644 index 3e356472fb5a27f370cb3920976a11d12a76c1b7..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/Real-Time-Voice-Cloning/encoder/params_model.py +++ /dev/null @@ -1,11 +0,0 @@ - -## Model parameters -model_hidden_size = 256 -model_embedding_size = 256 -model_num_layers = 3 - - -## Training parameters -learning_rate_init = 1e-4 -speakers_per_batch = 64 -utterances_per_speaker = 10 diff --git 
a/spaces/nakas/MusicGenDemucs/audiocraft/models/lm.py b/spaces/nakas/MusicGenDemucs/audiocraft/models/lm.py deleted file mode 100644 index c8aad8f06797eef3293605056e1de14d07c56c2a..0000000000000000000000000000000000000000 --- a/spaces/nakas/MusicGenDemucs/audiocraft/models/lm.py +++ /dev/null @@ -1,527 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass -from functools import partial -import logging -import math -import typing as tp - -import torch -from torch import nn - -from ..utils import utils -from ..modules.streaming import StreamingModule, State -from ..modules.transformer import StreamingTransformer, create_norm_fn -from ..modules.conditioners import ( - ConditionFuser, - ClassifierFreeGuidanceDropout, - AttributeDropout, - ConditioningProvider, - ConditioningAttributes, - ConditionType, -) -from ..modules.codebooks_patterns import CodebooksPatternProvider -from ..modules.activations import get_activation_fn - - -logger = logging.getLogger(__name__) -ConditionTensors = tp.Dict[str, ConditionType] -CFGConditions = tp.Union[ConditionTensors, tp.Tuple[ConditionTensors, ConditionTensors]] - - -def get_init_fn(method: str, input_dim: int, init_depth: tp.Optional[int] = None): - """LM layer initialization. - Inspired from xlformers: https://github.com/fairinternal/xlformers - - Args: - method (str): Method name for init function. Valid options are: - 'gaussian', 'uniform'. - input_dim (int): Input dimension of the initialized module. - init_depth (Optional[int]): Optional init depth value used to rescale - the standard deviation if defined. - """ - # Compute std - std = 1 / math.sqrt(input_dim) - # Rescale with depth - if init_depth is not None: - std = std / math.sqrt(2 * init_depth) - - if method == 'gaussian': - return partial( - torch.nn.init.trunc_normal_, mean=0.0, std=std, a=-3 * std, b=3 * std - ) - elif method == 'uniform': - bound = math.sqrt(3) * std # ensure the standard deviation is `std` - return partial(torch.nn.init.uniform_, a=-bound, b=bound) - else: - raise ValueError("Unsupported layer initialization method") - - -def init_layer(m: nn.Module, - method: str, - init_depth: tp.Optional[int] = None, - zero_bias_init: bool = False): - """Wrapper around ``get_init_fn`` for proper initialization of LM modules. - - Args: - m (nn.Module): Module to initialize. - method (str): Method name for the init function. - init_depth (Optional[int]): Optional init depth value used to rescale - the standard deviation if defined. - zero_bias_init (bool): Whether to initialize the bias to 0 or not. - """ - if isinstance(m, nn.Linear): - init_fn = get_init_fn(method, m.in_features, init_depth=init_depth) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - if zero_bias_init and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Embedding): - init_fn = get_init_fn(method, m.embedding_dim, init_depth=None) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - - -class ScaledEmbedding(nn.Embedding): - """Boost learning rate for embeddings (with `scale`). 
- """ - def __init__(self, *args, lr=None, **kwargs): - super().__init__(*args, **kwargs) - self.lr = lr - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - return group - - -@dataclass -class LMOutput: - # The logits are already re-aligned with the input codes - # hence no extra shift is required, e.g. when computing CE - logits: torch.Tensor # [B, K, T, card] - mask: torch.Tensor # [B, K, T] - - -class LMModel(StreamingModule): - """Transformer-based language model on multiple streams of codes. - - Args: - pattern_provider (CodebooksPatternProvider): Pattern provider for codebook interleaving. - condition_provider (MusicConditioningProvider): Conditioning provider from metadata. - fuser (ConditionFuser): Fuser handling the fusing of conditions with language model input. - n_q (int): Number of parallel streams to model. - card (int): Cardinality, vocabulary size. - dim (int): Dimension of the transformer encoder. - num_heads (int): Number of heads for the transformer encoder. - hidden_scale (int): Scale for hidden feed forward dimension of the transformer encoder. - norm (str): Normalization method. - norm_first (bool): Use pre-norm instead of post-norm. - emb_lr (Optional[float]): Embedding-specific learning rate. - bias_proj (bool): Use bias for output projections. - weight_init (Optional[str]): Method for weight initialization. - depthwise_init (Optional[str]): Method for depthwise weight initialization. - zero_bias_init (bool): If true and bias in Linears, initialize bias to zeros. - cfg_dropout (float): Classifier-free guidance dropout. - cfg_coef (float): Classifier-free guidance coefficient. - attribute_dropout (dict): Attribute dropout probabilities. - two_step_cfg (bool): Whether to run classifier free-guidance with 2 distinct steps. - **kwargs: Additional parameters for the transformer encoder. 
- """ - def __init__(self, pattern_provider: CodebooksPatternProvider, condition_provider: ConditioningProvider, - fuser: ConditionFuser, n_q: int = 8, card: int = 1024, dim: int = 128, num_heads: int = 8, - hidden_scale: int = 4, norm: str = 'layer_norm', norm_first: bool = False, - emb_lr: tp.Optional[float] = None, bias_proj: bool = True, - weight_init: tp.Optional[str] = None, depthwise_init: tp.Optional[str] = None, - zero_bias_init: bool = False, cfg_dropout: float = 0, cfg_coef: float = 1.0, - attribute_dropout: tp.Dict[str, tp.Dict[str, float]] = {}, two_step_cfg: bool = False, - **kwargs): - super().__init__() - self.cfg_coef = cfg_coef - self.cfg_dropout = ClassifierFreeGuidanceDropout(p=cfg_dropout) - self.att_dropout = AttributeDropout(p=attribute_dropout) - self.condition_provider = condition_provider - self.fuser = fuser - self.card = card - embed_dim = self.card + 1 - self.n_q = n_q - self.dim = dim - self.pattern_provider = pattern_provider - self.two_step_cfg = two_step_cfg - self.emb = nn.ModuleList([ScaledEmbedding(embed_dim, dim, lr=emb_lr) for _ in range(n_q)]) - if 'activation' in kwargs: - kwargs['activation'] = get_activation_fn(kwargs['activation']) - self.transformer = StreamingTransformer( - d_model=dim, num_heads=num_heads, dim_feedforward=int(hidden_scale * dim), - norm=norm, norm_first=norm_first, **kwargs) - self.out_norm: tp.Optional[nn.Module] = None - if norm_first: - self.out_norm = create_norm_fn(norm, dim) - self.linears = nn.ModuleList([nn.Linear(dim, self.card, bias=bias_proj) for _ in range(n_q)]) - self._init_weights(weight_init, depthwise_init, zero_bias_init) - self._fsdp: tp.Optional[nn.Module] - self.__dict__['_fsdp'] = None - - def _init_weights(self, weight_init: tp.Optional[str], depthwise_init: tp.Optional[str], zero_bias_init: bool): - """Initialization of the transformer module weights. - - Args: - weight_init (Optional[str]): Weight initialization strategy. See ``get_init_fn`` for valid options. - depthwise_init (Optional[str]): Depwthwise initialization strategy. The following options are valid: - 'current' where the depth corresponds to the current layer index or 'global' where the total number - of layer is used as depth. If not set, no depthwise initialization strategy is used. - zero_bias_init (bool): Whether to initalize bias to zero or not. - """ - assert depthwise_init is None or depthwise_init in ['current', 'global'] - assert depthwise_init is None or weight_init is not None, \ - "If 'depthwise_init' is defined, a 'weight_init' method should be provided." 
- assert not zero_bias_init or weight_init is not None, \ - "If 'zero_bias_init', a 'weight_init' method should be provided" - - if weight_init is None: - return - - for emb_layer in self.emb: - init_layer(emb_layer, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - for layer_idx, tr_layer in enumerate(self.transformer.layers): - depth = None - if depthwise_init == 'current': - depth = layer_idx + 1 - elif depthwise_init == 'global': - depth = len(self.transformer.layers) - init_fn = partial(init_layer, method=weight_init, init_depth=depth, zero_bias_init=zero_bias_init) - tr_layer.apply(init_fn) - - for linear in self.linears: - init_layer(linear, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - @property - def special_token_id(self) -> int: - return self.card - - @property - def num_codebooks(self) -> int: - return self.n_q - - def forward(self, sequence: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> torch.Tensor: - """Apply language model on sequence and conditions. - Given a tensor of sequence of shape [B, K, S] with K the number of codebooks and - S the sequence steps, return the logits with shape [B, card, K, S]. - - Args: - indices (torch.Tensor): indices of the codes to model. - conditions (list[ConditioningAttributes]): conditionings to use when modeling - the given codes. Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning - tensors, see `conditions`. - Returns: - torch.Tensor: Logits. - """ - B, K, S = sequence.shape - assert K == self.num_codebooks, 'Sequence shape must match the specified number of codebooks' - input_ = sum([self.emb[k](sequence[:, k]) for k in range(K)]) - if condition_tensors is None: - assert not self._is_streaming, "Conditions tensors should be precomputed when streaming." - # apply dropout modules - conditions = self.cfg_dropout(conditions) - conditions = self.att_dropout(conditions) - tokenized = self.condition_provider.tokenize(conditions) - # encode conditions and fuse, both have a streaming cache to not recompute when generating. - condition_tensors = self.condition_provider(tokenized) - else: - assert not conditions, "Shouldn't pass both conditions and condition_tensors." - - input_, cross_attention_input = self.fuser(input_, condition_tensors) - - out = self.transformer(input_, cross_attention_src=cross_attention_input) - if self.out_norm: - out = self.out_norm(out) - logits = torch.stack([self.linears[k](out) for k in range(K)], dim=1) # [B, K, S, card] - - # remove the prefix from the model outputs - if len(self.fuser.fuse2cond['prepend']) > 0: - logits = logits[:, :, -S:] - - return logits # [B, K, S, card] - - def compute_predictions( - self, codes: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> LMOutput: - """Given an input tensor of codes [B, K, T] and list of conditions, runs the model - forward using the specified codes interleaving pattern. - - Args: - codes (torch.Tensor): Input codes of shape [B, K, T] with B the batch size, - K the number of codebooks and T the number of timesteps. - conditions (list[ConditioningAttributes]): conditionings to use when modeling - the given codes. 
Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning - tensors, see `conditions`. - Returns: - LMOutput: Language model outputs - logits (torch.Tensor) of shape [B, K, T, card] corresponding to the provided codes, - i.e. the first item corresponds to logits to predict the first code, meaning that - no additional shifting of codes and logits is required. - mask (torch.Tensor) of shape [B, K, T], mask over valid and invalid positions. - Given the specified interleaving strategies, parts of the logits and codes should - not be considered as valid predictions because of invalid context. - """ - B, K, T = codes.shape - codes = codes.contiguous() - # map codes [B, K, T] into pattern sequence [B, K, S] using special_token_id for masked tokens - pattern = self.pattern_provider.get_pattern(T) - sequence_codes, sequence_indexes, sequence_mask = pattern.build_pattern_sequence( - codes, self.special_token_id, keep_only_valid_steps=True - ) - # apply model on pattern sequence - model = self if self._fsdp is None else self._fsdp - logits = model(sequence_codes, conditions, condition_tensors) # [B, K, S, card] - # map back the logits on pattern sequence to logits on original codes: [B, K, S, card] -> [B, K, T, card] - # and provide the corresponding mask over invalid positions of tokens - logits = logits.permute(0, 3, 1, 2) # [B, card, K, S] - # note: we use nans as special token to make it obvious if we feed unexpected logits - logits, logits_indexes, logits_mask = pattern.revert_pattern_logits( - logits, float('nan'), keep_only_valid_steps=True - ) - logits = logits.permute(0, 2, 3, 1) # [B, K, T, card] - logits_mask = logits_mask[None, :, :].expand(B, -1, -1) # [K, T] -> [B, K, T] - return LMOutput(logits, logits_mask) - - def _sample_next_token(self, - sequence: torch.Tensor, - cfg_conditions: CFGConditions, - unconditional_state: State, - use_sampling: bool = False, - temp: float = 1.0, - top_k: int = 0, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None) -> torch.Tensor: - """Sample next token from the model given a sequence and a set of conditions. The model supports - multiple sampling strategies (greedy sampling, softmax, top-k, top-p...). - - Args: - sequence (torch.Tensor): Current sequence of shape [B, K, S] - with K corresponding to the number of codebooks and S the number of sequence steps. - S = 1 in streaming mode, except for the first step that contains a bigger prompt. - condition_tensors (Dict[str, ConditionType): Set of conditions. If CFG is used, - should be twice the batch size, being the concatenation of the conditions + null conditions. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - cfg_coef (float): classifier free guidance coefficient - Returns: - next_token (torch.Tensor): Next token tensor of shape [B, K, 1]. 
- """ - B = sequence.shape[0] - cfg_coef = self.cfg_coef if cfg_coef is None else cfg_coef - model = self if self._fsdp is None else self._fsdp - if self.two_step_cfg and cfg_conditions != {}: - assert isinstance(cfg_conditions, tuple) - condition_tensors, null_condition_tensors = cfg_conditions - cond_logits = model(sequence, conditions=[], condition_tensors=condition_tensors) - state = self.get_streaming_state() - self.set_streaming_state(unconditional_state) - uncond_logits = model(sequence, conditions=[], condition_tensors=null_condition_tensors) - unconditional_state.update(self.get_streaming_state()) - self.set_streaming_state(state) - logits = uncond_logits + (cond_logits - uncond_logits) * self.cfg_coef - else: - assert isinstance(cfg_conditions, dict) - condition_tensors = cfg_conditions - if condition_tensors: - # Preparing for CFG, predicting both conditional and unconditional logits. - sequence = torch.cat([sequence, sequence], dim=0) - all_logits = model( - sequence, - conditions=[], condition_tensors=condition_tensors) - if condition_tensors: - cond_logits, uncond_logits = all_logits.split(B, dim=0) # [B, K, T, card] - logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef - else: - logits = all_logits - - logits = logits.permute(0, 1, 3, 2) # [B, K, card, T] - logits = logits[..., -1] # [B x K x card] - - # Apply softmax for sampling if temp > 0. Else, do greedy sampling to avoid zero division error. - if use_sampling and temp > 0.0: - probs = torch.softmax(logits / temp, dim=-1) - if top_p > 0.0: - next_token = utils.sample_top_p(probs, p=top_p) - elif top_k > 0: - next_token = utils.sample_top_k(probs, k=top_k) - else: - next_token = utils.multinomial(probs, num_samples=1) - else: - next_token = torch.argmax(logits, dim=-1, keepdim=True) - - return next_token - - @torch.no_grad() - def generate(self, - prompt: tp.Optional[torch.Tensor] = None, - conditions: tp.List[ConditioningAttributes] = [], - num_samples: tp.Optional[int] = None, - max_gen_len: int = 256, - use_sampling: bool = True, - temp: float = 1.0, - top_k: int = 250, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None, - two_step_cfg: bool = False, - remove_prompts: bool = False, - check: bool = False, - callback: tp.Optional[tp.Callable[[int, int], None]] = None) -> torch.Tensor: - """Generate tokens sampling from the model given a prompt or unconditionally. Generation can - be perform in a greedy fashion or using sampling with top K and top P strategies. - - Args: - prompt (Optional[torch.Tensor]): Prompt tokens of shape [B, K, T]. - conditions_tensors (Dict[str, torch.Tensor]): Set of conditions or None. - num_samples (int or None): Number of samples to generate when no prompt and no conditions are given. - max_gen_len (int): Maximum generation length. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - remove_prompts (bool): Whether to remove prompts from generation or not. - Returns: - torch.Tensor: Generated tokens. - """ - assert not self.training, "generation shouldn't be used in training mode." - first_param = next(iter(self.parameters())) - device = first_param.device - - # Checking all input shapes are consistents. 
- possible_num_samples = [] - if num_samples is not None: - possible_num_samples.append(num_samples) - elif prompt is not None: - possible_num_samples.append(prompt.shape[0]) - elif conditions: - possible_num_samples.append(len(conditions)) - else: - possible_num_samples.append(1) - assert [x == possible_num_samples[0] for x in possible_num_samples], "Inconsitent inputs shapes" - num_samples = possible_num_samples[0] - - # below we create set of conditions: one conditional and one unconditional - # to do that we merge the regular condition together with the null condition - # we then do 1 forward pass instead of 2. - # the reason for that is two-fold: - # 1. it is about x2 faster than doing 2 forward passes - # 2. avoid the streaming API treating the 2 passes as part of different time steps - # We also support doing two different passes, in particular to ensure that - # the padding structure is exactly the same between train anf test. - # With a batch size of 1, this can be slower though. - cfg_conditions: CFGConditions - two_step_cfg = self.two_step_cfg if two_step_cfg is None else two_step_cfg - if conditions: - null_conditions = ClassifierFreeGuidanceDropout(p=1.0)(conditions) - if two_step_cfg: - cfg_conditions = ( - self.condition_provider(self.condition_provider.tokenize(conditions)), - self.condition_provider(self.condition_provider.tokenize(null_conditions)), - ) - else: - conditions = conditions + null_conditions - tokenized = self.condition_provider.tokenize(conditions) - cfg_conditions = self.condition_provider(tokenized) - else: - cfg_conditions = {} - - if prompt is None: - assert num_samples > 0 - prompt = torch.zeros((num_samples, self.num_codebooks, 0), dtype=torch.long, device=device) - - B, K, T = prompt.shape - start_offset = T - assert start_offset < max_gen_len - - pattern = self.pattern_provider.get_pattern(max_gen_len) - # this token is used as default value for codes that are not generated yet - unknown_token = -1 - - # we generate codes up to the max_gen_len that will be mapped to the pattern sequence - gen_codes = torch.full((B, K, max_gen_len), unknown_token, dtype=torch.long, device=device) - # filling the gen_codes with the prompt if needed - gen_codes[..., :start_offset] = prompt - # create the gen_sequence with proper interleaving from the pattern: [B, K, S] - gen_sequence, indexes, mask = pattern.build_pattern_sequence(gen_codes, self.special_token_id) - # retrieve the start_offset in the sequence: - # it is the first sequence step that contains the `start_offset` timestep - start_offset_sequence = pattern.get_first_step_with_timesteps(start_offset) - assert start_offset_sequence is not None - - with self.streaming(): - unconditional_state = self.get_streaming_state() - prev_offset = 0 - gen_sequence_len = gen_sequence.shape[-1] # gen_sequence shape is [B, K, S] - for offset in range(start_offset_sequence, gen_sequence_len): - # get current sequence (note that the streaming API is providing the caching over previous offsets) - curr_sequence = gen_sequence[..., prev_offset:offset] - curr_mask = mask[None, ..., prev_offset:offset].expand(B, -1, -1) - if check: - # check coherence between mask and sequence - assert (curr_sequence == torch.where(curr_mask, curr_sequence, self.special_token_id)).all() - # should never happen as gen_sequence is filled progressively - assert not (curr_sequence == unknown_token).any() - # sample next token from the model, next token shape is [B, K, 1] - next_token = self._sample_next_token( - curr_sequence, cfg_conditions, 
unconditional_state, use_sampling, temp, top_k, top_p, - cfg_coef=cfg_coef) - # ensure the tokens that should be masked are properly set to special_token_id - # as the model never output special_token_id - valid_mask = mask[..., offset:offset+1].expand(B, -1, -1) - next_token[~valid_mask] = self.special_token_id - # ensure we don't overwrite prompt tokens, we only write over unknown tokens - # (then mask tokens should be left as is as well, which is correct) - gen_sequence[..., offset:offset+1] = torch.where( - gen_sequence[..., offset:offset+1] == unknown_token, - next_token, gen_sequence[..., offset:offset+1] - ) - prev_offset = offset - if callback is not None: - callback(1 + offset - start_offset_sequence, gen_sequence_len - start_offset_sequence) - unconditional_state.clear() - - # ensure sequence has been entirely filled - assert not (gen_sequence == unknown_token).any() - # ensure gen_sequence pattern and mask are matching - # which means the gen_sequence is valid according to the pattern - assert ( - gen_sequence == torch.where(mask[None, ...].expand(B, -1, -1), gen_sequence, self.special_token_id) - ).all() - # get back the codes, trimming the prompt if needed and cutting potentially incomplete timesteps - out_codes, out_indexes, out_mask = pattern.revert_pattern_sequence(gen_sequence, special_token=unknown_token) - - # sanity checks over the returned codes and corresponding masks - assert (out_codes[..., :max_gen_len] != unknown_token).all() - assert (out_mask[..., :max_gen_len] == 1).all() - - out_start_offset = start_offset if remove_prompts else 0 - out_codes = out_codes[..., out_start_offset:max_gen_len] - - # ensure the returned codes are all valid - assert (out_codes >= 0).all() and (out_codes <= self.card).all() - return out_codes diff --git a/spaces/nateraw/fuego/README.md b/spaces/nateraw/fuego/README.md deleted file mode 100644 index 88a394ef1c61ca1d76e23ed69ef65e6e493eadab..0000000000000000000000000000000000000000 --- a/spaces/nateraw/fuego/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🔥Fuego GitHub Script Runner🔥 -emoji: 🔥 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: true -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/nateraw/yolov6/yolov6/utils/torch_utils.py b/spaces/nateraw/yolov6/yolov6/utils/torch_utils.py deleted file mode 100644 index 2adc6da0c10bbb4f7f808a66372ab19fd9cd837c..0000000000000000000000000000000000000000 --- a/spaces/nateraw/yolov6/yolov6/utils/torch_utils.py +++ /dev/null @@ -1,110 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- - -import time -from contextlib import contextmanager -from copy import deepcopy -import torch -import torch.distributed as dist -import torch.nn as nn -import torch.nn.functional as F -from yolov6.utils.events import LOGGER - -try: - import thop # for FLOPs computation -except ImportError: - thop = None - - -@contextmanager -def torch_distributed_zero_first(local_rank: int): - """ - Decorator to make all processes in distributed training wait for each local_master to do something. - """ - if local_rank not in [-1, 0]: - dist.barrier(device_ids=[local_rank]) - yield - if local_rank == 0: - dist.barrier(device_ids=[0]) - - -def time_sync(): - # Waits for all kernels in all streams on a CUDA device to complete if cuda is available. 
- if torch.cuda.is_available(): - torch.cuda.synchronize() - return time.time() - - -def initialize_weights(model): - for m in model.modules(): - t = type(m) - if t is nn.Conv2d: - pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif t is nn.BatchNorm2d: - m.eps = 1e-3 - m.momentum = 0.03 - elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]: - m.inplace = True - - -def fuse_conv_and_bn(conv, bn): - # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/ - fusedconv = ( - nn.Conv2d( - conv.in_channels, - conv.out_channels, - kernel_size=conv.kernel_size, - stride=conv.stride, - padding=conv.padding, - groups=conv.groups, - bias=True, - ) - .requires_grad_(False) - .to(conv.weight.device) - ) - - # prepare filters - w_conv = conv.weight.clone().view(conv.out_channels, -1) - w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var))) - fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape)) - - # prepare spatial bias - b_conv = ( - torch.zeros(conv.weight.size(0), device=conv.weight.device) - if conv.bias is None - else conv.bias - ) - b_bn = bn.bias - bn.weight.mul(bn.running_mean).div( - torch.sqrt(bn.running_var + bn.eps) - ) - fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn) - - return fusedconv - - -def fuse_model(model): - from yolov6.layers.common import Conv - - for m in model.modules(): - if type(m) is Conv and hasattr(m, "bn"): - m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - delattr(m, "bn") # remove batchnorm - m.forward = m.forward_fuse # update forward - return model - - -def get_model_info(model, img_size=640): - """Get model Params and GFlops. - Code base on https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/utils/model_utils.py - """ - from thop import profile - stride = 32 - img = torch.zeros((1, 3, stride, stride), device=next(model.parameters()).device) - - flops, params = profile(deepcopy(model), inputs=(img,), verbose=False) - params /= 1e6 - flops /= 1e9 - img_size = img_size if isinstance(img_size, list) else [img_size, img_size] - flops *= img_size[0] * img_size[1] / stride / stride * 2 # Gflops - info = "Params: {:.2f}M, Gflops: {:.2f}".format(params, flops) - return info diff --git a/spaces/ner4archives/NER4Archives-analytics/n4a_analytics_lib/metrics_utils.py b/spaces/ner4archives/NER4Archives-analytics/n4a_analytics_lib/metrics_utils.py deleted file mode 100644 index bad9de5a4d41888733106409aa052780fc4645d5..0000000000000000000000000000000000000000 --- a/spaces/ner4archives/NER4Archives-analytics/n4a_analytics_lib/metrics_utils.py +++ /dev/null @@ -1,73 +0,0 @@ -# -*- coding:utf-8 -*- - -"""Collection of statistics functions. -""" - -import numpy as np - - -def percentage_agreement_pov(total_pov: int, total_annotations: int) -> float: - """Computes a percentage - :param total_pov: total agree/disagree annotations - :type total_pov: int - :param total_annotations: total annotations in project - :type total_annotations: int - :rtype: float - :return: agreement percentage - """ - return round((total_pov / total_annotations) * 100, 2) - - -def fleiss_kappa_function(matrix: list) -> float: - """Computes Fleiss' kappa for group of annotators. - :param matrix: a matrix of shape (:attr:'N', :attr:'k') with - 'N' = number of subjects and 'k' = the number of categories. - 'M[i, j]' represent the number of raters who assigned - the 'i'th subject to the 'j'th category. 
- :type matrix: numpy matrix - :rtype: float - :return: Fleiss' kappa score - """ - N, _ = matrix.shape # N is # of items, k is # of categories - n_annotators = float(np.sum(matrix[0, :])) # # of annotators - tot_annotations = N * n_annotators # the total # of annotations - category_sum = np.sum(matrix, axis=0) # the sum of each category over all items - - # chance agreement - p = category_sum / tot_annotations # the distribution of each category over all annotations - PbarE = np.sum(p * p) # average chance agreement over all categories - - # observed agreement - P = (np.sum(matrix * matrix, axis=1) - n_annotators) / (n_annotators * (n_annotators - 1)) - Pbar = np.sum(P) / N - # add all observed agreement - # chances per item and divide by amount of items - - return round((Pbar - PbarE) / (1 - PbarE), 4) - - -def cohen_kappa_function(ann1: list, ann2: list) -> float: - """Computes Cohen kappa for pair-wise annotators. - :param ann1: annotations provided by first annotator - :type ann1: list - :param ann2: annotations provided by second annotator - :type ann2: list - :rtype: float - :return: Cohen kappa statistic - """ - count = 0 - for an1, an2 in zip(ann1, ann2): - if an1 == an2: - count += 1 - A = count / len(ann1) # observed agreement A (Po) - - uniq = set(ann1 + ann2) - E = 0 # expected agreement E (Pe) - for item in uniq: - cnt1 = ann1.count(item) - cnt2 = ann2.count(item) - count = (cnt1 / len(ann1)) * (cnt2 / len(ann2)) - E += count - - return round((A - E) / (1 - E), 4) - diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/3d Home Architect Deluxe Free __FULL__ Download Windows 776.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/3d Home Architect Deluxe Free __FULL__ Download Windows 776.md deleted file mode 100644 index 10830fdcb956acffc8d54b219423d14c8f4ed7c4..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/3d Home Architect Deluxe Free __FULL__ Download Windows 776.md +++ /dev/null @@ -1,22 +0,0 @@ -
    -

    How to Download and Install 3D Home Architect Deluxe on Windows 776

    -

    3D Home Architect Deluxe is a popular home design software that allows you to create realistic and detailed floor plans, landscapes, and interiors. It was originally developed by Broderbund and released in 1998 for Windows 95/98. However, it is still compatible with newer versions of Windows, including Windows 776. In this article, we will show you how to download and install 3D Home Architect Deluxe on your Windows 776 computer.

    -

    3d home architect deluxe free download windows 776


    DOWNLOAD >>> https://urlcod.com/2uIaLP



    -

    Step 1: Download 3D Home Architect Deluxe

    -

    There are several websites that offer free downloads of 3D Home Architect Deluxe, such as Free Download Manager or Software Informer. You can choose any of them and follow the instructions on the website to download the software. The file size is about 100 MB and it will be saved as a ZIP file.

    -

    Step 2: Extract the ZIP file

    -

    After downloading the ZIP file, you need to extract its contents to a folder on your computer. You can use any file extraction software, such as WinZip or 7-Zip. To extract the ZIP file, right-click on it and select "Extract All" or "Extract Here". Choose a destination folder for the extracted files and click "OK". You should see a folder named "3D Home Architect Deluxe" with several files inside.
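If you prefer to script this step instead of using WinZip or 7-Zip, Python's standard-library zipfile module can do the same extraction. This is only a minimal sketch: the archive and destination paths below are placeholders and should be changed to wherever you actually saved the download.

```python
import zipfile
from pathlib import Path

# Placeholder paths: adjust them to wherever you saved the download.
archive = Path.home() / "Downloads" / "3d_home_architect_deluxe.zip"
destination = Path.home() / "Downloads" / "3D Home Architect Deluxe"

destination.mkdir(parents=True, exist_ok=True)

with zipfile.ZipFile(archive) as zf:
    zf.extractall(destination)  # unpack every file in the archive
    print(f"Extracted {len(zf.namelist())} files to {destination}")
```

Either way, you should end up with the same extracted folder described above.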

    -

    Step 3: Run the Setup.exe file

    -

    To install 3D Home Architect Deluxe on your Windows 776 computer, you need to run the Setup.exe file that is located in the extracted folder. Double-click on it and follow the installation wizard. You may need to accept the license agreement, choose a destination folder, and select some options. The installation process may take a few minutes.

    -

    Step 4: Enjoy 3D Home Architect Deluxe

    -

    Once the installation is complete, you can launch 3D Home Architect Deluxe from your Start menu or desktop shortcut. You can start creating your own home designs using the tools and features of the software. You can also access some tutorials and help files from the Help menu. Have fun designing your dream home!

    -

    - -

    Step 5: Save and Export Your Home Design

    -

    When you are satisfied with your home design, you can save it as a project file (.pl0, .pl1, or .pl5) that can be opened and edited later. To save your project, go to the File menu and select "Save" or "Save As". You can also export your home design as an image file (.bmp, .jpg, or .png) that can be viewed or printed. To export your project, go to the File menu and select "Export" or "Export Image". You can choose the resolution and quality of the image file.

    -

    Step 6: Share Your Home Design

    -

    If you want to share your home design with others, you can upload it to online platforms or send it via email. Some websites that allow you to upload and share your home design are Houzz, Homestyler, or Floorplanner. You can also send your project file or image file as an attachment to your friends or family. You can also print your home design and show it to a professional architect or contractor if you want to turn it into reality.

    -

    Step 7: Uninstall 3D Home Architect Deluxe

    -

    If you want to uninstall 3D Home Architect Deluxe from your Windows 776 computer, you can do so easily. Go to the Control Panel and select "Programs and Features". Find 3D Home Architect Deluxe in the list of installed programs and click on it. Then click on "Uninstall" and follow the instructions. You may need to restart your computer after the uninstallation is complete.

    81aa517590
    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/BEST Dreambox Control Center (DCC) For Enigma2 - V 1.20 PORTABLE Full Version.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/BEST Dreambox Control Center (DCC) For Enigma2 - V 1.20 PORTABLE Full Version.md deleted file mode 100644 index 925859884c487925e0424021183d5b8a39d53922..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/BEST Dreambox Control Center (DCC) For Enigma2 - V 1.20 PORTABLE Full Version.md +++ /dev/null @@ -1,27 +0,0 @@ -
    -

    How to Use the |BEST| Dreambox Control Center (DCC) For Enigma2 - V 1.20 Full Version

    -

Dreambox Control Center (DCC) is a program that allows you to manage your Enigma2 box over the network. With it you can manage the network connection, use a telnet client and an FTP client, download recordings, manage MP3 playlists, open the web interface, back up/restore/edit settings and much more[^1^] [^2^]. In this article, we will show you how to use the |BEST| DCC for Enigma2 - V 1.20 full version[^3^] [^4^], which is the latest and most stable version of the program.

    -

    |BEST| Dreambox Control Center (DCC) For Enigma2 - V 1.20 Full Version


    Download File --->>> https://urlcod.com/2uIbWn



    -

    Step 1: Download and Install DCC for Enigma2 - V 1.20 Full Version

    -

    To download DCC for Enigma2 - V 1.20 full version, you can visit the official website of the developer BernyR[^3^] or use this link[^4^]. The file size is about 8.68 MB and it is a zip file. You will need to extract it to a folder on your computer. To install DCC for Enigma2 - V 1.20 full version, you just need to run the brctrcen.exe file from the extracted folder. You will see a window like this:

-[Image: DCC for Enigma2 - V 1.20 installation window] -

    Click on "Next" and follow the instructions to complete the installation.

    -

    Step 2: Connect DCC for Enigma2 - V 1.20 Full Version to Your Enigma2 Box

    -

To connect DCC for Enigma2 - V 1.20 full version to your Enigma2 box, you will need to know the IP address of your box and your login and password. You can find this information in your box settings or by using your remote control. Once you have this information, you can launch DCC for Enigma2 - V 1.20 full version from your desktop or start menu. You will see a window like this:

    -

-[Image: DCC for Enigma2 - V 1.20 main window] -

    Click on "Network" and select your language from the drop-down menu. Then select your connection type (LAN or WLAN) and enter the IP address of your box in the "Dreambox IP" field. The IP address of your computer will be detected automatically in the "PC IP" field. Then enter your login and password in the corresponding fields and click on "Reconnect". If everything is correct, you will see a green light next to "Connected" and the name of an active DreamFlash-Image in the bottom left corner.

    -

    Step 3: Explore DCC for Enigma2 - V 1.20 Full Version Features

    -

    Now that you have connected DCC for Enigma2 - V 1.20 full version to your Enigma2 box, you can explore its features by clicking on the tabs at the top of the window. Here are some of the features you can use:

    -
      -
    • Telnet: This tab allows you to access the command line interface of your box and execute commands directly.
    • -
• FTP: This tab allows you to transfer files between your computer and your box using the FTP protocol (a scripted equivalent is sketched after this list).
    • -
    • Recordings: This tab allows you to download recordings from your box to your computer or delete recordings from your box.
    • -
    • MP3: This tab allows you to create and manage MP3 playlists on your box.
    • -
    • WebIf: This tab allows you to access the web interface of your box using your browser.
    • -
    • Tools: This tab allows you to use various tools such as settings backup/restore/editor, screenshot, reboot/shutdown, etc.
    • -
    • Scripts: This tab allows you to edit and execute scripts on your box.
    • -
    -
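For readers who prefer scripting, the kind of transfer the FTP tab performs can also be done with Python's standard ftplib module. The sketch below is only an illustration: the IP address, login, password, folder and file name are placeholders for your own box's settings, not values taken from DCC, and /media/hdd/movie is simply the folder where Enigma2 boxes typically store recordings.

```python
from ftplib import FTP

# Placeholder connection details: use your own box's IP address and credentials.
BOX_IP = "192.168.1.50"
USER = "root"
PASSWORD = "dreambox"

with FTP(BOX_IP, timeout=10) as ftp:
    ftp.login(USER, PASSWORD)
    ftp.cwd("/media/hdd/movie")   # folder where recordings usually live
    print(ftp.nlst())             # list the files on the box

    # Download one recording to the current directory (the name is a placeholder).
    with open("recording.ts", "wb") as local_file:
        ftp.retrbinary("RETR recording.ts", local_file.write)
```

This does by hand what the FTP tab does through its graphical file browser.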

    You can

    cec2833e83
    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Govt Laptop Hcl Ltc Model 02101 Driver 35 ((LINK)).md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Govt Laptop Hcl Ltc Model 02101 Driver 35 ((LINK)).md deleted file mode 100644 index f5eb384ec2a61db966dfb359be4ba571975589d9..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Govt Laptop Hcl Ltc Model 02101 Driver 35 ((LINK)).md +++ /dev/null @@ -1,48 +0,0 @@ -
    -

    How to Download and Install HCL LTC 02101 Drivers for Your Government Laptop

    -

    If you have a government laptop with the model number HCL LTC 02101, you may need to download and install the drivers for it to function properly. Drivers are software programs that allow your laptop to communicate with the hardware devices, such as the keyboard, mouse, webcam, sound card, etc. Without the drivers, your laptop may not work as expected or may encounter errors.

    -

    Govt Laptop Hcl Ltc Model 02101 Driver 35


    Download File ===> https://urlcod.com/2uIbQ8



    -

In this article, we will show you how to download and install HCL LTC 02101 drivers for your government laptop. We will also provide some tips on how to remove the BIOS image in case you want to change it.

    -

    Steps to Download and Install HCL LTC 02101 Drivers

    -
      -
    1. Go to the official website of HCL Infosystems at https://www.hclinfosystems.com/.
    2. -
    3. Click on the "Support" tab at the top menu and select "Drivers & Downloads".
    4. -
    5. Enter your laptop model number (HCL LTC 02101) in the search box and click on "Search".
    6. -
    7. You will see a list of drivers for your laptop model. Choose the ones that match your operating system (Windows 7, Windows 8, etc.) and click on "Download".
    8. -
    9. Save the downloaded files in a folder on your laptop.
    10. -
    11. Open the folder and double-click on each driver file to run the installation wizard.
    12. -
    13. Follow the on-screen instructions to complete the installation process.
    14. -
    15. Restart your laptop after installing all the drivers.
    16. -
    -

Tips to Remove the BIOS Image in HCL LTC 02101 Government Laptop

    -

If you want to remove the BIOS image that shows up when you boot your government laptop, you can use a tool called Windows 7 Boot Animation Updater. This tool allows you to change or remove the boot animation and text of Windows 7. Here are the steps to use it:

    -
      -
    1. Download Windows 7 Boot Animation Updater from https://www.coderforlife.com/projects/win7boot/.
    2. -
    3. Extract the zip file and run the program as administrator.
    4. -
    5. Click on "Browse" and select a new boot animation file (in .bs7 format) or leave it blank if you want to remove it.
    6. -
    7. Click on "Text" and enter a new boot text or leave it blank if you want to remove it.
    8. -
    9. Click on "Apply" and wait for the program to update your boot settings.
    10. -
    11. Restart your laptop and enjoy your new or no boot image.
    12. -
    -

    Conclusion

    -

HCL LTC 02101 is a government laptop model that requires drivers to function properly. You can download and install them from the official website of HCL Infosystems. You can also remove or change the BIOS image using Windows 7 Boot Animation Updater. We hope this article was helpful for you. If you have any questions or feedback, please leave a comment below.

    Benefits of HCL LTC 02101 Government Laptop

    -

    HCL LTC 02101 is a government laptop that offers many benefits for its users. Some of the benefits are:

    -
      -
    • It is affordable and durable. The government laptop is subsidized by the government and has a low price compared to other laptops. It also has a sturdy design and can withstand rough handling.
    • -
    • It has a long battery life. The government laptop has a battery that can last up to 6 hours on a single charge. This is useful for users who need to work on their laptop for long periods of time without access to power outlets.
    • -
• It has good performance. The government laptop has an Intel Core i3 processor, 4 GB of RAM, and 500 GB of hard disk space. It can run multiple applications and tasks smoothly and efficiently.
    • -
    • It has a warranty and support. The government laptop comes with a one-year warranty and free technical support from HCL Infosystems. Users can contact the customer care center or visit the nearest service center if they encounter any issues with their laptop.
    • -
    -

    Drawbacks of HCL LTC 02101 Government Laptop

    -

    Despite its benefits, HCL LTC 02101 also has some drawbacks that users should be aware of. Some of the drawbacks are:

    -

    -
      -
    • It has a low screen resolution. The government laptop has a 14-inch screen with a resolution of 1366 x 768 pixels. This may result in poor image quality and eye strain for some users.
    • -
• It has limited storage space. The government laptop has 500 GB of hard disk space, which may not be enough for users who need to store large amounts of data or files.
    • -
• It has an outdated operating system. The government laptop comes with Windows 7, which is no longer supported by Microsoft. This may expose the laptop to security risks and compatibility issues with newer software and devices.
    • -
• It has a BIOS image that cannot be easily removed. The government laptop has a BIOS image that shows the government logo and text when it boots up. Some users may find this annoying or unprofessional and may want to remove it or change it to something else.
    • -
    -

    Final Thoughts

    -

    HCL LTC 02101 is a government laptop that has its pros and cons. It is a cheap and reliable laptop that can perform well for basic tasks and needs. However, it also has some limitations and drawbacks that may affect its usability and appeal for some users. Users should weigh the benefits and drawbacks of this laptop before deciding to buy it or use it.

    7196e7f11a
    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Office 2007 Torrent Download Full Version [HOT].md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Office 2007 Torrent Download Full Version [HOT].md deleted file mode 100644 index fc9847bcc4ee5256cbf0bb840a5d285f8d2c58b2..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Office 2007 Torrent Download Full Version [HOT].md +++ /dev/null @@ -1,21 +0,0 @@ - -

    How to Download and Install Office 2007 for Free

    -

    If you are looking for a way to get Office 2007 for free, you might be interested in downloading it from a torrent site. Torrents are files that contain information about other files that can be downloaded from multiple sources. However, downloading torrents can be risky, as they may contain viruses, malware, or illegal content. Therefore, you should always be careful and use a reliable antivirus program and a VPN service when downloading torrents.

    -

    In this article, we will show you how to download and install Office 2007 for free from a torrent site. We will also provide some alternative ways to get Office 2007 legally and safely.

    -

    Office 2007 Torrent Download Full Version


    Download Filehttps://urlcod.com/2uIcwl



    -

    Step 1: Find a Torrent Site

    -

    The first step is to find a torrent site that has Office 2007 available for download. There are many torrent sites on the internet, but not all of them are trustworthy or safe. Some of them may have fake or malicious files, or they may be blocked by your internet service provider or government. Therefore, you should do some research and check the reviews and ratings of the torrent sites before using them.

    -

    One of the torrent sites that has Office 2007 is FileCR[^1^]. This site claims to offer Microsoft Office 2007 SP3 Enterprise + Visio Pro + Project Pro 12.0.6798.5000 June 2018 full version standalone offline installer for Windows. You can also find other versions of Office 2007 on other torrent sites, such as Internet Archive[^2^] [^3^] [^4^]. However, we cannot guarantee the quality or safety of these files, so download them at your own risk.

    -

    -

    Step 2: Download a Torrent Client

    -

    The next step is to download a torrent client, which is a software that allows you to download and manage torrent files. There are many torrent clients available for free, such as uTorrent, BitTorrent, qBittorrent, etc. You can choose any of them according to your preference and compatibility with your device.

    -

    To download a torrent client, you can visit its official website and follow the instructions to install it on your computer. Alternatively, you can use an online torrent client, such as Seedr or Bitport, which allows you to download torrents directly from your browser without installing any software.

    -

    Step 3: Download Office 2007 Torrent File

    -

    The third step is to download the Office 2007 torrent file from the torrent site that you have chosen. To do this, you need to visit the site and search for Office 2007 using the keyword "Office 2007 Torrent Download Full Version". You should see a list of results that match your query. You can sort them by date, size, seeders, leechers, etc. to find the most suitable one.

    -

    Once you have found the Office 2007 torrent file that you want to download, you need to click on the download button or link. This will either open your torrent client automatically or prompt you to save the file on your computer. If you are using an online torrent client, you need to copy and paste the URL of the torrent file into the online client's interface.

    -

    Step 4: Download Office 2007 from Torrent File

    -

    The final step is to download Office 2007 from the torrent file that you have downloaded or opened in your torrent client. To do this, you need to wait for the torrent client to connect to other peers who have the same file and start downloading it. The speed and time of the download will depend on various factors, such as the size of the file, the number of seeders and leechers, your internet connection speed, etc.

    -

    Once the download is complete, you need to open the folder where the file is saved and extract it using a software like WinRAR or 7-Zip. You should see a folder containing the setup files for Office 2007. You need to run the setup.exe file and follow the instructions to install Office 2007 on your computer. You may also need to enter a product key or activate it using a crack or patch.

    -

    Alternative Ways to Get Office 2007 7196e7f11a
    -
    -
    \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/configs/common/models/cascade_rcnn.py b/spaces/nikitaPDL2023/assignment4/detectron2/configs/common/models/cascade_rcnn.py deleted file mode 100644 index c7372a801dc00d7fec4db8cda8c2612ce281d48a..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/configs/common/models/cascade_rcnn.py +++ /dev/null @@ -1,36 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.matcher import Matcher -from detectron2.modeling.roi_heads import FastRCNNOutputLayers, FastRCNNConvFCHead, CascadeROIHeads - -from .mask_rcnn_fpn import model - -# arguments that don't exist for Cascade R-CNN -[model.roi_heads.pop(k) for k in ["box_head", "box_predictor", "proposal_matcher"]] - -model.roi_heads.update( - _target_=CascadeROIHeads, - box_heads=[ - L(FastRCNNConvFCHead)( - input_shape=ShapeSpec(channels=256, height=7, width=7), - conv_dims=[], - fc_dims=[1024, 1024], - ) - for k in range(3) - ], - box_predictors=[ - L(FastRCNNOutputLayers)( - input_shape=ShapeSpec(channels=1024), - test_score_thresh=0.05, - box2box_transform=L(Box2BoxTransform)(weights=(w1, w1, w2, w2)), - cls_agnostic_bbox_reg=True, - num_classes="${...num_classes}", - ) - for (w1, w2) in [(10, 5), (20, 10), (30, 15)] - ], - proposal_matchers=[ - L(Matcher)(thresholds=[th], labels=[0, 1], allow_low_quality_matches=False) - for th in [0.5, 0.6, 0.7] - ], -) diff --git a/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/rule_based_system/HTMLRule.py b/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/rule_based_system/HTMLRule.py deleted file mode 100644 index dc649e8de8ba2e54e94e5e706e984e091c53a18b..0000000000000000000000000000000000000000 --- a/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/rule_based_system/HTMLRule.py +++ /dev/null @@ -1,27 +0,0 @@ -from bs4 import BeautifulSoup - -from src.rule_based_system.Rule import Rule -from src.rule_based_system.Verdict import Verdict - - -class HTMLRule(Rule): - - def get_verdict(self, comment_text: str) -> Verdict: - html = self.find_html(comment_text) - - return Verdict(len(html) == 0, html) - - @staticmethod - def find_html(text: str) -> list: - html = BeautifulSoup(text, "html.parser").find_all() - - return [str(tag) for tag in html] - - def is_strict(self) -> bool: - """ - This rule occasionally removes appropriate comments when names are enclosed in triangular brackets e.g. 
- """ - return False - - def get_rule_description(self) -> str: - return 'HTML used in comment text' diff --git a/spaces/openbmb/viscpm-chat/README.md b/spaces/openbmb/viscpm-chat/README.md deleted file mode 100644 index cf6cefb9fb5fe7234699824516d94f57cf9fdaa4..0000000000000000000000000000000000000000 --- a/spaces/openbmb/viscpm-chat/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Viscpm Chat -emoji: 🚀 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -duplicated_from: cppowboy/viscpm-chat ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/os1187/contract-review/README.md b/spaces/os1187/contract-review/README.md deleted file mode 100644 index eacc305c3ded72631fac2fdeeca82ac2f63a8415..0000000000000000000000000000000000000000 --- a/spaces/os1187/contract-review/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Contract Review -emoji: 📜 -colorFrom: purple -colorTo: red -sdk: streamlit -app_file: app.py -pinned: true -duplicated_from: marshmellow77/contract-review ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/osbm/prostate158-monai-inference/inference.py b/spaces/osbm/prostate158-monai-inference/inference.py deleted file mode 100644 index 17bb542326566f69b5104fbf8a1a86319f41aac8..0000000000000000000000000000000000000000 --- a/spaces/osbm/prostate158-monai-inference/inference.py +++ /dev/null @@ -1,184 +0,0 @@ -import monai -import torch -import pandas as pd -import nibabel as nib -import numpy as np -from monai.data import DataLoader -from monai.utils.enums import CommonKeys -from scipy import ndimage -from monai.data import Dataset -from monai.inferers import sliding_window_inference -from monai.metrics import DiceMetric -from monai.transforms import ( - Activationsd, - AsDiscreted, - Compose, - ConcatItemsd, - KeepLargestConnectedComponentd, - LoadImaged, - EnsureChannelFirstd, - EnsureTyped, - SaveImaged, - ScaleIntensityd, - NormalizeIntensityd, - Spacingd, - Orientationd, -) - -# device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -# print("Using device:", device) - -# model = monai.networks.nets.UNet( -# in_channels=1, -# out_channels=3, -# spatial_dims=3, -# channels=[16, 32, 64, 128, 256, 512], -# strides=[2, 2, 2, 2, 2], -# num_res_units=4, -# act="PRELU", -# norm="BATCH", -# dropout=0.15, -# ) - -# model.load_state_dict(torch.load("anatomy.pt", map_location=device)) - -# keys = ("t2", "t2_anatomy_reader1") -# transforms = Compose( -# [ -# LoadImaged(keys=keys, image_only=False), -# EnsureChannelFirstd(keys=keys), -# Spacingd(keys=keys, pixdim=[0.5, 0.5, 0.5], mode=("bilinear", "nearest")), -# Orientationd(keys=keys, axcodes="RAS"), -# ScaleIntensityd(keys=keys, minv=0, maxv=1), -# NormalizeIntensityd(keys=keys), -# EnsureTyped(keys=keys), -# ConcatItemsd(keys=("t2"), name=CommonKeys.IMAGE, dim=0), -# ConcatItemsd(keys=("t2_anatomy_reader1"), name=CommonKeys.LABEL, dim=0), -# ], -# ) - -# postprocessing = Compose( -# [ -# EnsureTyped(keys=[CommonKeys.PRED, CommonKeys.LABEL]), -# KeepLargestConnectedComponentd( -# keys=CommonKeys.PRED, -# applied_labels=list(range(1, 3)) -# ), -# ], -# ) - -keys = ("t2") -transforms = Compose( - [ - LoadImaged(keys=keys, image_only=False), - EnsureChannelFirstd(keys=keys), - Spacingd(keys=keys, pixdim=[0.5, 0.5, 0.5], mode=("bilinear")), - Orientationd(keys=keys, axcodes="RAS"), - ScaleIntensityd(keys=keys, minv=0, maxv=1), - NormalizeIntensityd(keys=keys), - EnsureTyped(keys=keys), - ConcatItemsd(keys=("t2"), name=CommonKeys.IMAGE, dim=0), - ], -) - -postprocessing = Compose( - [ - EnsureTyped(keys=[CommonKeys.PRED]), - KeepLargestConnectedComponentd( - keys=CommonKeys.PRED, - applied_labels=list(range(1, 3)) - ), - ], -) - - - -inferer = monai.inferers.SlidingWindowInferer( - roi_size=(96, 96, 96), - sw_batch_size=4, - overlap=0.5, -) - -def resize_image(image: np.array, target_shape: tuple): - depth_factor = target_shape[0] / image.shape[0] - width_factor = target_shape[1] / image.shape[1] - height_factor = target_shape[2] / image.shape[2] - - return ndimage.zoom(image, (depth_factor, width_factor, height_factor), order=1) - -# model.eval() -# with torch.no_grad(): -# for i in range(len(test_ds)): -# example = test_ds[i] -# label = example["t2_anatomy_reader1"] -# input_tensor = example["t2"].unsqueeze(0) -# input_tensor = input_tensor.to(device) -# output_tensor = inferer(input_tensor, model) -# output_tensor = output_tensor.argmax(dim=1, keepdim=False) -# output_tensor = output_tensor.squeeze(0).to(torch.device("cpu")) - -# output_tensor = postprocessing({"pred": output_tensor, "label": label})["pred"] -# 
output_tensor = output_tensor.numpy().astype(np.uint8) -# target_shape = example["t2_meta_dict"]["spatial_shape"] -# output_tensor = resize_image(output_tensor, target_shape) - -# # flip first two dimensions -# output_tensor = np.flip(output_tensor, axis=0) -# output_tensor = np.flip(output_tensor, axis=1) - -# new_image = nib.Nifti1Image(output_tensor, affine=example["t2_meta_dict"]["affine"]) -# nib.save(new_image, f"test/{i+1:03}/predicted.nii.gz") - -# print("Saved", i+1) - - -def make_inference(data_dict:list) -> str: - - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - print("Using device:", device) - - model = monai.networks.nets.UNet( - in_channels=1, - out_channels=3, - spatial_dims=3, - channels=[16, 32, 64, 128, 256, 512], - strides=[2, 2, 2, 2, 2], - num_res_units=4, - act="PRELU", - norm="BATCH", - dropout=0.15, - ) - - model.load_state_dict(torch.load("anatomy.pt", map_location=device)) - - - test_ds = Dataset( - data=data_dict, - transform=transforms, - ) - model.eval() - with torch.no_grad(): - example = test_ds[0] - # label = example["t2_anatomy_reader1"] - input_tensor = example["t2"].unsqueeze(0) - input_tensor = input_tensor.to(device) - output_tensor = inferer(input_tensor, model) - output_tensor = output_tensor.argmax(dim=1, keepdim=False) - output_tensor = output_tensor.squeeze(0).to(torch.device("cpu")) - - # output_tensor = postprocessing({"pred": output_tensor, "label": label})["pred"] - output_tensor = postprocessing({"pred": output_tensor})["pred"] - output_tensor = output_tensor.numpy().astype(np.uint8) - target_shape = example["t2_meta_dict"]["spatial_shape"] - output_tensor = resize_image(output_tensor, target_shape) - - # flip first two dimensions - output_tensor = np.flip(output_tensor, axis=0) - output_tensor = np.flip(output_tensor, axis=1) - - new_image = nib.Nifti1Image(output_tensor, affine=example["t2_meta_dict"]["affine"]) - nib.save(new_image, "predicted.nii.gz") - return "predicted.nii.gz" - diff --git a/spaces/owaiskha9654/Custom_Yolov7/README.md b/spaces/owaiskha9654/Custom_Yolov7/README.md deleted file mode 100644 index 09b9b1af10af049da52dce17b5a4d62cdcd72b3e..0000000000000000000000000000000000000000 --- a/spaces/owaiskha9654/Custom_Yolov7/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Custom Yolov7 Car Person -emoji: 🔥 ⚡ 🔥 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.0.24 -python: 3.8.0 -app_file: app.py -models: [owaiskha9654/Yolov7_Custom_Object_Detection, best] -pinned: false ---- \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/CODE_OF_CONDUCT.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/CODE_OF_CONDUCT.md deleted file mode 100644 index 05954dfae2798fd0707c3c100ced94855a938eac..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,130 +0,0 @@ - -# Contributor Covenant Code of Conduct - -## Our Pledge - -We as members, contributors, and leaders pledge to make participation in our -community a harassment-free experience for everyone, regardless of age, body -size, visible or invisible disability, ethnicity, sex characteristics, gender -identity and expression, level of experience, education, socio-economic status, -nationality, personal appearance, race, religion, or sexual identity -and orientation. - -We pledge to act and interact in ways that contribute to an open, welcoming, -diverse, inclusive, and healthy community. 
- -## Our Standards - -Examples of behavior that contributes to a positive environment for our -community include: - -* Demonstrating empathy and kindness toward other people -* Being respectful of differing opinions, viewpoints, and experiences -* Giving and gracefully accepting constructive feedback -* Accepting responsibility and apologizing to those affected by our mistakes, - and learning from the experience -* Focusing on what is best not just for us as individuals, but for the - overall diffusers community - -Examples of unacceptable behavior include: - -* The use of sexualized language or imagery, and sexual attention or - advances of any kind -* Trolling, insulting or derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or email - address, without their explicit permission -* Spamming issues or PRs with links to projects unrelated to this library -* Other conduct which could reasonably be considered inappropriate in a - professional setting - -## Enforcement Responsibilities - -Community leaders are responsible for clarifying and enforcing our standards of -acceptable behavior and will take appropriate and fair corrective action in -response to any behavior that they deem inappropriate, threatening, offensive, -or harmful. - -Community leaders have the right and responsibility to remove, edit, or reject -comments, commits, code, wiki edits, issues, and other contributions that are -not aligned to this Code of Conduct, and will communicate reasons for moderation -decisions when appropriate. - -## Scope - -This Code of Conduct applies within all community spaces, and also applies when -an individual is officially representing the community in public spaces. -Examples of representing our community include using an official e-mail address, -posting via an official social media account, or acting as an appointed -representative at an online or offline event. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported to the community leaders responsible for enforcement at -feedback@huggingface.co. -All complaints will be reviewed and investigated promptly and fairly. - -All community leaders are obligated to respect the privacy and security of the -reporter of any incident. - -## Enforcement Guidelines - -Community leaders will follow these Community Impact Guidelines in determining -the consequences for any action they deem in violation of this Code of Conduct: - -### 1. Correction - -**Community Impact**: Use of inappropriate language or other behavior deemed -unprofessional or unwelcome in the community. - -**Consequence**: A private, written warning from community leaders, providing -clarity around the nature of the violation and an explanation of why the -behavior was inappropriate. A public apology may be requested. - -### 2. Warning - -**Community Impact**: A violation through a single incident or series -of actions. - -**Consequence**: A warning with consequences for continued behavior. No -interaction with the people involved, including unsolicited interaction with -those enforcing the Code of Conduct, for a specified period of time. This -includes avoiding interactions in community spaces as well as external channels -like social media. Violating these terms may lead to a temporary or -permanent ban. - -### 3. Temporary Ban - -**Community Impact**: A serious violation of community standards, including -sustained inappropriate behavior. 
- -**Consequence**: A temporary ban from any sort of interaction or public -communication with the community for a specified period of time. No public or -private interaction with the people involved, including unsolicited interaction -with those enforcing the Code of Conduct, is allowed during this period. -Violating these terms may lead to a permanent ban. - -### 4. Permanent Ban - -**Community Impact**: Demonstrating a pattern of violation of community -standards, including sustained inappropriate behavior, harassment of an -individual, or aggression toward or disparagement of classes of individuals. - -**Consequence**: A permanent ban from any sort of public interaction within -the community. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], -version 2.0, available at -https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. - -Community Impact Guidelines were inspired by [Mozilla's code of conduct -enforcement ladder](https://github.com/mozilla/diversity). - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see the FAQ at -https://www.contributor-covenant.org/faq. Translations are available at -https://www.contributor-covenant.org/translations. diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/intel_opts/textual_inversion/textual_inversion_bf16.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/intel_opts/textual_inversion/textual_inversion_bf16.py deleted file mode 100644 index ff24130c9b61e932e14687250a0ad0e95a5c7089..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/intel_opts/textual_inversion/textual_inversion_bf16.py +++ /dev/null @@ -1,635 +0,0 @@ -import argparse -import itertools -import math -import os -import random -from pathlib import Path - -import intel_extension_for_pytorch as ipex -import numpy as np -import PIL -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from huggingface_hub import create_repo, upload_folder - -# TODO: remove and import from diffusers.utils when the new version of diffusers is released -from packaging import version -from PIL import Image -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from diffusers import AutoencoderKL, DDPMScheduler, PNDMScheduler, StableDiffusionPipeline, UNet2DConditionModel -from diffusers.optimization import get_scheduler -from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker -from diffusers.utils import check_min_version - - -if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"): - PIL_INTERPOLATION = { - "linear": PIL.Image.Resampling.BILINEAR, - "bilinear": PIL.Image.Resampling.BILINEAR, - "bicubic": PIL.Image.Resampling.BICUBIC, - "lanczos": PIL.Image.Resampling.LANCZOS, - "nearest": PIL.Image.Resampling.NEAREST, - } -else: - PIL_INTERPOLATION = { - "linear": PIL.Image.LINEAR, - "bilinear": PIL.Image.BILINEAR, - "bicubic": PIL.Image.BICUBIC, - "lanczos": PIL.Image.LANCZOS, - "nearest": PIL.Image.NEAREST, - } -# ------------------------------------------------------------------------------ - - -# 
Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.13.0.dev0") - - -logger = get_logger(__name__) - - -def save_progress(text_encoder, placeholder_token_id, accelerator, args, save_path): - logger.info("Saving embeddings") - learned_embeds = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[placeholder_token_id] - learned_embeds_dict = {args.placeholder_token: learned_embeds.detach().cpu()} - torch.save(learned_embeds_dict, save_path) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--save_steps", - type=int, - default=500, - help="Save learned_embeds.bin every X updates steps.", - ) - parser.add_argument( - "--only_save_embeds", - action="store_true", - default=False, - help="Save only the embeddings for the new concept.", - ) - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--train_data_dir", type=str, default=None, required=True, help="A folder containing the training data." - ) - parser.add_argument( - "--placeholder_token", - type=str, - default=None, - required=True, - help="A token to use as a placeholder for the concept.", - ) - parser.add_argument( - "--initializer_token", type=str, default=None, required=True, help="A token to use as initializer word." - ) - parser.add_argument("--learnable_property", type=str, default="object", help="Choose between 'object' and 'style'") - parser.add_argument("--repeats", type=int, default=100, help="How many times to repeat the training data.") - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution." - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=5000, - help="Total number of training steps to perform. 
If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=True, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." 
- ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.train_data_dir is None: - raise ValueError("You must specify a train data directory.") - - return args - - -imagenet_templates_small = [ - "a photo of a {}", - "a rendering of a {}", - "a cropped photo of the {}", - "the photo of a {}", - "a photo of a clean {}", - "a photo of a dirty {}", - "a dark photo of the {}", - "a photo of my {}", - "a photo of the cool {}", - "a close-up photo of a {}", - "a bright photo of the {}", - "a cropped photo of a {}", - "a photo of the {}", - "a good photo of the {}", - "a photo of one {}", - "a close-up photo of the {}", - "a rendition of the {}", - "a photo of the clean {}", - "a rendition of a {}", - "a photo of a nice {}", - "a good photo of a {}", - "a photo of the nice {}", - "a photo of the small {}", - "a photo of the weird {}", - "a photo of the large {}", - "a photo of a cool {}", - "a photo of a small {}", -] - -imagenet_style_templates_small = [ - "a painting in the style of {}", - "a rendering in the style of {}", - "a cropped painting in the style of {}", - "the painting in the style of {}", - "a clean painting in the style of {}", - "a dirty painting in the style of {}", - "a dark painting in the style of {}", - "a picture in the style of {}", - "a cool painting in the style of {}", - "a close-up painting in the style of {}", - "a bright painting in the style of {}", - "a cropped painting in the style of {}", - "a good painting in the style of {}", - "a close-up painting in the style of {}", - "a rendition in the style of {}", - "a nice painting in the style of {}", - "a small painting in the style of {}", - "a weird painting in the style of {}", - "a large painting in the style of {}", -] - - -class TextualInversionDataset(Dataset): - def __init__( - self, - data_root, - tokenizer, - learnable_property="object", # [object, style] - size=512, - repeats=100, - interpolation="bicubic", - flip_p=0.5, - set="train", - placeholder_token="*", - center_crop=False, - ): - self.data_root = data_root - self.tokenizer = tokenizer - self.learnable_property = learnable_property - self.size = size - self.placeholder_token = placeholder_token - self.center_crop = center_crop - self.flip_p = flip_p - - self.image_paths = [os.path.join(self.data_root, file_path) for file_path in os.listdir(self.data_root)] - - self.num_images = len(self.image_paths) - self._length = self.num_images - - if set == "train": - self._length = self.num_images * repeats - - self.interpolation = { - "linear": PIL_INTERPOLATION["linear"], - "bilinear": PIL_INTERPOLATION["bilinear"], - "bicubic": PIL_INTERPOLATION["bicubic"], - "lanczos": PIL_INTERPOLATION["lanczos"], - }[interpolation] - - self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small - self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p) - - def __len__(self): - return self._length - - def __getitem__(self, i): - example = {} - image = Image.open(self.image_paths[i % self.num_images]) - - if not image.mode == "RGB": - image = image.convert("RGB") - - placeholder_string = self.placeholder_token - text = random.choice(self.templates).format(placeholder_string) - - example["input_ids"] = self.tokenizer( - text, - padding="max_length", - 
truncation=True, - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids[0] - - # default to score-sde preprocessing - img = np.array(image).astype(np.uint8) - - if self.center_crop: - crop = min(img.shape[0], img.shape[1]) - ( - h, - w, - ) = ( - img.shape[0], - img.shape[1], - ) - img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2] - - image = Image.fromarray(img) - image = image.resize((self.size, self.size), resample=self.interpolation) - - image = self.flip_transform(image) - image = np.array(image).astype(np.uint8) - image = (image / 127.5 - 1.0).astype(np.float32) - - example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1) - return example - - -def freeze_params(params): - for param in params: - param.requires_grad = False - - -def main(): - args = parse_args() - logging_dir = os.path.join(args.output_dir, args.logging_dir) - accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir) - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - project_config=accelerator_project_config, - ) - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load the tokenizer and add the placeholder token as a additional special token - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Add the placeholder token in tokenizer - num_added_tokens = tokenizer.add_tokens(args.placeholder_token) - if num_added_tokens == 0: - raise ValueError( - f"The tokenizer already contains the token {args.placeholder_token}. Please pass a different" - " `placeholder_token` that is not already in the tokenizer." 
- ) - - # Convert the initializer_token, placeholder_token to ids - token_ids = tokenizer.encode(args.initializer_token, add_special_tokens=False) - # Check if initializer_token is a single token or a sequence of tokens - if len(token_ids) > 1: - raise ValueError("The initializer token must be a single token.") - - initializer_token_id = token_ids[0] - placeholder_token_id = tokenizer.convert_tokens_to_ids(args.placeholder_token) - - # Load models and create wrapper for stable diffusion - text_encoder = CLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="text_encoder", - revision=args.revision, - ) - vae = AutoencoderKL.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="vae", - revision=args.revision, - ) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="unet", - revision=args.revision, - ) - - # Resize the token embeddings as we are adding new special tokens to the tokenizer - text_encoder.resize_token_embeddings(len(tokenizer)) - - # Initialise the newly added placeholder token with the embeddings of the initializer token - token_embeds = text_encoder.get_input_embeddings().weight.data - token_embeds[placeholder_token_id] = token_embeds[initializer_token_id] - - # Freeze vae and unet - freeze_params(vae.parameters()) - freeze_params(unet.parameters()) - # Freeze all parameters except for the token embeddings in text encoder - params_to_freeze = itertools.chain( - text_encoder.text_model.encoder.parameters(), - text_encoder.text_model.final_layer_norm.parameters(), - text_encoder.text_model.embeddings.position_embedding.parameters(), - ) - freeze_params(params_to_freeze) - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - optimizer = torch.optim.AdamW( - text_encoder.get_input_embeddings().parameters(), # only optimize the embeddings - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - - train_dataset = TextualInversionDataset( - data_root=args.train_data_dir, - tokenizer=tokenizer, - size=args.resolution, - placeholder_token=args.placeholder_token, - repeats=args.repeats, - learnable_property=args.learnable_property, - center_crop=args.center_crop, - set="train", - ) - train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=args.train_batch_size, shuffle=True) - - # Scheduler and math around the number of training steps. 
- overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes, - num_training_steps=args.max_train_steps * accelerator.num_processes, - ) - - text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - text_encoder, optimizer, train_dataloader, lr_scheduler - ) - - # Move vae and unet to device - vae.to(accelerator.device) - unet.to(accelerator.device) - - # Keep vae and unet in eval model as we don't train these - vae.eval() - unet.eval() - - unet = ipex.optimize(unet, dtype=torch.bfloat16, inplace=True) - vae = ipex.optimize(vae, dtype=torch.bfloat16, inplace=True) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("textual_inversion", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - # Only show the progress bar once on each machine. 
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - global_step = 0 - - text_encoder.train() - text_encoder, optimizer = ipex.optimize(text_encoder, optimizer=optimizer, dtype=torch.bfloat16) - - for epoch in range(args.num_train_epochs): - for step, batch in enumerate(train_dataloader): - with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16): - with accelerator.accumulate(text_encoder): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"]).latent_dist.sample().detach() - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise = torch.randn(latents.shape).to(latents.device) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint( - 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device - ).long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - loss = F.mse_loss(model_pred, target, reduction="none").mean([1, 2, 3]).mean() - accelerator.backward(loss) - - # Zero out the gradients for all token embeddings except the newly added - # embeddings for the concept, as we only want to optimize the concept embeddings - if accelerator.num_processes > 1: - grads = text_encoder.module.get_input_embeddings().weight.grad - else: - grads = text_encoder.get_input_embeddings().weight.grad - # Get the index for tokens that we want to zero the grads for - index_grads_to_zero = torch.arange(len(tokenizer)) != placeholder_token_id - grads.data[index_grads_to_zero, :] = grads.data[index_grads_to_zero, :].fill_(0) - - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - if global_step % args.save_steps == 0: - save_path = os.path.join(args.output_dir, f"learned_embeds-steps-{global_step}.bin") - save_progress(text_encoder, placeholder_token_id, accelerator, args, save_path) - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - accelerator.wait_for_everyone() - - # Create the pipeline using using the trained modules and save it. 
- if accelerator.is_main_process: - if args.push_to_hub and args.only_save_embeds: - logger.warn("Enabling full model saving because --push_to_hub=True was specified.") - save_full_model = True - else: - save_full_model = not args.only_save_embeds - if save_full_model: - pipeline = StableDiffusionPipeline( - text_encoder=accelerator.unwrap_model(text_encoder), - vae=vae, - unet=unet, - tokenizer=tokenizer, - scheduler=PNDMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler"), - safety_checker=StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker"), - feature_extractor=CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32"), - ) - pipeline.save_pretrained(args.output_dir) - # Save the newly trained embeddings - save_path = os.path.join(args.output_dir, "learned_embeds.bin") - save_progress(text_encoder, placeholder_token_id, accelerator, args, save_path) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/patgpt4/MusicGen/tests/data/__init__.py b/spaces/patgpt4/MusicGen/tests/data/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/patgpt4/MusicGen/tests/data/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/README.md b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/README.md deleted file mode 100644 index f49a7f77264cb1ebed19cb96eaf376b55fa2bf78..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/README.md +++ /dev/null @@ -1 +0,0 @@ -The code for network dissection in this folder is taken from the original implementation here: https://github.com/davidbau/dissect \ No newline at end of file diff --git a/spaces/paulokewunmi/jumia_product_search/app.py b/spaces/paulokewunmi/jumia_product_search/app.py deleted file mode 100644 index a69a68fa9fbaba09d2099c3a03c1000a402cbc30..0000000000000000000000000000000000000000 --- a/spaces/paulokewunmi/jumia_product_search/app.py +++ /dev/null @@ -1,100 +0,0 @@ -import streamlit as st - -import sys - -sys.path.insert(0, "image_search_engine") - - -from image_search_engine import utils -from image_search_engine.product_image_search import JumiaProductSearch -from PIL import Image, ImageOps - -import requests -from streamlit_image_select import image_select - -jumia = JumiaProductSearch() - - -def get_search_results(query): - res = jumia.search(query, 9) - - images, names, urls = [], [], [] - - for i, record in enumerate(res["matches"]): - metadata = record["metadata"] - images.append(metadata["product_image_url"]) - names.append(metadata["product_name"]) - urls.append(metadata["product_url"]) - - return images, names, urls - - -banner_img = Image.open(utils.PACKAGE_DIR.parent / "jumia_lens.png") -st.image(banner_img) - -st.markdown("#### Currently limited to the following product categories; wrist watches, backpacks, usb drives/hubs, mouse, ergonomic/office chairs, earpiece/headsets") - -input_options = 
st.radio("Select Input Option", ("image upload", "use example images")) - -img = None - -if input_options == "image upload": - img = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"]) - - -else: - with st.expander(label="Choose sample image", expanded=False): - img = image_select( - label="Use example image", - images=[ - "https://ng.jumia.is/unsafe/fit-in/500x500/filters:fill(white)/product/88/3305912/1.jpg?5807", - "https://ng.jumia.is/unsafe/fit-in/500x500/filters:fill(white)/product/74/0464341/1.jpg?7325", - "https://watchlocker.ng/wp-content/uploads/2021/04/JY8085-14H.jpg", - "https://www-konga-com-res.cloudinary.com/w_auto,f_auto,fl_lossy,dpr_auto,q_auto/media/catalog/product/M/L/196920_1641394875.jpg", - "https://www-konga-com-res.cloudinary.com/w_auto,f_auto,fl_lossy,dpr_auto,q_auto/media/catalog/product/I/K/154983_1595624114.jpg", - "https://ng.jumia.is/unsafe/fit-in/500x500/filters:fill(white)/product/73/3254702/1.jpg?5592", - "https://store.storeimages.cdn-apple.com/4668/as-images.apple.com/is/MKUQ3_VW_34FR+watch-44-alum-midnight-cell-se_VW_34FR_WF_CO_GEO_AE?wid=1400&hei=1400&trim=1%2C0&fmt=p-jpg&qlt=95&.v=1683237043713", - "https://ng.jumia.is/unsafe/fit-in/500x500/filters:fill(white)/product/71/6579011/1.jpg?5730", - ], - ) - - -if img: - if isinstance(img, str): - image = Image.open(requests.get(img, stream=True).raw) - else: - image = Image.open(img) - - image = ImageOps.exif_transpose(image) - - with st.columns(3)[1]: - st.markdown("### Query Image.") - st.image(image) - st.markdown(" ") - st.markdown("### Search results.") - st.markdown(" ") - n = 3 - product_images, product_names, product_urls = get_search_results(image) - - for i, col in enumerate(st.columns(n)): - positions = (i, i + 3, i + 6) - names = [product_names[i] for i in positions] - images = [product_images[i] for i in positions] - urls = [product_urls[i] for i in positions] - - with col: - st.write(names[0]) - st.image(images[0]) - st.write(urls[0]) - - st.write(names[1]) - st.image(images[1]) - st.write(urls[1]) - - st.write(names[2]) - st.image(images[2]) - st.write(urls[2]) - - # st.write(names[3]) - # st.image(images[3]) - # st.write(urls[3]) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/resolution/__init__.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/resolution/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/resolution/resolvelib/base.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/resolution/resolvelib/base.py deleted file mode 100644 index b206692a0a976d8336e3f5896eadf4765a33fb2c..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/resolution/resolvelib/base.py +++ /dev/null @@ -1,141 +0,0 @@ -from typing import FrozenSet, Iterable, Optional, Tuple, Union - -from pip._vendor.packaging.specifiers import SpecifierSet -from pip._vendor.packaging.utils import NormalizedName, canonicalize_name -from pip._vendor.packaging.version import LegacyVersion, Version - -from pip._internal.models.link import Link, links_equivalent -from pip._internal.req.req_install import InstallRequirement -from pip._internal.utils.hashes import Hashes - -CandidateLookup = Tuple[Optional["Candidate"], Optional[InstallRequirement]] -CandidateVersion = 
Union[LegacyVersion, Version] - - -def format_name(project: str, extras: FrozenSet[str]) -> str: - if not extras: - return project - canonical_extras = sorted(canonicalize_name(e) for e in extras) - return "{}[{}]".format(project, ",".join(canonical_extras)) - - -class Constraint: - def __init__( - self, specifier: SpecifierSet, hashes: Hashes, links: FrozenSet[Link] - ) -> None: - self.specifier = specifier - self.hashes = hashes - self.links = links - - @classmethod - def empty(cls) -> "Constraint": - return Constraint(SpecifierSet(), Hashes(), frozenset()) - - @classmethod - def from_ireq(cls, ireq: InstallRequirement) -> "Constraint": - links = frozenset([ireq.link]) if ireq.link else frozenset() - return Constraint(ireq.specifier, ireq.hashes(trust_internet=False), links) - - def __bool__(self) -> bool: - return bool(self.specifier) or bool(self.hashes) or bool(self.links) - - def __and__(self, other: InstallRequirement) -> "Constraint": - if not isinstance(other, InstallRequirement): - return NotImplemented - specifier = self.specifier & other.specifier - hashes = self.hashes & other.hashes(trust_internet=False) - links = self.links - if other.link: - links = links.union([other.link]) - return Constraint(specifier, hashes, links) - - def is_satisfied_by(self, candidate: "Candidate") -> bool: - # Reject if there are any mismatched URL constraints on this package. - if self.links and not all(_match_link(link, candidate) for link in self.links): - return False - # We can safely always allow prereleases here since PackageFinder - # already implements the prerelease logic, and would have filtered out - # prerelease candidates if the user does not expect them. - return self.specifier.contains(candidate.version, prereleases=True) - - -class Requirement: - @property - def project_name(self) -> NormalizedName: - """The "project name" of a requirement. - - This is different from ``name`` if this requirement contains extras, - in which case ``name`` would contain the ``[...]`` part, while this - refers to the name of the project. - """ - raise NotImplementedError("Subclass should override") - - @property - def name(self) -> str: - """The name identifying this requirement in the resolver. - - This is different from ``project_name`` if this requirement contains - extras, where ``project_name`` would not contain the ``[...]`` part. - """ - raise NotImplementedError("Subclass should override") - - def is_satisfied_by(self, candidate: "Candidate") -> bool: - return False - - def get_candidate_lookup(self) -> CandidateLookup: - raise NotImplementedError("Subclass should override") - - def format_for_error(self) -> str: - raise NotImplementedError("Subclass should override") - - -def _match_link(link: Link, candidate: "Candidate") -> bool: - if candidate.source_link: - return links_equivalent(link, candidate.source_link) - return False - - -class Candidate: - @property - def project_name(self) -> NormalizedName: - """The "project name" of the candidate. - - This is different from ``name`` if this candidate contains extras, - in which case ``name`` would contain the ``[...]`` part, while this - refers to the name of the project. - """ - raise NotImplementedError("Override in subclass") - - @property - def name(self) -> str: - """The name identifying this candidate in the resolver. - - This is different from ``project_name`` if this candidate contains - extras, where ``project_name`` would not contain the ``[...]`` part. 
- """ - raise NotImplementedError("Override in subclass") - - @property - def version(self) -> CandidateVersion: - raise NotImplementedError("Override in subclass") - - @property - def is_installed(self) -> bool: - raise NotImplementedError("Override in subclass") - - @property - def is_editable(self) -> bool: - raise NotImplementedError("Override in subclass") - - @property - def source_link(self) -> Optional[Link]: - raise NotImplementedError("Override in subclass") - - def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]: - raise NotImplementedError("Override in subclass") - - def get_install_requirement(self) -> Optional[InstallRequirement]: - raise NotImplementedError("Override in subclass") - - def format_for_error(self) -> str: - raise NotImplementedError("Subclass should override") diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langhungarianmodel.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langhungarianmodel.py deleted file mode 100644 index 09a0d326b983b59b58f84b00e55fbe6909a23793..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langhungarianmodel.py +++ /dev/null @@ -1,4649 +0,0 @@ -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - -HUNGARIAN_LANG_MODEL = { - 28: { # 'A' - 28: 0, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 2, # 'D' - 32: 1, # 'E' - 50: 1, # 'F' - 49: 2, # 'G' - 38: 1, # 'H' - 39: 2, # 'I' - 53: 1, # 'J' - 36: 2, # 'K' - 41: 2, # 'L' - 34: 1, # 'M' - 35: 2, # 'N' - 47: 1, # 'O' - 46: 2, # 'P' - 43: 2, # 'R' - 33: 2, # 'S' - 37: 2, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 1, # 'Y' - 52: 2, # 'Z' - 2: 0, # 'a' - 18: 1, # 'b' - 26: 1, # 'c' - 17: 2, # 'd' - 1: 1, # 'e' - 27: 1, # 'f' - 12: 1, # 'g' - 20: 1, # 'h' - 9: 1, # 'i' - 22: 1, # 'j' - 7: 2, # 'k' - 6: 2, # 'l' - 13: 2, # 'm' - 4: 2, # 'n' - 8: 0, # 'o' - 23: 2, # 'p' - 10: 2, # 'r' - 5: 1, # 's' - 3: 1, # 't' - 21: 1, # 'u' - 19: 1, # 'v' - 62: 1, # 'x' - 16: 0, # 'y' - 11: 3, # 'z' - 51: 1, # 'Á' - 44: 0, # 'É' - 61: 1, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 40: { # 'B' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 2, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 1, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 0, # 'M' - 35: 1, # 'N' - 47: 2, # 'O' - 46: 0, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 2, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 3, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 2, # 'i' - 22: 1, # 'j' - 7: 0, # 'k' - 6: 1, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 8: 2, # 'o' - 23: 1, # 'p' - 10: 2, # 'r' - 5: 0, # 's' - 3: 0, # 't' - 21: 3, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 0, # 'z' - 51: 1, # 'Á' - 44: 1, # 'É' - 61: 1, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 2, # 'á' - 15: 2, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 1, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 1, # 'ő' - 56: 1, # 'ű' - }, - 54: { # 'C' - 28: 1, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 1, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 1, # 'H' - 39: 2, # 'I' - 53: 1, # 'J' - 36: 
1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 0, # 'N' - 47: 1, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 2, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 0, # 'V' - 55: 1, # 'Y' - 52: 1, # 'Z' - 2: 2, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 1, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 1, # 'h' - 9: 1, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 1, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 8: 2, # 'o' - 23: 0, # 'p' - 10: 1, # 'r' - 5: 3, # 's' - 3: 0, # 't' - 21: 1, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 1, # 'z' - 51: 1, # 'Á' - 44: 1, # 'É' - 61: 1, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 1, # 'á' - 15: 1, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 45: { # 'D' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 0, # 'C' - 45: 1, # 'D' - 32: 2, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 2, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 0, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 2, # 'O' - 46: 0, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 1, # 'Y' - 52: 1, # 'Z' - 2: 2, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 3, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 1, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 0, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 8: 1, # 'o' - 23: 0, # 'p' - 10: 2, # 'r' - 5: 0, # 's' - 3: 0, # 't' - 21: 2, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 1, # 'z' - 51: 1, # 'Á' - 44: 1, # 'É' - 61: 1, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 1, # 'á' - 15: 1, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 1, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 1, # 'ő' - 56: 0, # 'ű' - }, - 32: { # 'E' - 28: 1, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 1, # 'E' - 50: 1, # 'F' - 49: 2, # 'G' - 38: 1, # 'H' - 39: 1, # 'I' - 53: 1, # 'J' - 36: 2, # 'K' - 41: 2, # 'L' - 34: 2, # 'M' - 35: 2, # 'N' - 47: 1, # 'O' - 46: 1, # 'P' - 43: 2, # 'R' - 33: 2, # 'S' - 37: 2, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 1, # 'Y' - 52: 1, # 'Z' - 2: 1, # 'a' - 18: 1, # 'b' - 26: 1, # 'c' - 17: 2, # 'd' - 1: 1, # 'e' - 27: 1, # 'f' - 12: 3, # 'g' - 20: 1, # 'h' - 9: 1, # 'i' - 22: 1, # 'j' - 7: 1, # 'k' - 6: 2, # 'l' - 13: 2, # 'm' - 4: 2, # 'n' - 8: 0, # 'o' - 23: 1, # 'p' - 10: 2, # 'r' - 5: 2, # 's' - 3: 1, # 't' - 21: 2, # 'u' - 19: 1, # 'v' - 62: 1, # 'x' - 16: 0, # 'y' - 11: 3, # 'z' - 51: 1, # 'Á' - 44: 1, # 'É' - 61: 0, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 0, # 'Ú' - 63: 1, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 1, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 50: { # 'F' - 28: 1, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 1, # 'E' - 50: 1, # 'F' - 49: 0, # 'G' - 38: 1, # 'H' - 39: 1, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 1, # 'O' - 46: 0, # 'P' - 43: 1, # 'R' - 33: 0, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 0, # 'V' - 55: 1, # 'Y' - 52: 0, # 'Z' - 2: 2, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 2, # 'e' - 27: 1, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 2, # 'i' - 22: 1, # 'j' - 7: 0, # 'k' - 6: 1, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 8: 2, # 'o' - 23: 0, # 'p' - 10: 2, # 'r' - 5: 0, # 's' - 3: 0, # 't' - 21: 1, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 0, # 'z' - 51: 1, # 'Á' - 44: 1, # 'É' - 61: 0, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 0, # 'Ú' - 63: 1, # 'Ü' - 14: 1, # 'á' - 15: 1, # 'é' - 30: 0, # 'í' 
- 25: 0, # 'ó' - 24: 2, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 1, # 'ő' - 56: 1, # 'ű' - }, - 49: { # 'G' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 2, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 1, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 1, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 2, # 'Y' - 52: 1, # 'Z' - 2: 2, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 2, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 1, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 1, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 8: 2, # 'o' - 23: 0, # 'p' - 10: 2, # 'r' - 5: 0, # 's' - 3: 0, # 't' - 21: 1, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 2, # 'y' - 11: 0, # 'z' - 51: 1, # 'Á' - 44: 1, # 'É' - 61: 1, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 1, # 'á' - 15: 1, # 'é' - 30: 0, # 'í' - 25: 1, # 'ó' - 24: 1, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 1, # 'ő' - 56: 0, # 'ű' - }, - 38: { # 'H' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 0, # 'D' - 32: 1, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 1, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 1, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 1, # 'O' - 46: 0, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 0, # 'V' - 55: 1, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 2, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 2, # 'i' - 22: 1, # 'j' - 7: 0, # 'k' - 6: 1, # 'l' - 13: 1, # 'm' - 4: 0, # 'n' - 8: 3, # 'o' - 23: 0, # 'p' - 10: 1, # 'r' - 5: 0, # 's' - 3: 0, # 't' - 21: 2, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 0, # 'z' - 51: 2, # 'Á' - 44: 2, # 'É' - 61: 1, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 2, # 'á' - 15: 1, # 'é' - 30: 2, # 'í' - 25: 1, # 'ó' - 24: 1, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 1, # 'ő' - 56: 1, # 'ű' - }, - 39: { # 'I' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 1, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 2, # 'I' - 53: 1, # 'J' - 36: 2, # 'K' - 41: 2, # 'L' - 34: 1, # 'M' - 35: 2, # 'N' - 47: 1, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 2, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 0, # 'Y' - 52: 2, # 'Z' - 2: 0, # 'a' - 18: 1, # 'b' - 26: 1, # 'c' - 17: 2, # 'd' - 1: 0, # 'e' - 27: 1, # 'f' - 12: 2, # 'g' - 20: 1, # 'h' - 9: 0, # 'i' - 22: 1, # 'j' - 7: 1, # 'k' - 6: 2, # 'l' - 13: 2, # 'm' - 4: 1, # 'n' - 8: 0, # 'o' - 23: 1, # 'p' - 10: 2, # 'r' - 5: 2, # 's' - 3: 2, # 't' - 21: 0, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 1, # 'z' - 51: 1, # 'Á' - 44: 1, # 'É' - 61: 0, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 53: { # 'J' - 28: 2, # 'A' - 40: 0, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 2, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 1, # 'H' - 39: 1, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 1, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 1, # 'Z' - 2: 2, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 2, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 1, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 0, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 8: 1, # 'o' - 23: 0, # 'p' - 
10: 0, # 'r' - 5: 0, # 's' - 3: 0, # 't' - 21: 2, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 0, # 'z' - 51: 1, # 'Á' - 44: 1, # 'É' - 61: 0, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 2, # 'á' - 15: 1, # 'é' - 30: 0, # 'í' - 25: 2, # 'ó' - 24: 2, # 'ö' - 31: 1, # 'ú' - 29: 0, # 'ü' - 42: 1, # 'ő' - 56: 0, # 'ű' - }, - 36: { # 'K' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 2, # 'E' - 50: 1, # 'F' - 49: 0, # 'G' - 38: 1, # 'H' - 39: 2, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 2, # 'O' - 46: 0, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 1, # 'Y' - 52: 0, # 'Z' - 2: 2, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 2, # 'e' - 27: 1, # 'f' - 12: 0, # 'g' - 20: 1, # 'h' - 9: 3, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 8: 2, # 'o' - 23: 0, # 'p' - 10: 2, # 'r' - 5: 0, # 's' - 3: 0, # 't' - 21: 1, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 0, # 'z' - 51: 1, # 'Á' - 44: 1, # 'É' - 61: 1, # 'Í' - 58: 1, # 'Ó' - 59: 2, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 2, # 'á' - 15: 2, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 2, # 'ö' - 31: 1, # 'ú' - 29: 2, # 'ü' - 42: 1, # 'ő' - 56: 0, # 'ű' - }, - 41: { # 'L' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 2, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 2, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 2, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 2, # 'O' - 46: 0, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 2, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 1, # 'Y' - 52: 1, # 'Z' - 2: 2, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 3, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 2, # 'i' - 22: 1, # 'j' - 7: 0, # 'k' - 6: 1, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 8: 2, # 'o' - 23: 0, # 'p' - 10: 0, # 'r' - 5: 0, # 's' - 3: 0, # 't' - 21: 2, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 0, # 'z' - 51: 2, # 'Á' - 44: 1, # 'É' - 61: 1, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 2, # 'á' - 15: 1, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 1, # 'ö' - 31: 0, # 'ú' - 29: 1, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 34: { # 'M' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 2, # 'E' - 50: 1, # 'F' - 49: 0, # 'G' - 38: 1, # 'H' - 39: 2, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 1, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 1, # 'Y' - 52: 1, # 'Z' - 2: 3, # 'a' - 18: 0, # 'b' - 26: 1, # 'c' - 17: 0, # 'd' - 1: 3, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 3, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 0, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 8: 3, # 'o' - 23: 0, # 'p' - 10: 1, # 'r' - 5: 0, # 's' - 3: 0, # 't' - 21: 2, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 0, # 'z' - 51: 2, # 'Á' - 44: 1, # 'É' - 61: 1, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 2, # 'á' - 15: 2, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 1, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 0, # 'ő' - 56: 1, # 'ű' - }, - 35: { # 'N' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 2, # 'D' - 32: 2, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 1, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 1, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 2, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' 
- 55: 2, # 'Y' - 52: 1, # 'Z' - 2: 3, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 3, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 2, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 0, # 'l' - 13: 0, # 'm' - 4: 1, # 'n' - 8: 2, # 'o' - 23: 0, # 'p' - 10: 0, # 'r' - 5: 0, # 's' - 3: 0, # 't' - 21: 1, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 2, # 'y' - 11: 0, # 'z' - 51: 1, # 'Á' - 44: 1, # 'É' - 61: 1, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 1, # 'á' - 15: 2, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 1, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 1, # 'ő' - 56: 0, # 'ű' - }, - 47: { # 'O' - 28: 1, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 1, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 1, # 'I' - 53: 1, # 'J' - 36: 2, # 'K' - 41: 2, # 'L' - 34: 2, # 'M' - 35: 2, # 'N' - 47: 1, # 'O' - 46: 1, # 'P' - 43: 2, # 'R' - 33: 2, # 'S' - 37: 2, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 1, # 'Y' - 52: 1, # 'Z' - 2: 0, # 'a' - 18: 1, # 'b' - 26: 1, # 'c' - 17: 1, # 'd' - 1: 1, # 'e' - 27: 1, # 'f' - 12: 1, # 'g' - 20: 1, # 'h' - 9: 1, # 'i' - 22: 1, # 'j' - 7: 2, # 'k' - 6: 2, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 8: 1, # 'o' - 23: 1, # 'p' - 10: 2, # 'r' - 5: 1, # 's' - 3: 2, # 't' - 21: 1, # 'u' - 19: 0, # 'v' - 62: 1, # 'x' - 16: 0, # 'y' - 11: 1, # 'z' - 51: 1, # 'Á' - 44: 1, # 'É' - 61: 0, # 'Í' - 58: 1, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 46: { # 'P' - 28: 1, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 1, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 1, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 0, # 'M' - 35: 1, # 'N' - 47: 1, # 'O' - 46: 1, # 'P' - 43: 2, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 0, # 'Y' - 52: 1, # 'Z' - 2: 2, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 2, # 'e' - 27: 1, # 'f' - 12: 0, # 'g' - 20: 1, # 'h' - 9: 2, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 1, # 'l' - 13: 0, # 'm' - 4: 1, # 'n' - 8: 2, # 'o' - 23: 0, # 'p' - 10: 2, # 'r' - 5: 1, # 's' - 3: 0, # 't' - 21: 1, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 0, # 'z' - 51: 2, # 'Á' - 44: 1, # 'É' - 61: 1, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 0, # 'Ú' - 63: 1, # 'Ü' - 14: 3, # 'á' - 15: 2, # 'é' - 30: 0, # 'í' - 25: 1, # 'ó' - 24: 1, # 'ö' - 31: 0, # 'ú' - 29: 1, # 'ü' - 42: 1, # 'ő' - 56: 0, # 'ű' - }, - 43: { # 'R' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 2, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 2, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 2, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 2, # 'S' - 37: 2, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 1, # 'Y' - 52: 1, # 'Z' - 2: 2, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 2, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 1, # 'h' - 9: 2, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 0, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 8: 2, # 'o' - 23: 0, # 'p' - 10: 0, # 'r' - 5: 0, # 's' - 3: 0, # 't' - 21: 1, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 0, # 'z' - 51: 2, # 'Á' - 44: 1, # 'É' - 61: 1, # 'Í' - 58: 2, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 2, # 'á' - 15: 2, # 'é' - 30: 1, # 'í' - 25: 2, # 'ó' - 24: 1, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 33: { # 'S' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 
45: 1, # 'D' - 32: 2, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 2, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 2, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 2, # 'S' - 37: 2, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 1, # 'Y' - 52: 3, # 'Z' - 2: 2, # 'a' - 18: 0, # 'b' - 26: 1, # 'c' - 17: 0, # 'd' - 1: 2, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 1, # 'h' - 9: 2, # 'i' - 22: 0, # 'j' - 7: 1, # 'k' - 6: 1, # 'l' - 13: 1, # 'm' - 4: 0, # 'n' - 8: 2, # 'o' - 23: 1, # 'p' - 10: 0, # 'r' - 5: 0, # 's' - 3: 1, # 't' - 21: 1, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 3, # 'z' - 51: 2, # 'Á' - 44: 1, # 'É' - 61: 1, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 2, # 'á' - 15: 1, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 1, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 1, # 'ő' - 56: 1, # 'ű' - }, - 37: { # 'T' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 2, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 2, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 2, # 'O' - 46: 1, # 'P' - 43: 2, # 'R' - 33: 1, # 'S' - 37: 2, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 1, # 'Y' - 52: 1, # 'Z' - 2: 2, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 2, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 1, # 'h' - 9: 2, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 0, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 8: 2, # 'o' - 23: 0, # 'p' - 10: 1, # 'r' - 5: 1, # 's' - 3: 0, # 't' - 21: 2, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 1, # 'z' - 51: 2, # 'Á' - 44: 2, # 'É' - 61: 1, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 2, # 'á' - 15: 1, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 2, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 1, # 'ő' - 56: 1, # 'ű' - }, - 57: { # 'U' - 28: 1, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 1, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 1, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 1, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 2, # 'S' - 37: 1, # 'T' - 57: 0, # 'U' - 48: 1, # 'V' - 55: 0, # 'Y' - 52: 1, # 'Z' - 2: 0, # 'a' - 18: 1, # 'b' - 26: 1, # 'c' - 17: 1, # 'd' - 1: 1, # 'e' - 27: 0, # 'f' - 12: 2, # 'g' - 20: 0, # 'h' - 9: 0, # 'i' - 22: 1, # 'j' - 7: 1, # 'k' - 6: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 8: 0, # 'o' - 23: 1, # 'p' - 10: 1, # 'r' - 5: 1, # 's' - 3: 1, # 't' - 21: 0, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 1, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 1, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 48: { # 'V' - 28: 2, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 1, # 'D' - 32: 2, # 'E' - 50: 1, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 2, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 0, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 1, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 1, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 2, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 2, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 1, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 8: 2, # 'o' - 23: 0, # 'p' - 10: 0, # 'r' - 5: 0, # 's' - 3: 0, # 't' - 21: 1, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 0, # 'z' - 51: 2, # 'Á' - 44: 2, # 'É' - 61: 1, # 
'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 0, # 'Ú' - 63: 1, # 'Ü' - 14: 2, # 'á' - 15: 2, # 'é' - 30: 1, # 'í' - 25: 0, # 'ó' - 24: 1, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 55: { # 'Y' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 2, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 1, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 1, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 0, # 'Y' - 52: 2, # 'Z' - 2: 1, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 1, # 'd' - 1: 1, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 0, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 0, # 'l' - 13: 0, # 'm' - 4: 0, # 'n' - 8: 1, # 'o' - 23: 1, # 'p' - 10: 0, # 'r' - 5: 0, # 's' - 3: 0, # 't' - 21: 0, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 0, # 'z' - 51: 1, # 'Á' - 44: 1, # 'É' - 61: 1, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 52: { # 'Z' - 28: 2, # 'A' - 40: 1, # 'B' - 54: 0, # 'C' - 45: 1, # 'D' - 32: 2, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 2, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 2, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 2, # 'S' - 37: 1, # 'T' - 57: 1, # 'U' - 48: 1, # 'V' - 55: 1, # 'Y' - 52: 1, # 'Z' - 2: 1, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 1, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 1, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 0, # 'l' - 13: 0, # 'm' - 4: 1, # 'n' - 8: 1, # 'o' - 23: 0, # 'p' - 10: 1, # 'r' - 5: 2, # 's' - 3: 0, # 't' - 21: 1, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 0, # 'z' - 51: 2, # 'Á' - 44: 1, # 'É' - 61: 1, # 'Í' - 58: 1, # 'Ó' - 59: 1, # 'Ö' - 60: 1, # 'Ú' - 63: 1, # 'Ü' - 14: 1, # 'á' - 15: 1, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 1, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 2: { # 'a' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 1, # 'a' - 18: 3, # 'b' - 26: 3, # 'c' - 17: 3, # 'd' - 1: 2, # 'e' - 27: 2, # 'f' - 12: 3, # 'g' - 20: 3, # 'h' - 9: 3, # 'i' - 22: 3, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 8: 2, # 'o' - 23: 3, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 3, # 'u' - 19: 3, # 'v' - 62: 1, # 'x' - 16: 2, # 'y' - 11: 3, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 1, # 'á' - 15: 1, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 1, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 18: { # 'b' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 3, # 'b' - 26: 1, # 'c' - 17: 1, # 'd' - 1: 3, # 'e' - 27: 1, # 'f' - 12: 1, # 'g' - 20: 1, # 'h' - 9: 3, # 
'i' - 22: 2, # 'j' - 7: 2, # 'k' - 6: 2, # 'l' - 13: 1, # 'm' - 4: 2, # 'n' - 8: 3, # 'o' - 23: 1, # 'p' - 10: 3, # 'r' - 5: 2, # 's' - 3: 1, # 't' - 21: 3, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 1, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 2, # 'í' - 25: 3, # 'ó' - 24: 2, # 'ö' - 31: 2, # 'ú' - 29: 2, # 'ü' - 42: 2, # 'ő' - 56: 1, # 'ű' - }, - 26: { # 'c' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 1, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 1, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 2, # 'a' - 18: 1, # 'b' - 26: 2, # 'c' - 17: 1, # 'd' - 1: 3, # 'e' - 27: 1, # 'f' - 12: 1, # 'g' - 20: 3, # 'h' - 9: 3, # 'i' - 22: 1, # 'j' - 7: 2, # 'k' - 6: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 8: 3, # 'o' - 23: 1, # 'p' - 10: 2, # 'r' - 5: 3, # 's' - 3: 2, # 't' - 21: 2, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 2, # 'á' - 15: 2, # 'é' - 30: 2, # 'í' - 25: 1, # 'ó' - 24: 1, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 17: { # 'd' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 2, # 'b' - 26: 1, # 'c' - 17: 2, # 'd' - 1: 3, # 'e' - 27: 1, # 'f' - 12: 1, # 'g' - 20: 2, # 'h' - 9: 3, # 'i' - 22: 3, # 'j' - 7: 2, # 'k' - 6: 1, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 8: 3, # 'o' - 23: 1, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 3, # 'u' - 19: 3, # 'v' - 62: 0, # 'x' - 16: 2, # 'y' - 11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 3, # 'í' - 25: 3, # 'ó' - 24: 3, # 'ö' - 31: 2, # 'ú' - 29: 2, # 'ü' - 42: 2, # 'ő' - 56: 1, # 'ű' - }, - 1: { # 'e' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 2, # 'a' - 18: 3, # 'b' - 26: 3, # 'c' - 17: 3, # 'd' - 1: 2, # 'e' - 27: 3, # 'f' - 12: 3, # 'g' - 20: 3, # 'h' - 9: 3, # 'i' - 22: 3, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 8: 2, # 'o' - 23: 3, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 2, # 'u' - 19: 3, # 'v' - 62: 2, # 'x' - 16: 2, # 'y' - 11: 3, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 1, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 1, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 27: { # 'f' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 
'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 1, # 'b' - 26: 1, # 'c' - 17: 1, # 'd' - 1: 3, # 'e' - 27: 2, # 'f' - 12: 1, # 'g' - 20: 1, # 'h' - 9: 3, # 'i' - 22: 2, # 'j' - 7: 1, # 'k' - 6: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 8: 3, # 'o' - 23: 0, # 'p' - 10: 3, # 'r' - 5: 1, # 's' - 3: 1, # 't' - 21: 2, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 0, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 3, # 'ö' - 31: 1, # 'ú' - 29: 2, # 'ü' - 42: 1, # 'ő' - 56: 1, # 'ű' - }, - 12: { # 'g' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 3, # 'b' - 26: 2, # 'c' - 17: 2, # 'd' - 1: 3, # 'e' - 27: 2, # 'f' - 12: 3, # 'g' - 20: 3, # 'h' - 9: 3, # 'i' - 22: 3, # 'j' - 7: 2, # 'k' - 6: 3, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 8: 3, # 'o' - 23: 1, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 3, # 'u' - 19: 3, # 'v' - 62: 0, # 'x' - 16: 3, # 'y' - 11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 2, # 'í' - 25: 3, # 'ó' - 24: 2, # 'ö' - 31: 2, # 'ú' - 29: 2, # 'ü' - 42: 2, # 'ő' - 56: 1, # 'ű' - }, - 20: { # 'h' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 1, # 'b' - 26: 1, # 'c' - 17: 0, # 'd' - 1: 3, # 'e' - 27: 0, # 'f' - 12: 1, # 'g' - 20: 2, # 'h' - 9: 3, # 'i' - 22: 1, # 'j' - 7: 1, # 'k' - 6: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 8: 3, # 'o' - 23: 0, # 'p' - 10: 1, # 'r' - 5: 2, # 's' - 3: 1, # 't' - 21: 3, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 2, # 'y' - 11: 0, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 3, # 'í' - 25: 2, # 'ó' - 24: 2, # 'ö' - 31: 2, # 'ú' - 29: 1, # 'ü' - 42: 1, # 'ő' - 56: 1, # 'ű' - }, - 9: { # 'i' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 3, # 'b' - 26: 3, # 'c' - 17: 3, # 'd' - 1: 3, # 'e' - 27: 3, # 'f' - 12: 3, # 'g' - 20: 3, # 'h' - 9: 2, # 'i' - 22: 2, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 8: 2, # 'o' - 23: 2, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 3, # 'u' - 19: 3, # 'v' - 62: 1, # 'x' - 16: 1, # 'y' - 11: 3, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 2, # 'é' - 30: 1, # 'í' - 25: 3, # 'ó' - 24: 1, # 'ö' - 31: 2, # 'ú' - 29: 
1, # 'ü' - 42: 0, # 'ő' - 56: 1, # 'ű' - }, - 22: { # 'j' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 2, # 'b' - 26: 1, # 'c' - 17: 3, # 'd' - 1: 3, # 'e' - 27: 1, # 'f' - 12: 1, # 'g' - 20: 2, # 'h' - 9: 1, # 'i' - 22: 2, # 'j' - 7: 2, # 'k' - 6: 2, # 'l' - 13: 1, # 'm' - 4: 2, # 'n' - 8: 3, # 'o' - 23: 1, # 'p' - 10: 2, # 'r' - 5: 2, # 's' - 3: 3, # 't' - 21: 3, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 1, # 'í' - 25: 3, # 'ó' - 24: 3, # 'ö' - 31: 3, # 'ú' - 29: 2, # 'ü' - 42: 1, # 'ő' - 56: 1, # 'ű' - }, - 7: { # 'k' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 3, # 'b' - 26: 2, # 'c' - 17: 1, # 'd' - 1: 3, # 'e' - 27: 1, # 'f' - 12: 1, # 'g' - 20: 2, # 'h' - 9: 3, # 'i' - 22: 2, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 1, # 'm' - 4: 3, # 'n' - 8: 3, # 'o' - 23: 1, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 3, # 'u' - 19: 2, # 'v' - 62: 0, # 'x' - 16: 2, # 'y' - 11: 1, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 3, # 'í' - 25: 2, # 'ó' - 24: 3, # 'ö' - 31: 1, # 'ú' - 29: 3, # 'ü' - 42: 1, # 'ő' - 56: 1, # 'ű' - }, - 6: { # 'l' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 1, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 1, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 2, # 'b' - 26: 3, # 'c' - 17: 3, # 'd' - 1: 3, # 'e' - 27: 3, # 'f' - 12: 3, # 'g' - 20: 3, # 'h' - 9: 3, # 'i' - 22: 3, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 8: 3, # 'o' - 23: 2, # 'p' - 10: 2, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 3, # 'u' - 19: 3, # 'v' - 62: 0, # 'x' - 16: 3, # 'y' - 11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 3, # 'í' - 25: 3, # 'ó' - 24: 3, # 'ö' - 31: 2, # 'ú' - 29: 2, # 'ü' - 42: 3, # 'ő' - 56: 1, # 'ű' - }, - 13: { # 'm' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 3, # 'b' - 26: 2, # 'c' - 17: 1, # 'd' - 1: 3, # 'e' - 27: 1, # 'f' - 12: 1, # 'g' - 20: 2, # 'h' - 9: 3, # 'i' - 22: 2, # 'j' - 7: 1, # 'k' - 6: 3, # 'l' - 13: 3, # 'm' - 4: 2, # 'n' - 8: 3, # 'o' - 23: 3, # 'p' - 10: 2, # 'r' - 5: 2, # 's' - 3: 2, # 't' - 21: 3, # 
'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 2, # 'í' - 25: 2, # 'ó' - 24: 2, # 'ö' - 31: 2, # 'ú' - 29: 2, # 'ü' - 42: 1, # 'ő' - 56: 2, # 'ű' - }, - 4: { # 'n' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 3, # 'b' - 26: 3, # 'c' - 17: 3, # 'd' - 1: 3, # 'e' - 27: 2, # 'f' - 12: 3, # 'g' - 20: 3, # 'h' - 9: 3, # 'i' - 22: 2, # 'j' - 7: 3, # 'k' - 6: 2, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 8: 3, # 'o' - 23: 2, # 'p' - 10: 2, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 3, # 'u' - 19: 2, # 'v' - 62: 1, # 'x' - 16: 3, # 'y' - 11: 3, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 2, # 'í' - 25: 2, # 'ó' - 24: 3, # 'ö' - 31: 2, # 'ú' - 29: 3, # 'ü' - 42: 2, # 'ő' - 56: 1, # 'ű' - }, - 8: { # 'o' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 1, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 2, # 'a' - 18: 3, # 'b' - 26: 3, # 'c' - 17: 3, # 'd' - 1: 2, # 'e' - 27: 2, # 'f' - 12: 3, # 'g' - 20: 3, # 'h' - 9: 2, # 'i' - 22: 2, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 8: 1, # 'o' - 23: 3, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 2, # 'u' - 19: 3, # 'v' - 62: 1, # 'x' - 16: 1, # 'y' - 11: 3, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 1, # 'á' - 15: 2, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 1, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 23: { # 'p' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 1, # 'b' - 26: 2, # 'c' - 17: 1, # 'd' - 1: 3, # 'e' - 27: 1, # 'f' - 12: 1, # 'g' - 20: 2, # 'h' - 9: 3, # 'i' - 22: 2, # 'j' - 7: 2, # 'k' - 6: 3, # 'l' - 13: 1, # 'm' - 4: 2, # 'n' - 8: 3, # 'o' - 23: 3, # 'p' - 10: 3, # 'r' - 5: 2, # 's' - 3: 2, # 't' - 21: 3, # 'u' - 19: 2, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 2, # 'í' - 25: 2, # 'ó' - 24: 2, # 'ö' - 31: 1, # 'ú' - 29: 2, # 'ü' - 42: 1, # 'ő' - 56: 1, # 'ű' - }, - 10: { # 'r' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 3, 
# 'b' - 26: 3, # 'c' - 17: 3, # 'd' - 1: 3, # 'e' - 27: 2, # 'f' - 12: 3, # 'g' - 20: 2, # 'h' - 9: 3, # 'i' - 22: 3, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 8: 3, # 'o' - 23: 2, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 3, # 'u' - 19: 3, # 'v' - 62: 1, # 'x' - 16: 2, # 'y' - 11: 3, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 2, # 'í' - 25: 3, # 'ó' - 24: 3, # 'ö' - 31: 3, # 'ú' - 29: 3, # 'ü' - 42: 2, # 'ő' - 56: 2, # 'ű' - }, - 5: { # 's' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 3, # 'b' - 26: 2, # 'c' - 17: 2, # 'd' - 1: 3, # 'e' - 27: 2, # 'f' - 12: 2, # 'g' - 20: 2, # 'h' - 9: 3, # 'i' - 22: 1, # 'j' - 7: 3, # 'k' - 6: 2, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 8: 3, # 'o' - 23: 2, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 3, # 'u' - 19: 2, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 3, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 3, # 'í' - 25: 3, # 'ó' - 24: 3, # 'ö' - 31: 3, # 'ú' - 29: 3, # 'ü' - 42: 2, # 'ő' - 56: 1, # 'ű' - }, - 3: { # 't' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 3, # 'b' - 26: 2, # 'c' - 17: 1, # 'd' - 1: 3, # 'e' - 27: 2, # 'f' - 12: 1, # 'g' - 20: 3, # 'h' - 9: 3, # 'i' - 22: 3, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 8: 3, # 'o' - 23: 1, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 3, # 'u' - 19: 3, # 'v' - 62: 0, # 'x' - 16: 3, # 'y' - 11: 1, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 2, # 'í' - 25: 3, # 'ó' - 24: 3, # 'ö' - 31: 3, # 'ú' - 29: 3, # 'ü' - 42: 3, # 'ő' - 56: 2, # 'ű' - }, - 21: { # 'u' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 1, # 'a' - 18: 2, # 'b' - 26: 2, # 'c' - 17: 3, # 'd' - 1: 2, # 'e' - 27: 1, # 'f' - 12: 3, # 'g' - 20: 2, # 'h' - 9: 2, # 'i' - 22: 2, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 8: 1, # 'o' - 23: 2, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 1, # 'u' - 19: 3, # 'v' - 62: 1, # 'x' - 16: 1, # 'y' - 11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 2, # 'á' - 15: 1, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 0, # 'ö' - 31: 1, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 19: { # 'v' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' 
- 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 2, # 'b' - 26: 1, # 'c' - 17: 1, # 'd' - 1: 3, # 'e' - 27: 1, # 'f' - 12: 1, # 'g' - 20: 1, # 'h' - 9: 3, # 'i' - 22: 1, # 'j' - 7: 1, # 'k' - 6: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 8: 3, # 'o' - 23: 1, # 'p' - 10: 1, # 'r' - 5: 2, # 's' - 3: 2, # 't' - 21: 2, # 'u' - 19: 2, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 1, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 2, # 'í' - 25: 2, # 'ó' - 24: 2, # 'ö' - 31: 1, # 'ú' - 29: 2, # 'ü' - 42: 1, # 'ő' - 56: 1, # 'ű' - }, - 62: { # 'x' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 1, # 'a' - 18: 1, # 'b' - 26: 1, # 'c' - 17: 0, # 'd' - 1: 1, # 'e' - 27: 1, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 1, # 'i' - 22: 0, # 'j' - 7: 1, # 'k' - 6: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 8: 1, # 'o' - 23: 1, # 'p' - 10: 1, # 'r' - 5: 1, # 's' - 3: 1, # 't' - 21: 1, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 0, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 1, # 'á' - 15: 1, # 'é' - 30: 1, # 'í' - 25: 1, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 16: { # 'y' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 2, # 'b' - 26: 1, # 'c' - 17: 1, # 'd' - 1: 3, # 'e' - 27: 2, # 'f' - 12: 2, # 'g' - 20: 2, # 'h' - 9: 3, # 'i' - 22: 2, # 'j' - 7: 2, # 'k' - 6: 2, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 8: 3, # 'o' - 23: 2, # 'p' - 10: 2, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 3, # 'u' - 19: 3, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 2, # 'í' - 25: 2, # 'ó' - 24: 3, # 'ö' - 31: 2, # 'ú' - 29: 2, # 'ü' - 42: 1, # 'ő' - 56: 2, # 'ű' - }, - 11: { # 'z' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 3, # 'a' - 18: 2, # 'b' - 26: 1, # 'c' - 17: 3, # 'd' - 1: 3, # 'e' - 27: 1, # 'f' - 12: 2, # 'g' - 20: 2, # 'h' - 9: 3, # 'i' - 22: 1, # 'j' - 7: 3, # 'k' - 6: 2, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 8: 3, # 'o' - 23: 1, # 'p' - 10: 2, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 3, # 'u' - 19: 2, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 3, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, 
# 'Ü' - 14: 3, # 'á' - 15: 3, # 'é' - 30: 3, # 'í' - 25: 3, # 'ó' - 24: 3, # 'ö' - 31: 2, # 'ú' - 29: 3, # 'ü' - 42: 2, # 'ő' - 56: 1, # 'ű' - }, - 51: { # 'Á' - 28: 0, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 0, # 'E' - 50: 1, # 'F' - 49: 2, # 'G' - 38: 1, # 'H' - 39: 1, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 2, # 'L' - 34: 1, # 'M' - 35: 2, # 'N' - 47: 0, # 'O' - 46: 1, # 'P' - 43: 2, # 'R' - 33: 2, # 'S' - 37: 1, # 'T' - 57: 0, # 'U' - 48: 1, # 'V' - 55: 0, # 'Y' - 52: 1, # 'Z' - 2: 0, # 'a' - 18: 1, # 'b' - 26: 1, # 'c' - 17: 1, # 'd' - 1: 0, # 'e' - 27: 0, # 'f' - 12: 1, # 'g' - 20: 1, # 'h' - 9: 0, # 'i' - 22: 1, # 'j' - 7: 1, # 'k' - 6: 2, # 'l' - 13: 2, # 'm' - 4: 0, # 'n' - 8: 0, # 'o' - 23: 1, # 'p' - 10: 1, # 'r' - 5: 1, # 's' - 3: 1, # 't' - 21: 0, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 1, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 1, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 44: { # 'É' - 28: 0, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 1, # 'E' - 50: 0, # 'F' - 49: 2, # 'G' - 38: 1, # 'H' - 39: 1, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 2, # 'L' - 34: 1, # 'M' - 35: 2, # 'N' - 47: 0, # 'O' - 46: 1, # 'P' - 43: 2, # 'R' - 33: 2, # 'S' - 37: 2, # 'T' - 57: 0, # 'U' - 48: 1, # 'V' - 55: 0, # 'Y' - 52: 1, # 'Z' - 2: 0, # 'a' - 18: 1, # 'b' - 26: 1, # 'c' - 17: 1, # 'd' - 1: 0, # 'e' - 27: 0, # 'f' - 12: 1, # 'g' - 20: 1, # 'h' - 9: 0, # 'i' - 22: 1, # 'j' - 7: 1, # 'k' - 6: 2, # 'l' - 13: 1, # 'm' - 4: 2, # 'n' - 8: 0, # 'o' - 23: 1, # 'p' - 10: 2, # 'r' - 5: 3, # 's' - 3: 1, # 't' - 21: 0, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 0, # 'z' - 51: 0, # 'Á' - 44: 1, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 61: { # 'Í' - 28: 0, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 0, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 1, # 'J' - 36: 0, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 0, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 0, # 'U' - 48: 1, # 'V' - 55: 0, # 'Y' - 52: 1, # 'Z' - 2: 0, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 0, # 'e' - 27: 0, # 'f' - 12: 2, # 'g' - 20: 0, # 'h' - 9: 0, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 0, # 'l' - 13: 1, # 'm' - 4: 0, # 'n' - 8: 0, # 'o' - 23: 0, # 'p' - 10: 1, # 'r' - 5: 0, # 's' - 3: 1, # 't' - 21: 0, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 1, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 58: { # 'Ó' - 28: 1, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 0, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 1, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 2, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 0, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 0, # 'U' - 48: 1, # 'V' - 55: 0, # 'Y' - 52: 1, # 'Z' - 2: 0, # 'a' - 18: 1, # 'b' - 26: 1, # 'c' - 17: 1, # 'd' - 1: 0, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 2, # 'h' - 9: 0, # 'i' - 22: 0, # 'j' - 7: 1, # 'k' - 6: 1, # 'l' - 13: 0, 
# 'm' - 4: 1, # 'n' - 8: 0, # 'o' - 23: 1, # 'p' - 10: 1, # 'r' - 5: 1, # 's' - 3: 0, # 't' - 21: 0, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 1, # 'z' - 51: 0, # 'Á' - 44: 1, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 59: { # 'Ö' - 28: 0, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 0, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 0, # 'O' - 46: 1, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 0, # 'U' - 48: 1, # 'V' - 55: 0, # 'Y' - 52: 1, # 'Z' - 2: 0, # 'a' - 18: 0, # 'b' - 26: 1, # 'c' - 17: 1, # 'd' - 1: 0, # 'e' - 27: 0, # 'f' - 12: 0, # 'g' - 20: 0, # 'h' - 9: 0, # 'i' - 22: 0, # 'j' - 7: 1, # 'k' - 6: 1, # 'l' - 13: 1, # 'm' - 4: 1, # 'n' - 8: 0, # 'o' - 23: 0, # 'p' - 10: 2, # 'r' - 5: 1, # 's' - 3: 1, # 't' - 21: 0, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 1, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 60: { # 'Ú' - 28: 0, # 'A' - 40: 1, # 'B' - 54: 1, # 'C' - 45: 1, # 'D' - 32: 0, # 'E' - 50: 1, # 'F' - 49: 1, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 0, # 'U' - 48: 1, # 'V' - 55: 0, # 'Y' - 52: 1, # 'Z' - 2: 0, # 'a' - 18: 0, # 'b' - 26: 0, # 'c' - 17: 0, # 'd' - 1: 0, # 'e' - 27: 0, # 'f' - 12: 2, # 'g' - 20: 0, # 'h' - 9: 0, # 'i' - 22: 2, # 'j' - 7: 0, # 'k' - 6: 0, # 'l' - 13: 0, # 'm' - 4: 1, # 'n' - 8: 0, # 'o' - 23: 0, # 'p' - 10: 1, # 'r' - 5: 1, # 's' - 3: 1, # 't' - 21: 0, # 'u' - 19: 0, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 0, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 63: { # 'Ü' - 28: 0, # 'A' - 40: 1, # 'B' - 54: 0, # 'C' - 45: 1, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 1, # 'G' - 38: 1, # 'H' - 39: 0, # 'I' - 53: 1, # 'J' - 36: 1, # 'K' - 41: 1, # 'L' - 34: 1, # 'M' - 35: 1, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 1, # 'R' - 33: 1, # 'S' - 37: 1, # 'T' - 57: 0, # 'U' - 48: 1, # 'V' - 55: 0, # 'Y' - 52: 1, # 'Z' - 2: 0, # 'a' - 18: 1, # 'b' - 26: 0, # 'c' - 17: 1, # 'd' - 1: 0, # 'e' - 27: 0, # 'f' - 12: 1, # 'g' - 20: 0, # 'h' - 9: 0, # 'i' - 22: 0, # 'j' - 7: 0, # 'k' - 6: 1, # 'l' - 13: 0, # 'm' - 4: 1, # 'n' - 8: 0, # 'o' - 23: 0, # 'p' - 10: 1, # 'r' - 5: 1, # 's' - 3: 1, # 't' - 21: 0, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 1, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 14: { # 'á' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, 
# 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 1, # 'a' - 18: 3, # 'b' - 26: 3, # 'c' - 17: 3, # 'd' - 1: 1, # 'e' - 27: 2, # 'f' - 12: 3, # 'g' - 20: 2, # 'h' - 9: 2, # 'i' - 22: 3, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 8: 1, # 'o' - 23: 2, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 2, # 'u' - 19: 3, # 'v' - 62: 0, # 'x' - 16: 1, # 'y' - 11: 3, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 1, # 'á' - 15: 2, # 'é' - 30: 1, # 'í' - 25: 0, # 'ó' - 24: 1, # 'ö' - 31: 0, # 'ú' - 29: 1, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 15: { # 'é' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 1, # 'a' - 18: 3, # 'b' - 26: 2, # 'c' - 17: 3, # 'd' - 1: 1, # 'e' - 27: 1, # 'f' - 12: 3, # 'g' - 20: 3, # 'h' - 9: 2, # 'i' - 22: 2, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 8: 1, # 'o' - 23: 3, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 0, # 'u' - 19: 3, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 3, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 1, # 'á' - 15: 1, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 1, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 30: { # 'í' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 0, # 'a' - 18: 1, # 'b' - 26: 2, # 'c' - 17: 1, # 'd' - 1: 0, # 'e' - 27: 1, # 'f' - 12: 3, # 'g' - 20: 0, # 'h' - 9: 0, # 'i' - 22: 1, # 'j' - 7: 1, # 'k' - 6: 2, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 8: 0, # 'o' - 23: 1, # 'p' - 10: 3, # 'r' - 5: 2, # 's' - 3: 3, # 't' - 21: 0, # 'u' - 19: 3, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 25: { # 'ó' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 2, # 'a' - 18: 3, # 'b' - 26: 2, # 'c' - 17: 3, # 'd' - 1: 1, # 'e' - 27: 2, # 'f' - 12: 2, # 'g' - 20: 2, # 'h' - 9: 2, # 'i' - 22: 2, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 2, # 'm' - 4: 3, # 'n' - 8: 1, # 'o' - 23: 2, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 1, # 'u' - 19: 2, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 3, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 1, # 'á' - 15: 1, # 'é' - 30: 1, # 'í' - 25: 0, # 'ó' - 24: 1, # 'ö' - 31: 1, # 'ú' - 29: 1, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 24: { # 
'ö' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 0, # 'a' - 18: 3, # 'b' - 26: 1, # 'c' - 17: 2, # 'd' - 1: 0, # 'e' - 27: 1, # 'f' - 12: 2, # 'g' - 20: 1, # 'h' - 9: 0, # 'i' - 22: 1, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 3, # 'm' - 4: 3, # 'n' - 8: 0, # 'o' - 23: 2, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 3, # 't' - 21: 0, # 'u' - 19: 3, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 3, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 31: { # 'ú' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 1, # 'a' - 18: 1, # 'b' - 26: 2, # 'c' - 17: 1, # 'd' - 1: 1, # 'e' - 27: 2, # 'f' - 12: 3, # 'g' - 20: 1, # 'h' - 9: 1, # 'i' - 22: 3, # 'j' - 7: 1, # 'k' - 6: 3, # 'l' - 13: 1, # 'm' - 4: 2, # 'n' - 8: 0, # 'o' - 23: 1, # 'p' - 10: 3, # 'r' - 5: 3, # 's' - 3: 2, # 't' - 21: 1, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 1, # 'á' - 15: 1, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 29: { # 'ü' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 1, # 'a' - 18: 1, # 'b' - 26: 1, # 'c' - 17: 2, # 'd' - 1: 1, # 'e' - 27: 1, # 'f' - 12: 3, # 'g' - 20: 2, # 'h' - 9: 1, # 'i' - 22: 1, # 'j' - 7: 3, # 'k' - 6: 3, # 'l' - 13: 1, # 'm' - 4: 3, # 'n' - 8: 0, # 'o' - 23: 1, # 'p' - 10: 2, # 'r' - 5: 2, # 's' - 3: 2, # 't' - 21: 0, # 'u' - 19: 2, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 1, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 42: { # 'ő' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 1, # 'a' - 18: 2, # 'b' - 26: 1, # 'c' - 17: 2, # 'd' - 1: 1, # 'e' - 27: 1, # 'f' - 12: 1, # 'g' - 20: 1, # 'h' - 9: 1, # 'i' - 22: 1, # 'j' - 7: 2, # 'k' - 6: 3, # 'l' - 13: 1, # 'm' - 4: 2, # 'n' - 8: 1, # 'o' - 23: 1, # 'p' - 10: 2, # 'r' - 5: 2, # 's' - 3: 2, # 't' - 21: 1, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 
11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 1, # 'é' - 30: 1, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 1, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, - 56: { # 'ű' - 28: 0, # 'A' - 40: 0, # 'B' - 54: 0, # 'C' - 45: 0, # 'D' - 32: 0, # 'E' - 50: 0, # 'F' - 49: 0, # 'G' - 38: 0, # 'H' - 39: 0, # 'I' - 53: 0, # 'J' - 36: 0, # 'K' - 41: 0, # 'L' - 34: 0, # 'M' - 35: 0, # 'N' - 47: 0, # 'O' - 46: 0, # 'P' - 43: 0, # 'R' - 33: 0, # 'S' - 37: 0, # 'T' - 57: 0, # 'U' - 48: 0, # 'V' - 55: 0, # 'Y' - 52: 0, # 'Z' - 2: 1, # 'a' - 18: 1, # 'b' - 26: 0, # 'c' - 17: 1, # 'd' - 1: 1, # 'e' - 27: 1, # 'f' - 12: 1, # 'g' - 20: 1, # 'h' - 9: 1, # 'i' - 22: 1, # 'j' - 7: 1, # 'k' - 6: 1, # 'l' - 13: 0, # 'm' - 4: 2, # 'n' - 8: 0, # 'o' - 23: 0, # 'p' - 10: 1, # 'r' - 5: 1, # 's' - 3: 1, # 't' - 21: 0, # 'u' - 19: 1, # 'v' - 62: 0, # 'x' - 16: 0, # 'y' - 11: 2, # 'z' - 51: 0, # 'Á' - 44: 0, # 'É' - 61: 0, # 'Í' - 58: 0, # 'Ó' - 59: 0, # 'Ö' - 60: 0, # 'Ú' - 63: 0, # 'Ü' - 14: 0, # 'á' - 15: 0, # 'é' - 30: 0, # 'í' - 25: 0, # 'ó' - 24: 0, # 'ö' - 31: 0, # 'ú' - 29: 0, # 'ü' - 42: 0, # 'ő' - 56: 0, # 'ű' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): -WINDOWS_1250_HUNGARIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' 
- 64: 253, # '@' - 65: 28, # 'A' - 66: 40, # 'B' - 67: 54, # 'C' - 68: 45, # 'D' - 69: 32, # 'E' - 70: 50, # 'F' - 71: 49, # 'G' - 72: 38, # 'H' - 73: 39, # 'I' - 74: 53, # 'J' - 75: 36, # 'K' - 76: 41, # 'L' - 77: 34, # 'M' - 78: 35, # 'N' - 79: 47, # 'O' - 80: 46, # 'P' - 81: 72, # 'Q' - 82: 43, # 'R' - 83: 33, # 'S' - 84: 37, # 'T' - 85: 57, # 'U' - 86: 48, # 'V' - 87: 64, # 'W' - 88: 68, # 'X' - 89: 55, # 'Y' - 90: 52, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 2, # 'a' - 98: 18, # 'b' - 99: 26, # 'c' - 100: 17, # 'd' - 101: 1, # 'e' - 102: 27, # 'f' - 103: 12, # 'g' - 104: 20, # 'h' - 105: 9, # 'i' - 106: 22, # 'j' - 107: 7, # 'k' - 108: 6, # 'l' - 109: 13, # 'm' - 110: 4, # 'n' - 111: 8, # 'o' - 112: 23, # 'p' - 113: 67, # 'q' - 114: 10, # 'r' - 115: 5, # 's' - 116: 3, # 't' - 117: 21, # 'u' - 118: 19, # 'v' - 119: 65, # 'w' - 120: 62, # 'x' - 121: 16, # 'y' - 122: 11, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 161, # '€' - 129: 162, # None - 130: 163, # '‚' - 131: 164, # None - 132: 165, # '„' - 133: 166, # '…' - 134: 167, # '†' - 135: 168, # '‡' - 136: 169, # None - 137: 170, # '‰' - 138: 171, # 'Š' - 139: 172, # '‹' - 140: 173, # 'Ś' - 141: 174, # 'Ť' - 142: 175, # 'Ž' - 143: 176, # 'Ź' - 144: 177, # None - 145: 178, # '‘' - 146: 179, # '’' - 147: 180, # '“' - 148: 78, # '”' - 149: 181, # '•' - 150: 69, # '–' - 151: 182, # '—' - 152: 183, # None - 153: 184, # '™' - 154: 185, # 'š' - 155: 186, # '›' - 156: 187, # 'ś' - 157: 188, # 'ť' - 158: 189, # 'ž' - 159: 190, # 'ź' - 160: 191, # '\xa0' - 161: 192, # 'ˇ' - 162: 193, # '˘' - 163: 194, # 'Ł' - 164: 195, # '¤' - 165: 196, # 'Ą' - 166: 197, # '¦' - 167: 76, # '§' - 168: 198, # '¨' - 169: 199, # '©' - 170: 200, # 'Ş' - 171: 201, # '«' - 172: 202, # '¬' - 173: 203, # '\xad' - 174: 204, # '®' - 175: 205, # 'Ż' - 176: 81, # '°' - 177: 206, # '±' - 178: 207, # '˛' - 179: 208, # 'ł' - 180: 209, # '´' - 181: 210, # 'µ' - 182: 211, # '¶' - 183: 212, # '·' - 184: 213, # '¸' - 185: 214, # 'ą' - 186: 215, # 'ş' - 187: 216, # '»' - 188: 217, # 'Ľ' - 189: 218, # '˝' - 190: 219, # 'ľ' - 191: 220, # 'ż' - 192: 221, # 'Ŕ' - 193: 51, # 'Á' - 194: 83, # 'Â' - 195: 222, # 'Ă' - 196: 80, # 'Ä' - 197: 223, # 'Ĺ' - 198: 224, # 'Ć' - 199: 225, # 'Ç' - 200: 226, # 'Č' - 201: 44, # 'É' - 202: 227, # 'Ę' - 203: 228, # 'Ë' - 204: 229, # 'Ě' - 205: 61, # 'Í' - 206: 230, # 'Î' - 207: 231, # 'Ď' - 208: 232, # 'Đ' - 209: 233, # 'Ń' - 210: 234, # 'Ň' - 211: 58, # 'Ó' - 212: 235, # 'Ô' - 213: 66, # 'Ő' - 214: 59, # 'Ö' - 215: 236, # '×' - 216: 237, # 'Ř' - 217: 238, # 'Ů' - 218: 60, # 'Ú' - 219: 70, # 'Ű' - 220: 63, # 'Ü' - 221: 239, # 'Ý' - 222: 240, # 'Ţ' - 223: 241, # 'ß' - 224: 84, # 'ŕ' - 225: 14, # 'á' - 226: 75, # 'â' - 227: 242, # 'ă' - 228: 71, # 'ä' - 229: 82, # 'ĺ' - 230: 243, # 'ć' - 231: 73, # 'ç' - 232: 244, # 'č' - 233: 15, # 'é' - 234: 85, # 'ę' - 235: 79, # 'ë' - 236: 86, # 'ě' - 237: 30, # 'í' - 238: 77, # 'î' - 239: 87, # 'ď' - 240: 245, # 'đ' - 241: 246, # 'ń' - 242: 247, # 'ň' - 243: 25, # 'ó' - 244: 74, # 'ô' - 245: 42, # 'ő' - 246: 24, # 'ö' - 247: 248, # '÷' - 248: 249, # 'ř' - 249: 250, # 'ů' - 250: 31, # 'ú' - 251: 56, # 'ű' - 252: 29, # 'ü' - 253: 251, # 'ý' - 254: 252, # 'ţ' - 255: 253, # '˙' -} - -WINDOWS_1250_HUNGARIAN_MODEL = SingleByteCharSetModel( - charset_name="windows-1250", - language="Hungarian", - char_to_order_map=WINDOWS_1250_HUNGARIAN_CHAR_TO_ORDER, - language_model=HUNGARIAN_LANG_MODEL, - 
typical_positive_ratio=0.947368, - keep_ascii_letters=True, - alphabet="ABCDEFGHIJKLMNOPRSTUVZabcdefghijklmnoprstuvzÁÉÍÓÖÚÜáéíóöúüŐőŰű", -) - -ISO_8859_2_HUNGARIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 28, # 'A' - 66: 40, # 'B' - 67: 54, # 'C' - 68: 45, # 'D' - 69: 32, # 'E' - 70: 50, # 'F' - 71: 49, # 'G' - 72: 38, # 'H' - 73: 39, # 'I' - 74: 53, # 'J' - 75: 36, # 'K' - 76: 41, # 'L' - 77: 34, # 'M' - 78: 35, # 'N' - 79: 47, # 'O' - 80: 46, # 'P' - 81: 71, # 'Q' - 82: 43, # 'R' - 83: 33, # 'S' - 84: 37, # 'T' - 85: 57, # 'U' - 86: 48, # 'V' - 87: 64, # 'W' - 88: 68, # 'X' - 89: 55, # 'Y' - 90: 52, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 2, # 'a' - 98: 18, # 'b' - 99: 26, # 'c' - 100: 17, # 'd' - 101: 1, # 'e' - 102: 27, # 'f' - 103: 12, # 'g' - 104: 20, # 'h' - 105: 9, # 'i' - 106: 22, # 'j' - 107: 7, # 'k' - 108: 6, # 'l' - 109: 13, # 'm' - 110: 4, # 'n' - 111: 8, # 'o' - 112: 23, # 'p' - 113: 67, # 'q' - 114: 10, # 'r' - 115: 5, # 's' - 116: 3, # 't' - 117: 21, # 'u' - 118: 19, # 'v' - 119: 65, # 'w' - 120: 62, # 'x' - 121: 16, # 'y' - 122: 11, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 159, # '\x80' - 129: 160, # '\x81' - 130: 161, # '\x82' - 131: 162, # '\x83' - 132: 163, # '\x84' - 133: 164, # '\x85' - 134: 165, # '\x86' - 135: 166, # '\x87' - 136: 167, # '\x88' - 137: 168, # '\x89' - 138: 169, # '\x8a' - 139: 170, # '\x8b' - 140: 171, # '\x8c' - 141: 172, # '\x8d' - 142: 173, # '\x8e' - 143: 174, # '\x8f' - 144: 175, # '\x90' - 145: 176, # '\x91' - 146: 177, # '\x92' - 147: 178, # '\x93' - 148: 179, # '\x94' - 149: 180, # '\x95' - 150: 181, # '\x96' - 151: 182, # '\x97' - 152: 183, # '\x98' - 153: 184, # '\x99' - 154: 185, # '\x9a' - 155: 186, # '\x9b' - 156: 187, # '\x9c' - 157: 188, # '\x9d' - 158: 189, # '\x9e' - 159: 190, # '\x9f' - 160: 191, # '\xa0' - 161: 192, # 'Ą' - 162: 193, # '˘' - 163: 194, # 'Ł' - 164: 195, # '¤' - 165: 196, # 'Ľ' - 166: 197, # 'Ś' - 167: 75, # '§' - 168: 198, # '¨' - 169: 199, # 'Š' - 170: 200, # 'Ş' - 171: 201, # 'Ť' - 172: 202, # 'Ź' - 173: 203, # '\xad' - 174: 204, # 'Ž' - 175: 205, # 'Ż' - 176: 79, # '°' - 177: 206, # 'ą' - 178: 207, # '˛' - 179: 208, # 'ł' - 180: 209, # '´' - 181: 210, # 'ľ' - 182: 211, # 'ś' - 183: 212, # 'ˇ' - 184: 213, # '¸' - 185: 214, # 'š' - 
186: 215, # 'ş' - 187: 216, # 'ť' - 188: 217, # 'ź' - 189: 218, # '˝' - 190: 219, # 'ž' - 191: 220, # 'ż' - 192: 221, # 'Ŕ' - 193: 51, # 'Á' - 194: 81, # 'Â' - 195: 222, # 'Ă' - 196: 78, # 'Ä' - 197: 223, # 'Ĺ' - 198: 224, # 'Ć' - 199: 225, # 'Ç' - 200: 226, # 'Č' - 201: 44, # 'É' - 202: 227, # 'Ę' - 203: 228, # 'Ë' - 204: 229, # 'Ě' - 205: 61, # 'Í' - 206: 230, # 'Î' - 207: 231, # 'Ď' - 208: 232, # 'Đ' - 209: 233, # 'Ń' - 210: 234, # 'Ň' - 211: 58, # 'Ó' - 212: 235, # 'Ô' - 213: 66, # 'Ő' - 214: 59, # 'Ö' - 215: 236, # '×' - 216: 237, # 'Ř' - 217: 238, # 'Ů' - 218: 60, # 'Ú' - 219: 69, # 'Ű' - 220: 63, # 'Ü' - 221: 239, # 'Ý' - 222: 240, # 'Ţ' - 223: 241, # 'ß' - 224: 82, # 'ŕ' - 225: 14, # 'á' - 226: 74, # 'â' - 227: 242, # 'ă' - 228: 70, # 'ä' - 229: 80, # 'ĺ' - 230: 243, # 'ć' - 231: 72, # 'ç' - 232: 244, # 'č' - 233: 15, # 'é' - 234: 83, # 'ę' - 235: 77, # 'ë' - 236: 84, # 'ě' - 237: 30, # 'í' - 238: 76, # 'î' - 239: 85, # 'ď' - 240: 245, # 'đ' - 241: 246, # 'ń' - 242: 247, # 'ň' - 243: 25, # 'ó' - 244: 73, # 'ô' - 245: 42, # 'ő' - 246: 24, # 'ö' - 247: 248, # '÷' - 248: 249, # 'ř' - 249: 250, # 'ů' - 250: 31, # 'ú' - 251: 56, # 'ű' - 252: 29, # 'ü' - 253: 251, # 'ý' - 254: 252, # 'ţ' - 255: 253, # '˙' -} - -ISO_8859_2_HUNGARIAN_MODEL = SingleByteCharSetModel( - charset_name="ISO-8859-2", - language="Hungarian", - char_to_order_map=ISO_8859_2_HUNGARIAN_CHAR_TO_ORDER, - language_model=HUNGARIAN_LANG_MODEL, - typical_positive_ratio=0.947368, - keep_ascii_letters=True, - alphabet="ABCDEFGHIJKLMNOPRSTUVZabcdefghijklmnoprstuvzÁÉÍÓÖÚÜáéíóöúüŐőŰű", -) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_wrap.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_wrap.py deleted file mode 100644 index c45f193f74ad7385c84f3b935663198415cfaa4b..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_wrap.py +++ /dev/null @@ -1,56 +0,0 @@ -import re -from typing import Iterable, List, Tuple - -from ._loop import loop_last -from .cells import cell_len, chop_cells - -re_word = re.compile(r"\s*\S+\s*") - - -def words(text: str) -> Iterable[Tuple[int, int, str]]: - position = 0 - word_match = re_word.match(text, position) - while word_match is not None: - start, end = word_match.span() - word = word_match.group(0) - yield start, end, word - word_match = re_word.match(text, end) - - -def divide_line(text: str, width: int, fold: bool = True) -> List[int]: - divides: List[int] = [] - append = divides.append - line_position = 0 - _cell_len = cell_len - for start, _end, word in words(text): - word_length = _cell_len(word.rstrip()) - if line_position + word_length > width: - if word_length > width: - if fold: - chopped_words = chop_cells(word, max_size=width, position=0) - for last, line in loop_last(chopped_words): - if start: - append(start) - - if last: - line_position = _cell_len(line) - else: - start += len(line) - else: - if start: - append(start) - line_position = _cell_len(word) - elif line_position and start: - append(start) - line_position = _cell_len(word) - else: - line_position += _cell_len(word) - return divides - - -if __name__ == "__main__": # pragma: no cover - from .console import Console - - console = Console(width=10) - console.print("12345 abcdefghijklmnopqrstuvwyxzABCDEFGHIJKLMNOPQRSTUVWXYZ 12345") - print(chop_cells("abcdefghijklmnopqrstuvwxyz", 10, position=2)) diff --git a/spaces/plzdontcry/dakubettergpt/src/assets/icons/FileTextIcon.tsx 
b/spaces/plzdontcry/dakubettergpt/src/assets/icons/FileTextIcon.tsx deleted file mode 100644 index 7f291b620f0153157821b216916936571b376259..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/assets/icons/FileTextIcon.tsx +++ /dev/null @@ -1,17 +0,0 @@ -import React from 'react'; - -const FileTextIcon = (props: React.SVGProps) => { - return ( - - - - ); -}; - -export default FileTextIcon; diff --git a/spaces/prerna9811/Chord/portaudio/examples/paex_wmme_ac3.c b/spaces/prerna9811/Chord/portaudio/examples/paex_wmme_ac3.c deleted file mode 100644 index 74daa96fd843f3bf0be691af406a97b8d23cf20c..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/examples/paex_wmme_ac3.c +++ /dev/null @@ -1,220 +0,0 @@ -/** @file paex_wmme_ac3.c - @ingroup examples_src - @brief Use WMME-specific interface to send raw AC3 data to a S/PDIF output. - @author Ross Bencina -*/ -/* - * $Id: $ - * Portable Audio I/O Library - * Windows MME ac3 sound output test - * - * Copyright (c) 2009 Ross Bencina - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include -#include - -#include /* required when using pa_win_wmme.h */ -#include /* required when using pa_win_wmme.h */ - -#include "portaudio.h" -#include "pa_win_wmme.h" - -#define NUM_SECONDS (20) -#define SAMPLE_RATE (48000) -#define FRAMES_PER_BUFFER (64) - -#ifndef M_PI -#define M_PI (3.14159265) -#endif - -#define TABLE_SIZE (100) - -#define CHANNEL_COUNT (2) - - - -typedef struct -{ - short *buffer; - int bufferSampleCount; - int playbackIndex; -} -paTestData; - -/* This routine will be called by the PortAudio engine when audio is needed. -** It may called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). 
-*/ -static int patestCallback( const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - paTestData *data = (paTestData*)userData; - short *out = (short*)outputBuffer; - unsigned long i,j; - - (void) timeInfo; /* Prevent unused variable warnings. */ - (void) statusFlags; - (void) inputBuffer; - - /* stream out contents of data->buffer looping at end */ - - for( i=0; ibuffer[ data->playbackIndex++ ]; - - if( data->playbackIndex >= data->bufferSampleCount ) - data->playbackIndex = 0; /* loop at end of buffer */ - } - } - - return paContinue; -} - -/*******************************************************************/ -int main(int argc, char* argv[]) -{ - PaStreamParameters outputParameters; - PaWinMmeStreamInfo wmmeStreamInfo; - PaStream *stream; - PaError err; - paTestData data; - int deviceIndex; - FILE *fp; - const char *fileName = "c:\\test_48k.ac3.spdif"; - data.buffer = NULL; - - printf("usage: patest_wmme_ac3 fileName [paDeviceIndex]\n"); - printf("**IMPORTANT*** The provided file must include the spdif preamble at the start of every AC-3 frame. Using a normal ac3 file won't work.\n"); - printf("PortAudio Test: output a raw spdif ac3 stream. SR = %d, BufSize = %d, Chans = %d\n", - SAMPLE_RATE, FRAMES_PER_BUFFER, CHANNEL_COUNT); - - - if( argc >= 2 ) - fileName = argv[1]; - - printf( "reading spdif ac3 raw stream file %s\n", fileName ); - - fp = fopen( fileName, "rb" ); - if( !fp ){ - fprintf( stderr, "error opening spdif ac3 file.\n" ); - return -1; - } - /* get file size */ - fseek( fp, 0, SEEK_END ); - data.bufferSampleCount = ftell( fp ) / sizeof(short); - fseek( fp, 0, SEEK_SET ); - - /* allocate buffer, read the whole file into memory */ - data.buffer = (short*)malloc( data.bufferSampleCount * sizeof(short) ); - if( !data.buffer ){ - fprintf( stderr, "error allocating buffer.\n" ); - return -1; - } - - fread( data.buffer, sizeof(short), data.bufferSampleCount, fp ); - fclose( fp ); - - data.playbackIndex = 0; - - err = Pa_Initialize(); - if( err != paNoError ) goto error; - - deviceIndex = Pa_GetHostApiInfo( Pa_HostApiTypeIdToHostApiIndex( paMME ) )->defaultOutputDevice; - if( argc >= 3 ){ - sscanf( argv[1], "%d", &deviceIndex ); - } - - printf( "using device id %d (%s)\n", deviceIndex, Pa_GetDeviceInfo(deviceIndex)->name ); - - - outputParameters.device = deviceIndex; - outputParameters.channelCount = CHANNEL_COUNT; - outputParameters.sampleFormat = paInt16; /* IMPORTANT must use paInt16 for WMME AC3 */ - outputParameters.suggestedLatency = Pa_GetDeviceInfo( outputParameters.device )->defaultLowOutputLatency; - outputParameters.hostApiSpecificStreamInfo = NULL; - - wmmeStreamInfo.size = sizeof(PaWinMmeStreamInfo); - wmmeStreamInfo.hostApiType = paMME; - wmmeStreamInfo.version = 1; - wmmeStreamInfo.flags = paWinMmeWaveFormatDolbyAc3Spdif; - outputParameters.hostApiSpecificStreamInfo = &wmmeStreamInfo; - - - if( Pa_IsFormatSupported( 0, &outputParameters, SAMPLE_RATE ) == paFormatIsSupported ){ - printf( "Pa_IsFormatSupported reports device will support %d channels.\n", CHANNEL_COUNT ); - }else{ - printf( "Pa_IsFormatSupported reports device will not support %d channels.\n", CHANNEL_COUNT ); - } - - err = Pa_OpenStream( - &stream, - NULL, /* no input */ - &outputParameters, - SAMPLE_RATE, - FRAMES_PER_BUFFER, - 0, - patestCallback, - &data ); - if( err != paNoError ) goto error; - - err = Pa_StartStream( stream ); - if( err != paNoError ) goto error; - - 
printf("Play for %d seconds.\n", NUM_SECONDS ); - Pa_Sleep( NUM_SECONDS * 1000 ); - - err = Pa_StopStream( stream ); - if( err != paNoError ) goto error; - - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto error; - - Pa_Terminate(); - free( data.buffer ); - printf("Test finished.\n"); - - return err; -error: - Pa_Terminate(); - free( data.buffer ); - - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return err; -} diff --git a/spaces/prerna9811/Chord/portaudio/include/pa_win_wasapi.h b/spaces/prerna9811/Chord/portaudio/include/pa_win_wasapi.h deleted file mode 100644 index c046afd709e26429540ab3a23aa5c4a37382c9db..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/include/pa_win_wasapi.h +++ /dev/null @@ -1,729 +0,0 @@ -#ifndef PA_WIN_WASAPI_H -#define PA_WIN_WASAPI_H -/* - * $Id: $ - * PortAudio Portable Real-Time Audio Library - * WASAPI specific extensions - * - * Copyright (c) 1999-2018 Ross Bencina and Phil Burk - * Copyright (c) 2006-2010 David Viens - * Copyright (c) 2010-2018 Dmitry Kostjuchenko - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup public_header - @brief WASAPI-specific PortAudio API extension header file. -*/ - -#include "portaudio.h" -#include "pa_win_waveformat.h" - -#ifdef __cplusplus -extern "C" -{ -#endif /* __cplusplus */ - - -/* Stream setup flags. */ -typedef enum PaWasapiFlags -{ - /* put WASAPI into exclusive mode */ - paWinWasapiExclusive = (1 << 0), - - /* allow to skip internal PA processing completely */ - paWinWasapiRedirectHostProcessor = (1 << 1), - - /* assign custom channel mask */ - paWinWasapiUseChannelMask = (1 << 2), - - /* select non-Event driven method of data read/write - Note: WASAPI Event driven core is capable of 2ms latency!!!, but Polling - method can only provide 15-20ms latency. 
*/ - paWinWasapiPolling = (1 << 3), - - /* force custom thread priority setting, must be used if PaWasapiStreamInfo::threadPriority - is set to a custom value */ - paWinWasapiThreadPriority = (1 << 4), - - /* force explicit sample format and do not allow PA to select suitable working format, API will - fail if provided sample format is not supported by audio hardware in Exclusive mode - or system mixer in Shared mode */ - paWinWasapiExplicitSampleFormat = (1 << 5), - - /* allow API to insert system-level channel matrix mixer and sample rate converter to allow - playback formats that do not match the current configured system settings. - this is in particular required for streams not matching the system mixer sample rate. - only applies in Shared mode. */ - paWinWasapiAutoConvert = (1 << 6) -} -PaWasapiFlags; -#define paWinWasapiExclusive (paWinWasapiExclusive) -#define paWinWasapiRedirectHostProcessor (paWinWasapiRedirectHostProcessor) -#define paWinWasapiUseChannelMask (paWinWasapiUseChannelMask) -#define paWinWasapiPolling (paWinWasapiPolling) -#define paWinWasapiThreadPriority (paWinWasapiThreadPriority) -#define paWinWasapiExplicitSampleFormat (paWinWasapiExplicitSampleFormat) -#define paWinWasapiAutoConvert (paWinWasapiAutoConvert) - - -/* Stream state. - - @note Multiple states can be united into a bitmask. - @see PaWasapiStreamStateCallback, PaWasapi_SetStreamStateHandler -*/ -typedef enum PaWasapiStreamState -{ - /* state change was caused by the error: - - Example: - 1) If thread execution stopped due to AUDCLNT_E_RESOURCES_INVALIDATED then state - value will contain paWasapiStreamStateError|paWasapiStreamStateThreadStop. - */ - paWasapiStreamStateError = (1 << 0), - - /* processing thread is preparing to start execution */ - paWasapiStreamStateThreadPrepare = (1 << 1), - - /* processing thread started execution (enters its loop) */ - paWasapiStreamStateThreadStart = (1 << 2), - - /* processing thread stopped execution */ - paWasapiStreamStateThreadStop = (1 << 3) -} -PaWasapiStreamState; -#define paWasapiStreamStateError (paWasapiStreamStateError) -#define paWasapiStreamStateThreadPrepare (paWasapiStreamStateThreadPrepare) -#define paWasapiStreamStateThreadStart (paWasapiStreamStateThreadStart) -#define paWasapiStreamStateThreadStop (paWasapiStreamStateThreadStop) - - -/* Host processor. - - Allows to skip internal PA processing completely. paWinWasapiRedirectHostProcessor flag - must be set to the PaWasapiStreamInfo::flags member in order to have host processor - redirected to this callback. - - Use with caution! inputFrames and outputFrames depend solely on final device setup. - To query max values of inputFrames/outputFrames use PaWasapi_GetFramesPerHostBuffer. -*/ -typedef void (*PaWasapiHostProcessorCallback) (void *inputBuffer, long inputFrames, - void *outputBuffer, long outputFrames, void *userData); - - -/* Stream state handler. - - @param pStream Pointer to PaStream object. - @param stateFlags State flags, a collection of values from PaWasapiStreamState enum. - @param errorId Error id provided by system API (HRESULT). - @param userData Pointer to user data. - - @see PaWasapiStreamState -*/ -typedef void (*PaWasapiStreamStateCallback) (PaStream *pStream, unsigned int stateFlags, - unsigned int errorId, void *pUserData); - - -/* Device role. 
*/ -typedef enum PaWasapiDeviceRole -{ - eRoleRemoteNetworkDevice = 0, - eRoleSpeakers, - eRoleLineLevel, - eRoleHeadphones, - eRoleMicrophone, - eRoleHeadset, - eRoleHandset, - eRoleUnknownDigitalPassthrough, - eRoleSPDIF, - eRoleHDMI, - eRoleUnknownFormFactor -} -PaWasapiDeviceRole; - - -/* Jack connection type. */ -typedef enum PaWasapiJackConnectionType -{ - eJackConnTypeUnknown, - eJackConnType3Point5mm, - eJackConnTypeQuarter, - eJackConnTypeAtapiInternal, - eJackConnTypeRCA, - eJackConnTypeOptical, - eJackConnTypeOtherDigital, - eJackConnTypeOtherAnalog, - eJackConnTypeMultichannelAnalogDIN, - eJackConnTypeXlrProfessional, - eJackConnTypeRJ11Modem, - eJackConnTypeCombination -} -PaWasapiJackConnectionType; - - -/* Jack geometric location. */ -typedef enum PaWasapiJackGeoLocation -{ - eJackGeoLocUnk = 0, - eJackGeoLocRear = 0x1, /* matches EPcxGeoLocation::eGeoLocRear */ - eJackGeoLocFront, - eJackGeoLocLeft, - eJackGeoLocRight, - eJackGeoLocTop, - eJackGeoLocBottom, - eJackGeoLocRearPanel, - eJackGeoLocRiser, - eJackGeoLocInsideMobileLid, - eJackGeoLocDrivebay, - eJackGeoLocHDMI, - eJackGeoLocOutsideMobileLid, - eJackGeoLocATAPI, - eJackGeoLocReserved5, - eJackGeoLocReserved6, -} -PaWasapiJackGeoLocation; - - -/* Jack general location. */ -typedef enum PaWasapiJackGenLocation -{ - eJackGenLocPrimaryBox = 0, - eJackGenLocInternal, - eJackGenLocSeparate, - eJackGenLocOther -} -PaWasapiJackGenLocation; - - -/* Jack's type of port. */ -typedef enum PaWasapiJackPortConnection -{ - eJackPortConnJack = 0, - eJackPortConnIntegratedDevice, - eJackPortConnBothIntegratedAndJack, - eJackPortConnUnknown -} -PaWasapiJackPortConnection; - - -/* Thread priority. */ -typedef enum PaWasapiThreadPriority -{ - eThreadPriorityNone = 0, - eThreadPriorityAudio, //!< Default for Shared mode. - eThreadPriorityCapture, - eThreadPriorityDistribution, - eThreadPriorityGames, - eThreadPriorityPlayback, - eThreadPriorityProAudio, //!< Default for Exclusive mode. - eThreadPriorityWindowManager -} -PaWasapiThreadPriority; - - -/* Stream descriptor. */ -typedef struct PaWasapiJackDescription -{ - unsigned long channelMapping; - unsigned long color; /* derived from macro: #define RGB(r,g,b) ((COLORREF)(((BYTE)(r)|((WORD)((BYTE)(g))<<8))|(((DWORD)(BYTE)(b))<<16))) */ - PaWasapiJackConnectionType connectionType; - PaWasapiJackGeoLocation geoLocation; - PaWasapiJackGenLocation genLocation; - PaWasapiJackPortConnection portConnection; - unsigned int isConnected; -} -PaWasapiJackDescription; - - -/** Stream category. - Note: - - values are equal to WASAPI AUDIO_STREAM_CATEGORY enum - - supported since Windows 8.0, noop on earlier versions - - values 1,2 are deprecated on Windows 10 and not included into enumeration - - @version Available as of 19.6.0 -*/ -typedef enum PaWasapiStreamCategory -{ - eAudioCategoryOther = 0, - eAudioCategoryCommunications = 3, - eAudioCategoryAlerts = 4, - eAudioCategorySoundEffects = 5, - eAudioCategoryGameEffects = 6, - eAudioCategoryGameMedia = 7, - eAudioCategoryGameChat = 8, - eAudioCategorySpeech = 9, - eAudioCategoryMovie = 10, - eAudioCategoryMedia = 11 -} -PaWasapiStreamCategory; - - -/** Stream option. 
- Note: - - values are equal to WASAPI AUDCLNT_STREAMOPTIONS enum - - supported since Windows 8.1, noop on earlier versions - - @version Available as of 19.6.0 -*/ -typedef enum PaWasapiStreamOption -{ - eStreamOptionNone = 0, //!< default - eStreamOptionRaw = 1, //!< bypass WASAPI Audio Engine DSP effects, supported since Windows 8.1 - eStreamOptionMatchFormat = 2 //!< force WASAPI Audio Engine into a stream format, supported since Windows 10 -} -PaWasapiStreamOption; - - -/* Stream descriptor. */ -typedef struct PaWasapiStreamInfo -{ - unsigned long size; /**< sizeof(PaWasapiStreamInfo) */ - PaHostApiTypeId hostApiType; /**< paWASAPI */ - unsigned long version; /**< 1 */ - - unsigned long flags; /**< collection of PaWasapiFlags */ - - /** Support for WAVEFORMATEXTENSIBLE channel masks. If flags contains - paWinWasapiUseChannelMask this allows you to specify which speakers - to address in a multichannel stream. Constants for channelMask - are specified in pa_win_waveformat.h. Will be used only if - paWinWasapiUseChannelMask flag is specified. - */ - PaWinWaveFormatChannelMask channelMask; - - /** Delivers raw data to callback obtained from GetBuffer() methods skipping - internal PortAudio processing inventory completely. userData parameter will - be the same that was passed to Pa_OpenStream method. Will be used only if - paWinWasapiRedirectHostProcessor flag is specified. - */ - PaWasapiHostProcessorCallback hostProcessorOutput; - PaWasapiHostProcessorCallback hostProcessorInput; - - /** Specifies thread priority explicitly. Will be used only if paWinWasapiThreadPriority flag - is specified. - - Please note, if Input/Output streams are opened simultaneously (Full-Duplex mode) - you shall specify same value for threadPriority or othervise one of the values will be used - to setup thread priority. - */ - PaWasapiThreadPriority threadPriority; - - /** Stream category. - @see PaWasapiStreamCategory - @version Available as of 19.6.0 - */ - PaWasapiStreamCategory streamCategory; - - /** Stream option. - @see PaWasapiStreamOption - @version Available as of 19.6.0 - */ - PaWasapiStreamOption streamOption; -} -PaWasapiStreamInfo; - - -/** Returns pointer to WASAPI's IAudioClient object of the stream. - - @param pStream Pointer to PaStream object. - @param pAudioClient Pointer to pointer of IAudioClient. - @param bOutput TRUE (1) for output stream, FALSE (0) for input stream. - - @return Error code indicating success or failure. -*/ -PaError PaWasapi_GetAudioClient( PaStream *pStream, void **pAudioClient, int bOutput ); - - -/** Update device list. - - This function is available if PA_WASAPI_MAX_CONST_DEVICE_COUNT is defined during compile time - with maximum constant WASAPI device count (recommended value - 32). - If PA_WASAPI_MAX_CONST_DEVICE_COUNT is set to 0 (or not defined) during compile time the implementation - will not define PaWasapi_UpdateDeviceList() and thus updating device list can only be possible by calling - Pa_Terminate() and then Pa_Initialize(). - - @return Error code indicating success or failure. -*/ -PaError PaWasapi_UpdateDeviceList(); - - -/** Get current audio format of the device assigned to the opened stream. - - Format is represented by PaWinWaveFormat or WAVEFORMATEXTENSIBLE structure. - Use this function to reconfirm format if PA's processor is overridden and - paWinWasapiRedirectHostProcessor flag is specified. - - @param pStream Pointer to PaStream object. - @param pFormat Pointer to PaWinWaveFormat or WAVEFORMATEXTENSIBLE structure. 
- @param formatSize Size of PaWinWaveFormat or WAVEFORMATEXTENSIBLE structure in bytes. - @param bOutput TRUE (1) for output stream, FALSE (0) for input stream. - - @return Non-negative value indicating the number of bytes copied into format descriptor - or, a PaErrorCode (which is always negative) if PortAudio is not initialized - or an error is encountered. -*/ -int PaWasapi_GetDeviceCurrentFormat( PaStream *pStream, void *pFormat, unsigned int formatSize, int bOutput ); - - -/** Get default audio format for the device in Shared Mode. - - Format is represented by PaWinWaveFormat or WAVEFORMATEXTENSIBLE structure and obtained - by getting the device property with a PKEY_AudioEngine_DeviceFormat key. - - @param pFormat Pointer to PaWinWaveFormat or WAVEFORMATEXTENSIBLE structure. - @param formatSize Size of PaWinWaveFormat or WAVEFORMATEXTENSIBLE structure in bytes. - @param device Device index. - - @return Non-negative value indicating the number of bytes copied into format descriptor - or, a PaErrorCode (which is always negative) if PortAudio is not initialized - or an error is encountered. -*/ -int PaWasapi_GetDeviceDefaultFormat( void *pFormat, unsigned int formatSize, PaDeviceIndex device ); - - -/** Get mix audio format for the device in Shared Mode. - - Format is represented by PaWinWaveFormat or WAVEFORMATEXTENSIBLE structureand obtained by - IAudioClient::GetMixFormat. - - @param pFormat Pointer to PaWinWaveFormat or WAVEFORMATEXTENSIBLE structure. - @param formatSize Size of PaWinWaveFormat or WAVEFORMATEXTENSIBLE structure in bytes. - @param device Device index. - - @return Non-negative value indicating the number of bytes copied into format descriptor - or, a PaErrorCode (which is always negative) if PortAudio is not initialized - or an error is encountered. -*/ -int PaWasapi_GetDeviceMixFormat( void *pFormat, unsigned int formatSize, PaDeviceIndex device ); - - -/** Get device role (PaWasapiDeviceRole enum). - - @param device Device index. - - @return Non-negative value indicating device role or, a PaErrorCode (which is always negative) - if PortAudio is not initialized or an error is encountered. -*/ -int/*PaWasapiDeviceRole*/ PaWasapi_GetDeviceRole( PaDeviceIndex device ); - - -/** Get device IMMDevice pointer - - @param device Device index. - @param pAudioClient Pointer to pointer of IMMDevice. - - @return Error code indicating success or failure. -*/ -PaError PaWasapi_GetIMMDevice( PaDeviceIndex device, void **pIMMDevice ); - - -/** Boost thread priority of calling thread (MMCSS). - - Use it for Blocking Interface only inside the thread which makes calls to Pa_WriteStream/Pa_ReadStream. - - @param pTask Handle to pointer to priority task. Must be used with PaWasapi_RevertThreadPriority - method to revert thread priority to initial state. - - @param priorityClass Id of thread priority of PaWasapiThreadPriority type. Specifying - eThreadPriorityNone does nothing. - - @return Error code indicating success or failure. - @see PaWasapi_RevertThreadPriority -*/ -PaError PaWasapi_ThreadPriorityBoost( void **pTask, PaWasapiThreadPriority priorityClass ); - - -/** Boost thread priority of calling thread (MMCSS). - - Use it for Blocking Interface only inside the thread which makes calls to Pa_WriteStream/Pa_ReadStream. - - @param pTask Task handle obtained by PaWasapi_BoostThreadPriority method. - - @return Error code indicating success or failure. 
- @see PaWasapi_ThreadPriorityBoost -*/ -PaError PaWasapi_ThreadPriorityRevert( void *pTask ); - - -/** Get the number of frames per host buffer. - - This is the maximum number of frames of the WASAPI buffer which can be locked for operations. - Use this method as a helper to find out the maximum values of inputFrames/outputFrames - of PaWasapiHostProcessorCallback. - - @param pStream Pointer to PaStream object. - @param pInput Pointer to variable to receive the number of input frames. Can be NULL. - @param pOutput Pointer to variable to receive the number of output frames. Can be NULL. - - @return Error code indicating success or failure. - @see PaWasapiHostProcessorCallback -*/ -PaError PaWasapi_GetFramesPerHostBuffer( PaStream *pStream, unsigned int *pInput, unsigned int *pOutput ); - - -/** Get the number of jacks associated with a WASAPI device. - - Use this method to determine if there are any jacks associated with the provided WASAPI device. - Not all audio devices will support this capability. This is valid for both input and output devices. - - @note Not available on UWP platform. - - @param device Device index. - @param pJackCount Pointer to variable to receive the number of jacks. - - @return Error code indicating success or failure. - @see PaWasapi_GetJackDescription - */ -PaError PaWasapi_GetJackCount( PaDeviceIndex device, int *pJackCount ); - - -/** Get the jack description associated with a WASAPI device and jack number. - - Before this function is called, use PaWasapi_GetJackCount to determine the - number of jacks associated with the device. If the jack count is greater than zero, then - each jack index from 0 up to the jack count minus one can be queried with this function to get the jack - description. - - @note Not available on UWP platform. - - @param device Device index. - @param jackIndex Jack index. - @param pJackDescription Pointer to PaWasapiJackDescription. - - @return Error code indicating success or failure. - @see PaWasapi_GetJackCount - */ -PaError PaWasapi_GetJackDescription( PaDeviceIndex device, int jackIndex, PaWasapiJackDescription *pJackDescription ); - - -/** Set stream state handler. - - @param pStream Pointer to PaStream object. - @param fnStateHandler Pointer to state handling function. - @param pUserData Pointer to user data. - - @return Error code indicating success or failure. -*/ -PaError PaWasapi_SetStreamStateHandler( PaStream *pStream, PaWasapiStreamStateCallback fnStateHandler, void *pUserData ); - - -/** Set default device Id. - - By default the implementation will use the DEVINTERFACE_AUDIO_RENDER and - DEVINTERFACE_AUDIO_CAPTURE Ids if a device Id is not provided explicitly. These default Ids - will not allow using Exclusive mode on the UWP/WinRT platform and thus you must provide - the device Id explicitly via this API before calling Pa_OpenStream(). - - Device Ids on the UWP platform are obtainable via: - Windows::Media::Devices::MediaDevice::GetDefaultAudioRenderId() or - Windows::Media::Devices::MediaDevice::GetDefaultAudioCaptureId() API. - - After the call completes, memory referenced by pointers can be freed, as the implementation keeps its own copy. - - Call this function before calling Pa_IsFormatSupported() when Exclusive mode is requested. - - See an example in the IMPORTANT notes. - - @note UWP/WinRT platform only. - - @param pId Device Id, pointer to the 16-bit Unicode string (WCHAR). If NULL then the device Id - will be reset to the default, e.g. DEVINTERFACE_AUDIO_RENDER or DEVINTERFACE_AUDIO_CAPTURE. - @param bOutput TRUE (1) for output (render), FALSE (0) for input (capture). 
- - @return Error code indicating success or failure. Will return paIncompatibleStreamHostApi if the library is not compiled - for the UWP/WinRT platform. If the Id is longer than PA_WASAPI_DEVICE_ID_LEN characters paBufferTooBig will - be returned. -*/ -PaError PaWasapiWinrt_SetDefaultDeviceId( const unsigned short *pId, int bOutput ); - - -/** Populate the device list. - - By default the implementation will rely on DEVINTERFACE_AUDIO_RENDER and DEVINTERFACE_AUDIO_CAPTURE as - the default devices. If a device Id is provided by PaWasapiWinrt_SetDefaultDeviceId() then those - device Ids will be used as the default and only devices of the device list. - - By populating the device list you can provide additional available audio devices of the system to PA, - which are obtainable by: - Windows::Devices::Enumeration::DeviceInformation::FindAllAsync(selector) where selector is obtainable by - the Windows::Media::Devices::MediaDevice::GetAudioRenderSelector() or - Windows::Media::Devices::MediaDevice::GetAudioCaptureSelector() API. - - After the call completes, memory referenced by pointers can be freed, as the implementation keeps its own copy. - - You must call PaWasapi_UpdateDeviceList() to update the internal device list of the implementation after - calling this function. - - See an example in the IMPORTANT notes. - - @note UWP/WinRT platform only. - - @param pId Array of device Ids, pointer to the array of pointers of 16-bit Unicode string (WCHAR). If NULL - and count is also 0 then device Ids will be reset to the default. Required. - @param pName Array of device Names, pointer to the array of pointers of 16-bit Unicode string (WCHAR). Optional. - @param pRole Array of device Roles, see PaWasapiDeviceRole and PaWasapi_GetDeviceRole() for more details. Optional. - @param count Number of devices, the number of array elements (pId, pName, pRole). The maximum count of devices - is limited by PA_WASAPI_DEVICE_MAX_COUNT. - @param bOutput TRUE (1) for output (render), FALSE (0) for input (capture). - - @return Error code indicating success or failure. Will return paIncompatibleStreamHostApi if the library is not compiled - for the UWP/WinRT platform. If an Id is longer than PA_WASAPI_DEVICE_ID_LEN characters paBufferTooBig will - be returned. If a Name is longer than PA_WASAPI_DEVICE_NAME_LEN characters paBufferTooBig will - be returned. -*/ -PaError PaWasapiWinrt_PopulateDeviceList( const unsigned short **pId, const unsigned short **pName, - const PaWasapiDeviceRole *pRole, unsigned int count, int bOutput ); - - -/* - IMPORTANT: - - WASAPI is implemented for the Callback and Blocking interfaces. It supports Shared and Exclusive - share modes. - - Exclusive Mode: - - Exclusive mode allows delivering audio data directly to the hardware, bypassing - software mixing. - Exclusive mode is specified by the 'paWinWasapiExclusive' flag. - - Callback Interface: - - Provides the best audio quality with low latency. The Callback interface is implemented in - two versions: - - 1) Event-Driven: - This is the most powerful WASAPI implementation which provides glitch-free - audio at around 3 ms latency in Exclusive mode. The lowest possible latency for this mode is - 3 ms for HD Audio class audio chips. For Shared mode the latency cannot be - lower than 20 ms. - - 2) Poll-Driven: - Polling is the second method to operate with WASAPI. It is less efficient than Event-Driven - and provides latency at around 10-13 ms. 
Polling must be used to overcome a system bug - under Windows Vista x64 when the application is WOW64 (32-bit) and the Event-Driven method simply - times out (the event handle is never signalled on buffer completion). Please note that this WOW64 bug - does not exist in Vista x86 or Windows 7. - Polling can be set up by specifying the 'paWinWasapiPolling' flag. The WASAPI implementation detects - the WOW64 bug and sets 'paWinWasapiPolling' automatically. - - Thread priority: - - Normally thread priority is set automatically and does not require modification, although - if the user wants some tweaking, thread priority can be modified by setting the 'paWinWasapiThreadPriority' - flag and specifying 'PaWasapiStreamInfo::threadPriority' with a value from the PaWasapiThreadPriority - enum. - - Blocking Interface: - - The Blocking interface is implemented but, due to the Poll-Driven method described above, cannot - deliver the lowest possible latency. Specifying too low a latency in Shared mode will result in - distorted audio, although Exclusive mode adds stability. - - 8.24 format: - - If paCustomFormat is specified as the sample format then the implementation will understand it - as valid 24 bits inside a 32-bit container (e.g. wBitsPerSample = 32, Samples.wValidBitsPerSample = 24). - - By using paCustomFormat there will be a small optimization: samples will be copied - with Copy_24_To_24 by the PA processor instead of being converted from packed 3-byte (24-bit) data - with Int24_To_Int32. - - Pa_IsFormatSupported: - - To check a format with the correct Share Mode (Exclusive/Shared) you must supply PaWasapiStreamInfo - with the paWinWasapiExclusive flag set through the PaStreamParameters::hostApiSpecificStreamInfo - member structure. - - If the paWinWasapiExplicitSampleFormat flag is provided then the implementation will not try to select - a suitable close format and will return an error instead of paFormatIsSupported. By specifying the - paWinWasapiExplicitSampleFormat flag it is possible to find out which sample formats are - supported by the Exclusive or Shared modes. - - Pa_OpenStream: - - To set the desired Share Mode (Exclusive/Shared) you must supply - PaWasapiStreamInfo with the paWinWasapiExclusive flag set through the - PaStreamParameters::hostApiSpecificStreamInfo member structure. - - Coding style for parameters and structure members of the public API: - - 1) bXXX - boolean, [1 (TRUE), 0 (FALSE)] - 2) pXXX - pointer - 3) fnXXX - pointer to function - 4) structure members are never prefixed with a type distinguisher - - - UWP/WinRT: - - This platform has a number of limitations which do not allow enumerating audio devices without - additional external help. Enumeration is possible though from C++/CX; check the related API - Windows::Devices::Enumeration::DeviceInformation::FindAllAsync(). - - The main limitation is the absence of device enumeration from inside PA's implementation. - This problem can be solved by using the following functions: - - PaWasapiWinrt_SetDefaultDeviceId() - to set the default input/output device, - PaWasapiWinrt_PopulateDeviceList() - to populate the device list with devices. 
- - Here is an example of populating the device list which can also be updated dynamically depending on - whether a device was removed from or added to the system: - - ---------------- - - std::vector<const UINT16 *> ids, names; - std::vector<PaWasapiDeviceRole> role; - - ids.resize(count); - names.resize(count); - role.resize(count); - - for (UINT32 i = 0; i < count; ++i) - { - ids[i] = (const UINT16 *)device_ids[i].c_str(); - names[i] = (const UINT16 *)device_names[i].c_str(); - role[i] = eRoleUnknownFormFactor; - } - - PaWasapiWinrt_SetDefaultDeviceId((const UINT16 *)default_device_id.c_str(), !capture); - PaWasapiWinrt_PopulateDeviceList(ids.data(), names.data(), role.data(), count, !capture); - PaWasapi_UpdateDeviceList(); - - ---------------- -*/ - -#ifdef __cplusplus -} -#endif /* __cplusplus */ - -#endif /* PA_WIN_WASAPI_H */ diff --git a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/PropIdl.h b/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/PropIdl.h deleted file mode 100644 index 84832d9b4cd149716e497166a3663d324d6e69ed..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/PropIdl.h +++ /dev/null @@ -1,19 +0,0 @@ -/** - * This file has no copyright assigned and is placed in the Public Domain. - * This file is part of the PortAudio library. - */ -#ifndef _INC_PROPIDL_PA -#define _INC_PROPIDL_PA - -#ifndef COM_NO_WINDOWS_H -#include "windows.h" -#include "ole2.h" -#endif - -typedef const PROPVARIANT *REFPROPVARIANT; - -#define PropVariantInit(VAR) memset((VAR), 0, sizeof(PROPVARIANT)) -WINOLEAPI PropVariantClear(PROPVARIANT *pvar); - -#endif /* _INC_PROPIDL_PA */ - diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-98fc2b2c.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-98fc2b2c.css deleted file mode 100644 index cee82ea831d77ca0e001baf10a07f84e176679f0..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-98fc2b2c.css +++ /dev/null @@ -1 +0,0 @@ -.gallery.svelte-1ayixqk{padding:var(--size-1) var(--size-2)} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/module-260a78dd.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/module-260a78dd.js deleted file mode 100644 index 47c8570a1ec24a7c7aa404032edab8237b70b05b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/module-260a78dd.js +++ /dev/null @@ -1,9 +0,0 @@ -import{a as tn}from"./Index-c74a8b7c.js";import"./index-50ad4c77.js";import"./svelte/svelte.js";const sr=e=>t=>{const n=e(t);return t.add(n),n},ar=e=>(t,n)=>(e.set(t,n),n),It=Number.MAX_SAFE_INTEGER===void 0?9007199254740991:Number.MAX_SAFE_INTEGER,nn=536870912,St=nn*2,ir=(e,t)=>n=>{const r=t.get(n);let o=r===void 0?n.size:rIt)throw new Error("Congratulations, you created a collection of unique numbers which uses all available integers!");for(;n.has(o);)o=Math.floor(Math.random()*It);return e(n,o)},rn=new WeakMap,cr=ar(rn),dt=ir(cr,rn),ur=sr(dt),lr=e=>typeof e.start=="function",kt=new WeakMap,dr=e=>({...e,connect:({call:t})=>async()=>{const{port1:n,port2:r}=new MessageChannel,o=await t("connect",{port:n},[n]);return kt.set(r,o),r},disconnect:({call:t})=>async n=>{const r=kt.get(n);if(r===void 0)throw new 
Error("The given port is not connected.");await t("disconnect",{portId:r})},isSupported:({call:t})=>()=>t("isSupported")}),Qe=new WeakMap,fr=e=>{if(Qe.has(e))return Qe.get(e);const t=new Map;return Qe.set(e,t),t},hr=e=>{const t=dr(e);return n=>{const r=fr(n);n.addEventListener("message",({data:i})=>{const{id:c}=i;if(c!==null&&r.has(c)){const{reject:u,resolve:d}=r.get(c);r.delete(c),i.error===void 0?d(i.result):u(new Error(i.error.message))}}),lr(n)&&n.start();const o=(i,c=null,u=[])=>new Promise((d,l)=>{const w=dt(r);r.set(w,{reject:l,resolve:d}),c===null?n.postMessage({id:w,method:i},u):n.postMessage({id:w,method:i,params:c},u)}),s=(i,c,u=[])=>{n.postMessage({id:null,method:i,params:c},u)};let a={};for(const[i,c]of Object.entries(t))a={...a,[i]:c({call:o,notify:s})};return{...a}}},Lt=new Set,pr=hr({encode:({call:e})=>async(t,n)=>{const r=await e("encode",{encoderId:t,timeslice:n});return Lt.delete(t),r},instantiate:({call:e})=>async(t,n)=>{const r=ur(Lt),o=await e("instantiate",{encoderId:r,mimeType:t,sampleRate:n});return{encoderId:r,port:o}},register:({call:e})=>t=>e("register",{port:t},[t])}),mr=e=>{const t=new Worker(e);return pr(t)},gr=`(()=>{var e={881:e=>{"use strict";e.exports=(e,t)=>{if("string"!=typeof e)throw new TypeError("expected a string");return e.trim().replace(/([a-z])([A-Z])/g,"$1-$2").replace(/\\W/g,(e=>/[À-ž]/.test(e)?e:"-")).replace(/^-+|-+$/g,"").replace(/-{2,}/g,(e=>t&&t.condense?"-":e)).toLowerCase()}},507:e=>{var t=function(e){var t,r,n=/\\w+/.exec(e);if(!n)return"an";var o=(r=n[0]).toLowerCase(),s=["honest","hour","hono"];for(t in s)if(0==o.indexOf(s[t]))return"an";if(1==o.length)return"aedhilmnorsx".indexOf(o)>=0?"an":"a";if(r.match(/(?!FJO|[HLMNS]Y.|RY[EO]|SQU|(F[LR]?|[HL]|MN?|N|RH?|S[CHKLMNPTVW]?|X(YL)?)[AEIOU])[FHLMNRSX][A-Z]/))return"an";var a=[/^e[uw]/,/^onc?e\\b/,/^uni([^nmd]|mo)/,/^u[bcfhjkqrst][aeiou]/];for(t=0;t=0?"an":"a":"aeiou".indexOf(o[0])>=0||o.match(/^y(b[lor]|cl[ea]|fere|gg|p[ios]|rou|tt)/)?"an":"a"};void 0!==e.exports?e.exports=t:window.indefiniteArticle=t}},t={};function r(n){var o=t[n];if(void 0!==o)return o.exports;var s=t[n]={exports:{}};return e[n](s,s.exports,r),s.exports}r.n=e=>{var t=e&&e.__esModule?()=>e.default:()=>e;return r.d(t,{a:t}),t},r.d=(e,t)=>{for(var n in t)r.o(t,n)&&!r.o(e,n)&&Object.defineProperty(e,n,{enumerable:!0,get:t[n]})},r.o=(e,t)=>Object.prototype.hasOwnProperty.call(e,t),(()=>{"use strict";var e=r(881),t=r.n(e),n=r(507),o=r.n(n);const s=(e,r)=>void 0===r?e:r.reduce(((e,r)=>{if("capitalize"===r){const t=e.charAt(0).toUpperCase(),r=e.slice(1);return"".concat(t).concat(r)}return"dashify"===r?t()(e):"prependIndefiniteArticle"===r?"".concat(o()(e)," ").concat(e):e}),e),a=(e,t)=>{const r=/\\\${([^.}]+)((\\.[^(]+\\(\\))*)}/g,n=[];let o=r.exec(e);for(;null!==o;){const t={modifiers:[],name:o[1]};if(void 0!==o[3]){const e=/\\.[^(]+\\(\\)/g;let r=e.exec(o[2]);for(;null!==r;)t.modifiers.push(r[0].slice(1,-2)),r=e.exec(o[2])}n.push(t),o=r.exec(e)}const a=n.reduce(((e,r)=>e.map((e=>"string"==typeof e?e.split((e=>{const t=e.name+e.modifiers.map((e=>"\\\\.".concat(e,"\\\\(\\\\)"))).join("");return new RegExp("\\\\$\\\\{".concat(t,"}"),"g")})(r)).reduce(((e,n,o)=>0===o?[n]:r.name in t?[...e,s(t[r.name],r.modifiers),n]:[...e,e=>s(e[r.name],r.modifiers),n]),[]):[e])).reduce(((e,t)=>[...e,...t]),[])),[e]);return e=>a.reduce(((t,r)=>"string"==typeof r?[...t,r]:[...t,r(e)]),[]).join("")},i=function(e){let t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:{};const r=void 0===e.code?void 0:a(e.code,t),n=void 0===e.message?void 
0:a(e.message,t);return function(){let t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},o=arguments.length>1?arguments[1]:void 0;const s=void 0===o&&(t instanceof Error||void 0!==t.code&&"Exception"===t.code.slice(-9)),{cause:a,missingParameters:i}=s?{cause:t,missingParameters:{}}:{cause:o,missingParameters:t},c=void 0===n?new Error:new Error(n(i));return null!==a&&(c.cause=a),void 0!==r&&(c.code=r(i)),void 0!==e.status&&(c.status=e.status),c}},c=-32603,d=-32602,l=i({message:'The requested method called "\${method}" is not supported.',status:-32601}),u=i({message:'The handler of the method called "\${method}" returned no required result.',status:c}),h=i({message:'The handler of the method called "\${method}" returned an unexpected result.',status:c}),m=i({message:'The specified parameter called "portId" with the given value "\${portId}" does not identify a port connected to this worker.',status:d}),p=void 0===Number.MAX_SAFE_INTEGER?9007199254740991:Number.MAX_SAFE_INTEGER,f=536870912,g=1073741824,w=new WeakMap;var v;const y=((e,t)=>r=>{const n=t.get(r);let o=void 0===n?r.size:np)throw new Error("Congratulations, you created a collection of unique numbers which uses all available integers!");for(;r.has(o);)o=Math.floor(Math.random()*p);return e(r,o)})((v=w,(e,t)=>(v.set(e,t),t)),w),M=((e=>{})(y),new Map),E=(e,t,r)=>({...t,connect:r=>{let{port:n}=r;n.start();const o=e(n,t),s=y(M);return M.set(s,(()=>{o(),n.close(),M.delete(s)})),{result:s}},disconnect:e=>{let{portId:t}=e;const r=M.get(t);if(void 0===r)throw m({portId:t.toString()});return r(),{result:null}},isSupported:async()=>{if(await new Promise((e=>{const t=new ArrayBuffer(0),{port1:r,port2:n}=new MessageChannel;r.onmessage=t=>{let{data:r}=t;return e(null!==r)},n.postMessage(t,[t])}))){const e=r();return{result:e instanceof Promise?await e:e}}return{result:!1}}}),x=function(e,t){const r=E(x,t,arguments.length>2&&void 0!==arguments[2]?arguments[2]:()=>!0),n=((e,t)=>async r=>{let{data:{id:n,method:o,params:s}}=r;const a=t[o];try{if(void 0===a)throw l({method:o});const t=void 0===s?a():a(s);if(void 0===t)throw u({method:o});const r=t instanceof Promise?await t:t;if(null===n){if(void 0!==r.result)throw h({method:o})}else{if(void 0===r.result)throw h({method:o});const{result:t,transferables:s=[]}=r;e.postMessage({id:n,result:t},s)}}catch(t){const{message:r,status:o=-32603}=t;e.postMessage({error:{code:o,message:r},id:n})}})(e,r);return e.addEventListener("message",n),()=>e.removeEventListener("message",n)},b=e=>{e.onmessage=null,e.close()},A=new WeakMap,T=new WeakMap,I=(e=>{const t=(r=e,{...r,connect:e=>{let{call:t}=e;return async()=>{const{port1:e,port2:r}=new MessageChannel,n=await t("connect",{port:e},[e]);return A.set(r,n),r}},disconnect:e=>{let{call:t}=e;return async e=>{const r=A.get(e);if(void 0===r)throw new Error("The given port is not connected.");await t("disconnect",{portId:r})}},isSupported:e=>{let{call:t}=e;return()=>t("isSupported")}});var r;return e=>{const r=(e=>{if(T.has(e))return T.get(e);const t=new Map;return T.set(e,t),t})(e);e.addEventListener("message",(e=>{let{data:t}=e;const{id:n}=t;if(null!==n&&r.has(n)){const{reject:e,resolve:o}=r.get(n);r.delete(n),void 0===t.error?o(t.result):e(new Error(t.error.message))}})),(e=>"function"==typeof e.start)(e)&&e.start();const n=function(t){let n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:null,o=arguments.length>2&&void 0!==arguments[2]?arguments[2]:[];return new Promise(((s,a)=>{const 
i=y(r);r.set(i,{reject:a,resolve:s}),null===n?e.postMessage({id:i,method:t},o):e.postMessage({id:i,method:t,params:n},o)}))},o=function(t,r){let n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:[];e.postMessage({id:null,method:t,params:r},n)};let s={};for(const[e,r]of Object.entries(t))s={...s,[e]:r({call:n,notify:o})};return{...s}}})({characterize:e=>{let{call:t}=e;return()=>t("characterize")},encode:e=>{let{call:t}=e;return(e,r)=>t("encode",{recordingId:e,timeslice:r})},record:e=>{let{call:t}=e;return async(e,r,n)=>{await t("record",{recordingId:e,sampleRate:r,typedArrays:n},n.map((e=>{let{buffer:t}=e;return t})))}}}),O=async(e,t)=>{const r=I(t),n=await r.characterize(),o=n.toString();if(e.has(o))throw new Error("There is already an encoder stored which handles exactly the same mime types.");return e.set(o,[n,r]),n},L=new Map,P=(e=>t=>{const r=e.get(t);if(void 0===r)throw new Error("There was no instance of an encoder stored with the given id.");return r})(L),S=((e,t)=>r=>{const n=t(r);return e.delete(r),n})(L,P),N=new Map,C=((e,t)=>r=>{const[n,o,s,a]=t(r);return s?new Promise((t=>{o.onmessage=s=>{let{data:i}=s;0===i.length?(e(o),t(n.encode(r,null))):n.record(r,a,i)}})):n.encode(r,null)})(b,S),R=(e=>t=>{for(const[r,n]of Array.from(e.values()))if(r.test(t))return n;throw new Error("There is no encoder registered which could handle the given mimeType.")})(N),$=((e,t,r)=>(n,o,s)=>{if(t.has(n))throw new Error('There is already an encoder registered with an id called "'.concat(n,'".'));const a=r(o),{port1:i,port2:c}=new MessageChannel,d=[a,i,!0,s];return t.set(n,d),i.onmessage=t=>{let{data:r}=t;0===r.length?(e(i),d[2]=!1):a.record(n,s,r.map((e=>"number"==typeof e?new Float32Array(e):e)))},c})(b,L,R),j=(e=>(t,r)=>{const[n]=e(t);return n.encode(t,r)})(P);x(self,{encode:async e=>{let{encoderId:t,timeslice:r}=e;const n=null===r?await C(t):await j(t,r);return{result:n,transferables:n}},instantiate:e=>{let{encoderId:t,mimeType:r,sampleRate:n}=e;const o=$(t,r,n);return{result:o,transferables:[o]}},register:async e=>{let{port:t}=e;return{result:await O(N,t)}}})})()})();`,wr=new Blob([gr],{type:"application/javascript; charset=utf-8"}),on=URL.createObjectURL(wr),ft=mr(on),Se=ft.encode,sn=ft.instantiate,vr=ft.register;URL.revokeObjectURL(on);const _r=e=>(t,n)=>{if(e===null)throw new Error("A native BlobEvent could not be created.");return new e(t,n)},Er=(e,t)=>(n,r,o)=>{const s=[];let a=r,i=0;for(;iclass{constructor(r=null){this._listeners=new WeakMap,this._nativeEventTarget=r===null?e():r}addEventListener(r,o,s){if(o!==null){let a=this._listeners.get(o);a===void 0&&(a=t(this,o),typeof o=="function"&&this._listeners.set(o,a)),this._nativeEventTarget.addEventListener(r,a,s)}}dispatchEvent(r){return this._nativeEventTarget.dispatchEvent(r)}removeEventListener(r,o,s){const a=o===null?void 0:this._listeners.get(o);this._nativeEventTarget.removeEventListener(r,a===void 0?null:a,s)}},Ar=e=>()=>{if(e===null)throw new Error("A native EventTarget could not be created.");return e.document.createElement("p")},br=(e="")=>{try{return new DOMException(e,"InvalidModificationError")}catch(t){return t.code=13,t.message=e,t.name="InvalidModificationError",t}},Cr=()=>{try{return new DOMException("","InvalidStateError")}catch(e){return e.code=11,e.name="InvalidStateError",e}},Tr=e=>{if(e!==null&&e.BlobEvent!==void 0&&e.MediaStream!==void 0&&(e.MediaRecorder===void 0||e.MediaRecorder.isTypeSupported!==void 0)){if(e.MediaRecorder===void 0)return Promise.resolve(!0);const 
t=e.document.createElement("canvas"),n=t.getContext("2d");if(n===null||typeof t.captureStream!="function")return Promise.resolve(!1);const r=t.captureStream();return Promise.all([new Promise(o=>{const s="audio/webm";try{const a=new e.MediaRecorder(r,{mimeType:s});a.addEventListener("dataavailable",({data:i})=>o(i.type===s)),a.start(),setTimeout(()=>a.stop(),10)}catch(a){o(a.name==="NotSupportedError")}}),new Promise(o=>{const s=new e.MediaRecorder(r);let a=!1,i=!1;s.addEventListener("dataavailable",()=>a=!0),s.addEventListener("error",c=>{o(!a&&!i&&"error"in c&&c.error!==null&&typeof c.error=="object"&&"name"in c.error&&c.error.name!=="UnknownError")}),s.addEventListener("stop",()=>i=!0),s.start(),n.fillRect(0,0,1,1),r.removeTrack(r.getVideoTracks()[0])})]).then(o=>o.every(s=>s))}return Promise.resolve(!1)},Mr=(e,t,n,r,o,s,a)=>class extends s{constructor(c,u={}){const{mimeType:d}=u;if(a!==null&&(d===void 0||a.isTypeSupported!==void 0&&a.isTypeSupported(d))){const l=e(a,c,u);super(l),this._internalMediaRecorder=l}else if(d!==void 0&&o.some(l=>l.test(d)))super(),a!==null&&a.isTypeSupported!==void 0&&a.isTypeSupported("audio/webm;codecs=pcm")?this._internalMediaRecorder=r(this,a,c,d):this._internalMediaRecorder=n(this,c,d);else throw a!==null&&e(a,c,u),t();this._ondataavailable=null,this._onerror=null,this._onpause=null,this._onresume=null,this._onstart=null,this._onstop=null}get mimeType(){return this._internalMediaRecorder.mimeType}get ondataavailable(){return this._ondataavailable===null?this._ondataavailable:this._ondataavailable[0]}set ondataavailable(c){if(this._ondataavailable!==null&&this.removeEventListener("dataavailable",this._ondataavailable[1]),typeof c=="function"){const u=c.bind(this);this.addEventListener("dataavailable",u),this._ondataavailable=[c,u]}else this._ondataavailable=null}get onerror(){return this._onerror===null?this._onerror:this._onerror[0]}set onerror(c){if(this._onerror!==null&&this.removeEventListener("error",this._onerror[1]),typeof c=="function"){const u=c.bind(this);this.addEventListener("error",u),this._onerror=[c,u]}else this._onerror=null}get onpause(){return this._onpause===null?this._onpause:this._onpause[0]}set onpause(c){if(this._onpause!==null&&this.removeEventListener("pause",this._onpause[1]),typeof c=="function"){const u=c.bind(this);this.addEventListener("pause",u),this._onpause=[c,u]}else this._onpause=null}get onresume(){return this._onresume===null?this._onresume:this._onresume[0]}set onresume(c){if(this._onresume!==null&&this.removeEventListener("resume",this._onresume[1]),typeof c=="function"){const u=c.bind(this);this.addEventListener("resume",u),this._onresume=[c,u]}else this._onresume=null}get onstart(){return this._onstart===null?this._onstart:this._onstart[0]}set onstart(c){if(this._onstart!==null&&this.removeEventListener("start",this._onstart[1]),typeof c=="function"){const u=c.bind(this);this.addEventListener("start",u),this._onstart=[c,u]}else this._onstart=null}get onstop(){return this._onstop===null?this._onstop:this._onstop[0]}set onstop(c){if(this._onstop!==null&&this.removeEventListener("stop",this._onstop[1]),typeof c=="function"){const u=c.bind(this);this.addEventListener("stop",u),this._onstop=[c,u]}else this._onstop=null}get state(){return this._internalMediaRecorder.state}pause(){return this._internalMediaRecorder.pause()}resume(){return this._internalMediaRecorder.resume()}start(c){return this._internalMediaRecorder.start(c)}stop(){return this._internalMediaRecorder.stop()}static isTypeSupported(c){return 
a!==null&&a.isTypeSupported!==void 0&&a.isTypeSupported(c)||o.some(u=>u.test(c))}},Nr=e=>e!==null&&e.BlobEvent!==void 0?e.BlobEvent:null,Or=(e,t,n)=>{const r=new Map,o=new WeakMap,s=new WeakMap,a=new e(t,n),i=new WeakMap;let c=!1;return a.addEventListener=(u=>(d,l,w)=>{let g=l;if(typeof l=="function")if(d==="dataavailable"){const h=[];g=f=>{c&&a.state==="inactive"?h.push(f):l.call(a,f)},r.set(l,h),o.set(l,g)}else d==="error"?(g=h=>{h instanceof ErrorEvent?l.call(a,h):l.call(a,new ErrorEvent("error",{error:h.error}))},s.set(l,g)):d==="stop"&&(g=h=>{for(const[f,m]of r.entries())if(m.length>0){const[p]=m;m.length>1&&Object.defineProperty(p,"data",{value:new Blob(m.map(({data:b})=>b),{type:p.data.type})}),m.length=0,f.call(a,p)}c=!1,l.call(a,h)},i.set(l,g));return u.call(a,d,g,w)})(a.addEventListener),a.removeEventListener=(u=>(d,l,w)=>{let g=l;if(typeof l=="function"){if(d==="dataavailable"){r.delete(l);const h=o.get(l);h!==void 0&&(g=h)}else if(d==="error"){const h=s.get(l);h!==void 0&&(g=h)}else if(d==="stop"){const h=i.get(l);h!==void 0&&(g=h)}}return u.call(a,d,g,w)})(a.removeEventListener),a.start=(u=>d=>(c=d!==void 0,d===void 0?u.call(a):u.call(a,d)))(a.start),a},Rr=e=>e===null||e.MediaRecorder===void 0?null:e.MediaRecorder,an=()=>{try{return new DOMException("","NotSupportedError")}catch(e){return e.code=9,e.name="NotSupportedError",e}},Ir=e=>(t,n,r,o=2)=>{const s=e(t,n);if(s===null)return s;const{length:a,value:i}=s;if(r==="master")return{content:null,length:a};if(n+a+i>t.byteLength)return null;if(r==="binary"){const c=(i/Float32Array.BYTES_PER_ELEMENT-1)/o,u=Array.from({length:o},()=>new Float32Array(c));for(let d=0;d(t,n)=>{const r=e(t,n);if(r===null)return r;const{length:o,value:s}=r;return s===35?{length:o,type:"binary"}:s===46||s===97||s===88713574||s===106212971||s===139690087||s===172351395||s===256095861?{length:o,type:"master"}:{length:o,type:"unknown"}},kr=e=>(t,n)=>{const r=e(t,n);if(r===null)return r;const o=n+Math.floor((r-1)/8);if(o+r>t.byteLength)return null;let a=t.getUint8(o)&(1<<8-r%8)-1;for(let i=1;i{},Bt=e=>{throw e};function Pr(e){return e?e.next&&e.error&&e.complete?e:{complete:(e.complete??Ne).bind(e),error:(e.error??Bt).bind(e),next:(e.next??Ne).bind(e)}:{complete:Ne,error:Bt,next:Ne}}const Br=e=>(t,n,r)=>e(o=>{const s=a=>o.next(a);return t.addEventListener(n,s,r),()=>t.removeEventListener(n,s,r)}),Ur=(e,t)=>{const n=()=>{},r=o=>typeof o[0]=="function";return o=>{const s=(...a)=>{const i=o(r(a)?t({next:a[0]}):t(...a));return i!==void 0?i:n};return s[Symbol.observable]=()=>({subscribe:(...a)=>({unsubscribe:s(...a)})}),e(s)}},xr=Ur(Lr,Pr),cn=Br(xr);/*! - * dashify - * - * Copyright (c) 2015-2017, Jon Schlinkert. - * Released under the MIT License. 
- */var Dr=(e,t)=>{if(typeof e!="string")throw new TypeError("expected a string");return e.trim().replace(/([a-z])([A-Z])/g,"$1-$2").replace(/\W/g,n=>/[À-ž]/.test(n)?n:"-").replace(/^-+|-+$/g,"").replace(/-{2,}/g,n=>t&&t.condense?"-":n).toLowerCase()};const Wr=tn(Dr);var un={exports:{}};(function(e){var t=function(n){var r,o,s=/\w+/.exec(n);if(s)o=s[0];else return"an";var a=o.toLowerCase(),i=["honest","hour","hono"];for(r in i)if(a.indexOf(i[r])==0)return"an";if(a.length==1)return"aedhilmnorsx".indexOf(a)>=0?"an":"a";if(o.match(/(?!FJO|[HLMNS]Y.|RY[EO]|SQU|(F[LR]?|[HL]|MN?|N|RH?|S[CHKLMNPTVW]?|X(YL)?)[AEIOU])[FHLMNRSX][A-Z]/))return"an";var c=[/^e[uw]/,/^onc?e\b/,/^uni([^nmd]|mo)/,/^u[bcfhjkqrst][aeiou]/];for(r=0;r=0?"an":"a":"aeiou".indexOf(a[0])>=0||a.match(/^y(b[lor]|cl[ea]|fere|gg|p[ios]|rou|tt)/)?"an":"a"};e.exports=t})(un);var Vr=un.exports;const Fr=tn(Vr),Ut=(e,t)=>t===void 0?e:t.reduce((n,r)=>{if(r==="capitalize"){const o=n.charAt(0).toUpperCase(),s=n.slice(1);return`${o}${s}`}return r==="dashify"?Wr(n):r==="prependIndefiniteArticle"?`${Fr(n)} ${n}`:n},e),jr=e=>{const t=e.name+e.modifiers.map(n=>`\\.${n}\\(\\)`).join("");return new RegExp(`\\$\\{${t}}`,"g")},xt=(e,t)=>{const n=/\${([^.}]+)((\.[^(]+\(\))*)}/g,r=[];let o=n.exec(e);for(;o!==null;){const a={modifiers:[],name:o[1]};if(o[3]!==void 0){const i=/\.[^(]+\(\)/g;let c=i.exec(o[2]);for(;c!==null;)a.modifiers.push(c[0].slice(1,-2)),c=i.exec(o[2])}r.push(a),o=n.exec(e)}const s=r.reduce((a,i)=>a.map(c=>typeof c=="string"?c.split(jr(i)).reduce((u,d,l)=>l===0?[d]:i.name in t?[...u,Ut(t[i.name],i.modifiers),d]:[...u,w=>Ut(w[i.name],i.modifiers),d],[]):[c]).reduce((c,u)=>[...c,...u],[]),[e]);return a=>s.reduce((i,c)=>typeof c=="string"?[...i,c]:[...i,c(a)],[]).join("")},We=(e,t={})=>{const n=e.code===void 0?void 0:xt(e.code,t),r=e.message===void 0?void 0:xt(e.message,t);function o(s={},a){const i=a===void 0&&(s instanceof Error||s.code!==void 0&&s.code.slice(-9)==="Exception"),{cause:c,missingParameters:u}=i?{cause:s,missingParameters:{}}:{cause:a,missingParameters:s},d=r===void 0?new Error:new Error(r(u));return c!==null&&(d.cause=c),n!==void 0&&(d.code=n(u)),e.status!==void 0&&(d.status=e.status),d}return o},Ve={INTERNAL_ERROR:-32603,INVALID_PARAMS:-32602,METHOD_NOT_FOUND:-32601};We({message:'The requested method called "${method}" is not supported.',status:Ve.METHOD_NOT_FOUND});We({message:'The handler of the method called "${method}" returned no required result.',status:Ve.INTERNAL_ERROR});We({message:'The handler of the method called "${method}" returned an unexpected result.',status:Ve.INTERNAL_ERROR});We({message:'The specified parameter called "portId" with the given value "${portId}" does not identify a port connected to this worker.',status:Ve.INVALID_PARAMS});const $r=(e,t,n)=>async r=>{const o=new e([n],{type:"application/javascript; charset=utf-8"}),s=t.createObjectURL(o);try{await r(s)}finally{t.revokeObjectURL(s)}},Gr=e=>({data:t})=>{const{id:n}=t;if(n!==null){const r=e.get(n);if(r!==void 0){const{reject:o,resolve:s}=r;e.delete(n),t.error===void 0?s(t.result):o(new Error(t.error.message))}}},qr=e=>(t,n)=>(r,o=[])=>new Promise((s,a)=>{const i=e(t);t.set(i,{reject:a,resolve:s}),n.postMessage({id:i,...r},o)}),zr=(e,t,n,r)=>(o,s,a={})=>{const i=new o(s,"recorder-audio-worklet-processor",{...a,channelCountMode:"explicit",numberOfInputs:1,numberOfOutputs:0}),c=new Map,u=t(c,i.port),d=n(i.port,"message")(e(c));i.port.start();let l="inactive";return Object.defineProperties(i,{pause:{get(){return 
async()=>(r(["recording"],l),l="paused",u({method:"pause"}))}},port:{get(){throw new Error("The port of a RecorderAudioWorkletNode can't be accessed.")}},record:{get(){return async w=>(r(["inactive"],l),l="recording",u({method:"record",params:{encoderPort:w}},[w]))}},resume:{get(){return async()=>(r(["paused"],l),l="recording",u({method:"resume"}))}},stop:{get(){return async()=>{r(["paused","recording"],l),l="stopped";try{await u({method:"stop"})}finally{d()}}}}}),i},Hr=(e,t)=>{if(!e.includes(t))throw new Error(`Expected the state to be ${e.map(n=>`"${n}"`).join(" or ")} but it was "${t}".`)},Xr='(()=>{"use strict";class e extends AudioWorkletProcessor{constructor(){super(),this._encoderPort=null,this._numberOfChannels=0,this._state="inactive",this.port.onmessage=e=>{let{data:t}=e;"pause"===t.method?"active"===this._state||"recording"===this._state?(this._state="paused",this._sendAcknowledgement(t.id)):this._sendUnexpectedStateError(t.id):"record"===t.method?"inactive"===this._state?(this._encoderPort=t.params.encoderPort,this._state="active",this._sendAcknowledgement(t.id)):this._sendUnexpectedStateError(t.id):"resume"===t.method?"paused"===this._state?(this._state="active",this._sendAcknowledgement(t.id)):this._sendUnexpectedStateError(t.id):"stop"===t.method?"active"!==this._state&&"paused"!==this._state&&"recording"!==this._state||null===this._encoderPort?this._sendUnexpectedStateError(t.id):(this._stop(this._encoderPort),this._sendAcknowledgement(t.id)):"number"==typeof t.id&&this.port.postMessage({error:{code:-32601,message:"The requested method is not supported."},id:t.id})}}process(e){let[t]=e;if("inactive"===this._state||"paused"===this._state)return!0;if("active"===this._state){if(void 0===t)throw new Error("No channelData was received for the first input.");if(0===t.length)return!0;this._state="recording"}if("recording"===this._state&&null!==this._encoderPort){if(void 0===t)throw new Error("No channelData was received for the first input.");return 0===t.length?this._encoderPort.postMessage(Array.from({length:this._numberOfChannels},(()=>128))):(this._encoderPort.postMessage(t,t.map((e=>{let{buffer:t}=e;return t}))),this._numberOfChannels=t.length),!0}return!1}_sendAcknowledgement(e){this.port.postMessage({id:e,result:null})}_sendUnexpectedStateError(e){this.port.postMessage({error:{code:-32603,message:"The internal state does not allow to process the given message."},id:e})}_stop(e){e.postMessage([]),e.close(),this._encoderPort=null,this._state="stopped"}}e.parameterDescriptors=[],registerProcessor("recorder-audio-worklet-processor",e)})();',Yr=$r(Blob,URL,Xr),Zr=zr(Gr,qr(dt),cn,Hr),Dt=(e,t,n)=>({endTime:t,insertTime:n,type:"exponentialRampToValue",value:e}),Wt=(e,t,n)=>({endTime:t,insertTime:n,type:"linearRampToValue",value:e}),tt=(e,t)=>({startTime:t,type:"setValue",value:e}),ln=(e,t,n)=>({duration:n,startTime:t,type:"setValueCurve",values:e}),dn=(e,t,{startTime:n,target:r,timeConstant:o})=>r+(t-r)*Math.exp((n-e)/o),me=e=>e.type==="exponentialRampToValue",ke=e=>e.type==="linearRampToValue",re=e=>me(e)||ke(e),ht=e=>e.type==="setValue",J=e=>e.type==="setValueCurve",Le=(e,t,n,r)=>{const o=e[t];return o===void 0?r:re(o)||ht(o)?o.value:J(o)?o.values[o.values.length-1]:dn(n,Le(e,t-1,o.startTime,r),o)},Vt=(e,t,n,r,o)=>n===void 
0?[r.insertTime,o]:re(n)?[n.endTime,n.value]:ht(n)?[n.startTime,n.value]:J(n)?[n.startTime+n.duration,n.values[n.values.length-1]]:[n.startTime,Le(e,t-1,n.startTime,o)],nt=e=>e.type==="cancelAndHold",rt=e=>e.type==="cancelScheduledValues",ne=e=>nt(e)||rt(e)?e.cancelTime:me(e)||ke(e)?e.endTime:e.startTime,Ft=(e,t,n,{endTime:r,value:o})=>n===o?o:0n+(e-t)/(r-t)*(o-n),Kr=(e,t)=>{const n=Math.floor(t),r=Math.ceil(t);return n===r?e[n]:(1-(t-n))*e[n]+(1-(r-t))*e[r]},Qr=(e,{duration:t,startTime:n,values:r})=>{const o=(e-n)/t*(r.length-1);return Kr(r,o)},Oe=e=>e.type==="setTarget";class Jr{constructor(t){this._automationEvents=[],this._currenTime=0,this._defaultValue=t}[Symbol.iterator](){return this._automationEvents[Symbol.iterator]()}add(t){const n=ne(t);if(nt(t)||rt(t)){const r=this._automationEvents.findIndex(s=>rt(t)&&J(s)?s.startTime+s.duration>=n:ne(s)>=n),o=this._automationEvents[r];if(r!==-1&&(this._automationEvents=this._automationEvents.slice(0,r)),nt(t)){const s=this._automationEvents[this._automationEvents.length-1];if(o!==void 0&&re(o)){if(s!==void 0&&Oe(s))throw new Error("The internal list is malformed.");const a=s===void 0?o.insertTime:J(s)?s.startTime+s.duration:ne(s),i=s===void 0?this._defaultValue:J(s)?s.values[s.values.length-1]:s.value,c=me(o)?Ft(n,a,i,o):jt(n,a,i,o),u=me(o)?Dt(c,n,this._currenTime):Wt(c,n,this._currenTime);this._automationEvents.push(u)}if(s!==void 0&&Oe(s)&&this._automationEvents.push(tt(this.getValue(n),n)),s!==void 0&&J(s)&&s.startTime+s.duration>n){const a=n-s.startTime,i=(s.values.length-1)/s.duration,c=Math.max(2,1+Math.ceil(a*i)),u=a/(c-1)*i,d=s.values.slice(0,c);if(u<1)for(let l=1;lne(a)>n),o=r===-1?this._automationEvents[this._automationEvents.length-1]:this._automationEvents[r-1];if(o!==void 0&&J(o)&&ne(o)+o.duration>n)return!1;const s=me(t)?Dt(t.value,t.endTime,this._currenTime):ke(t)?Wt(t.value,n,this._currenTime):t;if(r===-1)this._automationEvents.push(s);else{if(J(t)&&n+t.duration>ne(this._automationEvents[r]))return!1;this._automationEvents.splice(r,0,s)}}return!0}flush(t){const n=this._automationEvents.findIndex(r=>ne(r)>t);if(n>1){const r=this._automationEvents.slice(n-1),o=r[0];Oe(o)&&r.unshift(tt(Le(this._automationEvents,n-2,o.startTime,this._defaultValue),o.startTime)),this._automationEvents=r}}getValue(t){if(this._automationEvents.length===0)return this._defaultValue;const n=this._automationEvents.findIndex(a=>ne(a)>t),r=this._automationEvents[n],o=(n===-1?this._automationEvents.length:n)-1,s=this._automationEvents[o];if(s!==void 0&&Oe(s)&&(r===void 0||!re(r)||r.insertTime>t))return dn(t,Le(this._automationEvents,o-1,s.startTime,this._defaultValue),s);if(s!==void 0&&ht(s)&&(r===void 0||!re(r)))return s.value;if(s!==void 0&&J(s)&&(r===void 0||!re(r)||s.startTime+s.duration>t))return t({cancelTime:e,type:"cancelAndHold"}),to=e=>({cancelTime:e,type:"cancelScheduledValues"}),no=(e,t)=>({endTime:t,type:"exponentialRampToValue",value:e}),ro=(e,t)=>({endTime:t,type:"linearRampToValue",value:e}),oo=(e,t,n)=>({startTime:t,target:e,timeConstant:n,type:"setTarget"}),so=()=>new DOMException("","AbortError"),ao=e=>(t,n,[r,o,s],a)=>{e(t[o],[n,r,s],i=>i[0]===n&&i[1]===r,a)},io=e=>(t,n,r)=>{const o=[];for(let s=0;s(t,n)=>{e.set(t,{activeInputs:new Set,passiveInputs:new WeakMap,renderer:n})},ge=new WeakSet,fn=new WeakMap,hn=new WeakMap,pn=new WeakMap,mn=new WeakMap,gn=new WeakMap,wn=new WeakMap,ot=new WeakMap,st=new WeakMap,at=new WeakMap,vn={construct(){return vn}},uo=e=>{try{const t=new Proxy(e,vn);new 
t}catch{return!1}return!0},$t=/^import(?:(?:[\s]+[\w]+|(?:[\s]+[\w]+[\s]*,)?[\s]*\{[\s]*[\w]+(?:[\s]+as[\s]+[\w]+)?(?:[\s]*,[\s]*[\w]+(?:[\s]+as[\s]+[\w]+)?)*[\s]*}|(?:[\s]+[\w]+[\s]*,)?[\s]*\*[\s]+as[\s]+[\w]+)[\s]+from)?(?:[\s]*)("([^"\\]|\\.)+"|'([^'\\]|\\.)+')(?:[\s]*);?/,Gt=(e,t)=>{const n=[];let r=e.replace(/^[\s]+/,""),o=r.match($t);for(;o!==null;){const s=o[1].slice(1,-1),a=o[0].replace(/([\s]+)?;?$/,"").replace(s,new URL(s,t).toString());n.push(a),r=r.slice(o[0].length).replace(/^[\s]+/,""),o=r.match($t)}return[n.join(";"),r]},qt=e=>{if(e!==void 0&&!Array.isArray(e))throw new TypeError("The parameterDescriptors property of given value for processorCtor is not an array.")},zt=e=>{if(!uo(e))throw new TypeError("The given value for processorCtor should be a constructor.");if(e.prototype===null||typeof e.prototype!="object")throw new TypeError("The given value for processorCtor should have a prototype.")},lo=(e,t,n,r,o,s,a,i,c,u,d,l,w)=>{let g=0;return(h,f,m={credentials:"omit"})=>{const p=d.get(h);if(p!==void 0&&p.has(f))return Promise.resolve();const b=u.get(h);if(b!==void 0){const y=b.get(f);if(y!==void 0)return y}const _=s(h),T=_.audioWorklet===void 0?o(f).then(([y,A])=>{const[E,v]=Gt(y,A),N=`${E};((a,b)=>{(a[b]=a[b]||[]).push((AudioWorkletProcessor,global,registerProcessor,sampleRate,self,window)=>{${v} -})})(window,'_AWGS')`;return n(N)}).then(()=>{const y=w._AWGS.pop();if(y===void 0)throw new SyntaxError;r(_.currentTime,_.sampleRate,()=>y(class{},void 0,(A,E)=>{if(A.trim()==="")throw t();const v=st.get(_);if(v!==void 0){if(v.has(A))throw t();zt(E),qt(E.parameterDescriptors),v.set(A,E)}else zt(E),qt(E.parameterDescriptors),st.set(_,new Map([[A,E]]))},_.sampleRate,void 0,void 0))}):Promise.all([o(f),Promise.resolve(e(l,l))]).then(([[y,A],E])=>{const v=g+1;g=v;const[N,I]=Gt(y,A),D=`${N};((AudioWorkletProcessor,registerProcessor)=>{${I} -})(${E?"AudioWorkletProcessor":"class extends AudioWorkletProcessor {__b=new WeakSet();constructor(){super();(p=>p.postMessage=(q=>(m,t)=>q.call(p,m,t?t.filter(u=>!this.__b.has(u)):t))(p.postMessage))(this.port)}}"},(n,p)=>registerProcessor(n,class extends p{${E?"":"__c = (a) => a.forEach(e=>this.__b.add(e.buffer));"}process(i,o,p){${E?"":"i.forEach(this.__c);o.forEach(this.__c);this.__c(Object.values(p));"}return super.process(i.map(j=>j.some(k=>k.length===0)?[]:j),o,p)}}));registerProcessor('__sac${v}',class extends AudioWorkletProcessor{process(){return !1}})`,B=new Blob([D],{type:"application/javascript; charset=utf-8"}),S=URL.createObjectURL(B);return _.audioWorklet.addModule(S,m).then(()=>{if(i(_))return _;const L=a(_);return L.audioWorklet.addModule(S,m).then(()=>L)}).then(L=>{if(c===null)throw new SyntaxError;try{new c(L,`__sac${v}`)}catch{throw new SyntaxError}}).finally(()=>URL.revokeObjectURL(S))});return b===void 0?u.set(h,new Map([[f,T]])):b.set(f,T),T.then(()=>{const y=d.get(h);y===void 0?d.set(h,new Set([f])):y.add(f)}).finally(()=>{const y=u.get(h);y!==void 0&&y.delete(f)}),T}},K=(e,t)=>{const n=e.get(t);if(n===void 0)throw new Error("A value with the given key could not be found.");return n},Fe=(e,t)=>{const n=Array.from(e).filter(t);if(n.length>1)throw Error("More than one element was found.");if(n.length===0)throw Error("No element was found.");const[r]=n;return e.delete(r),r},_n=(e,t,n,r)=>{const o=K(e,t),s=Fe(o,a=>a[0]===n&&a[1]===r);return o.size===0&&e.delete(t),s},Ae=e=>K(wn,e),Pe=e=>{if(ge.has(e))throw new Error("The AudioNode is already stored.");ge.add(e),Ae(e).forEach(t=>t(!0))},En=e=>"port"in 
e,pt=e=>{if(!ge.has(e))throw new Error("The AudioNode is not stored.");ge.delete(e),Ae(e).forEach(t=>t(!1))},it=(e,t)=>{!En(e)&&t.every(n=>n.size===0)&&pt(e)},fo=(e,t,n,r,o,s,a,i,c,u,d,l,w)=>{const g=new WeakMap;return(h,f,m,p,b)=>{const{activeInputs:_,passiveInputs:T}=s(f),{outputs:y}=s(h),A=i(h),E=v=>{const N=c(f),I=c(h);if(v){const M=_n(T,h,m,p);e(_,h,M,!1),!b&&!l(h)&&n(I,N,m,p),w(f)&&Pe(f)}else{const M=r(_,h,m,p);t(T,p,M,!1),!b&&!l(h)&&o(I,N,m,p);const U=a(f);if(U===0)d(f)&&it(f,_);else{const k=g.get(f);k!==void 0&&clearTimeout(k),g.set(f,setTimeout(()=>{d(f)&&it(f,_)},U*1e3))}}};return u(y,[f,m,p],v=>v[0]===f&&v[1]===m&&v[2]===p,!0)?(A.add(E),d(h)?e(_,h,[m,p,E],!0):t(T,p,[h,m,E],!0),!0):!1}},ho=e=>(t,n,[r,o,s],a)=>{const i=t.get(r);i===void 0?t.set(r,new Set([[o,n,s]])):e(i,[o,n,s],c=>c[0]===o&&c[1]===n,a)},po=e=>(t,n)=>{const r=e(t,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"discrete",gain:0});n.connect(r).connect(t.destination);const o=()=>{n.removeEventListener("ended",o),n.disconnect(r),r.disconnect()};n.addEventListener("ended",o)},mo=e=>(t,n)=>{e(t).add(n)},yn=(e,t)=>e.context===t,Ht=e=>{try{e.copyToChannel(new Float32Array(1),0,-1)}catch{return!1}return!0},ue=()=>new DOMException("","IndexSizeError"),go=e=>{e.getChannelData=(t=>n=>{try{return t.call(e,n)}catch(r){throw r.code===12?ue():r}})(e.getChannelData)},wo={numberOfChannels:1},vo=(e,t,n,r,o,s,a,i)=>{let c=null;return class An{constructor(d){if(o===null)throw new Error("Missing the native OfflineAudioContext constructor.");const{length:l,numberOfChannels:w,sampleRate:g}={...wo,...d};c===null&&(c=new o(1,1,44100));const h=r!==null&&t(s,s)?new r({length:l,numberOfChannels:w,sampleRate:g}):c.createBuffer(w,l,g);if(h.numberOfChannels===0)throw n();return typeof h.copyFromChannel!="function"?(a(h),go(h)):t(Ht,()=>Ht(h))||i(h),e.add(h),h}static[Symbol.hasInstance](d){return d!==null&&typeof d=="object"&&Object.getPrototypeOf(d)===An.prototype||e.has(d)}}},je=-34028234663852886e22,mt=-je,ae=e=>ge.has(e),_o={buffer:null,channelCount:2,channelCountMode:"max",channelInterpretation:"speakers",loop:!1,loopEnd:0,loopStart:0,playbackRate:1},Eo=(e,t,n,r,o,s,a,i)=>class extends e{constructor(u,d){const l=s(u),w={..._o,...d},g=o(l,w),h=a(l),f=h?t():null;super(u,!1,g,f),this._audioBufferSourceNodeRenderer=f,this._isBufferNullified=!1,this._isBufferSet=w.buffer!==null,this._nativeAudioBufferSourceNode=g,this._onended=null,this._playbackRate=n(this,h,g.playbackRate,mt,je)}get buffer(){return this._isBufferNullified?null:this._nativeAudioBufferSourceNode.buffer}set buffer(u){if(this._nativeAudioBufferSourceNode.buffer=u,u!==null){if(this._isBufferSet)throw r();this._isBufferSet=!0}}get loop(){return this._nativeAudioBufferSourceNode.loop}set loop(u){this._nativeAudioBufferSourceNode.loop=u}get loopEnd(){return this._nativeAudioBufferSourceNode.loopEnd}set loopEnd(u){this._nativeAudioBufferSourceNode.loopEnd=u}get loopStart(){return this._nativeAudioBufferSourceNode.loopStart}set loopStart(u){this._nativeAudioBufferSourceNode.loopStart=u}get onended(){return this._onended}set onended(u){const d=typeof u=="function"?i(this,u):null;this._nativeAudioBufferSourceNode.onended=d;const l=this._nativeAudioBufferSourceNode.onended;this._onended=l!==null&&l===d?u:l}get playbackRate(){return this._playbackRate}start(u=0,d=0,l){if(this._nativeAudioBufferSourceNode.start(u,d,l),this._audioBufferSourceNodeRenderer!==null&&(this._audioBufferSourceNodeRenderer.start=l===void 0?[u,d]:[u,d,l]),this.context.state!=="closed"){Pe(this);const 
w=()=>{this._nativeAudioBufferSourceNode.removeEventListener("ended",w),ae(this)&&pt(this)};this._nativeAudioBufferSourceNode.addEventListener("ended",w)}}stop(u=0){this._nativeAudioBufferSourceNode.stop(u),this._audioBufferSourceNodeRenderer!==null&&(this._audioBufferSourceNodeRenderer.stop=u)}},yo=(e,t,n,r,o)=>()=>{const s=new WeakMap;let a=null,i=null;const c=async(u,d)=>{let l=n(u);const w=yn(l,d);if(!w){const g={buffer:l.buffer,channelCount:l.channelCount,channelCountMode:l.channelCountMode,channelInterpretation:l.channelInterpretation,loop:l.loop,loopEnd:l.loopEnd,loopStart:l.loopStart,playbackRate:l.playbackRate.value};l=t(d,g),a!==null&&l.start(...a),i!==null&&l.stop(i)}return s.set(d,l),w?await e(d,u.playbackRate,l.playbackRate):await r(d,u.playbackRate,l.playbackRate),await o(u,d,l),l};return{set start(u){a=u},set stop(u){i=u},render(u,d){const l=s.get(d);return l!==void 0?Promise.resolve(l):c(u,d)}}},Ao=e=>"playbackRate"in e,bo=e=>"frequency"in e&&"gain"in e,Co=e=>"offset"in e,To=e=>!("frequency"in e)&&"gain"in e,Mo=e=>"detune"in e&&"frequency"in e,No=e=>"pan"in e,z=e=>K(fn,e),be=e=>K(pn,e),ct=(e,t)=>{const{activeInputs:n}=z(e);n.forEach(o=>o.forEach(([s])=>{t.includes(e)||ct(s,[...t,e])}));const r=Ao(e)?[e.playbackRate]:En(e)?Array.from(e.parameters.values()):bo(e)?[e.Q,e.detune,e.frequency,e.gain]:Co(e)?[e.offset]:To(e)?[e.gain]:Mo(e)?[e.detune,e.frequency]:No(e)?[e.pan]:[];for(const o of r){const s=be(o);s!==void 0&&s.activeInputs.forEach(([a])=>ct(a,t))}ae(e)&&pt(e)},Oo=e=>{ct(e.destination,[])},Ro=e=>e===void 0||typeof e=="number"||typeof e=="string"&&(e==="balanced"||e==="interactive"||e==="playback"),Io=(e,t,n,r,o,s,a,i)=>class extends e{constructor(u,d){const l=s(u),w=a(l),g=o(l,d,w),h=w?t(i):null;super(u,!1,g,h),this._isNodeOfNativeOfflineAudioContext=w,this._nativeAudioDestinationNode=g}get channelCount(){return this._nativeAudioDestinationNode.channelCount}set channelCount(u){if(this._isNodeOfNativeOfflineAudioContext)throw r();if(u>this._nativeAudioDestinationNode.maxChannelCount)throw n();this._nativeAudioDestinationNode.channelCount=u}get channelCountMode(){return this._nativeAudioDestinationNode.channelCountMode}set channelCountMode(u){if(this._isNodeOfNativeOfflineAudioContext)throw r();this._nativeAudioDestinationNode.channelCountMode=u}get maxChannelCount(){return this._nativeAudioDestinationNode.maxChannelCount}},So=e=>{const t=new WeakMap,n=async(r,o)=>{const s=o.destination;return t.set(o,s),await e(r,o,s),s};return{render(r,o){const s=t.get(o);return s!==void 0?Promise.resolve(s):n(r,o)}}},ko=(e,t,n,r,o,s,a,i)=>(c,u)=>{const d=u.listener,l=()=>{const y=new Float32Array(1),A=t(u,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"speakers",numberOfInputs:9}),E=a(u);let v=!1,N=[0,0,-1,0,1,0],I=[0,0,0];const M=()=>{if(v)return;v=!0;const B=r(u,256,9,0);B.onaudioprocess=({inputBuffer:S})=>{const L=[s(S,y,0),s(S,y,1),s(S,y,2),s(S,y,3),s(S,y,4),s(S,y,5)];L.some((O,P)=>O!==N[P])&&(d.setOrientation(...L),N=L);const x=[s(S,y,6),s(S,y,7),s(S,y,8)];x.some((O,P)=>O!==I[P])&&(d.setPosition(...x),I=x)},A.connect(B)},U=B=>S=>{S!==N[B]&&(N[B]=S,d.setOrientation(...N))},k=B=>S=>{S!==I[B]&&(I[B]=S,d.setPosition(...I))},D=(B,S,L)=>{const x=n(u,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"discrete",offset:S});x.connect(A,0,B),x.start(),Object.defineProperty(x.offset,"defaultValue",{get(){return S}});const O=e({context:c},E,x.offset,mt,je);return i(O,"value",P=>()=>P.call(O),P=>V=>{try{P.call(O,V)}catch(G){if(G.code!==9)throw 
G}M(),E&&L(V)}),O.cancelAndHoldAtTime=(P=>E?()=>{throw o()}:(...V)=>{const G=P.apply(O,V);return M(),G})(O.cancelAndHoldAtTime),O.cancelScheduledValues=(P=>E?()=>{throw o()}:(...V)=>{const G=P.apply(O,V);return M(),G})(O.cancelScheduledValues),O.exponentialRampToValueAtTime=(P=>E?()=>{throw o()}:(...V)=>{const G=P.apply(O,V);return M(),G})(O.exponentialRampToValueAtTime),O.linearRampToValueAtTime=(P=>E?()=>{throw o()}:(...V)=>{const G=P.apply(O,V);return M(),G})(O.linearRampToValueAtTime),O.setTargetAtTime=(P=>E?()=>{throw o()}:(...V)=>{const G=P.apply(O,V);return M(),G})(O.setTargetAtTime),O.setValueAtTime=(P=>E?()=>{throw o()}:(...V)=>{const G=P.apply(O,V);return M(),G})(O.setValueAtTime),O.setValueCurveAtTime=(P=>E?()=>{throw o()}:(...V)=>{const G=P.apply(O,V);return M(),G})(O.setValueCurveAtTime),O};return{forwardX:D(0,0,U(0)),forwardY:D(1,0,U(1)),forwardZ:D(2,-1,U(2)),positionX:D(6,0,k(0)),positionY:D(7,0,k(1)),positionZ:D(8,0,k(2)),upX:D(3,0,U(3)),upY:D(4,1,U(4)),upZ:D(5,0,U(5))}},{forwardX:w,forwardY:g,forwardZ:h,positionX:f,positionY:m,positionZ:p,upX:b,upY:_,upZ:T}=d.forwardX===void 0?l():d;return{get forwardX(){return w},get forwardY(){return g},get forwardZ(){return h},get positionX(){return f},get positionY(){return m},get positionZ(){return p},get upX(){return b},get upY(){return _},get upZ(){return T}}},Be=e=>"context"in e,Ce=e=>Be(e[0]),le=(e,t,n,r)=>{for(const o of e)if(n(o)){if(r)return!1;throw Error("The set contains at least one similar element.")}return e.add(t),!0},Xt=(e,t,[n,r],o)=>{le(e,[t,n,r],s=>s[0]===t&&s[1]===n,o)},Yt=(e,[t,n,r],o)=>{const s=e.get(t);s===void 0?e.set(t,new Set([[n,r]])):le(s,[n,r],a=>a[0]===n,o)},bn=e=>"inputs"in e,ut=(e,t,n,r)=>{if(bn(t)){const o=t.inputs[r];return e.connect(o,n,0),[o,n,0]}return e.connect(t,n,r),[t,n,r]},Cn=(e,t,n)=>{for(const r of e)if(r[0]===t&&r[1]===n)return e.delete(r),r;return null},Lo=(e,t,n)=>Fe(e,r=>r[0]===t&&r[1]===n),Tn=(e,t)=>{if(!Ae(e).delete(t))throw new Error("Missing the expected event listener.")},Mn=(e,t,n)=>{const r=K(e,t),o=Fe(r,s=>s[0]===n);return r.size===0&&e.delete(t),o},lt=(e,t,n,r)=>{bn(t)?e.disconnect(t.inputs[r],n,0):e.disconnect(t,n,r)},Y=e=>K(hn,e),Ee=e=>K(mn,e),ie=e=>ot.has(e),Ie=e=>!ge.has(e),Zt=(e,t)=>new Promise(n=>{if(t!==null)n(!0);else{const r=e.createScriptProcessor(256,1,1),o=e.createGain(),s=e.createBuffer(1,2,44100),a=s.getChannelData(0);a[0]=1,a[1]=1;const i=e.createBufferSource();i.buffer=s,i.loop=!0,i.connect(r).connect(e.destination),i.connect(o),i.disconnect(o),r.onaudioprocess=c=>{const u=c.inputBuffer.getChannelData(0);Array.prototype.some.call(u,d=>d===1)?n(!0):n(!1),i.stop(),r.onaudioprocess=null,i.disconnect(r),r.disconnect(e.destination)},i.start()}}),Je=(e,t)=>{const n=new Map;for(const r of e)for(const o of r){const s=n.get(o);n.set(o,s===void 0?1:s+1)}n.forEach((r,o)=>t(o,r))},Ue=e=>"context"in e,Po=e=>{const t=new Map;e.connect=(n=>(r,o=0,s=0)=>{const a=Ue(r)?n(r,o,s):n(r,o),i=t.get(r);return i===void 0?t.set(r,[{input:s,output:o}]):i.every(c=>c.input!==s||c.output!==o)&&i.push({input:s,output:o}),a})(e.connect.bind(e)),e.disconnect=(n=>(r,o,s)=>{if(n.apply(e),r===void 0)t.clear();else if(typeof r=="number")for(const[a,i]of t){const c=i.filter(u=>u.output!==r);c.length===0?t.delete(a):t.set(a,c)}else if(t.has(r))if(o===void 0)t.delete(r);else{const a=t.get(r);if(a!==void 0){const i=a.filter(c=>c.output!==o&&(c.input!==s||s===void 0));i.length===0?t.delete(r):t.set(r,i)}}for(const[a,i]of 
t)i.forEach(c=>{Ue(a)?e.connect(a,c.output,c.input):e.connect(a,c.output)})})(e.disconnect)},Bo=(e,t,n,r)=>{const{activeInputs:o,passiveInputs:s}=be(t),{outputs:a}=z(e),i=Ae(e),c=u=>{const d=Y(e),l=Ee(t);if(u){const w=Mn(s,e,n);Xt(o,e,w,!1),!r&&!ie(e)&&d.connect(l,n)}else{const w=Lo(o,e,n);Yt(s,w,!1),!r&&!ie(e)&&d.disconnect(l,n)}};return le(a,[t,n],u=>u[0]===t&&u[1]===n,!0)?(i.add(c),ae(e)?Xt(o,e,[n,c],!0):Yt(s,[e,n,c],!0),!0):!1},Uo=(e,t,n,r)=>{const{activeInputs:o,passiveInputs:s}=z(t),a=Cn(o[r],e,n);return a===null?[_n(s,e,n,r)[2],!1]:[a[2],!0]},xo=(e,t,n)=>{const{activeInputs:r,passiveInputs:o}=be(t),s=Cn(r,e,n);return s===null?[Mn(o,e,n)[1],!1]:[s[2],!0]},gt=(e,t,n,r,o)=>{const[s,a]=Uo(e,n,r,o);if(s!==null&&(Tn(e,s),a&&!t&&!ie(e)&<(Y(e),Y(n),r,o)),ae(n)){const{activeInputs:i}=z(n);it(n,i)}},wt=(e,t,n,r)=>{const[o,s]=xo(e,n,r);o!==null&&(Tn(e,o),s&&!t&&!ie(e)&&Y(e).disconnect(Ee(n),r))},Do=(e,t)=>{const n=z(e),r=[];for(const o of n.outputs)Ce(o)?gt(e,t,...o):wt(e,t,...o),r.push(o[0]);return n.outputs.clear(),r},Wo=(e,t,n)=>{const r=z(e),o=[];for(const s of r.outputs)s[1]===n&&(Ce(s)?gt(e,t,...s):wt(e,t,...s),o.push(s[0]),r.outputs.delete(s));return o},Vo=(e,t,n,r,o)=>{const s=z(e);return Array.from(s.outputs).filter(a=>a[0]===n&&(r===void 0||a[1]===r)&&(o===void 0||a[2]===o)).map(a=>(Ce(a)?gt(e,t,...a):wt(e,t,...a),s.outputs.delete(a),a[0]))},Fo=(e,t,n,r,o,s,a,i,c,u,d,l,w,g,h,f)=>class extends u{constructor(p,b,_,T){super(_),this._context=p,this._nativeAudioNode=_;const y=d(p);l(y)&&n(Zt,()=>Zt(y,f))!==!0&&Po(_),hn.set(this,_),wn.set(this,new Set),p.state!=="closed"&&b&&Pe(this),e(this,T,_)}get channelCount(){return this._nativeAudioNode.channelCount}set channelCount(p){this._nativeAudioNode.channelCount=p}get channelCountMode(){return this._nativeAudioNode.channelCountMode}set channelCountMode(p){this._nativeAudioNode.channelCountMode=p}get channelInterpretation(){return this._nativeAudioNode.channelInterpretation}set channelInterpretation(p){this._nativeAudioNode.channelInterpretation=p}get context(){return this._context}get numberOfInputs(){return this._nativeAudioNode.numberOfInputs}get numberOfOutputs(){return this._nativeAudioNode.numberOfOutputs}connect(p,b=0,_=0){if(b<0||b>=this._nativeAudioNode.numberOfOutputs)throw o();const T=d(this._context),y=h(T);if(w(p)||g(p))throw s();if(Be(p)){const v=Y(p);try{const I=ut(this._nativeAudioNode,v,b,_),M=Ie(this);(y||M)&&this._nativeAudioNode.disconnect(...I),this.context.state!=="closed"&&!M&&Ie(p)&&Pe(p)}catch(I){throw I.code===12?s():I}if(t(this,p,b,_,y)){const I=c([this],p);Je(I,r(y))}return p}const A=Ee(p);if(A.name==="playbackRate"&&A.maxValue===1024)throw a();try{this._nativeAudioNode.connect(A,b),(y||Ie(this))&&this._nativeAudioNode.disconnect(A,b)}catch(v){throw v.code===12?s():v}if(Bo(this,p,b,y)){const v=c([this],p);Je(v,r(y))}}disconnect(p,b,_){let T;const y=d(this._context),A=h(y);if(p===void 0)T=Do(this,A);else if(typeof p=="number"){if(p<0||p>=this.numberOfOutputs)throw o();T=Wo(this,A,p)}else{if(b!==void 0&&(b<0||b>=this.numberOfOutputs)||Be(p)&&_!==void 0&&(_<0||_>=p.numberOfInputs))throw o();if(T=Vo(this,A,p,b,_),T.length===0)throw s()}for(const E of T){const v=c([this],E);Je(v,i)}}},jo=(e,t,n,r,o,s,a,i,c,u,d,l,w)=>(g,h,f,m=null,p=null)=>{const b=f.value,_=new Jr(b),T=h?r(_):null,y={get defaultValue(){return b},get maxValue(){return m===null?f.maxValue:m},get minValue(){return p===null?f.minValue:p},get value(){return f.value},set 
value(A){f.value=A,y.setValueAtTime(A,g.context.currentTime)},cancelAndHoldAtTime(A){if(typeof f.cancelAndHoldAtTime=="function")T===null&&_.flush(g.context.currentTime),_.add(o(A)),f.cancelAndHoldAtTime(A);else{const E=Array.from(_).pop();T===null&&_.flush(g.context.currentTime),_.add(o(A));const v=Array.from(_).pop();f.cancelScheduledValues(A),E!==v&&v!==void 0&&(v.type==="exponentialRampToValue"?f.exponentialRampToValueAtTime(v.value,v.endTime):v.type==="linearRampToValue"?f.linearRampToValueAtTime(v.value,v.endTime):v.type==="setValue"?f.setValueAtTime(v.value,v.startTime):v.type==="setValueCurve"&&f.setValueCurveAtTime(v.values,v.startTime,v.duration))}return y},cancelScheduledValues(A){return T===null&&_.flush(g.context.currentTime),_.add(s(A)),f.cancelScheduledValues(A),y},exponentialRampToValueAtTime(A,E){if(A===0)throw new RangeError;if(!Number.isFinite(E)||E<0)throw new RangeError;const v=g.context.currentTime;return T===null&&_.flush(v),Array.from(_).length===0&&(_.add(u(b,v)),f.setValueAtTime(b,v)),_.add(a(A,E)),f.exponentialRampToValueAtTime(A,E),y},linearRampToValueAtTime(A,E){const v=g.context.currentTime;return T===null&&_.flush(v),Array.from(_).length===0&&(_.add(u(b,v)),f.setValueAtTime(b,v)),_.add(i(A,E)),f.linearRampToValueAtTime(A,E),y},setTargetAtTime(A,E,v){return T===null&&_.flush(g.context.currentTime),_.add(c(A,E,v)),f.setTargetAtTime(A,E,v),y},setValueAtTime(A,E){return T===null&&_.flush(g.context.currentTime),_.add(u(A,E)),f.setValueAtTime(A,E),y},setValueCurveAtTime(A,E,v){const N=A instanceof Float32Array?A:new Float32Array(A);if(l!==null&&l.name==="webkitAudioContext"){const I=E+v,M=g.context.sampleRate,U=Math.ceil(E*M),k=Math.floor(I*M),D=k-U,B=new Float32Array(D);for(let L=0;L({replay(t){for(const n of e)if(n.type==="exponentialRampToValue"){const{endTime:r,value:o}=n;t.exponentialRampToValueAtTime(o,r)}else if(n.type==="linearRampToValue"){const{endTime:r,value:o}=n;t.linearRampToValueAtTime(o,r)}else if(n.type==="setTarget"){const{startTime:r,target:o,timeConstant:s}=n;t.setTargetAtTime(o,r,s)}else if(n.type==="setValue"){const{startTime:r,value:o}=n;t.setValueAtTime(o,r)}else if(n.type==="setValueCurve"){const{duration:r,startTime:o,values:s}=n;t.setValueCurveAtTime(s,o,r)}else throw new Error("Can't apply an unknown automation.")}});class Nn{constructor(t){this._map=new Map(t)}get size(){return this._map.size}entries(){return this._map.entries()}forEach(t,n=null){return this._map.forEach((r,o)=>t.call(n,r,o,this))}get(t){return this._map.get(t)}has(t){return this._map.has(t)}keys(){return this._map.keys()}values(){return this._map.values()}}const Go={channelCount:2,channelCountMode:"explicit",channelInterpretation:"speakers",numberOfInputs:1,numberOfOutputs:1,parameterData:{},processorOptions:{}},qo=(e,t,n,r,o,s,a,i,c,u,d,l,w,g)=>class extends t{constructor(f,m,p){var b;const _=i(f),T=c(_),y=d({...Go,...p});w(y);const A=st.get(_),E=A?.get(m),v=T||_.state!=="closed"?_:(b=a(_))!==null&&b!==void 0?b:_,N=o(v,T?null:f.baseLatency,u,m,E,y),I=T?r(m,y,E):null;super(f,!0,N,I);const M=[];N.parameters.forEach((k,D)=>{const B=n(this,T,k);M.push([D,B])}),this._nativeAudioWorkletNode=N,this._onprocessorerror=null,this._parameters=new Nn(M),T&&e(_,this);const{activeInputs:U}=s(this);l(N,U)}get onprocessorerror(){return this._onprocessorerror}set onprocessorerror(f){const m=typeof f=="function"?g(this,f):null;this._nativeAudioWorkletNode.onprocessorerror=m;const p=this._nativeAudioWorkletNode.onprocessorerror;this._onprocessorerror=p!==null&&p===m?f:p}get 
parameters(){return this._parameters===null?this._nativeAudioWorkletNode.parameters:this._parameters}get port(){return this._nativeAudioWorkletNode.port}};function xe(e,t,n,r,o){if(typeof e.copyFromChannel=="function")t[n].byteLength===0&&(t[n]=new Float32Array(128)),e.copyFromChannel(t[n],r,o);else{const s=e.getChannelData(r);if(t[n].byteLength===0)t[n]=s.slice(o,o+128);else{const a=new Float32Array(s.buffer,o*Float32Array.BYTES_PER_ELEMENT,128);t[n].set(a)}}}const On=(e,t,n,r,o)=>{typeof e.copyToChannel=="function"?t[n].byteLength!==0&&e.copyToChannel(t[n],r,o):t[n].byteLength!==0&&e.getChannelData(r).set(t[n],o)},De=(e,t)=>{const n=[];for(let r=0;r{const n=K(at,e),r=Y(t);return K(n,r)},Ho=async(e,t,n,r,o,s,a)=>{const i=t===null?Math.ceil(e.context.length/128)*128:t.length,c=r.channelCount*r.numberOfInputs,u=o.reduce((m,p)=>m+p,0),d=u===0?null:n.createBuffer(u,i,n.sampleRate);if(s===void 0)throw new Error("Missing the processor constructor.");const l=z(e),w=await zo(n,e),g=De(r.numberOfInputs,r.channelCount),h=De(r.numberOfOutputs,o),f=Array.from(e.parameters.keys()).reduce((m,p)=>({...m,[p]:new Float32Array(128)}),{});for(let m=0;m0&&t!==null)for(let p=0;p{xe(t,f,p,c+b,m)});for(let p=0;pl.activeInputs[T].size===0?[]:_),b=a(m/n.sampleRate,n.sampleRate,()=>w.process(p,h,f));if(d!==null)for(let _=0,T=0;_(m,p,b)=>{const _=new WeakMap;let T=null;const y=async(A,E)=>{let v=d(A),N=null;const I=yn(v,E),M=Array.isArray(p.outputChannelCount)?p.outputChannelCount:Array.from(p.outputChannelCount);if(l===null){const U=M.reduce((S,L)=>S+L,0),k=o(E,{channelCount:Math.max(1,U),channelCountMode:"explicit",channelInterpretation:"discrete",numberOfOutputs:Math.max(1,U)}),D=[];for(let S=0;S{const V=new w(O,Math.ceil(A.context.length/128)*128,E.sampleRate),G=[],fe=[];for(let j=0;j{const H=s(V,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"discrete",offset:j.value});return await g(V,j,H.offset),H})),pe=r(V,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"speakers",numberOfInputs:Math.max(1,L+x)});for(let j=0;jh(A,V,j))),f(V)})(),E,p,M,b,u)}const U=await T,k=n(E,{buffer:null,channelCount:2,channelCountMode:"max",channelInterpretation:"speakers",loop:!1,loopEnd:0,loopStart:0,playbackRate:1}),[D,B,S]=N;U!==null&&(k.buffer=U,k.start(0)),k.connect(D);for(let L=0,x=0;L(n,r)=>{const o=t.get(n);if(o!==void 0)return o;const s=e.get(n);if(s!==void 0)return s;try{const a=r();return a instanceof Promise?(e.set(n,a),a.catch(()=>!1).then(i=>(e.delete(n),t.set(n,i),i))):(t.set(n,a),a)}catch{return t.set(n,!1),!1}},Zo=e=>(t,n,r)=>e(n,t,r),Ko=e=>(t,n,r=0,o=0)=>{const s=t[r];if(s===void 0)throw e();return Ue(n)?s.connect(n,0,o):s.connect(n,0)},Qo=e=>t=>(e[0]=t,e[0]),Jo=(e,t,n,r,o,s,a,i)=>(c,u)=>{const d=t.get(c);if(d===void 0)throw new Error("Missing the expected cycle count.");const l=s(c.context),w=i(l);if(d===u){if(t.delete(c),!w&&a(c)){const g=r(c),{outputs:h}=n(c);for(const f of h)if(Ce(f)){const m=r(f[0]);e(g,m,f[1],f[2])}else{const m=o(f[0]);g.connect(m,f[1])}}}else t.set(c,d-u)},es=e=>(t,n,r,o)=>e(t[o],s=>s[0]===n&&s[1]===r),ts=e=>(t,n)=>{e(t).delete(n)},ns=e=>"delayTime"in e,rs=(e,t,n)=>function r(o,s){const a=Be(s)?s:n(e,s);if(ns(a))return[];if(o[0]===a)return[o];if(o.includes(a))return[];const{outputs:i}=t(a);return Array.from(i).map(c=>r([...o,a],c[0])).reduce((c,u)=>c.concat(u),[])},Re=(e,t,n)=>{const r=t[n];if(r===void 0)throw e();return r},os=e=>(t,n=void 0,r=void 0,o=0)=>n===void 0?t.forEach(s=>s.disconnect()):typeof n=="number"?Re(e,t,n).disconnect():Ue(n)?r===void 
0?t.forEach(s=>s.disconnect(n)):o===void 0?Re(e,t,r).disconnect(n,0):Re(e,t,r).disconnect(n,0,o):r===void 0?t.forEach(s=>s.disconnect(n)):Re(e,t,r).disconnect(n,0),ss=e=>t=>new Promise((n,r)=>{if(e===null){r(new SyntaxError);return}const o=e.document.head;if(o===null)r(new SyntaxError);else{const s=e.document.createElement("script"),a=new Blob([t],{type:"application/javascript"}),i=URL.createObjectURL(a),c=e.onerror,u=()=>{e.onerror=c,URL.revokeObjectURL(i)};e.onerror=(d,l,w,g,h)=>{if(l===i||l===e.location.href&&w===1&&g===1)return u(),r(h),!1;if(c!==null)return c(d,l,w,g,h)},s.onerror=()=>{u(),r(new SyntaxError)},s.onload=()=>{u(),n()},s.src=i,s.type="module",o.appendChild(s)}}),as=e=>class{constructor(n){this._nativeEventTarget=n,this._listeners=new WeakMap}addEventListener(n,r,o){if(r!==null){let s=this._listeners.get(r);s===void 0&&(s=e(this,r),typeof r=="function"&&this._listeners.set(r,s)),this._nativeEventTarget.addEventListener(n,s,o)}}dispatchEvent(n){return this._nativeEventTarget.dispatchEvent(n)}removeEventListener(n,r,o){const s=r===null?void 0:this._listeners.get(r);this._nativeEventTarget.removeEventListener(n,s===void 0?null:s,o)}},is=e=>(t,n,r)=>{Object.defineProperties(e,{currentFrame:{configurable:!0,get(){return Math.round(t*n)}},currentTime:{configurable:!0,get(){return t}}});try{return r()}finally{e!==null&&(delete e.currentFrame,delete e.currentTime)}},cs=e=>async t=>{try{const n=await fetch(t);if(n.ok)return[await n.text(),n.url]}catch{}throw e()},us=(e,t)=>n=>t(e,n),ls=e=>t=>{const n=e(t);if(n.renderer===null)throw new Error("Missing the renderer of the given AudioNode in the audio graph.");return n.renderer},ds=e=>t=>{var n;return(n=e.get(t))!==null&&n!==void 0?n:0},fs=e=>t=>{const n=e(t);if(n.renderer===null)throw new Error("Missing the renderer of the given AudioParam in the audio graph.");return n.renderer},hs=e=>t=>e.get(t),Z=()=>new DOMException("","InvalidStateError"),ps=e=>t=>{const n=e.get(t);if(n===void 0)throw Z();return n},ms=(e,t)=>n=>{let r=e.get(n);if(r!==void 0)return r;if(t===null)throw new Error("Missing the native OfflineAudioContext constructor.");return r=new t(1,1,44100),e.set(n,r),r},gs=e=>t=>{const n=e.get(t);if(n===void 0)throw new Error("The context has no set of AudioWorkletNodes.");return n},ws=()=>new DOMException("","InvalidAccessError"),vs=(e,t,n,r,o,s)=>a=>(i,c)=>{const u=e.get(i);if(u===void 0){if(!a&&s(i)){const d=r(i),{outputs:l}=n(i);for(const w of l)if(Ce(w)){const g=r(w[0]);t(d,g,w[1],w[2])}else{const g=o(w[0]);d.disconnect(g,w[1])}}e.set(i,c)}else e.set(i,u+c)},_s=e=>t=>e!==null&&t instanceof e,Es=e=>t=>e!==null&&typeof e.AudioNode=="function"&&t instanceof e.AudioNode,ys=e=>t=>e!==null&&typeof e.AudioParam=="function"&&t instanceof e.AudioParam,As=e=>t=>e!==null&&t instanceof e,bs=e=>e!==null&&e.isSecureContext,Cs=(e,t,n,r)=>class extends e{constructor(s,a){const i=n(s),c=t(i,a);if(r(i))throw new TypeError;super(s,!0,c,null),this._nativeMediaStreamAudioSourceNode=c}get mediaStream(){return this._nativeMediaStreamAudioSourceNode.mediaStream}},Ts=(e,t,n,r,o)=>class extends r{constructor(a={}){if(o===null)throw new Error("Missing the native AudioContext constructor.");let i;try{i=new o(a)}catch(d){throw d.code===12&&d.message==="sampleRate is not in range"?t():d}if(i===null)throw n();if(!Ro(a.latencyHint))throw new TypeError(`The provided value '${a.latencyHint}' is not a valid enum value of type AudioContextLatencyCategory.`);if(a.sampleRate!==void 0&&i.sampleRate!==a.sampleRate)throw 
t();super(i,2);const{latencyHint:c}=a,{sampleRate:u}=i;if(this._baseLatency=typeof i.baseLatency=="number"?i.baseLatency:c==="balanced"?512/u:c==="interactive"||c===void 0?256/u:c==="playback"?1024/u:Math.max(2,Math.min(128,Math.round(c*u/128)))*128/u,this._nativeAudioContext=i,o.name==="webkitAudioContext"?(this._nativeGainNode=i.createGain(),this._nativeOscillatorNode=i.createOscillator(),this._nativeGainNode.gain.value=1e-37,this._nativeOscillatorNode.connect(this._nativeGainNode).connect(i.destination),this._nativeOscillatorNode.start()):(this._nativeGainNode=null,this._nativeOscillatorNode=null),this._state=null,i.state==="running"){this._state="suspended";const d=()=>{this._state==="suspended"&&(this._state=null),i.removeEventListener("statechange",d)};i.addEventListener("statechange",d)}}get baseLatency(){return this._baseLatency}get state(){return this._state!==null?this._state:this._nativeAudioContext.state}close(){return this.state==="closed"?this._nativeAudioContext.close().then(()=>{throw e()}):(this._state==="suspended"&&(this._state=null),this._nativeAudioContext.close().then(()=>{this._nativeGainNode!==null&&this._nativeOscillatorNode!==null&&(this._nativeOscillatorNode.stop(),this._nativeGainNode.disconnect(),this._nativeOscillatorNode.disconnect()),Oo(this)}))}resume(){return this._state==="suspended"?new Promise((a,i)=>{const c=()=>{this._nativeAudioContext.removeEventListener("statechange",c),this._nativeAudioContext.state==="running"?a():this.resume().then(a,i)};this._nativeAudioContext.addEventListener("statechange",c)}):this._nativeAudioContext.resume().catch(a=>{throw a===void 0||a.code===15?e():a})}suspend(){return this._nativeAudioContext.suspend().catch(a=>{throw a===void 0?e():a})}},Ms=(e,t,n,r,o,s)=>class extends n{constructor(i,c){super(i),this._nativeContext=i,gn.set(this,i),r(i)&&o.set(i,new Set),this._destination=new e(this,c),this._listener=t(this,i),this._onstatechange=null}get currentTime(){return this._nativeContext.currentTime}get destination(){return this._destination}get listener(){return this._listener}get onstatechange(){return this._onstatechange}set onstatechange(i){const c=typeof i=="function"?s(this,i):null;this._nativeContext.onstatechange=c;const u=this._nativeContext.onstatechange;this._onstatechange=u!==null&&u===c?i:u}get sampleRate(){return this._nativeContext.sampleRate}get state(){return this._nativeContext.state}},Kt=e=>{const t=new Uint32Array([1179011410,40,1163280727,544501094,16,131073,44100,176400,1048580,1635017060,4,0]);try{const n=e.decodeAudioData(t.buffer,()=>{});return n===void 0?!1:(n.catch(()=>{}),!0)}catch{}return!1},Ns=(e,t)=>(n,r,o)=>{const s=new Set;return n.connect=(a=>(i,c=0,u=0)=>{const d=s.size===0;if(t(i))return a.call(n,i,c,u),e(s,[i,c,u],l=>l[0]===i&&l[1]===c&&l[2]===u,!0),d&&r(),i;a.call(n,i,c),e(s,[i,c],l=>l[0]===i&&l[1]===c,!0),d&&r()})(n.connect),n.disconnect=(a=>(i,c,u)=>{const d=s.size>0;if(i===void 0)a.apply(n),s.clear();else if(typeof i=="number"){a.call(n,i);for(const w of s)w[1]===i&&s.delete(w)}else{t(i)?a.call(n,i,c,u):a.call(n,i,c);for(const w of s)w[0]===i&&(c===void 0||w[1]===c)&&(u===void 0||w[2]===u)&&s.delete(w)}const l=s.size===0;d&&l&&o()})(n.disconnect),n},se=(e,t,n)=>{const r=t[n];r!==void 0&&r!==e[n]&&(e[n]=r)},Te=(e,t)=>{se(e,t,"channelCount"),se(e,t,"channelCountMode"),se(e,t,"channelInterpretation")},Os=e=>e===null?null:e.hasOwnProperty("AudioBuffer")?e.AudioBuffer:null,vt=(e,t,n)=>{const r=t[n];r!==void 0&&r!==e[n].value&&(e[n].value=r)},Rs=e=>{e.start=(t=>{let 
n=!1;return(r=0,o=0,s)=>{if(n)throw Z();t.call(e,r,o,s),n=!0}})(e.start)},Rn=e=>{e.start=(t=>(n=0,r=0,o)=>{if(typeof o=="number"&&o<0||r<0||n<0)throw new RangeError("The parameters can't be negative.");t.call(e,n,r,o)})(e.start)},In=e=>{e.stop=(t=>(n=0)=>{if(n<0)throw new RangeError("The parameter can't be negative.");t.call(e,n)})(e.stop)},Is=(e,t,n,r,o,s,a,i,c,u,d)=>(l,w)=>{const g=l.createBufferSource();return Te(g,w),vt(g,w,"playbackRate"),se(g,w,"buffer"),se(g,w,"loop"),se(g,w,"loopEnd"),se(g,w,"loopStart"),t(n,()=>n(l))||Rs(g),t(r,()=>r(l))||c(g),t(o,()=>o(l))||u(g,l),t(s,()=>s(l))||Rn(g),t(a,()=>a(l))||d(g,l),t(i,()=>i(l))||In(g),e(l,g),g},Ss=e=>e===null?null:e.hasOwnProperty("AudioContext")?e.AudioContext:e.hasOwnProperty("webkitAudioContext")?e.webkitAudioContext:null,ks=(e,t)=>(n,r,o)=>{const s=n.destination;if(s.channelCount!==r)try{s.channelCount=r}catch{}o&&s.channelCountMode!=="explicit"&&(s.channelCountMode="explicit"),s.maxChannelCount===0&&Object.defineProperty(s,"maxChannelCount",{value:r});const a=e(n,{channelCount:r,channelCountMode:s.channelCountMode,channelInterpretation:s.channelInterpretation,gain:1});return t(a,"channelCount",i=>()=>i.call(a),i=>c=>{i.call(a,c);try{s.channelCount=c}catch(u){if(c>s.maxChannelCount)throw u}}),t(a,"channelCountMode",i=>()=>i.call(a),i=>c=>{i.call(a,c),s.channelCountMode=c}),t(a,"channelInterpretation",i=>()=>i.call(a),i=>c=>{i.call(a,c),s.channelInterpretation=c}),Object.defineProperty(a,"maxChannelCount",{get:()=>s.maxChannelCount}),a.connect(s),a},Ls=e=>e===null?null:e.hasOwnProperty("AudioWorkletNode")?e.AudioWorkletNode:null,Ps=e=>{const{port1:t}=new MessageChannel;try{t.postMessage(e)}finally{t.close()}},Bs=(e,t,n,r,o)=>(s,a,i,c,u,d)=>{if(i!==null)try{const l=new i(s,c,d),w=new Map;let g=null;if(Object.defineProperties(l,{channelCount:{get:()=>d.channelCount,set:()=>{throw e()}},channelCountMode:{get:()=>"explicit",set:()=>{throw e()}},onprocessorerror:{get:()=>g,set:h=>{typeof g=="function"&&l.removeEventListener("processorerror",g),g=typeof h=="function"?h:null,typeof g=="function"&&l.addEventListener("processorerror",g)}}}),l.addEventListener=(h=>(...f)=>{if(f[0]==="processorerror"){const m=typeof f[1]=="function"?f[1]:typeof f[1]=="object"&&f[1]!==null&&typeof f[1].handleEvent=="function"?f[1].handleEvent:null;if(m!==null){const p=w.get(f[1]);p!==void 0?f[1]=p:(f[1]=b=>{b.type==="error"?(Object.defineProperties(b,{type:{value:"processorerror"}}),m(b)):m(new ErrorEvent(f[0],{...b}))},w.set(m,f[1]))}}return h.call(l,"error",f[1],f[2]),h.call(l,...f)})(l.addEventListener),l.removeEventListener=(h=>(...f)=>{if(f[0]==="processorerror"){const m=w.get(f[1]);m!==void 0&&(w.delete(f[1]),f[1]=m)}return h.call(l,"error",f[1],f[2]),h.call(l,f[0],f[1],f[2])})(l.removeEventListener),d.numberOfOutputs!==0){const h=n(s,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"discrete",gain:0});return l.connect(h).connect(s.destination),o(l,()=>h.disconnect(),()=>h.connect(s.destination))}return l}catch(l){throw l.code===11?r():l}if(u===void 0)throw r();return Ps(d),t(s,a,u,d)},Us=(e,t)=>e===null?512:Math.max(512,Math.min(16384,Math.pow(2,Math.round(Math.log2(e*t))))),xs=e=>new Promise((t,n)=>{const{port1:r,port2:o}=new MessageChannel;r.onmessage=({data:s})=>{r.close(),o.close(),t(s)},r.onmessageerror=({data:s})=>{r.close(),o.close(),n(s)},o.postMessage(e)}),Ds=async(e,t)=>{const n=await xs(t);return new e(n)},Ws=(e,t,n,r)=>{let o=at.get(e);o===void 0&&(o=new WeakMap,at.set(e,o));const s=Ds(n,r);return 
o.set(t,s),s},Vs=(e,t,n,r,o,s,a,i,c,u,d,l,w)=>(g,h,f,m)=>{if(m.numberOfInputs===0&&m.numberOfOutputs===0)throw c();const p=Array.isArray(m.outputChannelCount)?m.outputChannelCount:Array.from(m.outputChannelCount);if(p.some(C=>C<1))throw c();if(p.length!==m.numberOfOutputs)throw t();if(m.channelCountMode!=="explicit")throw c();const b=m.channelCount*m.numberOfInputs,_=p.reduce((C,R)=>C+R,0),T=f.parameterDescriptors===void 0?0:f.parameterDescriptors.length;if(b+T>6||_>6)throw c();const y=new MessageChannel,A=[],E=[];for(let C=0;CC===void 0?0:C},maxValue:{get:()=>R===void 0?mt:R},minValue:{get:()=>q===void 0?je:q}}),v.push(W)}const N=r(g,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"speakers",numberOfInputs:Math.max(1,b+T)}),I=Us(h,g.sampleRate),M=i(g,I,b+T,Math.max(1,_)),U=o(g,{channelCount:Math.max(1,_),channelCountMode:"explicit",channelInterpretation:"discrete",numberOfOutputs:Math.max(1,_)}),k=[];for(let C=0;C{const q=v[R];return q.connect(N,0,b+R),q.start(0),[C,q.offset]}));N.connect(M);let B=m.channelInterpretation,S=null;const L=m.numberOfOutputs===0?[M]:k,x={get bufferSize(){return I},get channelCount(){return m.channelCount},set channelCount(C){throw n()},get channelCountMode(){return m.channelCountMode},set channelCountMode(C){throw n()},get channelInterpretation(){return B},set channelInterpretation(C){for(const R of A)R.channelInterpretation=C;B=C},get context(){return M.context},get inputs(){return A},get numberOfInputs(){return m.numberOfInputs},get numberOfOutputs(){return m.numberOfOutputs},get onprocessorerror(){return S},set onprocessorerror(C){typeof S=="function"&&x.removeEventListener("processorerror",S),S=typeof C=="function"?C:null,typeof S=="function"&&x.addEventListener("processorerror",S)},get parameters(){return D},get port(){return y.port2},addEventListener(...C){return M.addEventListener(C[0],C[1],C[2])},connect:e.bind(null,L),disconnect:u.bind(null,L),dispatchEvent(...C){return M.dispatchEvent(C[0])},removeEventListener(...C){return M.removeEventListener(C[0],C[1],C[2])}},O=new Map;y.port1.addEventListener=(C=>(...R)=>{if(R[0]==="message"){const q=typeof R[1]=="function"?R[1]:typeof R[1]=="object"&&R[1]!==null&&typeof R[1].handleEvent=="function"?R[1].handleEvent:null;if(q!==null){const F=O.get(R[1]);F!==void 0?R[1]=F:(R[1]=W=>{d(g.currentTime,g.sampleRate,()=>q(W))},O.set(q,R[1]))}}return C.call(y.port1,R[0],R[1],R[2])})(y.port1.addEventListener),y.port1.removeEventListener=(C=>(...R)=>{if(R[0]==="message"){const q=O.get(R[1]);q!==void 0&&(O.delete(R[1]),R[1]=q)}return C.call(y.port1,R[0],R[1],R[2])})(y.port1.removeEventListener);let P=null;Object.defineProperty(y.port1,"onmessage",{get:()=>P,set:C=>{typeof P=="function"&&y.port1.removeEventListener("message",P),P=typeof C=="function"?C:null,typeof P=="function"&&(y.port1.addEventListener("message",P),y.port1.start())}}),f.prototype.port=y.port1;let V=null;Ws(g,x,f,m).then(C=>V=C);const fe=De(m.numberOfInputs,m.channelCount),he=De(m.numberOfOutputs,p),pe=f.parameterDescriptors===void 0?[]:f.parameterDescriptors.reduce((C,{name:R})=>({...C,[R]:new Float32Array(128)}),{});let j=!0;const H=()=>{m.numberOfOutputs>0&&M.disconnect(U);for(let C=0,R=0;C{if(V!==null){const q=l(x);for(let F=0;F{xe(C,pe,W,b+$,F)});for(let W=0;W{if(q[te].size>0)return Me.set(te,I/128),X;const Ke=Me.get(te);return Ke===void 0?[]:(X.every(rr=>rr.every(or=>or===0))&&(Ke===1?Me.delete(te):Me.set(te,Ke-1)),X)});j=d(g.currentTime+F/g.sampleRate,g.sampleRate,()=>V.process(W,he,pe));for(let 
X=0,te=0;XM.connect(Ze).connect(g.destination),Rt=()=>{M.disconnect(Ze),Ze.disconnect()},tr=()=>{if(j){Rt(),m.numberOfOutputs>0&&M.connect(U);for(let C=0,R=0;C{j&&(Ot(),H()),Ye=!1};return Ot(),w(x,tr,nr)},Fs=(e,t)=>(n,r)=>{const o=n.createChannelMerger(r.numberOfInputs);return e!==null&&e.name==="webkitAudioContext"&&t(n,o),Te(o,r),o},js=e=>{const t=e.numberOfOutputs;Object.defineProperty(e,"channelCount",{get:()=>t,set:n=>{if(n!==t)throw Z()}}),Object.defineProperty(e,"channelCountMode",{get:()=>"explicit",set:n=>{if(n!=="explicit")throw Z()}}),Object.defineProperty(e,"channelInterpretation",{get:()=>"discrete",set:n=>{if(n!=="discrete")throw Z()}})},Sn=(e,t)=>{const n=e.createChannelSplitter(t.numberOfOutputs);return Te(n,t),js(n),n},$s=(e,t,n,r,o)=>(s,a)=>{if(s.createConstantSource===void 0)return n(s,a);const i=s.createConstantSource();return Te(i,a),vt(i,a,"offset"),t(r,()=>r(s))||Rn(i),t(o,()=>o(s))||In(i),e(s,i),i},kn=(e,t)=>(e.connect=t.connect.bind(t),e.disconnect=t.disconnect.bind(t),e),Gs=(e,t,n,r)=>(o,{offset:s,...a})=>{const i=o.createBuffer(1,2,44100),c=t(o,{buffer:null,channelCount:2,channelCountMode:"max",channelInterpretation:"speakers",loop:!1,loopEnd:0,loopStart:0,playbackRate:1}),u=n(o,{...a,gain:s}),d=i.getChannelData(0);d[0]=1,d[1]=1,c.buffer=i,c.loop=!0;const l={get bufferSize(){},get channelCount(){return u.channelCount},set channelCount(h){u.channelCount=h},get channelCountMode(){return u.channelCountMode},set channelCountMode(h){u.channelCountMode=h},get channelInterpretation(){return u.channelInterpretation},set channelInterpretation(h){u.channelInterpretation=h},get context(){return u.context},get inputs(){return[]},get numberOfInputs(){return c.numberOfInputs},get numberOfOutputs(){return u.numberOfOutputs},get offset(){return u.gain},get onended(){return c.onended},set onended(h){c.onended=h},addEventListener(...h){return c.addEventListener(h[0],h[1],h[2])},dispatchEvent(...h){return c.dispatchEvent(h[0])},removeEventListener(...h){return c.removeEventListener(h[0],h[1],h[2])},start(h=0){c.start.call(c,h)},stop(h=0){c.stop.call(c,h)}},w=()=>c.connect(u),g=()=>c.disconnect(u);return e(o,c),r(kn(l,u),w,g)},oe=(e,t)=>{const n=e.createGain();return Te(n,t),vt(n,t,"gain"),n},qs=(e,{mediaStream:t})=>{const n=t.getAudioTracks();n.sort((s,a)=>s.ida.id?1:0);const r=n.slice(0,1),o=e.createMediaStreamSource(new MediaStream(r));return Object.defineProperty(o,"mediaStream",{value:t}),o},zs=e=>e===null?null:e.hasOwnProperty("OfflineAudioContext")?e.OfflineAudioContext:e.hasOwnProperty("webkitOfflineAudioContext")?e.webkitOfflineAudioContext:null,_t=(e,t,n,r)=>e.createScriptProcessor(t,n,r),de=()=>new DOMException("","NotSupportedError"),Hs=(e,t)=>(n,r,o)=>(e(r).replay(o),t(r,n,o)),Xs=(e,t,n)=>async(r,o,s)=>{const a=e(r);await Promise.all(a.activeInputs.map((i,c)=>Array.from(i).map(async([u,d])=>{const w=await t(u).render(u,o),g=r.context.destination;!n(u)&&(r!==g||!n(r))&&w.connect(s,d,c)})).reduce((i,c)=>[...i,...c],[]))},Ys=(e,t,n)=>async(r,o,s)=>{const a=t(r);await Promise.all(Array.from(a.activeInputs).map(async([i,c])=>{const d=await e(i).render(i,o);n(i)||d.connect(s,c)}))},Zs=(e,t,n,r)=>o=>e(Kt,()=>Kt(o))?Promise.resolve(e(r,r)).then(s=>{if(!s){const a=n(o,512,0,1);o.oncomplete=()=>{a.onaudioprocess=null,a.disconnect()},a.onaudioprocess=()=>o.currentTime,a.connect(o.destination)}return o.startRendering()}):new Promise(s=>{const 
a=t(o,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"discrete",gain:0});o.oncomplete=i=>{a.disconnect(),s(i.renderedBuffer)},a.connect(o.destination),o.startRendering()}),Ks=e=>(t,n)=>{e.set(t,n)},Qs=e=>()=>{if(e===null)return!1;try{new e({length:1,sampleRate:44100})}catch{return!1}return!0},Js=(e,t)=>async()=>{if(e===null)return!0;if(t===null)return!1;const n=new Blob(['class A extends AudioWorkletProcessor{process(i){this.port.postMessage(i,[i[0][0].buffer])}}registerProcessor("a",A)'],{type:"application/javascript; charset=utf-8"}),r=new t(1,128,44100),o=URL.createObjectURL(n);let s=!1,a=!1;try{await r.audioWorklet.addModule(o);const i=new e(r,"a",{numberOfOutputs:0}),c=r.createOscillator();i.port.onmessage=()=>s=!0,i.onprocessorerror=()=>a=!0,c.connect(i),c.start(0),await r.startRendering(),await new Promise(u=>setTimeout(u))}catch{}finally{URL.revokeObjectURL(o)}return s&&!a},ea=(e,t)=>()=>{if(t===null)return Promise.resolve(!1);const n=new t(1,1,44100),r=e(n,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"discrete",gain:0});return new Promise(o=>{n.oncomplete=()=>{r.disconnect(),o(n.currentTime!==0)},n.startRendering()})},ta=()=>new DOMException("","UnknownError"),na=()=>typeof window>"u"?null:window,ra=(e,t)=>n=>{n.copyFromChannel=(r,o,s=0)=>{const a=e(s),i=e(o);if(i>=n.numberOfChannels)throw t();const c=n.length,u=n.getChannelData(i),d=r.length;for(let l=a<0?-a:0;l+a{const a=e(s),i=e(o);if(i>=n.numberOfChannels)throw t();const c=n.length,u=n.getChannelData(i),d=r.length;for(let l=a<0?-a:0;l+at=>{t.copyFromChannel=(n=>(r,o,s=0)=>{const a=e(s),i=e(o);if(a(r,o,s=0)=>{const a=e(s),i=e(o);if(a(t,n)=>{const r=n.createBuffer(1,1,44100);t.buffer===null&&(t.buffer=r),e(t,"buffer",o=>()=>{const s=o.call(t);return s===r?null:s},o=>s=>o.call(t,s===null?r:s))},aa=(e,t)=>(n,r)=>{r.channelCount=1,r.channelCountMode="explicit",Object.defineProperty(r,"channelCount",{get:()=>1,set:()=>{throw e()}}),Object.defineProperty(r,"channelCountMode",{get:()=>"explicit",set:()=>{throw e()}});const o=n.createBufferSource();t(r,()=>{const i=r.numberOfInputs;for(let c=0;co.disconnect(r))},ia=(e,t,n)=>e.copyFromChannel===void 0?e.getChannelData(n)[0]:(e.copyFromChannel(t,n),t[0]),Et=(e,t,n,r)=>{let o=e;for(;!o.hasOwnProperty(t);)o=Object.getPrototypeOf(o);const{get:s,set:a}=Object.getOwnPropertyDescriptor(o,t);Object.defineProperty(e,t,{get:n(s),set:r(a)})},ca=e=>({...e,outputChannelCount:e.outputChannelCount!==void 0?e.outputChannelCount:e.numberOfInputs===1&&e.numberOfOutputs===1?[e.channelCount]:Array.from({length:e.numberOfOutputs},()=>1)}),Ln=(e,t,n)=>{try{e.setValueAtTime(t,n)}catch(r){if(r.code!==9)throw r;Ln(e,t,n+1e-7)}},ua=e=>{const t=e.createBufferSource();t.start();try{t.start()}catch{return!0}return!1},la=e=>{const t=e.createBufferSource(),n=e.createBuffer(1,1,44100);t.buffer=n;try{t.start(0,1)}catch{return!1}return!0},da=e=>{const t=e.createBufferSource();t.start();try{t.stop()}catch{return!1}return!0},Pn=e=>{const t=e.createOscillator();try{t.start(-1)}catch(n){return n instanceof RangeError}return!1},fa=e=>{const t=e.createBuffer(1,1,44100),n=e.createBufferSource();n.buffer=t,n.start(),n.stop();try{return n.stop(),!0}catch{return!1}},Bn=e=>{const t=e.createOscillator();try{t.stop(-1)}catch(n){return n instanceof RangeError}return!1},ha=e=>{const{port1:t,port2:n}=new MessageChannel;try{t.postMessage(e)}finally{t.close(),n.close()}},pa=e=>{e.start=(t=>(n=0,r=0,o)=>{const 
s=e.buffer,a=s===null?r:Math.min(s.duration,r);s!==null&&a>s.duration-.5/e.context.sampleRate?t.call(e,n,0,0):t.call(e,n,a,o)})(e.start)},ma=(e,t)=>{const n=t.createGain();e.connect(n);const r=(o=>()=>{o.call(e,n),e.removeEventListener("ended",r)})(e.disconnect);e.addEventListener("ended",r),kn(e,n),e.stop=(o=>{let s=!1;return(a=0)=>{if(s)try{o.call(e,a)}catch{n.gain.setValueAtTime(0,a)}else o.call(e,a),s=!0}})(e.stop)},$e=(e,t)=>n=>{const r={value:e};return Object.defineProperties(n,{currentTarget:r,target:r}),typeof t=="function"?t.call(e,n):t.handleEvent.call(e,n)},ga=ao(le),wa=ho(le),va=es(Fe),_a=new WeakMap,Ea=ds(_a),we=Yo(new Map,new WeakMap),Q=na(),Un=ls(z),yt=Xs(z,Un,ie),ce=ps(gn),ve=zs(Q),ee=As(ve),xn=new WeakMap,Dn=as($e),Ge=Ss(Q),ya=_s(Ge),Wn=Es(Q),Aa=ys(Q),ye=Ls(Q),qe=Fo(io(fn),fo(ga,wa,ut,va,lt,z,Ea,Ae,Y,le,ae,ie,Ie),we,vs(ot,lt,z,Y,Ee,ae),ue,ws,de,Jo(ut,ot,z,Y,Ee,ce,ae,ee),rs(xn,z,K),Dn,ce,ya,Wn,Aa,ee,ye),ba=new WeakSet,Qt=Os(Q),Vn=Qo(new Uint32Array(1)),Ca=ra(Vn,ue),Ta=oa(Vn),Ma=vo(ba,we,de,Qt,ve,Qs(Qt),Ca,Ta),At=po(oe),Fn=Ys(Un,be,ie),jn=Zo(Fn),ze=Is(At,we,ua,la,da,Pn,fa,Bn,pa,sa(Et),ma),$n=Hs(fs(be),Fn),Na=yo(jn,ze,Y,$n,yt),bt=jo(co(pn),xn,mn,$o,eo,to,no,ro,oo,tt,ln,Ge,Ln),Oa=Eo(qe,Na,bt,Z,ze,ce,ee,$e),Ra=Io(qe,So,ue,Z,ks(oe,Et),ce,ee,yt),He=Ns(le,Wn),Ia=aa(Z,He),Ct=Fs(Ge,Ia),Sa=Gs(At,ze,oe,He),Tt=$s(At,we,Sa,Pn,Bn),ka=Zs(we,oe,_t,ea(oe,ve)),La=ko(bt,Ct,Tt,_t,de,ia,ee,Et),Gn=new WeakMap,Pa=Ms(Ra,La,Dn,ee,Gn,$e),qn=bs(Q),Mt=is(Q),zn=new WeakMap,Ba=ms(zn,ve),Jt=qn?lo(we,de,ss(Q),Mt,cs(so),ce,Ba,ee,ye,new WeakMap,new WeakMap,Js(ye,ve),Q):void 0,Ua=Cs(qe,qs,ce,ee),Hn=gs(Gn),xa=mo(Hn),Xn=Ko(ue),Da=ts(Hn),Yn=os(ue),Zn=new WeakMap,Wa=us(Zn,K),Va=Vs(Xn,ue,Z,Ct,Sn,Tt,oe,_t,de,Yn,Mt,Wa,He),Fa=Bs(Z,Va,oe,de,He),ja=Xo(jn,Xn,ze,Ct,Sn,Tt,oe,Da,Yn,Mt,Y,ye,ve,$n,yt,ka),$a=hs(zn),Ga=Ks(Zn),en=qn?qo(xa,qe,bt,ja,Fa,z,$a,ce,ee,ye,ca,Ga,ha,$e):void 0,qa=Ts(Z,de,ta,Pa,Ge),Kn="Missing AudioWorklet support. 
Maybe this is not running in a secure context.",za=async(e,t,n,r,o)=>{const{encoderId:s,port:a}=await sn(o,t.sampleRate);if(en===void 0)throw new Error(Kn);const i=new Oa(t,{buffer:e}),c=new Ua(t,{mediaStream:r}),u=Zr(en,t,{channelCount:n});return{audioBufferSourceNode:i,encoderId:s,mediaStreamAudioSourceNode:c,port:a,recorderAudioWorkletNode:u}},Ha=(e,t,n,r)=>(o,s,a)=>{var i;const c=(i=s.getAudioTracks()[0])===null||i===void 0?void 0:i.getSettings().sampleRate,u=new qa({latencyHint:"playback",sampleRate:c}),d=Math.max(1024,Math.ceil(u.baseLatency*u.sampleRate)),l=new Ma({length:d,sampleRate:u.sampleRate}),w=[],g=Yr(v=>{if(Jt===void 0)throw new Error(Kn);return Jt(u,v)});let h=null,f=null,m=null,p=null,b=!0;const _=v=>{o.dispatchEvent(e("dataavailable",{data:new Blob(v,{type:a})}))},T=async(v,N)=>{const I=await Se(v,N);m===null?w.push(...I):(_(I),p=T(v,N))},y=()=>(b=!0,u.resume()),A=()=>{m!==null&&(h!==null&&(s.removeEventListener("addtrack",h),s.removeEventListener("removetrack",h)),f!==null&&clearTimeout(f),m.then(async({encoderId:v,mediaStreamAudioSourceNode:N,recorderAudioWorkletNode:I})=>{p!==null&&(p.catch(()=>{}),p=null),await I.stop(),N.disconnect(I);const M=await Se(v,null);m===null&&await E(),_([...w,...M]),w.length=0,o.dispatchEvent(new Event("stop"))}),m=null)},E=()=>(b=!1,u.suspend());return E(),{get mimeType(){return a},get state(){return m===null?"inactive":b?"recording":"paused"},pause(){if(m===null)throw n();b&&(E(),o.dispatchEvent(new Event("pause")))},resume(){if(m===null)throw n();b||(y(),o.dispatchEvent(new Event("resume")))},start(v){var N;if(m!==null)throw n();if(s.getVideoTracks().length>0)throw r();o.dispatchEvent(new Event("start"));const I=s.getAudioTracks(),M=I.length===0?2:(N=I[0].getSettings().channelCount)!==null&&N!==void 0?N:2;m=Promise.all([y(),g.then(()=>za(l,u,M,s,a))]).then(async([,{audioBufferSourceNode:k,encoderId:D,mediaStreamAudioSourceNode:B,port:S,recorderAudioWorkletNode:L}])=>(B.connect(L),await new Promise(x=>{k.onended=x,k.connect(L),k.start(u.currentTime+d/u.sampleRate)}),k.disconnect(L),await L.record(S),v!==void 0&&(p=T(D,v)),{encoderId:D,mediaStreamAudioSourceNode:B,recorderAudioWorkletNode:L}));const U=s.getTracks();h=()=>{A(),o.dispatchEvent(new ErrorEvent("error",{error:t()}))},s.addEventListener("addtrack",h),s.addEventListener("removetrack",h),f=setInterval(()=>{const k=s.getTracks();(k.length!==U.length||k.some((D,B)=>D!==U[B]))&&h!==null&&h()},1e3)},stop:A}};class et{constructor(t,n=0,r){if(n<0||r!==void 0&&r<0)throw new RangeError;const o=t.reduce((d,l)=>d+l.byteLength,0);if(n>o||r!==void 0&&n+r>o)throw new RangeError;const s=[],a=r===void 0?o-n:r,i=[];let c=0,u=n;for(const d of t)if(i.length===0)if(d.byteLength>u){c=d.byteLength-u;const l=c>a?a:c;s.push(new DataView(d,u,l)),i.push(d)}else u-=d.byteLength;else if(ca?d.byteLength-c+a:d.byteLength;s.push(new DataView(d,0,l)),i.push(d)}this._buffers=i,this._byteLength=a,this._byteOffset=u,this._dataViews=s,this._internalBuffer=new DataView(new ArrayBuffer(8))}get buffers(){return this._buffers}get byteLength(){return this._byteLength}get byteOffset(){return this._byteOffset}getFloat32(t,n){return this._internalBuffer.setUint8(0,this.getUint8(t+0)),this._internalBuffer.setUint8(1,this.getUint8(t+1)),this._internalBuffer.setUint8(2,this.getUint8(t+2)),this._internalBuffer.setUint8(3,this.getUint8(t+3)),this._internalBuffer.getFloat32(0,n)}getFloat64(t,n){return 
this._internalBuffer.setUint8(0,this.getUint8(t+0)),this._internalBuffer.setUint8(1,this.getUint8(t+1)),this._internalBuffer.setUint8(2,this.getUint8(t+2)),this._internalBuffer.setUint8(3,this.getUint8(t+3)),this._internalBuffer.setUint8(4,this.getUint8(t+4)),this._internalBuffer.setUint8(5,this.getUint8(t+5)),this._internalBuffer.setUint8(6,this.getUint8(t+6)),this._internalBuffer.setUint8(7,this.getUint8(t+7)),this._internalBuffer.getFloat64(0,n)}getInt16(t,n){return this._internalBuffer.setUint8(0,this.getUint8(t+0)),this._internalBuffer.setUint8(1,this.getUint8(t+1)),this._internalBuffer.getInt16(0,n)}getInt32(t,n){return this._internalBuffer.setUint8(0,this.getUint8(t+0)),this._internalBuffer.setUint8(1,this.getUint8(t+1)),this._internalBuffer.setUint8(2,this.getUint8(t+2)),this._internalBuffer.setUint8(3,this.getUint8(t+3)),this._internalBuffer.getInt32(0,n)}getInt8(t){const[n,r]=this._findDataViewWithOffset(t);return n.getInt8(t-r)}getUint16(t,n){return this._internalBuffer.setUint8(0,this.getUint8(t+0)),this._internalBuffer.setUint8(1,this.getUint8(t+1)),this._internalBuffer.getUint16(0,n)}getUint32(t,n){return this._internalBuffer.setUint8(0,this.getUint8(t+0)),this._internalBuffer.setUint8(1,this.getUint8(t+1)),this._internalBuffer.setUint8(2,this.getUint8(t+2)),this._internalBuffer.setUint8(3,this.getUint8(t+3)),this._internalBuffer.getUint32(0,n)}getUint8(t){const[n,r]=this._findDataViewWithOffset(t);return n.getUint8(t-r)}setFloat32(t,n,r){this._internalBuffer.setFloat32(0,n,r),this.setUint8(t,this._internalBuffer.getUint8(0)),this.setUint8(t+1,this._internalBuffer.getUint8(1)),this.setUint8(t+2,this._internalBuffer.getUint8(2)),this.setUint8(t+3,this._internalBuffer.getUint8(3))}setFloat64(t,n,r){this._internalBuffer.setFloat64(0,n,r),this.setUint8(t,this._internalBuffer.getUint8(0)),this.setUint8(t+1,this._internalBuffer.getUint8(1)),this.setUint8(t+2,this._internalBuffer.getUint8(2)),this.setUint8(t+3,this._internalBuffer.getUint8(3)),this.setUint8(t+4,this._internalBuffer.getUint8(4)),this.setUint8(t+5,this._internalBuffer.getUint8(5)),this.setUint8(t+6,this._internalBuffer.getUint8(6)),this.setUint8(t+7,this._internalBuffer.getUint8(7))}setInt16(t,n,r){this._internalBuffer.setInt16(0,n,r),this.setUint8(t,this._internalBuffer.getUint8(0)),this.setUint8(t+1,this._internalBuffer.getUint8(1))}setInt32(t,n,r){this._internalBuffer.setInt32(0,n,r),this.setUint8(t,this._internalBuffer.getUint8(0)),this.setUint8(t+1,this._internalBuffer.getUint8(1)),this.setUint8(t+2,this._internalBuffer.getUint8(2)),this.setUint8(t+3,this._internalBuffer.getUint8(3))}setInt8(t,n){const[r,o]=this._findDataViewWithOffset(t);r.setInt8(t-o,n)}setUint16(t,n,r){this._internalBuffer.setUint16(0,n,r),this.setUint8(t,this._internalBuffer.getUint8(0)),this.setUint8(t+1,this._internalBuffer.getUint8(1))}setUint32(t,n,r){this._internalBuffer.setUint32(0,n,r),this.setUint8(t,this._internalBuffer.getUint8(0)),this.setUint8(t+1,this._internalBuffer.getUint8(1)),this.setUint8(t+2,this._internalBuffer.getUint8(2)),this.setUint8(t+3,this._internalBuffer.getUint8(3))}setUint8(t,n){const[r,o]=this._findDataViewWithOffset(t);r.setUint8(t-o,n)}_findDataViewWithOffset(t){let n=0;for(const r of this._dataViews){const o=n+r.byteLength;if(t>=n&&t(r,o,s,a)=>{const i=[],c=new o(s,{mimeType:"audio/webm;codecs=pcm"});let u=null,d=()=>{};const l=h=>{r.dispatchEvent(e("dataavailable",{data:new Blob(h,{type:a})}))},w=async(h,f)=>{const m=await 
Se(h,f);c.state==="inactive"?i.push(...m):(l(m),u=w(h,f))},g=()=>{c.state!=="inactive"&&(u!==null&&(u.catch(()=>{}),u=null),d(),d=()=>{},c.stop())};return c.addEventListener("error",h=>{g(),r.dispatchEvent(new ErrorEvent("error",{error:h.error}))}),c.addEventListener("pause",()=>r.dispatchEvent(new Event("pause"))),c.addEventListener("resume",()=>r.dispatchEvent(new Event("resume"))),c.addEventListener("start",()=>r.dispatchEvent(new Event("start"))),{get mimeType(){return a},get state(){return c.state},pause(){return c.pause()},resume(){return c.resume()},start(h){const[f]=s.getAudioTracks();if(f!==void 0&&c.state==="inactive"){const{channelCount:m,sampleRate:p}=f.getSettings();if(m===void 0)throw new Error("The channelCount is not defined.");if(p===void 0)throw new Error("The sampleRate is not defined.");let b=!1,_=!1,T=0,y=sn(a,p);d=()=>{_=!0};const A=cn(c,"dataavailable")(({data:E})=>{T+=1,y=y.then(async({dataView:v=null,elementType:N=null,encoderId:I,port:M})=>{const U=await E.arrayBuffer();T-=1;const k=v===null?new et([U]):new et([...v.buffers,U],v.byteOffset);if(!b&&c.state==="recording"&&!_){const x=n(k,0);if(x===null)return{dataView:k,elementType:N,encoderId:I,port:M};const{value:O}=x;if(O!==172351395)return{dataView:v,elementType:N,encoderId:I,port:M};b=!0}const{currentElementType:D,offset:B,contents:S}=t(k,N,m),L=BM.postMessage(x,x.map(({buffer:O})=>O))),T===0&&(c.state==="inactive"||_)&&(Se(I,null).then(x=>{l([...i,...x]),i.length=0,r.dispatchEvent(new Event("stop"))}),M.postMessage([]),M.close(),A()),{dataView:L,elementType:D,encoderId:I,port:M}})});h!==void 0&&y.then(({encoderId:E})=>u=w(E,h))}c.start(100)},stop:g}},Ya=()=>typeof window>"u"?null:window,Qn=(e,t)=>{if(t>=e.byteLength)return null;const n=e.getUint8(t);if(n>127)return 1;if(n>63)return 2;if(n>31)return 3;if(n>15)return 4;if(n>7)return 5;if(n>3)return 6;if(n>1)return 7;if(n>0)return 8;const r=Qn(e,t+1);return r===null?null:r+8},Za=(e,t)=>n=>{const r={value:e};return Object.defineProperties(n,{currentTarget:r,target:r}),typeof t=="function"?t.call(e,n):t.handleEvent.call(e,n)},Jn=[],Xe=Ya(),Ka=Nr(Xe),er=_r(Ka),Qa=Ha(er,br,Cr,an),Nt=kr(Qn),Ja=Ir(Nt),ei=Sr(Nt),ti=Er(Ja,ei),ni=Xa(er,ti,Nt),ri=Ar(Xe),oi=Rr(Xe),_i=Mr(Or,an,Qa,ni,Jn,yr(ri,Za),oi),Ei=()=>Tr(Xe),yi=async e=>{Jn.push(await vr(e))};export{_i as MediaRecorder,Ei as isSupported,yi as register}; -//# sourceMappingURL=module-260a78dd.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_mathtext.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_mathtext.py deleted file mode 100644 index b23cb67116ed4b593d14254a523318bc984120e5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_mathtext.py +++ /dev/null @@ -1,2851 +0,0 @@ -""" -Implementation details for :mod:`.mathtext`. -""" - -from __future__ import annotations - -import abc -import copy -import enum -import functools -import logging -import os -import re -import types -import unicodedata -import string -import typing as T -from typing import NamedTuple - -import numpy as np -from pyparsing import ( - Empty, Forward, Literal, NotAny, oneOf, OneOrMore, Optional, - ParseBaseException, ParseException, ParseExpression, ParseFatalException, - ParserElement, ParseResults, QuotedString, Regex, StringEnd, ZeroOrMore, - pyparsing_common, Group) - -import matplotlib as mpl -from . 
import cbook -from ._mathtext_data import ( - latex_to_bakoma, stix_glyph_fixes, stix_virtual_fonts, tex2uni) -from .font_manager import FontProperties, findfont, get_font -from .ft2font import FT2Font, FT2Image, KERNING_DEFAULT - -from packaging.version import parse as parse_version -from pyparsing import __version__ as pyparsing_version -if parse_version(pyparsing_version).major < 3: - from pyparsing import nestedExpr as nested_expr -else: - from pyparsing import nested_expr - -if T.TYPE_CHECKING: - from collections.abc import Iterable - from .ft2font import Glyph - -ParserElement.enablePackrat() -_log = logging.getLogger("matplotlib.mathtext") - - -############################################################################## -# FONTS - - -def get_unicode_index(symbol: str) -> int: # Publicly exported. - r""" - Return the integer index (from the Unicode table) of *symbol*. - - Parameters - ---------- - symbol : str - A single (Unicode) character, a TeX command (e.g. r'\pi') or a Type1 - symbol name (e.g. 'phi'). - """ - try: # This will succeed if symbol is a single Unicode char - return ord(symbol) - except TypeError: - pass - try: # Is symbol a TeX symbol (i.e. \alpha) - return tex2uni[symbol.strip("\\")] - except KeyError as err: - raise ValueError( - f"{symbol!r} is not a valid Unicode character or TeX/Type1 symbol" - ) from err - - -class VectorParse(NamedTuple): - """ - The namedtuple type returned by ``MathTextParser("path").parse(...)``. - - Attributes - ---------- - width, height, depth : float - The global metrics. - glyphs : list - The glyphs including their positions. - rect : list - The list of rectangles. - """ - width: float - height: float - depth: float - glyphs: list[tuple[FT2Font, float, int, float, float]] - rects: list[tuple[float, float, float, float]] - -VectorParse.__module__ = "matplotlib.mathtext" - - -class RasterParse(NamedTuple): - """ - The namedtuple type returned by ``MathTextParser("agg").parse(...)``. - - Attributes - ---------- - ox, oy : float - The offsets are always zero. - width, height, depth : float - The global metrics. - image : FT2Image - A raster image. - """ - ox: float - oy: float - width: float - height: float - depth: float - image: FT2Image - -RasterParse.__module__ = "matplotlib.mathtext" - - -class Output: - r""" - Result of `ship`\ping a box: lists of positioned glyphs and rectangles. - - This class is not exposed to end users, but converted to a `VectorParse` or - a `RasterParse` by `.MathTextParser.parse`. - """ - - def __init__(self, box: Box): - self.box = box - self.glyphs: list[tuple[float, float, FontInfo]] = [] # (ox, oy, info) - self.rects: list[tuple[float, float, float, float]] = [] # (x1, y1, x2, y2) - - def to_vector(self) -> VectorParse: - w, h, d = map( - np.ceil, [self.box.width, self.box.height, self.box.depth]) - gs = [(info.font, info.fontsize, info.num, ox, h - oy + info.offset) - for ox, oy, info in self.glyphs] - rs = [(x1, h - y2, x2 - x1, y2 - y1) - for x1, y1, x2, y2 in self.rects] - return VectorParse(w, h + d, d, gs, rs) - - def to_raster(self, *, antialiased: bool) -> RasterParse: - # Metrics y's and mathtext y's are oriented in opposite directions, - # hence the switch between ymin and ymax. 
- xmin = min([*[ox + info.metrics.xmin for ox, oy, info in self.glyphs], - *[x1 for x1, y1, x2, y2 in self.rects], 0]) - 1 - ymin = min([*[oy - info.metrics.ymax for ox, oy, info in self.glyphs], - *[y1 for x1, y1, x2, y2 in self.rects], 0]) - 1 - xmax = max([*[ox + info.metrics.xmax for ox, oy, info in self.glyphs], - *[x2 for x1, y1, x2, y2 in self.rects], 0]) + 1 - ymax = max([*[oy - info.metrics.ymin for ox, oy, info in self.glyphs], - *[y2 for x1, y1, x2, y2 in self.rects], 0]) + 1 - w = xmax - xmin - h = ymax - ymin - self.box.depth - d = ymax - ymin - self.box.height - image = FT2Image(np.ceil(w), np.ceil(h + max(d, 0))) - - # Ideally, we could just use self.glyphs and self.rects here, shifting - # their coordinates by (-xmin, -ymin), but this yields slightly - # different results due to floating point slop; shipping twice is the - # old approach and keeps baseline images backcompat. - shifted = ship(self.box, (-xmin, -ymin)) - - for ox, oy, info in shifted.glyphs: - info.font.draw_glyph_to_bitmap( - image, ox, oy - info.metrics.iceberg, info.glyph, - antialiased=antialiased) - for x1, y1, x2, y2 in shifted.rects: - height = max(int(y2 - y1) - 1, 0) - if height == 0: - center = (y2 + y1) / 2 - y = int(center - (height + 1) / 2) - else: - y = int(y1) - image.draw_rect_filled(int(x1), y, np.ceil(x2), y + height) - return RasterParse(0, 0, w, h + d, d, image) - - -class FontMetrics(NamedTuple): - """ - Metrics of a font. - - Attributes - ---------- - advance : float - The advance distance (in points) of the glyph. - height : float - The height of the glyph in points. - width : float - The width of the glyph in points. - xmin, xmax, ymin, ymax : float - The ink rectangle of the glyph. - iceberg : float - The distance from the baseline to the top of the glyph. (This corresponds to - TeX's definition of "height".) - slanted : bool - Whether the glyph should be considered as "slanted" (currently used for kerning - sub/superscripts). - """ - advance: float - height: float - width: float - xmin: float - xmax: float - ymin: float - ymax: float - iceberg: float - slanted: bool - - -class FontInfo(NamedTuple): - font: FT2Font - fontsize: float - postscript_name: str - metrics: FontMetrics - num: int - glyph: Glyph - offset: float - - -class Fonts(abc.ABC): - """ - An abstract base class for a system of fonts to use for mathtext. - - The class must be able to take symbol keys and font file names and - return the character metrics. It also delegates to a backend class - to do the actual drawing. - """ - - def __init__(self, default_font_prop: FontProperties, load_glyph_flags: int): - """ - Parameters - ---------- - default_font_prop : `~.font_manager.FontProperties` - The default non-math font, or the base font for Unicode (generic) - font rendering. - load_glyph_flags : int - Flags passed to the glyph loader (e.g. ``FT_Load_Glyph`` and - ``FT_Load_Char`` for FreeType-based fonts). - """ - self.default_font_prop = default_font_prop - self.load_glyph_flags = load_glyph_flags - - def get_kern(self, font1: str, fontclass1: str, sym1: str, fontsize1: float, - font2: str, fontclass2: str, sym2: str, fontsize2: float, - dpi: float) -> float: - """ - Get the kerning distance for font between *sym1* and *sym2*. - - See `~.Fonts.get_metrics` for a detailed description of the parameters. - """ - return 0. 
- - def _get_font(self, font: str) -> FT2Font: - raise NotImplementedError - - def _get_info(self, font: str, font_class: str, sym: str, fontsize: float, - dpi: float) -> FontInfo: - raise NotImplementedError - - def get_metrics(self, font: str, font_class: str, sym: str, fontsize: float, - dpi: float) -> FontMetrics: - r""" - Parameters - ---------- - font : str - One of the TeX font names: "tt", "it", "rm", "cal", "sf", "bf", - "default", "regular", "bb", "frak", "scr". "default" and "regular" - are synonyms and use the non-math font. - font_class : str - One of the TeX font names (as for *font*), but **not** "bb", - "frak", or "scr". This is used to combine two font classes. The - only supported combination currently is ``get_metrics("frak", "bf", - ...)``. - sym : str - A symbol in raw TeX form, e.g., "1", "x", or "\sigma". - fontsize : float - Font size in points. - dpi : float - Rendering dots-per-inch. - - Returns - ------- - FontMetrics - """ - info = self._get_info(font, font_class, sym, fontsize, dpi) - return info.metrics - - def render_glyph(self, output: Output, ox: float, oy: float, font: str, - font_class: str, sym: str, fontsize: float, dpi: float) -> None: - """ - At position (*ox*, *oy*), draw the glyph specified by the remaining - parameters (see `get_metrics` for their detailed description). - """ - info = self._get_info(font, font_class, sym, fontsize, dpi) - output.glyphs.append((ox, oy, info)) - - def render_rect_filled(self, output: Output, - x1: float, y1: float, x2: float, y2: float) -> None: - """ - Draw a filled rectangle from (*x1*, *y1*) to (*x2*, *y2*). - """ - output.rects.append((x1, y1, x2, y2)) - - def get_xheight(self, font: str, fontsize: float, dpi: float) -> float: - """ - Get the xheight for the given *font* and *fontsize*. - """ - raise NotImplementedError() - - def get_underline_thickness(self, font: str, fontsize: float, dpi: float) -> float: - """ - Get the line thickness that matches the given font. Used as a - base unit for drawing lines such as in a fraction or radical. - """ - raise NotImplementedError() - - def get_sized_alternatives_for_symbol(self, fontname: str, - sym: str) -> list[tuple[str, str]]: - """ - Override if your font provides multiple sizes of the same - symbol. Should return a list of symbols matching *sym* in - various sizes. The expression renderer will select the most - appropriate size for a given situation from this list. - """ - return [(fontname, sym)] - - -class TruetypeFonts(Fonts, metaclass=abc.ABCMeta): - """ - A generic base class for all font setups that use Truetype fonts - (through FT2Font). - """ - - def __init__(self, default_font_prop: FontProperties, load_glyph_flags: int): - super().__init__(default_font_prop, load_glyph_flags) - # Per-instance cache. - self._get_info = functools.cache(self._get_info) # type: ignore[method-assign] - self._fonts = {} - self.fontmap: dict[str | int, str] = {} - - filename = findfont(self.default_font_prop) - default_font = get_font(filename) - self._fonts['default'] = default_font - self._fonts['regular'] = default_font - - def _get_font(self, font: str | int) -> FT2Font: - if font in self.fontmap: - basename = self.fontmap[font] - else: - # NOTE: An int is only passed by subclasses which have placed int keys into - # `self.fontmap`, so we must cast this to confirm it to typing. 
- basename = T.cast(str, font) - cached_font = self._fonts.get(basename) - if cached_font is None and os.path.exists(basename): - cached_font = get_font(basename) - self._fonts[basename] = cached_font - self._fonts[cached_font.postscript_name] = cached_font - self._fonts[cached_font.postscript_name.lower()] = cached_font - return T.cast(FT2Font, cached_font) # FIXME: Not sure this is guaranteed. - - def _get_offset(self, font: FT2Font, glyph: Glyph, fontsize: float, - dpi: float) -> float: - if font.postscript_name == 'Cmex10': - return (glyph.height / 64 / 2) + (fontsize/3 * dpi/72) - return 0. - - def _get_glyph(self, fontname: str, font_class: str, - sym: str) -> tuple[FT2Font, int, bool]: - raise NotImplementedError - - # The return value of _get_info is cached per-instance. - def _get_info(self, fontname: str, font_class: str, sym: str, fontsize: float, - dpi: float) -> FontInfo: - font, num, slanted = self._get_glyph(fontname, font_class, sym) - font.set_size(fontsize, dpi) - glyph = font.load_char(num, flags=self.load_glyph_flags) - - xmin, ymin, xmax, ymax = [val/64.0 for val in glyph.bbox] - offset = self._get_offset(font, glyph, fontsize, dpi) - metrics = FontMetrics( - advance = glyph.linearHoriAdvance/65536.0, - height = glyph.height/64.0, - width = glyph.width/64.0, - xmin = xmin, - xmax = xmax, - ymin = ymin+offset, - ymax = ymax+offset, - # iceberg is the equivalent of TeX's "height" - iceberg = glyph.horiBearingY/64.0 + offset, - slanted = slanted - ) - - return FontInfo( - font = font, - fontsize = fontsize, - postscript_name = font.postscript_name, - metrics = metrics, - num = num, - glyph = glyph, - offset = offset - ) - - def get_xheight(self, fontname: str, fontsize: float, dpi: float) -> float: - font = self._get_font(fontname) - font.set_size(fontsize, dpi) - pclt = font.get_sfnt_table('pclt') - if pclt is None: - # Some fonts don't store the xHeight, so we do a poor man's xHeight - metrics = self.get_metrics( - fontname, mpl.rcParams['mathtext.default'], 'x', fontsize, dpi) - return metrics.iceberg - xHeight = (pclt['xHeight'] / 64.0) * (fontsize / 12.0) * (dpi / 100.0) - return xHeight - - def get_underline_thickness(self, font: str, fontsize: float, dpi: float) -> float: - # This function used to grab underline thickness from the font - # metrics, but that information is just too un-reliable, so it - # is now hardcoded. - return ((0.75 / 12.0) * fontsize * dpi) / 72.0 - - def get_kern(self, font1: str, fontclass1: str, sym1: str, fontsize1: float, - font2: str, fontclass2: str, sym2: str, fontsize2: float, - dpi: float) -> float: - if font1 == font2 and fontsize1 == fontsize2: - info1 = self._get_info(font1, fontclass1, sym1, fontsize1, dpi) - info2 = self._get_info(font2, fontclass2, sym2, fontsize2, dpi) - font = info1.font - return font.get_kerning(info1.num, info2.num, KERNING_DEFAULT) / 64 - return super().get_kern(font1, fontclass1, sym1, fontsize1, - font2, fontclass2, sym2, fontsize2, dpi) - - -class BakomaFonts(TruetypeFonts): - """ - Use the Bakoma TrueType fonts for rendering. - - Symbols are strewn about a number of font files, each of which has - its own proprietary 8-bit encoding. 
- """ - _fontmap = { - 'cal': 'cmsy10', - 'rm': 'cmr10', - 'tt': 'cmtt10', - 'it': 'cmmi10', - 'bf': 'cmb10', - 'sf': 'cmss10', - 'ex': 'cmex10', - } - - def __init__(self, default_font_prop: FontProperties, load_glyph_flags: int): - self._stix_fallback = StixFonts(default_font_prop, load_glyph_flags) - - super().__init__(default_font_prop, load_glyph_flags) - for key, val in self._fontmap.items(): - fullpath = findfont(val) - self.fontmap[key] = fullpath - self.fontmap[val] = fullpath - - _slanted_symbols = set(r"\int \oint".split()) - - def _get_glyph(self, fontname: str, font_class: str, - sym: str) -> tuple[FT2Font, int, bool]: - font = None - if fontname in self.fontmap and sym in latex_to_bakoma: - basename, num = latex_to_bakoma[sym] - slanted = (basename == "cmmi10") or sym in self._slanted_symbols - font = self._get_font(basename) - elif len(sym) == 1: - slanted = (fontname == "it") - font = self._get_font(fontname) - if font is not None: - num = ord(sym) - if font is not None and font.get_char_index(num) != 0: - return font, num, slanted - else: - return self._stix_fallback._get_glyph(fontname, font_class, sym) - - # The Bakoma fonts contain many pre-sized alternatives for the - # delimiters. The AutoSizedChar class will use these alternatives - # and select the best (closest sized) glyph. - _size_alternatives = { - '(': [('rm', '('), ('ex', '\xa1'), ('ex', '\xb3'), - ('ex', '\xb5'), ('ex', '\xc3')], - ')': [('rm', ')'), ('ex', '\xa2'), ('ex', '\xb4'), - ('ex', '\xb6'), ('ex', '\x21')], - '{': [('cal', '{'), ('ex', '\xa9'), ('ex', '\x6e'), - ('ex', '\xbd'), ('ex', '\x28')], - '}': [('cal', '}'), ('ex', '\xaa'), ('ex', '\x6f'), - ('ex', '\xbe'), ('ex', '\x29')], - # The fourth size of '[' is mysteriously missing from the BaKoMa - # font, so I've omitted it for both '[' and ']' - '[': [('rm', '['), ('ex', '\xa3'), ('ex', '\x68'), - ('ex', '\x22')], - ']': [('rm', ']'), ('ex', '\xa4'), ('ex', '\x69'), - ('ex', '\x23')], - r'\lfloor': [('ex', '\xa5'), ('ex', '\x6a'), - ('ex', '\xb9'), ('ex', '\x24')], - r'\rfloor': [('ex', '\xa6'), ('ex', '\x6b'), - ('ex', '\xba'), ('ex', '\x25')], - r'\lceil': [('ex', '\xa7'), ('ex', '\x6c'), - ('ex', '\xbb'), ('ex', '\x26')], - r'\rceil': [('ex', '\xa8'), ('ex', '\x6d'), - ('ex', '\xbc'), ('ex', '\x27')], - r'\langle': [('ex', '\xad'), ('ex', '\x44'), - ('ex', '\xbf'), ('ex', '\x2a')], - r'\rangle': [('ex', '\xae'), ('ex', '\x45'), - ('ex', '\xc0'), ('ex', '\x2b')], - r'\__sqrt__': [('ex', '\x70'), ('ex', '\x71'), - ('ex', '\x72'), ('ex', '\x73')], - r'\backslash': [('ex', '\xb2'), ('ex', '\x2f'), - ('ex', '\xc2'), ('ex', '\x2d')], - r'/': [('rm', '/'), ('ex', '\xb1'), ('ex', '\x2e'), - ('ex', '\xcb'), ('ex', '\x2c')], - r'\widehat': [('rm', '\x5e'), ('ex', '\x62'), ('ex', '\x63'), - ('ex', '\x64')], - r'\widetilde': [('rm', '\x7e'), ('ex', '\x65'), ('ex', '\x66'), - ('ex', '\x67')], - r'<': [('cal', 'h'), ('ex', 'D')], - r'>': [('cal', 'i'), ('ex', 'E')] - } - - for alias, target in [(r'\leftparen', '('), - (r'\rightparent', ')'), - (r'\leftbrace', '{'), - (r'\rightbrace', '}'), - (r'\leftbracket', '['), - (r'\rightbracket', ']'), - (r'\{', '{'), - (r'\}', '}'), - (r'\[', '['), - (r'\]', ']')]: - _size_alternatives[alias] = _size_alternatives[target] - - def get_sized_alternatives_for_symbol(self, fontname: str, - sym: str) -> list[tuple[str, str]]: - return self._size_alternatives.get(sym, [(fontname, sym)]) - - -class UnicodeFonts(TruetypeFonts): - """ - An abstract base class for handling Unicode fonts. 
- - While some reasonably complete Unicode fonts (such as DejaVu) may - work in some situations, the only Unicode font I'm aware of with a - complete set of math symbols is STIX. - - This class will "fallback" on the Bakoma fonts when a required - symbol cannot be found in the font. - """ - - # Some glyphs are not present in the `cmr10` font, and must be brought in - # from `cmsy10`. Map the Unicode indices of those glyphs to the indices at - # which they are found in `cmsy10`. - _cmr10_substitutions = { - 0x00D7: 0x00A3, # Multiplication sign. - 0x2212: 0x00A1, # Minus sign. - } - - def __init__(self, default_font_prop: FontProperties, load_glyph_flags: int): - # This must come first so the backend's owner is set correctly - fallback_rc = mpl.rcParams['mathtext.fallback'] - font_cls: type[TruetypeFonts] | None = { - 'stix': StixFonts, - 'stixsans': StixSansFonts, - 'cm': BakomaFonts - }.get(fallback_rc) - self._fallback_font = (font_cls(default_font_prop, load_glyph_flags) - if font_cls else None) - - super().__init__(default_font_prop, load_glyph_flags) - for texfont in "cal rm tt it bf sf bfit".split(): - prop = mpl.rcParams['mathtext.' + texfont] - font = findfont(prop) - self.fontmap[texfont] = font - prop = FontProperties('cmex10') - font = findfont(prop) - self.fontmap['ex'] = font - - # include STIX sized alternatives for glyphs if fallback is STIX - if isinstance(self._fallback_font, StixFonts): - stixsizedaltfonts = { - 0: 'STIXGeneral', - 1: 'STIXSizeOneSym', - 2: 'STIXSizeTwoSym', - 3: 'STIXSizeThreeSym', - 4: 'STIXSizeFourSym', - 5: 'STIXSizeFiveSym'} - - for size, name in stixsizedaltfonts.items(): - fullpath = findfont(name) - self.fontmap[size] = fullpath - self.fontmap[name] = fullpath - - _slanted_symbols = set(r"\int \oint".split()) - - def _map_virtual_font(self, fontname: str, font_class: str, - uniindex: int) -> tuple[str, int]: - return fontname, uniindex - - def _get_glyph(self, fontname: str, font_class: str, - sym: str) -> tuple[FT2Font, int, bool]: - try: - uniindex = get_unicode_index(sym) - found_symbol = True - except ValueError: - uniindex = ord('?') - found_symbol = False - _log.warning("No TeX to Unicode mapping for %a.", sym) - - fontname, uniindex = self._map_virtual_font( - fontname, font_class, uniindex) - - new_fontname = fontname - - # Only characters in the "Letter" class should be italicized in 'it' - # mode. Greek capital letters should be Roman. 
- if found_symbol: - if fontname == 'it' and uniindex < 0x10000: - char = chr(uniindex) - if (unicodedata.category(char)[0] != "L" - or unicodedata.name(char).startswith("GREEK CAPITAL")): - new_fontname = 'rm' - - slanted = (new_fontname == 'it') or sym in self._slanted_symbols - found_symbol = False - font = self._get_font(new_fontname) - if font is not None: - if (uniindex in self._cmr10_substitutions - and font.family_name == "cmr10"): - font = get_font( - cbook._get_data_path("fonts/ttf/cmsy10.ttf")) - uniindex = self._cmr10_substitutions[uniindex] - glyphindex = font.get_char_index(uniindex) - if glyphindex != 0: - found_symbol = True - - if not found_symbol: - if self._fallback_font: - if (fontname in ('it', 'regular') - and isinstance(self._fallback_font, StixFonts)): - fontname = 'rm' - - g = self._fallback_font._get_glyph(fontname, font_class, sym) - family = g[0].family_name - if family in list(BakomaFonts._fontmap.values()): - family = "Computer Modern" - _log.info("Substituting symbol %s from %s", sym, family) - return g - - else: - if (fontname in ('it', 'regular') - and isinstance(self, StixFonts)): - return self._get_glyph('rm', font_class, sym) - _log.warning("Font %r does not have a glyph for %a [U+%x], " - "substituting with a dummy symbol.", - new_fontname, sym, uniindex) - font = self._get_font('rm') - uniindex = 0xA4 # currency char, for lack of anything better - slanted = False - - return font, uniindex, slanted - - def get_sized_alternatives_for_symbol(self, fontname: str, - sym: str) -> list[tuple[str, str]]: - if self._fallback_font: - return self._fallback_font.get_sized_alternatives_for_symbol( - fontname, sym) - return [(fontname, sym)] - - -class DejaVuFonts(UnicodeFonts, metaclass=abc.ABCMeta): - _fontmap: dict[str | int, str] = {} - - def __init__(self, default_font_prop: FontProperties, load_glyph_flags: int): - # This must come first so the backend's owner is set correctly - if isinstance(self, DejaVuSerifFonts): - self._fallback_font = StixFonts(default_font_prop, load_glyph_flags) - else: - self._fallback_font = StixSansFonts(default_font_prop, load_glyph_flags) - self.bakoma = BakomaFonts(default_font_prop, load_glyph_flags) - TruetypeFonts.__init__(self, default_font_prop, load_glyph_flags) - # Include Stix sized alternatives for glyphs - self._fontmap.update({ - 1: 'STIXSizeOneSym', - 2: 'STIXSizeTwoSym', - 3: 'STIXSizeThreeSym', - 4: 'STIXSizeFourSym', - 5: 'STIXSizeFiveSym', - }) - for key, name in self._fontmap.items(): - fullpath = findfont(name) - self.fontmap[key] = fullpath - self.fontmap[name] = fullpath - - def _get_glyph(self, fontname: str, font_class: str, - sym: str) -> tuple[FT2Font, int, bool]: - # Override prime symbol to use Bakoma. 
- if sym == r'\prime': - return self.bakoma._get_glyph(fontname, font_class, sym) - else: - # check whether the glyph is available in the display font - uniindex = get_unicode_index(sym) - font = self._get_font('ex') - if font is not None: - glyphindex = font.get_char_index(uniindex) - if glyphindex != 0: - return super()._get_glyph('ex', font_class, sym) - # otherwise return regular glyph - return super()._get_glyph(fontname, font_class, sym) - - -class DejaVuSerifFonts(DejaVuFonts): - """ - A font handling class for the DejaVu Serif fonts - - If a glyph is not found it will fallback to Stix Serif - """ - _fontmap = { - 'rm': 'DejaVu Serif', - 'it': 'DejaVu Serif:italic', - 'bf': 'DejaVu Serif:weight=bold', - 'bfit': 'DejaVu Serif:italic:bold', - 'sf': 'DejaVu Sans', - 'tt': 'DejaVu Sans Mono', - 'ex': 'DejaVu Serif Display', - 0: 'DejaVu Serif', - } - - -class DejaVuSansFonts(DejaVuFonts): - """ - A font handling class for the DejaVu Sans fonts - - If a glyph is not found it will fallback to Stix Sans - """ - _fontmap = { - 'rm': 'DejaVu Sans', - 'it': 'DejaVu Sans:italic', - 'bf': 'DejaVu Sans:weight=bold', - 'bfit': 'DejaVu Sans:italic:bold', - 'sf': 'DejaVu Sans', - 'tt': 'DejaVu Sans Mono', - 'ex': 'DejaVu Sans Display', - 0: 'DejaVu Sans', - } - - -class StixFonts(UnicodeFonts): - """ - A font handling class for the STIX fonts. - - In addition to what UnicodeFonts provides, this class: - - - supports "virtual fonts" which are complete alpha numeric - character sets with different font styles at special Unicode - code points, such as "Blackboard". - - - handles sized alternative characters for the STIXSizeX fonts. - """ - _fontmap: dict[str | int, str] = { - 'rm': 'STIXGeneral', - 'it': 'STIXGeneral:italic', - 'bf': 'STIXGeneral:weight=bold', - 'bfit': 'STIXGeneral:italic:bold', - 'nonunirm': 'STIXNonUnicode', - 'nonuniit': 'STIXNonUnicode:italic', - 'nonunibf': 'STIXNonUnicode:weight=bold', - 0: 'STIXGeneral', - 1: 'STIXSizeOneSym', - 2: 'STIXSizeTwoSym', - 3: 'STIXSizeThreeSym', - 4: 'STIXSizeFourSym', - 5: 'STIXSizeFiveSym', - } - _fallback_font = None - _sans = False - - def __init__(self, default_font_prop: FontProperties, load_glyph_flags: int): - TruetypeFonts.__init__(self, default_font_prop, load_glyph_flags) - for key, name in self._fontmap.items(): - fullpath = findfont(name) - self.fontmap[key] = fullpath - self.fontmap[name] = fullpath - - def _map_virtual_font(self, fontname: str, font_class: str, - uniindex: int) -> tuple[str, int]: - # Handle these "fonts" that are actually embedded in - # other fonts. 
- font_mapping = stix_virtual_fonts.get(fontname) - if (self._sans and font_mapping is None - and fontname not in ('regular', 'default')): - font_mapping = stix_virtual_fonts['sf'] - doing_sans_conversion = True - else: - doing_sans_conversion = False - - if isinstance(font_mapping, dict): - try: - mapping = font_mapping[font_class] - except KeyError: - mapping = font_mapping['rm'] - elif isinstance(font_mapping, list): - mapping = font_mapping - else: - mapping = None - - if mapping is not None: - # Binary search for the source glyph - lo = 0 - hi = len(mapping) - while lo < hi: - mid = (lo+hi)//2 - range = mapping[mid] - if uniindex < range[0]: - hi = mid - elif uniindex <= range[1]: - break - else: - lo = mid + 1 - - if range[0] <= uniindex <= range[1]: - uniindex = uniindex - range[0] + range[3] - fontname = range[2] - elif not doing_sans_conversion: - # This will generate a dummy character - uniindex = 0x1 - fontname = mpl.rcParams['mathtext.default'] - - # Fix some incorrect glyphs. - if fontname in ('rm', 'it'): - uniindex = stix_glyph_fixes.get(uniindex, uniindex) - - # Handle private use area glyphs - if fontname in ('it', 'rm', 'bf', 'bfit') and 0xe000 <= uniindex <= 0xf8ff: - fontname = 'nonuni' + fontname - - return fontname, uniindex - - @functools.cache - def get_sized_alternatives_for_symbol( # type: ignore[override] - self, - fontname: str, - sym: str) -> list[tuple[str, str]] | list[tuple[int, str]]: - fixes = { - '\\{': '{', '\\}': '}', '\\[': '[', '\\]': ']', - '<': '\N{MATHEMATICAL LEFT ANGLE BRACKET}', - '>': '\N{MATHEMATICAL RIGHT ANGLE BRACKET}', - } - sym = fixes.get(sym, sym) - try: - uniindex = get_unicode_index(sym) - except ValueError: - return [(fontname, sym)] - alternatives = [(i, chr(uniindex)) for i in range(6) - if self._get_font(i).get_char_index(uniindex) != 0] - # The largest size of the radical symbol in STIX has incorrect - # metrics that cause it to be disconnected from the stem. - if sym == r'\__sqrt__': - alternatives = alternatives[:-1] - return alternatives - - -class StixSansFonts(StixFonts): - """ - A font handling class for the STIX fonts (that uses sans-serif - characters by default). - """ - _sans = True - - -############################################################################## -# TeX-LIKE BOX MODEL - -# The following is based directly on the document 'woven' from the -# TeX82 source code. This information is also available in printed -# form: -# -# Knuth, Donald E.. 1986. Computers and Typesetting, Volume B: -# TeX: The Program. Addison-Wesley Professional. -# -# The most relevant "chapters" are: -# Data structures for boxes and their friends -# Shipping pages out (ship()) -# Packaging (hpack() and vpack()) -# Data structures for math mode -# Subroutines for math mode -# Typesetting math formulas -# -# Many of the docstrings below refer to a numbered "node" in that -# book, e.g., node123 -# -# Note that (as TeX) y increases downward, unlike many other parts of -# matplotlib. - -# How much text shrinks when going to the next-smallest level. -SHRINK_FACTOR = 0.7 -# The number of different sizes of chars to use, beyond which they will not -# get any smaller -NUM_SIZE_LEVELS = 6 - - -class FontConstantsBase: - """ - A set of constants that controls how certain things, such as sub- - and superscripts are laid out. These are all metrics that can't - be reliably retrieved from the font metrics in the font itself. - """ - # Percentage of x-height of additional horiz. 
space after sub/superscripts - script_space: T.ClassVar[float] = 0.05 - - # Percentage of x-height that sub/superscripts drop below the baseline - subdrop: T.ClassVar[float] = 0.4 - - # Percentage of x-height that superscripts are raised from the baseline - sup1: T.ClassVar[float] = 0.7 - - # Percentage of x-height that subscripts drop below the baseline - sub1: T.ClassVar[float] = 0.3 - - # Percentage of x-height that subscripts drop below the baseline when a - # superscript is present - sub2: T.ClassVar[float] = 0.5 - - # Percentage of x-height that sub/superscripts are offset relative to the - # nucleus edge for non-slanted nuclei - delta: T.ClassVar[float] = 0.025 - - # Additional percentage of last character height above 2/3 of the - # x-height that superscripts are offset relative to the subscript - # for slanted nuclei - delta_slanted: T.ClassVar[float] = 0.2 - - # Percentage of x-height that superscripts and subscripts are offset for - # integrals - delta_integral: T.ClassVar[float] = 0.1 - - -class ComputerModernFontConstants(FontConstantsBase): - script_space = 0.075 - subdrop = 0.2 - sup1 = 0.45 - sub1 = 0.2 - sub2 = 0.3 - delta = 0.075 - delta_slanted = 0.3 - delta_integral = 0.3 - - -class STIXFontConstants(FontConstantsBase): - script_space = 0.1 - sup1 = 0.8 - sub2 = 0.6 - delta = 0.05 - delta_slanted = 0.3 - delta_integral = 0.3 - - -class STIXSansFontConstants(FontConstantsBase): - script_space = 0.05 - sup1 = 0.8 - delta_slanted = 0.6 - delta_integral = 0.3 - - -class DejaVuSerifFontConstants(FontConstantsBase): - pass - - -class DejaVuSansFontConstants(FontConstantsBase): - pass - - -# Maps font family names to the FontConstantBase subclass to use -_font_constant_mapping = { - 'DejaVu Sans': DejaVuSansFontConstants, - 'DejaVu Sans Mono': DejaVuSansFontConstants, - 'DejaVu Serif': DejaVuSerifFontConstants, - 'cmb10': ComputerModernFontConstants, - 'cmex10': ComputerModernFontConstants, - 'cmmi10': ComputerModernFontConstants, - 'cmr10': ComputerModernFontConstants, - 'cmss10': ComputerModernFontConstants, - 'cmsy10': ComputerModernFontConstants, - 'cmtt10': ComputerModernFontConstants, - 'STIXGeneral': STIXFontConstants, - 'STIXNonUnicode': STIXFontConstants, - 'STIXSizeFiveSym': STIXFontConstants, - 'STIXSizeFourSym': STIXFontConstants, - 'STIXSizeThreeSym': STIXFontConstants, - 'STIXSizeTwoSym': STIXFontConstants, - 'STIXSizeOneSym': STIXFontConstants, - # Map the fonts we used to ship, just for good measure - 'Bitstream Vera Sans': DejaVuSansFontConstants, - 'Bitstream Vera': DejaVuSansFontConstants, - } - - -def _get_font_constant_set(state: ParserState) -> type[FontConstantsBase]: - constants = _font_constant_mapping.get( - state.fontset._get_font(state.font).family_name, FontConstantsBase) - # STIX sans isn't really its own fonts, just different code points - # in the STIX fonts, so we have to detect this one separately. - if (constants is STIXFontConstants and - isinstance(state.fontset, StixSansFonts)): - return STIXSansFontConstants - return constants - - -class Node: - """A node in the TeX box model.""" - - def __init__(self) -> None: - self.size = 0 - - def __repr__(self) -> str: - return type(self).__name__ - - def get_kerning(self, next: Node | None) -> float: - return 0.0 - - def shrink(self) -> None: - """ - Shrinks one level smaller. There are only three levels of - sizes, after which things will no longer get smaller. 
- """ - self.size += 1 - - def render(self, output: Output, x: float, y: float) -> None: - """Render this node.""" - - -class Box(Node): - """A node with a physical location.""" - - def __init__(self, width: float, height: float, depth: float) -> None: - super().__init__() - self.width = width - self.height = height - self.depth = depth - - def shrink(self) -> None: - super().shrink() - if self.size < NUM_SIZE_LEVELS: - self.width *= SHRINK_FACTOR - self.height *= SHRINK_FACTOR - self.depth *= SHRINK_FACTOR - - def render(self, output: Output, # type: ignore[override] - x1: float, y1: float, x2: float, y2: float) -> None: - pass - - -class Vbox(Box): - """A box with only height (zero width).""" - - def __init__(self, height: float, depth: float): - super().__init__(0., height, depth) - - -class Hbox(Box): - """A box with only width (zero height and depth).""" - - def __init__(self, width: float): - super().__init__(width, 0., 0.) - - -class Char(Node): - """ - A single character. - - Unlike TeX, the font information and metrics are stored with each `Char` - to make it easier to lookup the font metrics when needed. Note that TeX - boxes have a width, height, and depth, unlike Type1 and TrueType which use - a full bounding box and an advance in the x-direction. The metrics must - be converted to the TeX model, and the advance (if different from width) - must be converted into a `Kern` node when the `Char` is added to its parent - `Hlist`. - """ - - def __init__(self, c: str, state: ParserState): - super().__init__() - self.c = c - self.fontset = state.fontset - self.font = state.font - self.font_class = state.font_class - self.fontsize = state.fontsize - self.dpi = state.dpi - # The real width, height and depth will be set during the - # pack phase, after we know the real fontsize - self._update_metrics() - - def __repr__(self) -> str: - return '`%s`' % self.c - - def _update_metrics(self) -> None: - metrics = self._metrics = self.fontset.get_metrics( - self.font, self.font_class, self.c, self.fontsize, self.dpi) - if self.c == ' ': - self.width = metrics.advance - else: - self.width = metrics.width - self.height = metrics.iceberg - self.depth = -(metrics.iceberg - metrics.height) - - def is_slanted(self) -> bool: - return self._metrics.slanted - - def get_kerning(self, next: Node | None) -> float: - """ - Return the amount of kerning between this and the given character. - - This method is called when characters are strung together into `Hlist` - to create `Kern` nodes. - """ - advance = self._metrics.advance - self.width - kern = 0. - if isinstance(next, Char): - kern = self.fontset.get_kern( - self.font, self.font_class, self.c, self.fontsize, - next.font, next.font_class, next.c, next.fontsize, - self.dpi) - return advance + kern - - def render(self, output: Output, x: float, y: float) -> None: - self.fontset.render_glyph( - output, x, y, - self.font, self.font_class, self.c, self.fontsize, self.dpi) - - def shrink(self) -> None: - super().shrink() - if self.size < NUM_SIZE_LEVELS: - self.fontsize *= SHRINK_FACTOR - self.width *= SHRINK_FACTOR - self.height *= SHRINK_FACTOR - self.depth *= SHRINK_FACTOR - - -class Accent(Char): - """ - The font metrics need to be dealt with differently for accents, - since they are already offset correctly from the baseline in - TrueType fonts. 
- """ - def _update_metrics(self) -> None: - metrics = self._metrics = self.fontset.get_metrics( - self.font, self.font_class, self.c, self.fontsize, self.dpi) - self.width = metrics.xmax - metrics.xmin - self.height = metrics.ymax - metrics.ymin - self.depth = 0 - - def shrink(self) -> None: - super().shrink() - self._update_metrics() - - def render(self, output: Output, x: float, y: float) -> None: - self.fontset.render_glyph( - output, x - self._metrics.xmin, y + self._metrics.ymin, - self.font, self.font_class, self.c, self.fontsize, self.dpi) - - -class List(Box): - """A list of nodes (either horizontal or vertical).""" - - def __init__(self, elements: T.Sequence[Node]): - super().__init__(0., 0., 0.) - self.shift_amount = 0. # An arbitrary offset - self.children = [*elements] # The child nodes of this list - # The following parameters are set in the vpack and hpack functions - self.glue_set = 0. # The glue setting of this list - self.glue_sign = 0 # 0: normal, -1: shrinking, 1: stretching - self.glue_order = 0 # The order of infinity (0 - 3) for the glue - - def __repr__(self) -> str: - return '{}[{}]'.format( - super().__repr__(), - self.width, self.height, - self.depth, self.shift_amount, - ', '.join([repr(x) for x in self.children])) - - def _set_glue(self, x: float, sign: int, totals: list[float], - error_type: str) -> None: - self.glue_order = o = next( - # Highest order of glue used by the members of this list. - (i for i in range(len(totals))[::-1] if totals[i] != 0), 0) - self.glue_sign = sign - if totals[o] != 0.: - self.glue_set = x / totals[o] - else: - self.glue_sign = 0 - self.glue_ratio = 0. - if o == 0: - if len(self.children): - _log.warning("%s %s: %r", - error_type, type(self).__name__, self) - - def shrink(self) -> None: - for child in self.children: - child.shrink() - super().shrink() - if self.size < NUM_SIZE_LEVELS: - self.shift_amount *= SHRINK_FACTOR - self.glue_set *= SHRINK_FACTOR - - -class Hlist(List): - """A horizontal list of boxes.""" - - def __init__(self, elements: T.Sequence[Node], w: float = 0.0, - m: T.Literal['additional', 'exactly'] = 'additional', - do_kern: bool = True): - super().__init__(elements) - if do_kern: - self.kern() - self.hpack(w=w, m=m) - - def kern(self) -> None: - """ - Insert `Kern` nodes between `Char` nodes to set kerning. - - The `Char` nodes themselves determine the amount of kerning they need - (in `~Char.get_kerning`), and this function just creates the correct - linked list. - """ - new_children = [] - num_children = len(self.children) - if num_children: - for i in range(num_children): - elem = self.children[i] - if i < num_children - 1: - next = self.children[i + 1] - else: - next = None - - new_children.append(elem) - kerning_distance = elem.get_kerning(next) - if kerning_distance != 0.: - kern = Kern(kerning_distance) - new_children.append(kern) - self.children = new_children - - def hpack(self, w: float = 0.0, - m: T.Literal['additional', 'exactly'] = 'additional') -> None: - r""" - Compute the dimensions of the resulting boxes, and adjust the glue if - one of those dimensions is pre-specified. The computed sizes normally - enclose all of the material inside the new box; but some items may - stick out if negative glue is used, if the box is overfull, or if a - ``\vbox`` includes other boxes that have been shifted left. - - Parameters - ---------- - w : float, default: 0 - A width. 
- m : {'exactly', 'additional'}, default: 'additional' - Whether to produce a box whose width is 'exactly' *w*; or a box - with the natural width of the contents, plus *w* ('additional'). - - Notes - ----- - The defaults produce a box with the natural width of the contents. - """ - # I don't know why these get reset in TeX. Shift_amount is pretty - # much useless if we do. - # self.shift_amount = 0. - h = 0. - d = 0. - x = 0. - total_stretch = [0.] * 4 - total_shrink = [0.] * 4 - for p in self.children: - if isinstance(p, Char): - x += p.width - h = max(h, p.height) - d = max(d, p.depth) - elif isinstance(p, Box): - x += p.width - if not np.isinf(p.height) and not np.isinf(p.depth): - s = getattr(p, 'shift_amount', 0.) - h = max(h, p.height - s) - d = max(d, p.depth + s) - elif isinstance(p, Glue): - glue_spec = p.glue_spec - x += glue_spec.width - total_stretch[glue_spec.stretch_order] += glue_spec.stretch - total_shrink[glue_spec.shrink_order] += glue_spec.shrink - elif isinstance(p, Kern): - x += p.width - self.height = h - self.depth = d - - if m == 'additional': - w += x - self.width = w - x = w - x - - if x == 0.: - self.glue_sign = 0 - self.glue_order = 0 - self.glue_ratio = 0. - return - if x > 0.: - self._set_glue(x, 1, total_stretch, "Overful") - else: - self._set_glue(x, -1, total_shrink, "Underful") - - -class Vlist(List): - """A vertical list of boxes.""" - - def __init__(self, elements: T.Sequence[Node], h: float = 0.0, - m: T.Literal['additional', 'exactly'] = 'additional'): - super().__init__(elements) - self.vpack(h=h, m=m) - - def vpack(self, h: float = 0.0, - m: T.Literal['additional', 'exactly'] = 'additional', - l: float = np.inf) -> None: - """ - Compute the dimensions of the resulting boxes, and to adjust the glue - if one of those dimensions is pre-specified. - - Parameters - ---------- - h : float, default: 0 - A height. - m : {'exactly', 'additional'}, default: 'additional' - Whether to produce a box whose height is 'exactly' *h*; or a box - with the natural height of the contents, plus *h* ('additional'). - l : float, default: np.inf - The maximum height. - - Notes - ----- - The defaults produce a box with the natural height of the contents. - """ - # I don't know why these get reset in TeX. Shift_amount is pretty - # much useless if we do. - # self.shift_amount = 0. - w = 0. - d = 0. - x = 0. - total_stretch = [0.] * 4 - total_shrink = [0.] * 4 - for p in self.children: - if isinstance(p, Box): - x += d + p.height - d = p.depth - if not np.isinf(p.width): - s = getattr(p, 'shift_amount', 0.) - w = max(w, p.width + s) - elif isinstance(p, Glue): - x += d - d = 0. - glue_spec = p.glue_spec - x += glue_spec.width - total_stretch[glue_spec.stretch_order] += glue_spec.stretch - total_shrink[glue_spec.shrink_order] += glue_spec.shrink - elif isinstance(p, Kern): - x += d + p.width - d = 0. - elif isinstance(p, Char): - raise RuntimeError( - "Internal mathtext error: Char node found in Vlist") - - self.width = w - if d > l: - x += d - l - self.depth = l - else: - self.depth = d - - if m == 'additional': - h += x - self.height = h - x = h - x - - if x == 0: - self.glue_sign = 0 - self.glue_order = 0 - self.glue_ratio = 0. - return - - if x > 0.: - self._set_glue(x, 1, total_stretch, "Overful") - else: - self._set_glue(x, -1, total_shrink, "Underful") - - -class Rule(Box): - """ - A solid black rectangle. - - It has *width*, *depth*, and *height* fields just as in an `Hlist`. 
- However, if any of these dimensions is inf, the actual value will be - determined by running the rule up to the boundary of the innermost - enclosing box. This is called a "running dimension". The width is never - running in an `Hlist`; the height and depth are never running in a `Vlist`. - """ - - def __init__(self, width: float, height: float, depth: float, state: ParserState): - super().__init__(width, height, depth) - self.fontset = state.fontset - - def render(self, output: Output, # type: ignore[override] - x: float, y: float, w: float, h: float) -> None: - self.fontset.render_rect_filled(output, x, y, x + w, y + h) - - -class Hrule(Rule): - """Convenience class to create a horizontal rule.""" - - def __init__(self, state: ParserState, thickness: float | None = None): - if thickness is None: - thickness = state.get_current_underline_thickness() - height = depth = thickness * 0.5 - super().__init__(np.inf, height, depth, state) - - -class Vrule(Rule): - """Convenience class to create a vertical rule.""" - - def __init__(self, state: ParserState): - thickness = state.get_current_underline_thickness() - super().__init__(thickness, np.inf, np.inf, state) - - -class _GlueSpec(NamedTuple): - width: float - stretch: float - stretch_order: int - shrink: float - shrink_order: int - - -_GlueSpec._named = { # type: ignore[attr-defined] - 'fil': _GlueSpec(0., 1., 1, 0., 0), - 'fill': _GlueSpec(0., 1., 2, 0., 0), - 'filll': _GlueSpec(0., 1., 3, 0., 0), - 'neg_fil': _GlueSpec(0., 0., 0, 1., 1), - 'neg_fill': _GlueSpec(0., 0., 0, 1., 2), - 'neg_filll': _GlueSpec(0., 0., 0, 1., 3), - 'empty': _GlueSpec(0., 0., 0, 0., 0), - 'ss': _GlueSpec(0., 1., 1, -1., 1), -} - - -class Glue(Node): - """ - Most of the information in this object is stored in the underlying - ``_GlueSpec`` class, which is shared between multiple glue objects. - (This is a memory optimization which probably doesn't matter anymore, but - it's easier to stick to what TeX does.) - """ - - def __init__(self, - glue_type: _GlueSpec | T.Literal["fil", "fill", "filll", - "neg_fil", "neg_fill", "neg_filll", - "empty", "ss"]): - super().__init__() - if isinstance(glue_type, str): - glue_spec = _GlueSpec._named[glue_type] # type: ignore[attr-defined] - elif isinstance(glue_type, _GlueSpec): - glue_spec = glue_type - else: - raise ValueError("glue_type must be a glue spec name or instance") - self.glue_spec = glue_spec - - def shrink(self) -> None: - super().shrink() - if self.size < NUM_SIZE_LEVELS: - g = self.glue_spec - self.glue_spec = g._replace(width=g.width * SHRINK_FACTOR) - - -class HCentered(Hlist): - """ - A convenience class to create an `Hlist` whose contents are - centered within its enclosing box. - """ - - def __init__(self, elements: list[Node]): - super().__init__([Glue('ss'), *elements, Glue('ss')], do_kern=False) - - -class VCentered(Vlist): - """ - A convenience class to create a `Vlist` whose contents are - centered within its enclosing box. - """ - - def __init__(self, elements: list[Node]): - super().__init__([Glue('ss'), *elements, Glue('ss')]) - - -class Kern(Node): - """ - A `Kern` node has a width field to specify a (normally - negative) amount of spacing. This spacing correction appears in - horizontal lists between letters like A and V when the font - designer said that it looks better to move them closer together or - further apart. A kern node can also appear in a vertical list, - when its *width* denotes additional spacing in the vertical - direction. 
- """ - - height = 0 - depth = 0 - - def __init__(self, width: float): - super().__init__() - self.width = width - - def __repr__(self) -> str: - return "k%.02f" % self.width - - def shrink(self) -> None: - super().shrink() - if self.size < NUM_SIZE_LEVELS: - self.width *= SHRINK_FACTOR - - -class AutoHeightChar(Hlist): - """ - A character as close to the given height and depth as possible. - - When using a font with multiple height versions of some characters (such as - the BaKoMa fonts), the correct glyph will be selected, otherwise this will - always just return a scaled version of the glyph. - """ - - def __init__(self, c: str, height: float, depth: float, state: ParserState, - always: bool = False, factor: float | None = None): - alternatives = state.fontset.get_sized_alternatives_for_symbol( - state.font, c) - - xHeight = state.fontset.get_xheight( - state.font, state.fontsize, state.dpi) - - state = state.copy() - target_total = height + depth - for fontname, sym in alternatives: - state.font = fontname - char = Char(sym, state) - # Ensure that size 0 is chosen when the text is regular sized but - # with descender glyphs by subtracting 0.2 * xHeight - if char.height + char.depth >= target_total - 0.2 * xHeight: - break - - shift = 0.0 - if state.font != 0 or len(alternatives) == 1: - if factor is None: - factor = target_total / (char.height + char.depth) - state.fontsize *= factor - char = Char(sym, state) - - shift = (depth - char.depth) - - super().__init__([char]) - self.shift_amount = shift - - -class AutoWidthChar(Hlist): - """ - A character as close to the given width as possible. - - When using a font with multiple width versions of some characters (such as - the BaKoMa fonts), the correct glyph will be selected, otherwise this will - always just return a scaled version of the glyph. - """ - - def __init__(self, c: str, width: float, state: ParserState, always: bool = False, - char_class: type[Char] = Char): - alternatives = state.fontset.get_sized_alternatives_for_symbol( - state.font, c) - - state = state.copy() - for fontname, sym in alternatives: - state.font = fontname - char = char_class(sym, state) - if char.width >= width: - break - - factor = width / char.width - state.fontsize *= factor - char = char_class(sym, state) - - super().__init__([char]) - self.width = char.width - - -def ship(box: Box, xy: tuple[float, float] = (0, 0)) -> Output: - """ - Ship out *box* at offset *xy*, converting it to an `Output`. - - Since boxes can be inside of boxes inside of boxes, the main work of `ship` - is done by two mutually recursive routines, `hlist_out` and `vlist_out`, - which traverse the `Hlist` nodes and `Vlist` nodes inside of horizontal - and vertical boxes. The global variables used in TeX to store state as it - processes have become local variables here. - """ - ox, oy = xy - cur_v = 0. - cur_h = 0. - off_h = ox - off_v = oy + box.height - output = Output(box) - - def clamp(value: float) -> float: - return -1e9 if value < -1e9 else +1e9 if value > +1e9 else value - - def hlist_out(box: Hlist) -> None: - nonlocal cur_v, cur_h, off_h, off_v - - cur_g = 0 - cur_glue = 0. 
- glue_order = box.glue_order - glue_sign = box.glue_sign - base_line = cur_v - left_edge = cur_h - - for p in box.children: - if isinstance(p, Char): - p.render(output, cur_h + off_h, cur_v + off_v) - cur_h += p.width - elif isinstance(p, Kern): - cur_h += p.width - elif isinstance(p, List): - # node623 - if len(p.children) == 0: - cur_h += p.width - else: - edge = cur_h - cur_v = base_line + p.shift_amount - if isinstance(p, Hlist): - hlist_out(p) - elif isinstance(p, Vlist): - # p.vpack(box.height + box.depth, 'exactly') - vlist_out(p) - else: - assert False, "unreachable code" - cur_h = edge + p.width - cur_v = base_line - elif isinstance(p, Box): - # node624 - rule_height = p.height - rule_depth = p.depth - rule_width = p.width - if np.isinf(rule_height): - rule_height = box.height - if np.isinf(rule_depth): - rule_depth = box.depth - if rule_height > 0 and rule_width > 0: - cur_v = base_line + rule_depth - p.render(output, - cur_h + off_h, cur_v + off_v, - rule_width, rule_height) - cur_v = base_line - cur_h += rule_width - elif isinstance(p, Glue): - # node625 - glue_spec = p.glue_spec - rule_width = glue_spec.width - cur_g - if glue_sign != 0: # normal - if glue_sign == 1: # stretching - if glue_spec.stretch_order == glue_order: - cur_glue += glue_spec.stretch - cur_g = round(clamp(box.glue_set * cur_glue)) - elif glue_spec.shrink_order == glue_order: - cur_glue += glue_spec.shrink - cur_g = round(clamp(box.glue_set * cur_glue)) - rule_width += cur_g - cur_h += rule_width - - def vlist_out(box: Vlist) -> None: - nonlocal cur_v, cur_h, off_h, off_v - - cur_g = 0 - cur_glue = 0. - glue_order = box.glue_order - glue_sign = box.glue_sign - left_edge = cur_h - cur_v -= box.height - top_edge = cur_v - - for p in box.children: - if isinstance(p, Kern): - cur_v += p.width - elif isinstance(p, List): - if len(p.children) == 0: - cur_v += p.height + p.depth - else: - cur_v += p.height - cur_h = left_edge + p.shift_amount - save_v = cur_v - p.width = box.width - if isinstance(p, Hlist): - hlist_out(p) - elif isinstance(p, Vlist): - vlist_out(p) - else: - assert False, "unreachable code" - cur_v = save_v + p.depth - cur_h = left_edge - elif isinstance(p, Box): - rule_height = p.height - rule_depth = p.depth - rule_width = p.width - if np.isinf(rule_width): - rule_width = box.width - rule_height += rule_depth - if rule_height > 0 and rule_depth > 0: - cur_v += rule_height - p.render(output, - cur_h + off_h, cur_v + off_v, - rule_width, rule_height) - elif isinstance(p, Glue): - glue_spec = p.glue_spec - rule_height = glue_spec.width - cur_g - if glue_sign != 0: # normal - if glue_sign == 1: # stretching - if glue_spec.stretch_order == glue_order: - cur_glue += glue_spec.stretch - cur_g = round(clamp(box.glue_set * cur_glue)) - elif glue_spec.shrink_order == glue_order: # shrinking - cur_glue += glue_spec.shrink - cur_g = round(clamp(box.glue_set * cur_glue)) - rule_height += cur_g - cur_v += rule_height - elif isinstance(p, Char): - raise RuntimeError( - "Internal mathtext error: Char node found in vlist") - - assert isinstance(box, Hlist) - hlist_out(box) - return output - - -############################################################################## -# PARSER - - -def Error(msg: str) -> ParserElement: - """Helper class to raise parser errors.""" - def raise_error(s: str, loc: int, toks: ParseResults) -> T.Any: - raise ParseFatalException(s, loc, msg) - - return Empty().setParseAction(raise_error) - - -class ParserState: - """ - Parser state. 
- - States are pushed and popped from a stack as necessary, and the "current" - state is always at the top of the stack. - - Upon entering and leaving a group { } or math/non-math, the stack is pushed - and popped accordingly. - """ - - def __init__(self, fontset: Fonts, font: str, font_class: str, fontsize: float, - dpi: float): - self.fontset = fontset - self._font = font - self.font_class = font_class - self.fontsize = fontsize - self.dpi = dpi - - def copy(self) -> ParserState: - return copy.copy(self) - - @property - def font(self) -> str: - return self._font - - @font.setter - def font(self, name: str) -> None: - if name in ('rm', 'it', 'bf', 'bfit'): - self.font_class = name - self._font = name - - def get_current_underline_thickness(self) -> float: - """Return the underline thickness for this state.""" - return self.fontset.get_underline_thickness( - self.font, self.fontsize, self.dpi) - - -def cmd(expr: str, args: ParserElement) -> ParserElement: - r""" - Helper to define TeX commands. - - ``cmd("\cmd", args)`` is equivalent to - ``"\cmd" - (args | Error("Expected \cmd{arg}{...}"))`` where the names in - the error message are taken from element names in *args*. If *expr* - already includes arguments (e.g. "\cmd{arg}{...}"), then they are stripped - when constructing the parse element, but kept (and *expr* is used as is) in - the error message. - """ - - def names(elt: ParserElement) -> T.Generator[str, None, None]: - if isinstance(elt, ParseExpression): - for expr in elt.exprs: - yield from names(expr) - elif elt.resultsName: - yield elt.resultsName - - csname = expr.split("{", 1)[0] - err = (csname + "".join("{%s}" % name for name in names(args)) - if expr == csname else expr) - return csname - (args | Error(f"Expected {err}")) - - -class Parser: - """ - A pyparsing-based parser for strings containing math expressions. - - Raw text may also appear outside of pairs of ``$``. - - The grammar is based directly on that in TeX, though it cuts a few corners. 
- """ - - class _MathStyle(enum.Enum): - DISPLAYSTYLE = 0 - TEXTSTYLE = 1 - SCRIPTSTYLE = 2 - SCRIPTSCRIPTSTYLE = 3 - - _binary_operators = set( - '+ * - \N{MINUS SIGN}' - r''' - \pm \sqcap \rhd - \mp \sqcup \unlhd - \times \vee \unrhd - \div \wedge \oplus - \ast \setminus \ominus - \star \wr \otimes - \circ \diamond \oslash - \bullet \bigtriangleup \odot - \cdot \bigtriangledown \bigcirc - \cap \triangleleft \dagger - \cup \triangleright \ddagger - \uplus \lhd \amalg - \dotplus \dotminus \Cap - \Cup \barwedge \boxdot - \boxminus \boxplus \boxtimes - \curlyvee \curlywedge \divideontimes - \doublebarwedge \leftthreetimes \rightthreetimes - \slash \veebar \barvee - \cupdot \intercal \amalg - \circledcirc \circleddash \circledast - \boxbar \obar \merge - \minuscolon \dotsminusdots - '''.split()) - - _relation_symbols = set(r''' - = < > : - \leq \geq \equiv \models - \prec \succ \sim \perp - \preceq \succeq \simeq \mid - \ll \gg \asymp \parallel - \subset \supset \approx \bowtie - \subseteq \supseteq \cong \Join - \sqsubset \sqsupset \neq \smile - \sqsubseteq \sqsupseteq \doteq \frown - \in \ni \propto \vdash - \dashv \dots \doteqdot \leqq - \geqq \lneqq \gneqq \lessgtr - \leqslant \geqslant \eqgtr \eqless - \eqslantless \eqslantgtr \lesseqgtr \backsim - \backsimeq \lesssim \gtrsim \precsim - \precnsim \gnsim \lnsim \succsim - \succnsim \nsim \lesseqqgtr \gtreqqless - \gtreqless \subseteqq \supseteqq \subsetneqq - \supsetneqq \lessapprox \approxeq \gtrapprox - \precapprox \succapprox \precnapprox \succnapprox - \npreccurlyeq \nsucccurlyeq \nsqsubseteq \nsqsupseteq - \sqsubsetneq \sqsupsetneq \nlesssim \ngtrsim - \nlessgtr \ngtrless \lnapprox \gnapprox - \napprox \approxeq \approxident \lll - \ggg \nparallel \Vdash \Vvdash - \nVdash \nvdash \vDash \nvDash - \nVDash \oequal \simneqq \triangle - \triangleq \triangleeq \triangleleft - \triangleright \ntriangleleft \ntriangleright - \trianglelefteq \ntrianglelefteq \trianglerighteq - \ntrianglerighteq \blacktriangleleft \blacktriangleright - \equalparallel \measuredrightangle \varlrtriangle - \Doteq \Bumpeq \Subset \Supset - \backepsilon \because \therefore \bot - \top \bumpeq \circeq \coloneq - \curlyeqprec \curlyeqsucc \eqcirc \eqcolon - \eqsim \fallingdotseq \gtrdot \gtrless - \ltimes \rtimes \lessdot \ne - \ncong \nequiv \ngeq \ngtr - \nleq \nless \nmid \notin - \nprec \nsubset \nsubseteq \nsucc - \nsupset \nsupseteq \pitchfork \preccurlyeq - \risingdotseq \subsetneq \succcurlyeq \supsetneq - \varpropto \vartriangleleft \scurel - \vartriangleright \rightangle \equal \backcong - \eqdef \wedgeq \questeq \between - \veeeq \disin \varisins \isins - \isindot \varisinobar \isinobar \isinvb - \isinE \nisd \varnis \nis - \varniobar \niobar \bagmember \ratio - \Equiv \stareq \measeq \arceq - \rightassert \rightModels \smallin \smallowns - \notsmallowns \nsimeq'''.split()) - - _arrow_symbols = set(r""" - \leftarrow \longleftarrow \uparrow \Leftarrow \Longleftarrow - \Uparrow \rightarrow \longrightarrow \downarrow \Rightarrow - \Longrightarrow \Downarrow \leftrightarrow \updownarrow - \longleftrightarrow \updownarrow \Leftrightarrow - \Longleftrightarrow \Updownarrow \mapsto \longmapsto \nearrow - \hookleftarrow \hookrightarrow \searrow \leftharpoonup - \rightharpoonup \swarrow \leftharpoondown \rightharpoondown - \nwarrow \rightleftharpoons \leadsto \dashrightarrow - \dashleftarrow \leftleftarrows \leftrightarrows \Lleftarrow - \Rrightarrow \twoheadleftarrow \leftarrowtail \looparrowleft - \leftrightharpoons \curvearrowleft \circlearrowleft \Lsh - 
\upuparrows \upharpoonleft \downharpoonleft \multimap - \leftrightsquigarrow \rightrightarrows \rightleftarrows - \rightrightarrows \rightleftarrows \twoheadrightarrow - \rightarrowtail \looparrowright \rightleftharpoons - \curvearrowright \circlearrowright \Rsh \downdownarrows - \upharpoonright \downharpoonright \rightsquigarrow \nleftarrow - \nrightarrow \nLeftarrow \nRightarrow \nleftrightarrow - \nLeftrightarrow \to \Swarrow \Searrow \Nwarrow \Nearrow - \leftsquigarrow \overleftarrow \overleftrightarrow \cwopencirclearrow - \downzigzagarrow \cupleftarrow \rightzigzagarrow \twoheaddownarrow - \updownarrowbar \twoheaduparrow \rightarrowbar \updownarrows - \barleftarrow \mapsfrom \mapsdown \mapsup \Ldsh \Rdsh - """.split()) - - _spaced_symbols = _binary_operators | _relation_symbols | _arrow_symbols - - _punctuation_symbols = set(r', ; . ! \ldotp \cdotp'.split()) - - _overunder_symbols = set(r''' - \sum \prod \coprod \bigcap \bigcup \bigsqcup \bigvee - \bigwedge \bigodot \bigotimes \bigoplus \biguplus - '''.split()) - - _overunder_functions = set("lim liminf limsup sup max min".split()) - - _dropsub_symbols = set(r'\int \oint \iint \oiint \iiint \oiiint \iiiint'.split()) - - _fontnames = set("rm cal it tt sf bf bfit " - "default bb frak scr regular".split()) - - _function_names = set(""" - arccos csc ker min arcsin deg lg Pr arctan det lim sec arg dim - liminf sin cos exp limsup sinh cosh gcd ln sup cot hom log tan - coth inf max tanh""".split()) - - _ambi_delims = set(r""" - | \| / \backslash \uparrow \downarrow \updownarrow \Uparrow - \Downarrow \Updownarrow . \vert \Vert""".split()) - _left_delims = set(r""" - ( [ \{ < \lfloor \langle \lceil \lbrace \leftbrace \lbrack \leftparen \lgroup - """.split()) - _right_delims = set(r""" - ) ] \} > \rfloor \rangle \rceil \rbrace \rightbrace \rbrack \rightparen \rgroup - """.split()) - _delims = _left_delims | _right_delims | _ambi_delims - - _small_greek = set([unicodedata.name(chr(i)).split()[-1].lower() for i in - range(ord('\N{GREEK SMALL LETTER ALPHA}'), - ord('\N{GREEK SMALL LETTER OMEGA}') + 1)]) - _latin_alphabets = set(string.ascii_letters) - - def __init__(self) -> None: - p = types.SimpleNamespace() - - def set_names_and_parse_actions() -> None: - for key, val in vars(p).items(): - if not key.startswith('_'): - # Set names on (almost) everything -- very useful for debugging - # token, placeable, and auto_delim are forward references which - # are left without names to ensure useful error messages - if key not in ("token", "placeable", "auto_delim"): - val.setName(key) - # Set actions - if hasattr(self, key): - val.setParseAction(getattr(self, key)) - - # Root definitions. - - # In TeX parlance, a csname is a control sequence name (a "\foo"). 
- def csnames(group: str, names: Iterable[str]) -> Regex: - ends_with_alpha = [] - ends_with_nonalpha = [] - for name in names: - if name[-1].isalpha(): - ends_with_alpha.append(name) - else: - ends_with_nonalpha.append(name) - return Regex( - r"\\(?P<{group}>(?:{alpha})(?![A-Za-z]){additional}{nonalpha})".format( - group=group, - alpha="|".join(map(re.escape, ends_with_alpha)), - additional="|" if ends_with_nonalpha else "", - nonalpha="|".join(map(re.escape, ends_with_nonalpha)), - ) - ) - - p.float_literal = Regex(r"[-+]?([0-9]+\.?[0-9]*|\.[0-9]+)") - p.space = oneOf(self._space_widths)("space") - - p.style_literal = oneOf( - [str(e.value) for e in self._MathStyle])("style_literal") - - p.symbol = Regex( - r"[a-zA-Z0-9 +\-*/<>=:,.;!\?&'@()\[\]|\U00000080-\U0001ffff]" - r"|\\[%${}\[\]_|]" - + r"|\\(?:{})(?![A-Za-z])".format( - "|".join(map(re.escape, tex2uni))) - )("sym").leaveWhitespace() - p.unknown_symbol = Regex(r"\\[A-Za-z]+")("name") - - p.font = csnames("font", self._fontnames) - p.start_group = Optional(r"\math" + oneOf(self._fontnames)("font")) + "{" - p.end_group = Literal("}") - - p.delim = oneOf(self._delims) - - # Mutually recursive definitions. (Minimizing the number of Forward - # elements is important for speed.) - p.auto_delim = Forward() - p.placeable = Forward() - p.required_group = Forward() - p.optional_group = Forward() - p.token = Forward() - - set_names_and_parse_actions() # for mutually recursive definitions. - - p.optional_group <<= "{" + ZeroOrMore(p.token)("group") + "}" - p.required_group <<= "{" + OneOrMore(p.token)("group") + "}" - - p.customspace = cmd(r"\hspace", "{" + p.float_literal("space") + "}") - - p.accent = ( - csnames("accent", [*self._accent_map, *self._wide_accents]) - - p.placeable("sym")) - - p.function = csnames("name", self._function_names) - - p.group = p.start_group + ZeroOrMore(p.token)("group") + p.end_group - p.unclosed_group = (p.start_group + ZeroOrMore(p.token)("group") + StringEnd()) - - p.frac = cmd(r"\frac", p.required_group("num") + p.required_group("den")) - p.dfrac = cmd(r"\dfrac", p.required_group("num") + p.required_group("den")) - p.binom = cmd(r"\binom", p.required_group("num") + p.required_group("den")) - - p.genfrac = cmd( - r"\genfrac", - "{" + Optional(p.delim)("ldelim") + "}" - + "{" + Optional(p.delim)("rdelim") + "}" - + "{" + p.float_literal("rulesize") + "}" - + "{" + Optional(p.style_literal)("style") + "}" - + p.required_group("num") - + p.required_group("den")) - - p.sqrt = cmd( - r"\sqrt{value}", - Optional("[" + OneOrMore(NotAny("]") + p.token)("root") + "]") - + p.required_group("value")) - - p.overline = cmd(r"\overline", p.required_group("body")) - - p.overset = cmd( - r"\overset", - p.optional_group("annotation") + p.optional_group("body")) - p.underset = cmd( - r"\underset", - p.optional_group("annotation") + p.optional_group("body")) - - p.text = cmd(r"\text", QuotedString('{', '\\', endQuoteChar="}")) - - p.substack = cmd(r"\substack", - nested_expr(opener="{", closer="}", - content=Group(OneOrMore(p.token)) + - ZeroOrMore(Literal("\\\\").suppress()))("parts")) - - p.subsuper = ( - (Optional(p.placeable)("nucleus") - + OneOrMore(oneOf(["_", "^"]) - p.placeable)("subsuper") - + Regex("'*")("apostrophes")) - | Regex("'+")("apostrophes") - | (p.placeable("nucleus") + Regex("'*")("apostrophes")) - ) - - p.simple = p.space | p.customspace | p.font | p.subsuper - - p.token <<= ( - p.simple - | p.auto_delim - | p.unclosed_group - | p.unknown_symbol # Must be last - ) - - p.operatorname = cmd(r"\operatorname", 
"{" + ZeroOrMore(p.simple)("name") + "}") - - p.boldsymbol = cmd( - r"\boldsymbol", "{" + ZeroOrMore(p.simple)("value") + "}") - - p.placeable <<= ( - p.accent # Must be before symbol as all accents are symbols - | p.symbol # Must be second to catch all named symbols and single - # chars not in a group - | p.function - | p.operatorname - | p.group - | p.frac - | p.dfrac - | p.binom - | p.genfrac - | p.overset - | p.underset - | p.sqrt - | p.overline - | p.text - | p.boldsymbol - | p.substack - ) - - mdelim = r"\middle" - (p.delim("mdelim") | Error("Expected a delimiter")) - p.auto_delim <<= ( - r"\left" - (p.delim("left") | Error("Expected a delimiter")) - + ZeroOrMore(p.simple | p.auto_delim | mdelim)("mid") - + r"\right" - (p.delim("right") | Error("Expected a delimiter")) - ) - - # Leaf definitions. - p.math = OneOrMore(p.token) - p.math_string = QuotedString('$', '\\', unquoteResults=False) - p.non_math = Regex(r"(?:(?:\\[$])|[^$])*").leaveWhitespace() - p.main = ( - p.non_math + ZeroOrMore(p.math_string + p.non_math) + StringEnd() - ) - set_names_and_parse_actions() # for leaf definitions. - - self._expression = p.main - self._math_expression = p.math - - # To add space to nucleus operators after sub/superscripts - self._in_subscript_or_superscript = False - - def parse(self, s: str, fonts_object: Fonts, fontsize: float, dpi: float) -> Hlist: - """ - Parse expression *s* using the given *fonts_object* for - output, at the given *fontsize* and *dpi*. - - Returns the parse tree of `Node` instances. - """ - self._state_stack = [ - ParserState(fonts_object, 'default', 'rm', fontsize, dpi)] - self._em_width_cache: dict[tuple[str, float, float], float] = {} - try: - result = self._expression.parseString(s) - except ParseBaseException as err: - # explain becomes a plain method on pyparsing 3 (err.explain(0)). - raise ValueError("\n" + ParseException.explain(err, 0)) from None - self._state_stack = [] - self._in_subscript_or_superscript = False - # prevent operator spacing from leaking into a new expression - self._em_width_cache = {} - ParserElement.resetCache() - return T.cast(Hlist, result[0]) # Known return type from main. 
- - def get_state(self) -> ParserState: - """Get the current `State` of the parser.""" - return self._state_stack[-1] - - def pop_state(self) -> None: - """Pop a `State` off of the stack.""" - self._state_stack.pop() - - def push_state(self) -> None: - """Push a new `State` onto the stack, copying the current state.""" - self._state_stack.append(self.get_state().copy()) - - def main(self, toks: ParseResults) -> list[Hlist]: - return [Hlist(toks.asList())] - - def math_string(self, toks: ParseResults) -> ParseResults: - return self._math_expression.parseString(toks[0][1:-1], parseAll=True) - - def math(self, toks: ParseResults) -> T.Any: - hlist = Hlist(toks.asList()) - self.pop_state() - return [hlist] - - def non_math(self, toks: ParseResults) -> T.Any: - s = toks[0].replace(r'\$', '$') - symbols = [Char(c, self.get_state()) for c in s] - hlist = Hlist(symbols) - # We're going into math now, so set font to 'it' - self.push_state() - self.get_state().font = mpl.rcParams['mathtext.default'] - return [hlist] - - float_literal = staticmethod(pyparsing_common.convertToFloat) - - def text(self, toks: ParseResults) -> T.Any: - self.push_state() - state = self.get_state() - state.font = 'rm' - hlist = Hlist([Char(c, state) for c in toks[1]]) - self.pop_state() - return [hlist] - - def _make_space(self, percentage: float) -> Kern: - # In TeX, an em (the unit usually used to measure horizontal lengths) - # is not the width of the character 'm'; it is the same in different - # font styles (e.g. roman or italic). Mathtext, however, uses 'm' in - # the italic style so that horizontal spaces don't depend on the - # current font style. - state = self.get_state() - key = (state.font, state.fontsize, state.dpi) - width = self._em_width_cache.get(key) - if width is None: - metrics = state.fontset.get_metrics( - 'it', mpl.rcParams['mathtext.default'], 'm', - state.fontsize, state.dpi) - width = metrics.advance - self._em_width_cache[key] = width - return Kern(width * percentage) - - _space_widths = { - r'\,': 0.16667, # 3/18 em = 3 mu - r'\thinspace': 0.16667, # 3/18 em = 3 mu - r'\/': 0.16667, # 3/18 em = 3 mu - r'\>': 0.22222, # 4/18 em = 4 mu - r'\:': 0.22222, # 4/18 em = 4 mu - r'\;': 0.27778, # 5/18 em = 5 mu - r'\ ': 0.33333, # 6/18 em = 6 mu - r'~': 0.33333, # 6/18 em = 6 mu, nonbreakable - r'\enspace': 0.5, # 9/18 em = 9 mu - r'\quad': 1, # 1 em = 18 mu - r'\qquad': 2, # 2 em = 36 mu - r'\!': -0.16667, # -3/18 em = -3 mu - } - - def space(self, toks: ParseResults) -> T.Any: - num = self._space_widths[toks["space"]] - box = self._make_space(num) - return [box] - - def customspace(self, toks: ParseResults) -> T.Any: - return [self._make_space(toks["space"])] - - def symbol(self, s: str, loc: int, - toks: ParseResults | dict[str, str]) -> T.Any: - c = toks["sym"] - if c == "-": - # "U+2212 minus sign is the preferred representation of the unary - # and binary minus sign rather than the ASCII-derived U+002D - # hyphen-minus, because minus sign is unambiguous and because it - # is rendered with a more desirable length, usually longer than a - # hyphen." (https://www.unicode.org/reports/tr25/) - c = "\N{MINUS SIGN}" - try: - char = Char(c, self.get_state()) - except ValueError as err: - raise ParseFatalException(s, loc, - "Unknown symbol: %s" % c) from err - - if c in self._spaced_symbols: - # iterate until we find previous character, needed for cases - # such as ${ -2}$, $ -2$, or $ -2$. 
- prev_char = next((c for c in s[:loc][::-1] if c != ' '), '') - # Binary operators at start of string should not be spaced - # Also, operators in sub- or superscripts should not be spaced - if (self._in_subscript_or_superscript or ( - c in self._binary_operators and ( - len(s[:loc].split()) == 0 or prev_char == '{' or - prev_char in self._left_delims))): - return [char] - else: - return [Hlist([self._make_space(0.2), - char, - self._make_space(0.2)], - do_kern=True)] - elif c in self._punctuation_symbols: - prev_char = next((c for c in s[:loc][::-1] if c != ' '), '') - next_char = next((c for c in s[loc + 1:] if c != ' '), '') - - # Do not space commas between brackets - if c == ',': - if prev_char == '{' and next_char == '}': - return [char] - - # Do not space dots as decimal separators - if c == '.' and prev_char.isdigit() and next_char.isdigit(): - return [char] - else: - return [Hlist([char, self._make_space(0.2)], do_kern=True)] - return [char] - - def unknown_symbol(self, s: str, loc: int, toks: ParseResults) -> T.Any: - raise ParseFatalException(s, loc, f"Unknown symbol: {toks['name']}") - - _accent_map = { - r'hat': r'\circumflexaccent', - r'breve': r'\combiningbreve', - r'bar': r'\combiningoverline', - r'grave': r'\combininggraveaccent', - r'acute': r'\combiningacuteaccent', - r'tilde': r'\combiningtilde', - r'dot': r'\combiningdotabove', - r'ddot': r'\combiningdiaeresis', - r'dddot': r'\combiningthreedotsabove', - r'ddddot': r'\combiningfourdotsabove', - r'vec': r'\combiningrightarrowabove', - r'"': r'\combiningdiaeresis', - r"`": r'\combininggraveaccent', - r"'": r'\combiningacuteaccent', - r'~': r'\combiningtilde', - r'.': r'\combiningdotabove', - r'^': r'\circumflexaccent', - r'overrightarrow': r'\rightarrow', - r'overleftarrow': r'\leftarrow', - r'mathring': r'\circ', - } - - _wide_accents = set(r"widehat widetilde widebar".split()) - - def accent(self, toks: ParseResults) -> T.Any: - state = self.get_state() - thickness = state.get_current_underline_thickness() - accent = toks["accent"] - sym = toks["sym"] - accent_box: Node - if accent in self._wide_accents: - accent_box = AutoWidthChar( - '\\' + accent, sym.width, state, char_class=Accent) - else: - accent_box = Accent(self._accent_map[accent], state) - if accent == 'mathring': - accent_box.shrink() - accent_box.shrink() - centered = HCentered([Hbox(sym.width / 4.0), accent_box]) - centered.hpack(sym.width, 'exactly') - return Vlist([ - centered, - Vbox(0., thickness * 2.0), - Hlist([sym]) - ]) - - def function(self, s: str, loc: int, toks: ParseResults) -> T.Any: - hlist = self.operatorname(s, loc, toks) - hlist.function_name = toks["name"] - return hlist - - def operatorname(self, s: str, loc: int, toks: ParseResults) -> T.Any: - self.push_state() - state = self.get_state() - state.font = 'rm' - hlist_list: list[Node] = [] - # Change the font of Chars, but leave Kerns alone - name = toks["name"] - for c in name: - if isinstance(c, Char): - c.font = 'rm' - c._update_metrics() - hlist_list.append(c) - elif isinstance(c, str): - hlist_list.append(Char(c, state)) - else: - hlist_list.append(c) - next_char_loc = loc + len(name) + 1 - if isinstance(name, ParseResults): - next_char_loc += len('operatorname{}') - next_char = next((c for c in s[next_char_loc:] if c != ' '), '') - delimiters = self._delims | {'^', '_'} - if (next_char not in delimiters and - name not in self._overunder_functions): - # Add thin space except when followed by parenthesis, bracket, etc. 
- hlist_list += [self._make_space(self._space_widths[r'\,'])] - self.pop_state() - # if followed by a super/subscript, set flag to true - # This flag tells subsuper to add space after this operator - if next_char in {'^', '_'}: - self._in_subscript_or_superscript = True - else: - self._in_subscript_or_superscript = False - - return Hlist(hlist_list) - - def start_group(self, toks: ParseResults) -> T.Any: - self.push_state() - # Deal with LaTeX-style font tokens - if toks.get("font"): - self.get_state().font = toks.get("font") - return [] - - def group(self, toks: ParseResults) -> T.Any: - grp = Hlist(toks.get("group", [])) - return [grp] - - def required_group(self, toks: ParseResults) -> T.Any: - return Hlist(toks.get("group", [])) - - optional_group = required_group - - def end_group(self) -> T.Any: - self.pop_state() - return [] - - def unclosed_group(self, s: str, loc: int, toks: ParseResults) -> T.Any: - raise ParseFatalException(s, len(s), "Expected '}'") - - def font(self, toks: ParseResults) -> T.Any: - self.get_state().font = toks["font"] - return [] - - def is_overunder(self, nucleus: Node) -> bool: - if isinstance(nucleus, Char): - return nucleus.c in self._overunder_symbols - elif isinstance(nucleus, Hlist) and hasattr(nucleus, 'function_name'): - return nucleus.function_name in self._overunder_functions - return False - - def is_dropsub(self, nucleus: Node) -> bool: - if isinstance(nucleus, Char): - return nucleus.c in self._dropsub_symbols - return False - - def is_slanted(self, nucleus: Node) -> bool: - if isinstance(nucleus, Char): - return nucleus.is_slanted() - return False - - def subsuper(self, s: str, loc: int, toks: ParseResults) -> T.Any: - nucleus = toks.get("nucleus", Hbox(0)) - subsuper = toks.get("subsuper", []) - napostrophes = len(toks.get("apostrophes", [])) - - if not subsuper and not napostrophes: - return nucleus - - sub = super = None - while subsuper: - op, arg, *subsuper = subsuper - if op == '_': - if sub is not None: - raise ParseFatalException("Double subscript") - sub = arg - else: - if super is not None: - raise ParseFatalException("Double superscript") - super = arg - - state = self.get_state() - rule_thickness = state.fontset.get_underline_thickness( - state.font, state.fontsize, state.dpi) - xHeight = state.fontset.get_xheight( - state.font, state.fontsize, state.dpi) - - if napostrophes: - if super is None: - super = Hlist([]) - for i in range(napostrophes): - super.children.extend(self.symbol(s, loc, {"sym": "\\prime"})) - # kern() and hpack() needed to get the metrics right after - # extending - super.kern() - super.hpack() - - # Handle over/under symbols, such as sum or prod - if self.is_overunder(nucleus): - vlist = [] - shift = 0. 
- width = nucleus.width - if super is not None: - super.shrink() - width = max(width, super.width) - if sub is not None: - sub.shrink() - width = max(width, sub.width) - - vgap = rule_thickness * 3.0 - if super is not None: - hlist = HCentered([super]) - hlist.hpack(width, 'exactly') - vlist.extend([hlist, Vbox(0, vgap)]) - hlist = HCentered([nucleus]) - hlist.hpack(width, 'exactly') - vlist.append(hlist) - if sub is not None: - hlist = HCentered([sub]) - hlist.hpack(width, 'exactly') - vlist.extend([Vbox(0, vgap), hlist]) - shift = hlist.height + vgap + nucleus.depth - vlt = Vlist(vlist) - vlt.shift_amount = shift - result = Hlist([vlt]) - return [result] - - # We remove kerning on the last character for consistency (otherwise - # it will compute kerning based on non-shrunk characters and may put - # them too close together when superscripted) - # We change the width of the last character to match the advance to - # consider some fonts with weird metrics: e.g. stix's f has a width of - # 7.75 and a kerning of -4.0 for an advance of 3.72, and we want to put - # the superscript at the advance - last_char = nucleus - if isinstance(nucleus, Hlist): - new_children = nucleus.children - if len(new_children): - # remove last kern - if (isinstance(new_children[-1], Kern) and - hasattr(new_children[-2], '_metrics')): - new_children = new_children[:-1] - last_char = new_children[-1] - if hasattr(last_char, '_metrics'): - last_char.width = last_char._metrics.advance - # create new Hlist without kerning - nucleus = Hlist(new_children, do_kern=False) - else: - if isinstance(nucleus, Char): - last_char.width = last_char._metrics.advance - nucleus = Hlist([nucleus]) - - # Handle regular sub/superscripts - constants = _get_font_constant_set(state) - lc_height = last_char.height - lc_baseline = 0 - if self.is_dropsub(last_char): - lc_baseline = last_char.depth - - # Compute kerning for sub and super - superkern = constants.delta * xHeight - subkern = constants.delta * xHeight - if self.is_slanted(last_char): - superkern += constants.delta * xHeight - superkern += (constants.delta_slanted * - (lc_height - xHeight * 2. / 3.)) - if self.is_dropsub(last_char): - subkern = (3 * constants.delta - - constants.delta_integral) * lc_height - superkern = (3 * constants.delta + - constants.delta_integral) * lc_height - else: - subkern = 0 - - x: List - if super is None: - # node757 - # Note: One of super or sub must be a Node if we're in this function, but - # mypy can't know this, since it can't interpret pyparsing expressions, - # hence the cast. 
- x = Hlist([Kern(subkern), T.cast(Node, sub)]) - x.shrink() - if self.is_dropsub(last_char): - shift_down = lc_baseline + constants.subdrop * xHeight - else: - shift_down = constants.sub1 * xHeight - x.shift_amount = shift_down - else: - x = Hlist([Kern(superkern), super]) - x.shrink() - if self.is_dropsub(last_char): - shift_up = lc_height - constants.subdrop * xHeight - else: - shift_up = constants.sup1 * xHeight - if sub is None: - x.shift_amount = -shift_up - else: # Both sub and superscript - y = Hlist([Kern(subkern), sub]) - y.shrink() - if self.is_dropsub(last_char): - shift_down = lc_baseline + constants.subdrop * xHeight - else: - shift_down = constants.sub2 * xHeight - # If sub and superscript collide, move super up - clr = (2.0 * rule_thickness - - ((shift_up - x.depth) - (y.height - shift_down))) - if clr > 0.: - shift_up += clr - x = Vlist([ - x, - Kern((shift_up - x.depth) - (y.height - shift_down)), - y]) - x.shift_amount = shift_down - - if not self.is_dropsub(last_char): - x.width += constants.script_space * xHeight - - # Do we need to add a space after the nucleus? - # To find out, check the flag set by operatorname - spaced_nucleus = [nucleus, x] - if self._in_subscript_or_superscript: - spaced_nucleus += [self._make_space(self._space_widths[r'\,'])] - self._in_subscript_or_superscript = False - - result = Hlist(spaced_nucleus) - return [result] - - def _genfrac(self, ldelim: str, rdelim: str, rule: float | None, style: _MathStyle, - num: Hlist, den: Hlist) -> T.Any: - state = self.get_state() - thickness = state.get_current_underline_thickness() - - for _ in range(style.value): - num.shrink() - den.shrink() - cnum = HCentered([num]) - cden = HCentered([den]) - width = max(num.width, den.width) - cnum.hpack(width, 'exactly') - cden.hpack(width, 'exactly') - vlist = Vlist([cnum, # numerator - Vbox(0, thickness * 2.0), # space - Hrule(state, rule), # rule - Vbox(0, thickness * 2.0), # space - cden # denominator - ]) - - # Shift so the fraction line sits in the middle of the - # equals sign - metrics = state.fontset.get_metrics( - state.font, mpl.rcParams['mathtext.default'], - '=', state.fontsize, state.dpi) - shift = (cden.height - - ((metrics.ymax + metrics.ymin) / 2 - - thickness * 3.0)) - vlist.shift_amount = shift - - result = [Hlist([vlist, Hbox(thickness * 2.)])] - if ldelim or rdelim: - if ldelim == '': - ldelim = '.' - if rdelim == '': - rdelim = '.' 
- return self._auto_sized_delimiter(ldelim, - T.cast(list[T.Union[Box, Char, str]], - result), - rdelim) - return result - - def style_literal(self, toks: ParseResults) -> T.Any: - return self._MathStyle(int(toks["style_literal"])) - - def genfrac(self, toks: ParseResults) -> T.Any: - return self._genfrac( - toks.get("ldelim", ""), toks.get("rdelim", ""), - toks["rulesize"], toks.get("style", self._MathStyle.TEXTSTYLE), - toks["num"], toks["den"]) - - def frac(self, toks: ParseResults) -> T.Any: - return self._genfrac( - "", "", self.get_state().get_current_underline_thickness(), - self._MathStyle.TEXTSTYLE, toks["num"], toks["den"]) - - def dfrac(self, toks: ParseResults) -> T.Any: - return self._genfrac( - "", "", self.get_state().get_current_underline_thickness(), - self._MathStyle.DISPLAYSTYLE, toks["num"], toks["den"]) - - def binom(self, toks: ParseResults) -> T.Any: - return self._genfrac( - "(", ")", 0, - self._MathStyle.TEXTSTYLE, toks["num"], toks["den"]) - - def _genset(self, s: str, loc: int, toks: ParseResults) -> T.Any: - annotation = toks["annotation"] - body = toks["body"] - thickness = self.get_state().get_current_underline_thickness() - - annotation.shrink() - cannotation = HCentered([annotation]) - cbody = HCentered([body]) - width = max(cannotation.width, cbody.width) - cannotation.hpack(width, 'exactly') - cbody.hpack(width, 'exactly') - - vgap = thickness * 3 - if s[loc + 1] == "u": # \underset - vlist = Vlist([cbody, # body - Vbox(0, vgap), # space - cannotation # annotation - ]) - # Shift so the body sits in the same vertical position - vlist.shift_amount = cbody.depth + cannotation.height + vgap - else: # \overset - vlist = Vlist([cannotation, # annotation - Vbox(0, vgap), # space - cbody # body - ]) - - # To add horizontal gap between symbols: wrap the Vlist into - # an Hlist and extend it with an Hbox(0, horizontal_gap) - return vlist - - overset = underset = _genset - - def sqrt(self, toks: ParseResults) -> T.Any: - root = toks.get("root") - body = toks["value"] - state = self.get_state() - thickness = state.get_current_underline_thickness() - - # Determine the height of the body, and add a little extra to - # the height so it doesn't seem cramped - height = body.height - body.shift_amount + thickness * 5.0 - depth = body.depth + body.shift_amount - check = AutoHeightChar(r'\__sqrt__', height, depth, state, always=True) - height = check.height - check.shift_amount - depth = check.depth + check.shift_amount - - # Put a little extra space to the left and right of the body - padded_body = Hlist([Hbox(2 * thickness), body, Hbox(2 * thickness)]) - rightside = Vlist([Hrule(state), Glue('fill'), padded_body]) - # Stretch the glue between the hrule and the body - rightside.vpack(height + (state.fontsize * state.dpi) / (100.0 * 12.0), - 'exactly', depth) - - # Add the root and shift it upward so it is above the tick. - # The value of 0.6 is a hard-coded hack ;) - if not root: - root = Box(check.width * 0.5, 0., 0.) 
- else: - root = Hlist(root) - root.shrink() - root.shrink() - - root_vlist = Vlist([Hlist([root])]) - root_vlist.shift_amount = -height * 0.6 - - hlist = Hlist([root_vlist, # Root - # Negative kerning to put root over tick - Kern(-check.width * 0.5), - check, # Check - rightside]) # Body - return [hlist] - - def overline(self, toks: ParseResults) -> T.Any: - body = toks["body"] - - state = self.get_state() - thickness = state.get_current_underline_thickness() - - height = body.height - body.shift_amount + thickness * 3.0 - depth = body.depth + body.shift_amount - - # Place overline above body - rightside = Vlist([Hrule(state), Glue('fill'), Hlist([body])]) - - # Stretch the glue between the hrule and the body - rightside.vpack(height + (state.fontsize * state.dpi) / (100.0 * 12.0), - 'exactly', depth) - - hlist = Hlist([rightside]) - return [hlist] - - def _auto_sized_delimiter(self, front: str, - middle: list[Box | Char | str], - back: str) -> T.Any: - state = self.get_state() - if len(middle): - height = max([x.height for x in middle if not isinstance(x, str)]) - depth = max([x.depth for x in middle if not isinstance(x, str)]) - factor = None - for idx, el in enumerate(middle): - if isinstance(el, str) and el == '\\middle': - c = T.cast(str, middle[idx + 1]) # Should be one of p.delims. - if c != '.': - middle[idx + 1] = AutoHeightChar( - c, height, depth, state, factor=factor) - else: - middle.remove(c) - del middle[idx] - # There should only be \middle and its delimiter as str, which have - # just been removed. - middle_part = T.cast(list[T.Union[Box, Char]], middle) - else: - height = 0 - depth = 0 - factor = 1.0 - middle_part = [] - - parts: list[Node] = [] - # \left. and \right. aren't supposed to produce any symbols - if front != '.': - parts.append( - AutoHeightChar(front, height, depth, state, factor=factor)) - parts.extend(middle_part) - if back != '.': - parts.append( - AutoHeightChar(back, height, depth, state, factor=factor)) - hlist = Hlist(parts) - return hlist - - def auto_delim(self, toks: ParseResults) -> T.Any: - return self._auto_sized_delimiter( - toks["left"], - # if "mid" in toks ... can be removed when requiring pyparsing 3. 
- toks["mid"].asList() if "mid" in toks else [], - toks["right"]) - - def boldsymbol(self, toks: ParseResults) -> T.Any: - self.push_state() - state = self.get_state() - hlist: list[Node] = [] - name = toks["value"] - for c in name: - if isinstance(c, Hlist): - k = c.children[1] - if isinstance(k, Char): - k.font = "bf" - k._update_metrics() - hlist.append(c) - elif isinstance(c, Char): - c.font = "bf" - if (c.c in self._latin_alphabets or - c.c[1:] in self._small_greek): - c.font = "bfit" - c._update_metrics() - c._update_metrics() - hlist.append(c) - else: - hlist.append(c) - self.pop_state() - - return Hlist(hlist) - - def substack(self, toks: ParseResults) -> T.Any: - parts = toks["parts"] - state = self.get_state() - thickness = state.get_current_underline_thickness() - - hlist = [Hlist(k) for k in parts[0]] - max_width = max(map(lambda c: c.width, hlist)) - - vlist = [] - for sub in hlist: - cp = HCentered([sub]) - cp.hpack(max_width, 'exactly') - vlist.append(cp) - - stack = [val - for pair in zip(vlist, [Vbox(0, thickness * 2)] * len(vlist)) - for val in pair] - del stack[-1] - vlt = Vlist(stack) - result = [Hlist([vlt])] - return result diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/_numba/kernels/mean_.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/_numba/kernels/mean_.py deleted file mode 100644 index f415804781753372a5715b6ffee6a7ab8cc70b64..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/_numba/kernels/mean_.py +++ /dev/null @@ -1,196 +0,0 @@ -""" -Numba 1D mean kernels that can be shared by -* Dataframe / Series -* groupby -* rolling / expanding - -Mirrors pandas/_libs/window/aggregation.pyx -""" -from __future__ import annotations - -from typing import TYPE_CHECKING - -import numba -import numpy as np - -from pandas.core._numba.kernels.shared import is_monotonic_increasing -from pandas.core._numba.kernels.sum_ import grouped_kahan_sum - -if TYPE_CHECKING: - from pandas._typing import npt - - -@numba.jit(nopython=True, nogil=True, parallel=False) -def add_mean( - val: float, - nobs: int, - sum_x: float, - neg_ct: int, - compensation: float, - num_consecutive_same_value: int, - prev_value: float, -) -> tuple[int, float, int, float, int, float]: - if not np.isnan(val): - nobs += 1 - y = val - compensation - t = sum_x + y - compensation = t - sum_x - y - sum_x = t - if val < 0: - neg_ct += 1 - - if val == prev_value: - num_consecutive_same_value += 1 - else: - num_consecutive_same_value = 1 - prev_value = val - - return nobs, sum_x, neg_ct, compensation, num_consecutive_same_value, prev_value - - -@numba.jit(nopython=True, nogil=True, parallel=False) -def remove_mean( - val: float, nobs: int, sum_x: float, neg_ct: int, compensation: float -) -> tuple[int, float, int, float]: - if not np.isnan(val): - nobs -= 1 - y = -val - compensation - t = sum_x + y - compensation = t - sum_x - y - sum_x = t - if val < 0: - neg_ct -= 1 - return nobs, sum_x, neg_ct, compensation - - -@numba.jit(nopython=True, nogil=True, parallel=False) -def sliding_mean( - values: np.ndarray, - result_dtype: np.dtype, - start: np.ndarray, - end: np.ndarray, - min_periods: int, -) -> tuple[np.ndarray, list[int]]: - N = len(start) - nobs = 0 - sum_x = 0.0 - neg_ct = 0 - compensation_add = 0.0 - compensation_remove = 0.0 - - is_monotonic_increasing_bounds = is_monotonic_increasing( - start - ) and is_monotonic_increasing(end) - - output = np.empty(N, 
dtype=result_dtype) - - for i in range(N): - s = start[i] - e = end[i] - if i == 0 or not is_monotonic_increasing_bounds: - prev_value = values[s] - num_consecutive_same_value = 0 - - for j in range(s, e): - val = values[j] - ( - nobs, - sum_x, - neg_ct, - compensation_add, - num_consecutive_same_value, - prev_value, - ) = add_mean( - val, - nobs, - sum_x, - neg_ct, - compensation_add, - num_consecutive_same_value, - prev_value, # pyright: ignore[reportGeneralTypeIssues] - ) - else: - for j in range(start[i - 1], s): - val = values[j] - nobs, sum_x, neg_ct, compensation_remove = remove_mean( - val, nobs, sum_x, neg_ct, compensation_remove - ) - - for j in range(end[i - 1], e): - val = values[j] - ( - nobs, - sum_x, - neg_ct, - compensation_add, - num_consecutive_same_value, - prev_value, - ) = add_mean( - val, - nobs, - sum_x, - neg_ct, - compensation_add, - num_consecutive_same_value, - prev_value, # pyright: ignore[reportGeneralTypeIssues] - ) - - if nobs >= min_periods and nobs > 0: - result = sum_x / nobs - if num_consecutive_same_value >= nobs: - result = prev_value - elif neg_ct == 0 and result < 0: - result = 0 - elif neg_ct == nobs and result > 0: - result = 0 - else: - result = np.nan - - output[i] = result - - if not is_monotonic_increasing_bounds: - nobs = 0 - sum_x = 0.0 - neg_ct = 0 - compensation_remove = 0.0 - - # na_position is empty list since float64 can already hold nans - # Do list comprehension, since numba cannot figure out that na_pos is - # empty list of ints on its own - na_pos = [0 for i in range(0)] - return output, na_pos - - -@numba.jit(nopython=True, nogil=True, parallel=False) -def grouped_mean( - values: np.ndarray, - result_dtype: np.dtype, - labels: npt.NDArray[np.intp], - ngroups: int, - min_periods: int, -) -> tuple[np.ndarray, list[int]]: - output, nobs_arr, comp_arr, consecutive_counts, prev_vals = grouped_kahan_sum( - values, result_dtype, labels, ngroups - ) - - # Post-processing, replace sums that don't satisfy min_periods - for lab in range(ngroups): - nobs = nobs_arr[lab] - num_consecutive_same_value = consecutive_counts[lab] - prev_value = prev_vals[lab] - sum_x = output[lab] - if nobs >= min_periods: - if num_consecutive_same_value >= nobs: - result = prev_value * nobs - else: - result = sum_x - else: - result = np.nan - result /= nobs - output[lab] = result - - # na_position is empty list since float64 can already hold nans - # Do list comprehension, since numba cannot figure out that na_pos is - # empty list of ints on its own - na_pos = [0 for i in range(0)] - return output, na_pos diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/apply/test_series_apply_relabeling.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/apply/test_series_apply_relabeling.py deleted file mode 100644 index cdfa054f91c9b67261d715cd7812a53d1b2d4b2f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/apply/test_series_apply_relabeling.py +++ /dev/null @@ -1,39 +0,0 @@ -import pandas as pd -import pandas._testing as tm - - -def test_relabel_no_duplicated_method(): - # this is to test there is no duplicated method used in agg - df = pd.DataFrame({"A": [1, 2, 1, 2], "B": [1, 2, 3, 4]}) - - result = df["A"].agg(foo="sum") - expected = df["A"].agg({"foo": "sum"}) - tm.assert_series_equal(result, expected) - - result = df["B"].agg(foo="min", bar="max") - expected = df["B"].agg({"foo": "min", "bar": "max"}) - 
tm.assert_series_equal(result, expected) - - msg = "using Series.[sum|min|max]" - with tm.assert_produces_warning(FutureWarning, match=msg): - result = df["B"].agg(foo=sum, bar=min, cat="max") - msg = "using Series.[sum|min|max]" - with tm.assert_produces_warning(FutureWarning, match=msg): - expected = df["B"].agg({"foo": sum, "bar": min, "cat": "max"}) - tm.assert_series_equal(result, expected) - - -def test_relabel_duplicated_method(): - # this is to test with nested renaming, duplicated method can be used - # if they are assigned with different new names - df = pd.DataFrame({"A": [1, 2, 1, 2], "B": [1, 2, 3, 4]}) - - result = df["A"].agg(foo="sum", bar="sum") - expected = pd.Series([6, 6], index=["foo", "bar"], name="A") - tm.assert_series_equal(result, expected) - - msg = "using Series.min" - with tm.assert_produces_warning(FutureWarning, match=msg): - result = df["B"].agg(foo=min, bar="min") - expected = pd.Series([1, 1], index=["foo", "bar"], name="B") - tm.assert_series_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/__init__.py deleted file mode 100644 index 32c1f0025f0205192e3962e0d4633ef38fd8d2de..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/__init__.py +++ /dev/null @@ -1,166 +0,0 @@ -""" -Python HTTP library with thread-safe connection pooling, file post support, user friendly, and more -""" - -from __future__ import annotations - -# Set default logging handler to avoid "No handler found" warnings. -import logging -import typing -import warnings -from logging import NullHandler - -from . import exceptions -from ._base_connection import _TYPE_BODY -from ._collections import HTTPHeaderDict -from ._version import __version__ -from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, connection_from_url -from .filepost import _TYPE_FIELDS, encode_multipart_formdata -from .poolmanager import PoolManager, ProxyManager, proxy_from_url -from .response import BaseHTTPResponse, HTTPResponse -from .util.request import make_headers -from .util.retry import Retry -from .util.timeout import Timeout - -# Ensure that Python is compiled with OpenSSL 1.1.1+ -# If the 'ssl' module isn't available at all that's -# fine, we only care if the module is available. -try: - import ssl -except ImportError: - pass -else: - if not ssl.OPENSSL_VERSION.startswith("OpenSSL "): # Defensive: - warnings.warn( - "urllib3 v2.0 only supports OpenSSL 1.1.1+, currently " - f"the 'ssl' module is compiled with {ssl.OPENSSL_VERSION!r}. " - "See: https://github.com/urllib3/urllib3/issues/3020", - exceptions.NotOpenSSLWarning, - ) - elif ssl.OPENSSL_VERSION_INFO < (1, 1, 1): # Defensive: - raise ImportError( - "urllib3 v2.0 only supports OpenSSL 1.1.1+, currently " - f"the 'ssl' module is compiled with {ssl.OPENSSL_VERSION!r}. " - "See: https://github.com/urllib3/urllib3/issues/2168" - ) - -# === NOTE TO REPACKAGERS AND VENDORS === -# Please delete this block, this logic is only -# for urllib3 being distributed via PyPI. -# See: https://github.com/urllib3/urllib3/issues/2680 -try: - import urllib3_secure_extra # type: ignore # noqa: F401 -except ModuleNotFoundError: - pass -else: - warnings.warn( - "'urllib3[secure]' extra is deprecated and will be removed " - "in urllib3 v2.1.0. 
Read more in this issue: " - "https://github.com/urllib3/urllib3/issues/2680", - category=DeprecationWarning, - stacklevel=2, - ) - -__author__ = "Andrey Petrov (andrey.petrov@shazow.net)" -__license__ = "MIT" -__version__ = __version__ - -__all__ = ( - "HTTPConnectionPool", - "HTTPHeaderDict", - "HTTPSConnectionPool", - "PoolManager", - "ProxyManager", - "HTTPResponse", - "Retry", - "Timeout", - "add_stderr_logger", - "connection_from_url", - "disable_warnings", - "encode_multipart_formdata", - "make_headers", - "proxy_from_url", - "request", - "BaseHTTPResponse", -) - -logging.getLogger(__name__).addHandler(NullHandler()) - - -def add_stderr_logger( - level: int = logging.DEBUG, -) -> logging.StreamHandler[typing.TextIO]: - """ - Helper for quickly adding a StreamHandler to the logger. Useful for - debugging. - - Returns the handler after adding it. - """ - # This method needs to be in this __init__.py to get the __name__ correct - # even if urllib3 is vendored within another package. - logger = logging.getLogger(__name__) - handler = logging.StreamHandler() - handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s")) - logger.addHandler(handler) - logger.setLevel(level) - logger.debug("Added a stderr logging handler to logger: %s", __name__) - return handler - - -# ... Clean up. -del NullHandler - - -# All warning filters *must* be appended unless you're really certain that they -# shouldn't be: otherwise, it's very hard for users to use most Python -# mechanisms to silence them. -# SecurityWarning's always go off by default. -warnings.simplefilter("always", exceptions.SecurityWarning, append=True) -# InsecurePlatformWarning's don't vary between requests, so we keep it default. -warnings.simplefilter("default", exceptions.InsecurePlatformWarning, append=True) - - -def disable_warnings(category: type[Warning] = exceptions.HTTPWarning) -> None: - """ - Helper for quickly disabling all urllib3 warnings. - """ - warnings.simplefilter("ignore", category) - - -_DEFAULT_POOL = PoolManager() - - -def request( - method: str, - url: str, - *, - body: _TYPE_BODY | None = None, - fields: _TYPE_FIELDS | None = None, - headers: typing.Mapping[str, str] | None = None, - preload_content: bool | None = True, - decode_content: bool | None = True, - redirect: bool | None = True, - retries: Retry | bool | int | None = None, - timeout: Timeout | float | int | None = 3, - json: typing.Any | None = None, -) -> BaseHTTPResponse: - """ - A convenience, top-level request method. It uses a module-global ``PoolManager`` instance. - Therefore, its side effects could be shared across dependencies relying on it. - To avoid side effects create a new ``PoolManager`` instance and use it instead. - The method does not accept low-level ``**urlopen_kw`` keyword arguments. 
- """ - - return _DEFAULT_POOL.request( - method, - url, - body=body, - fields=fields, - headers=headers, - preload_content=preload_content, - decode_content=decode_content, - redirect=redirect, - retries=retries, - timeout=timeout, - json=json, - ) diff --git a/spaces/pyimagesearch/gif-creator/app.py b/spaces/pyimagesearch/gif-creator/app.py deleted file mode 100644 index 55897bea2b48b9d422e5cdbc365df7bc012f39a4..0000000000000000000000000000000000000000 --- a/spaces/pyimagesearch/gif-creator/app.py +++ /dev/null @@ -1,66 +0,0 @@ -"""Application file.""" -from zipfile import ZipFile -import os -import imageio -import gradio as gr - -def create_gif(filenames, name_gif): - filenames = sorted(filenames) - images = [] - for filename in filenames: - images.append(imageio.imread(filename)) - - kargs = {"duration": 2.00} - imageio.mimsave(name_gif, images, "GIF", **kargs) - -def upload_file(file): - folder_name = file.name.split(".")[0] - - print(folder_name) - print("EXTRACTING ....") - # Extract the zip file - with ZipFile(file.name, "r") as zObject: - # Extracting specific file in the zip - # into a specific location. - zObject.extractall(path=folder_name) - - zip_filename = os.listdir(folder_name)[0] - filenames = os.listdir(f"{folder_name}/{zip_filename}") - filenames = sorted(filenames) - filenames = [f"{folder_name}/{zip_filename}/{name}" for name in filenames] - - create_gif(filenames, "created_gif.gif") - return "created_gif.gif" - -with gr.Blocks() as demo: - gr.Markdown(""" - If you have a sequence of images, that you want to turn a gif, you can use this app to do so. - - Folder Structure: - - folder_name - |- epoch001.png - |- epoch002.png - |- epoch003.png - |- ... - └── epoch100.png - - Turn this folder into a zip: - ```shell - zip -r folder_name.zip folder_name - ``` - - Upload this zip file and the app will create a gif for you. 
- """) - output_gif = gr.Image() - upload_button = gr.UploadButton( - label="Click to Upload a Zip file", - ) - upload_button.upload( - upload_file, - inputs=upload_button, - outputs=output_gif, - ) - - -demo.launch() \ No newline at end of file diff --git a/spaces/qingxu98/gpt-academic/crazy_functions/test_project/cpp/cppipc/queue.h b/spaces/qingxu98/gpt-academic/crazy_functions/test_project/cpp/cppipc/queue.h deleted file mode 100644 index a21f3446e06b5826af7b554c8a7d9c5d80848b62..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/crazy_functions/test_project/cpp/cppipc/queue.h +++ /dev/null @@ -1,216 +0,0 @@ -#pragma once - -#include -#include -#include // [[since C++14]]: std::exchange -#include -#include -#include -#include -#include -#include -#include // assert - -#include "libipc/def.h" -#include "libipc/shm.h" -#include "libipc/rw_lock.h" - -#include "libipc/utility/log.h" -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" - -namespace ipc { -namespace detail { - -class queue_conn { -protected: - circ::cc_t connected_ = 0; - shm::handle elems_h_; - - template - Elems* open(char const * name) { - if (name == nullptr || name[0] == '\0') { - ipc::error("fail open waiter: name is empty!\n"); - return nullptr; - } - if (!elems_h_.acquire(name, sizeof(Elems))) { - return nullptr; - } - auto elems = static_cast(elems_h_.get()); - if (elems == nullptr) { - ipc::error("fail acquire elems: %s\n", name); - return nullptr; - } - elems->init(); - return elems; - } - - void close() { - elems_h_.release(); - } - -public: - queue_conn() = default; - queue_conn(const queue_conn&) = delete; - queue_conn& operator=(const queue_conn&) = delete; - - bool connected() const noexcept { - return connected_ != 0; - } - - circ::cc_t connected_id() const noexcept { - return connected_; - } - - template - auto connect(Elems* elems) noexcept - /*needs 'optional' here*/ - -> std::tuple().cursor())> { - if (elems == nullptr) return {}; - // if it's already connected, just return - if (connected()) return {connected(), false, 0}; - connected_ = elems->connect_receiver(); - return {connected(), true, elems->cursor()}; - } - - template - bool disconnect(Elems* elems) noexcept { - if (elems == nullptr) return false; - // if it's already disconnected, just return false - if (!connected()) return false; - elems->disconnect_receiver(std::exchange(connected_, 0)); - return true; - } -}; - -template -class queue_base : public queue_conn { - using base_t = queue_conn; - -public: - using elems_t = Elems; - using policy_t = typename elems_t::policy_t; - -protected: - elems_t * elems_ = nullptr; - decltype(std::declval().cursor()) cursor_ = 0; - bool sender_flag_ = false; - -public: - using base_t::base_t; - - queue_base() = default; - - explicit queue_base(char const * name) - : queue_base{} { - elems_ = open(name); - } - - explicit queue_base(elems_t * elems) noexcept - : queue_base{} { - assert(elems != nullptr); - elems_ = elems; - } - - /* not virtual */ ~queue_base() { - base_t::close(); - } - - elems_t * elems() noexcept { return elems_; } - elems_t const * elems() const noexcept { return elems_; } - - bool ready_sending() noexcept { - if (elems_ == nullptr) return false; - return sender_flag_ || (sender_flag_ = elems_->connect_sender()); - } - - void shut_sending() noexcept { - if (elems_ == nullptr) return; - if (!sender_flag_) return; - elems_->disconnect_sender(); - } - - bool connect() noexcept { - auto tp = base_t::connect(elems_); - if (std::get<0>(tp) && 
std::get<1>(tp)) { - cursor_ = std::get<2>(tp); - return true; - } - return std::get<0>(tp); - } - - bool disconnect() noexcept { - return base_t::disconnect(elems_); - } - - std::size_t conn_count() const noexcept { - return (elems_ == nullptr) ? static_cast(invalid_value) : elems_->conn_count(); - } - - bool valid() const noexcept { - return elems_ != nullptr; - } - - bool empty() const noexcept { - return !valid() || (cursor_ == elems_->cursor()); - } - - template - bool push(F&& prep, P&&... params) { - if (elems_ == nullptr) return false; - return elems_->push(this, [&](void* p) { - if (prep(p)) ::new (p) T(std::forward

    (params)...); - }); - } - - template - bool force_push(F&& prep, P&&... params) { - if (elems_ == nullptr) return false; - return elems_->force_push(this, [&](void* p) { - if (prep(p)) ::new (p) T(std::forward

    (params)...); - }); - } - - template - bool pop(T& item, F&& out) { - if (elems_ == nullptr) { - return false; - } - return elems_->pop(this, &(this->cursor_), [&item](void* p) { - ::new (&item) T(std::move(*static_cast(p))); - }, std::forward(out)); - } -}; - -} // namespace detail - -template -class queue final : public detail::queue_base> { - using base_t = detail::queue_base>; - -public: - using value_t = T; - - using base_t::base_t; - - template - bool push(P&&... params) { - return base_t::template push(std::forward

    (params)...); - } - - template - bool force_push(P&&... params) { - return base_t::template force_push(std::forward

    (params)...); - } - - bool pop(T& item) { - return base_t::pop(item, [](bool) {}); - } - - template - bool pop(T& item, F&& out) { - return base_t::pop(item, std::forward(out)); - } -}; - -} // namespace ipc diff --git a/spaces/qinzhu/diy-girlfriend/utils.py b/spaces/qinzhu/diy-girlfriend/utils.py deleted file mode 100644 index 8a4413ca864a9acdcdd2e22387d2a3f283e143ee..0000000000000000000000000000000000000000 --- a/spaces/qinzhu/diy-girlfriend/utils.py +++ /dev/null @@ -1,226 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.ERROR) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cuda') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - 
filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/qtoino/form_matcher/public/form4.html b/spaces/qtoino/form_matcher/public/form4.html deleted file mode 100644 index 9c9923874e6bfb8c9a06ff89d71b5a1e7154f0c3..0000000000000000000000000000000000000000 --- a/spaces/qtoino/form_matcher/public/form4.html +++ /dev/null @@ -1,17 +0,0 @@ - - - - - Email Form - - -

    - - -
    - - -
    - -
    - diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Free Brewii 4.3e Download [New Version].rar ((HOT)).md b/spaces/quidiaMuxgu/Expedit-SAM/Free Brewii 4.3e Download [New Version].rar ((HOT)).md deleted file mode 100644 index 490e7d6f89467030c233d74f0722fd16f0c36185..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Free Brewii 4.3e Download [New Version].rar ((HOT)).md +++ /dev/null @@ -1,42 +0,0 @@ - -

    How to Mod Your Wii with Brewii 4.3e [New Version].rar

    -

    If you are looking for a way to mod your Wii and unlock its full potential, you might have heard of Brewii 4.3e [New Version].rar. This is a software package that installs custom firmware on your Wii and gives you access to features and applications that are not available on the official system. In this article, we will show you how to download Brewii 4.3e [New Version].rar for free and how to use it to mod your Wii safely and easily.
    

    -

    free brewii 4.3e download [New Version].rar


    Download Ziphttps://geags.com/2uCszj



    -

    What is Brewii 4.3e [New Version].rar?

    -

    Brewii 4.3e [New Version].rar is an archive that contains the latest version of Brewii, a tool that lets you install the Homebrew Channel on your Wii. The Homebrew Channel is a platform that allows you to run homebrew applications and games on your Wii, such as emulators, media players, backup loaders, and more. Brewii 4.3e [New Version].rar works with any Wii system menu version up to 4.3e, which is the most recent version Nintendo released.
    

    -

    Why should you use Brewii 4.3e [New Version].rar?

    -

    There are many benefits of using Brewii 4.3e [New Version].rar to mod your Wii. Here are some of them:

    -
      -
    • You can play games from other regions and consoles on your Wii, such as NES, SNES, N64, GameCube, and more.
    • -
    • You can backup your games and play them from a USB drive or an SD card, saving space and protecting your discs from scratches.
    • -
    • You can customize your Wii with themes, skins, sounds, and fonts.
    • -
    • You can access online services and features that are no longer supported by Nintendo, such as WiiConnect24 and Netflix.
    • -
    • You can enhance your gaming experience with cheats, mods, hacks, and trainers.
    • -
    • You can update your Wii without losing your homebrew access.
    • -
    -

    How to download Brewii 4.3e [New Version].rar for free?

    -

    Downloading Brewii 4.3e [New Version].rar for free is very easy and fast. You just need to follow these steps:

    -
      -
    1. Go to this link and click on the download button.
    2. -
    3. Wait for the file to be downloaded on your computer. It should be around 20 MB in size.
    4. -
    5. Extract the file using a program like WinRAR or 7-Zip. You should get a folder called Brewii 4.3e [New Version].
    6. -
    7. Copy the folder to the root of an SD card that is formatted in FAT32.
    8. -
    9. Eject the SD card from your computer and insert it into your Wii.
    10. -
    -

    How to use Brewii 4.3e [New Version].rar to mod your Wii?

    -

    Using Brewii 4.3e [New Version].rar to mod your Wii is also very simple and safe. You just need to follow these steps:

    -
      -
    1. Turn on your Wii and go to the Settings menu.
    2. -
    3. Select Data Management and then Channels.
    4. -
    5. Select SD Card and you should see a channel called Brewii.
    6. -
    7. Select it and press Yes when asked if you want to load it.
    8. -
    9. Follow the instructions on the screen and wait for the installation process to finish.
    10. -
    11. Congratulations! You have successfully installed the Homebrew Channel on your Wii!
    12. -
    -

    Conclusion

    -

    Brewii 4.3e [New Version].rar is a great software that allows you to mod your Wii and enjoy many features and applications that are not available on the official system. It is easy to download and use, and it works with any Wii system version up to 4.3e. If you want to unlock your Wii's full potential, you should definitely try Brewii 4.3e [New Version].rar today!

    -

    -

    

    
    -
    -
    \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/I Spit On Your Grave Full Movie In Hindi Free 19.md b/spaces/quidiaMuxgu/Expedit-SAM/I Spit On Your Grave Full Movie In Hindi Free 19.md deleted file mode 100644 index de0cbd44bf33ce8dee946591b0ef1d25eaa92cdd..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/I Spit On Your Grave Full Movie In Hindi Free 19.md +++ /dev/null @@ -1,6 +0,0 @@ -

    I Spit On Your Grave Full Movie In Hindi Free 19


    DOWNLOADhttps://geags.com/2uCqdx



    
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Mask Movie Dubbed In Punjabi ((TOP)) Free Downloadl.md b/spaces/quidiaMuxgu/Expedit-SAM/Mask Movie Dubbed In Punjabi ((TOP)) Free Downloadl.md deleted file mode 100644 index dcd8b9f6858dc01e051172d1039678c5a63a9fb7..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Mask Movie Dubbed In Punjabi ((TOP)) Free Downloadl.md +++ /dev/null @@ -1,7 +0,0 @@ - -

    As we all are seeing that in todays digital era, everyone likes to watch movies in high quality, if any new movie is released, first of all, they go to the cinema and show it on the OTT platform, but here these To watch movies, you have to pay money, which is not the case with many people. And similar to people searching the net like free movie downloading websites, keep searching on the net, Vega movies. I am one such popular free movie website where Malayalam Movie, English Movie, Hindi movies, Telugu movies, and South movies can be downloaded for free.

    -

    

    -

    Mask Movie Dubbed In Punjabi Free Downloadl


    Download Zip >>>>> https://geags.com/2uCqL8



    -

    

    
    -
    -
    \ No newline at end of file diff --git a/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/__init__.py b/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Cars 2 The Video Game PC RELOADED Serial Number.rar.md b/spaces/raedeXanto/academic-chatgpt-beta/Cars 2 The Video Game PC RELOADED Serial Number.rar.md deleted file mode 100644 index d0414c22133cc05123357da24d2969465532790f..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Cars 2 The Video Game PC RELOADED Serial Number.rar.md +++ /dev/null @@ -1,131 +0,0 @@ -
    -

    Cars 2 The Video Game PC RELOADED Serial Number.rar: A Guide for Gamers

    -

    If you are a fan of Disney Pixar's Cars franchise, you might be interested in playing Cars 2 The Video Game, a racing and action-adventure game based on the animated film of the same name. But what if you don't have the original game or you want to play it for free? That's where Cars 2 The Video Game PC RELOADED Serial Number.rar comes in. In this article, we will explain what this file is, how to download and install it, and how to play the game. We will also answer some frequently asked questions about the game and RELOADED.

    -

    Cars 2 The Video Game PC RELOADED Serial Number.rar


    DOWNLOADhttps://tinourl.com/2uL1gi



    -

    What is Cars 2 The Video Game?

    -

    A brief introduction to the game and its features

    -

    Cars 2 The Video Game is a racing and action-adventure game developed by Avalanche Software and published by Disney Interactive Studios in 2011. It is inspired by the Disney Pixar animated film Cars 2, which was released in the same year. The game lets players jump into the Cars 2 universe with some of their favorite Cars personalities in exotic locations around the globe.
    

    -

    The game features various modes and genres, such as racing, combat, spy missions, and mini-games. Players can choose from over 30 playable characters, each with their own abilities and customization options. They can also unlock new cars, tracks, weapons, gadgets, and modes as they progress through the game. The game supports both single-player and multiplayer modes, with up to four players on split-screen or online.

    -

    The plot and the characters of the game

    -

    The game follows the storyline from the film, but also adds some new elements and twists. Players can choose to play as Mater and Lightning McQueen, as well as some new characters, such as Finn McMissile, Holley Shiftwell, Francesco Bernoulli, Carla Veloso, Raoul ÇaRoule, and more. They will train in the international training center CHROME (Command Headquarters for Recon Operations and Motorized Espionage) to become world-class spies. They will take on dangerous missions, compete to become the fastest racecar in the world, or use their spy skills in exciting combat racing and battle arenas.

    -

    The game features six different locations from the film: Tokyo, London, Paris, Italy, Porto Corsa, and Radiator Springs. Each location has its own unique tracks, landmarks, scenery, and challenges. Players will also encounter some familiar faces from the film, such as Professor Zündapp, Grem, Acer, Miles Axlerod, Tomber, Siddeley, Rod "Torque" Redline, and more.
    

    -

    What is RELOADED?

    -

    A group of hackers who crack and release PC games

    -

    RELOADED is a well-known group of hackers who specialize in cracking and releasing PC games. They are also known as RLD or Reloaded Games. They have been active since 2004 and have cracked hundreds of games from various genres and publishers.

    -

    Cars 2 The Video Game Crack RELOADED Free Download
    -Cars 2 The Video Game Torrent RELOADED PC
    -Cars 2 The Video Game-RELOADED Skidrow & Reloaded Games
    -Disney Pixar Cars 2 The Video Game-RELOADED PCGamesTorrents
    -Cars 2 The Video Game RELOADED ISO Download
    -Cars 2 The Video Game PC RELOADED Serial Key Generator
    -Cars 2 The Video Game PC RELOADED Activation Code
    -Cars 2 The Video Game PC RELOADED Patch Update
    -Cars 2 The Video Game PC RELOADED System Requirements
    -Cars 2 The Video Game PC RELOADED Gameplay Screenshots
    -Cars 2 The Video Game PC RELOADED Multiplayer Mode
    -Cars 2 The Video Game PC RELOADED Splitscreen Mode
    -Cars 2 The Video Game PC RELOADED Hold 'Em Mode
    -Cars 2 The Video Game PC RELOADED Mystery Host
    -Cars 2 The Video Game PC RELOADED Coach Feature
    -Cars 2 The Video Game PC RELOADED Racing Split
    -Cars 2 The Video Game PC RELOADED Career Mode
    -Cars 2 The Video Game PC RELOADED Characters List
    -Cars 2 The Video Game PC RELOADED Locations List
    -Cars 2 The Video Game PC RELOADED CHROME Training Center
    -Cars 2 The Video Game PC RELOADED Spy Skills
    -Cars 2 The Video Game PC RELOADED Action Racing
    -Cars 2 The Video Game PC RELOADED Battle Arenas
    -Cars 2 The Video Game PC RELOADED Disney Pixar Film
    -Cars 2 The Video Game PC RELOADED Mater and Lightning McQueen
    -Cars 2 The Video Game PC RELOADED Avalanche Software Developer
    -Cars 2 The Video Game PC RELOADED Disney Interactive Publisher
    -Cars 2 The Video Game PC RELOADED Release Date June 21, 2011
    -Cars 2 The Video Game PC RELOADED Genre Action Adventure Racing
    -Cars 2 The Video Game PC RELOADED Size 2.1 GB
    -Cars 2 The Video Game PC RELOADED One FTP Link Download
    -Cars 2 The Video Game PC RELOADED Direct Link Download
    -Cars 2 The Video Game PC RELOADED UP07 Link Download
    -Cars 2 The Video Game PC RELOADED UPTOBOX Link Download
    -Cars 2 The Video Game PC RELOADED HUGEFILES Link Download
    -Cars 2 The Video Game PC RELOADED USERSCLOUD Link Download
    -Cars 2 The Video Game PC RELOADED JHEBERG Link Download
    -Cars 2 The Video Game PC RELOADED GO4UP Link Download
    -Cars 2 The Video Game PC RELOADED MULTI LINKS Download
    -Cars 2 The Video Game PC RELOADED TORRENT Link Download

    -

    Cracking is the process of modifying or bypassing the copy protection or digital rights management (DRM) of a piece of software or a game. This allows users to run or distribute the software or game without paying for it or without needing a valid license or activation key. Cracking is usually done by reverse engineering or modifying the program's code.
    

    -

    RELOADED usually releases their cracked games as .rar files that contain an ISO image of the game disc and a serial number or a keygen (a program that generates serial numbers). Users can then mount the ISO image on a virtual drive or burn it on a physical disc and install the game using the serial number or keygen. Sometimes RELOADED also releases patches or updates for their cracked games.

    -

    The advantages and disadvantages of using RELOADED games

    -

    Using RELOADED games has some advantages and disadvantages that users should be aware of before downloading them.

    -

    The main advantage of using RELOADED games is that they are free. Users can save money by not buying the original games or paying for subscription fees or microtransactions. They can also access games that are not available in their region or platform.

    -

    The main disadvantage of using RELOADED games is that they are illegal. Users can face legal consequences if they are caught downloading or distributing pirated games. They can also expose their devices to malware or viruses that may be hidden in the cracked files. They may also experience bugs or errors that are not present in the original games. They may also miss out on some features or content that are only available in the official versions of the games.

    -

    Therefore, users should be careful when using RELOADED games and weigh the pros and cons before deciding whether to download them or not.

    -

    How to download and install Cars 2 The Video Game PC RELOADED Serial Number.rar?

    -

    The requirements and precautions for downloading the game

    -

    Before downloading Cars 2 The Video Game PC RELOADED Serial Number.rar, users should make sure that their devices meet the minimum system requirements for running the game. According to Skidrow & Reloaded Games, these are:

    - - - - - - - - - - - - - - - -
    OS: Microsoft Windows XP SP3 / Windows 7
    Processor: Intel Pentium 4 3.0 GHz, AMD Athlon 64 3500+ or equivalent
    Memory: 2 GB RAM
    Graphics: 256 MB DirectX 9.0c-compatible card (NVIDIA GeForce 6600 or better, ATI Radeon X800 or better) with Shader Model 3 support
    Storage: 3 GB available space
    
    -

    Users should also have a reliable internet connection for downloading the file and a program that can extract .rar files such as WinRAR or 7-Zip. They should also scan the file for any malware or viruses before opening it.
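    As a quick aid for the requirements above, here is a minimal sketch (not part of the original guide) of how you could check the 2 GB RAM and 3 GB free-disk minimums from the table on a Windows PC. It assumes Python with the third-party psutil package; the function name and default drive letter are illustrative only.

    ```python
    # Hypothetical requirements check (illustrative, not from the original guide).
    # Compares this PC's memory and free disk space against the minimums above.
    import shutil
    import psutil  # third-party: pip install psutil

    MIN_RAM_GB = 2    # "2 GB RAM" from the requirements table
    MIN_DISK_GB = 3   # "3 GB available space" from the requirements table

    def meets_minimums(install_drive: str = "C:\\") -> bool:
        ram_gb = psutil.virtual_memory().total / 1024 ** 3
        free_gb = shutil.disk_usage(install_drive).free / 1024 ** 3
        print(f"RAM: {ram_gb:.1f} GB (need {MIN_RAM_GB}+), "
              f"free disk: {free_gb:.1f} GB (need {MIN_DISK_GB}+)")
        return ram_gb >= MIN_RAM_GB and free_gb >= MIN_DISK_GB

    if __name__ == "__main__":
        print("Meets minimum RAM/disk requirements:", meets_minimums())
    ```

    If either figure comes up short, freeing disk space or closing other programs before launching the game is the usual first step.
    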

    -

    The steps to install and run the game

    -

    To install and run Cars 2 The Video Game PC RELOADED Serial Number.rar, users should follow these steps:

    -
      -
    1. Download Cars 2 The Video Game PC RELOADED Serial Number.rar from a trusted source such as Skidrow & Reloaded Games or PCGamesTorrents. The file size is about 2.1 GB.
    2. -
    3. Extract Cars 2 The Video Game PC RELOADED Serial Number.rar using WinRAR or 7-Zip. This will create a folder named Cars_2_The_Video_Game_RELOADED containing an ISO file named Cars_2_The_Video_Game_RELOADED.iso.
    4. -
    5. Mount Cars_2_The_Video_Game_RELOADED.iso on a virtual drive using a program such as Daemon Tools Lite or PowerISO. Alternatively, burn Cars_2_The_Video_Game_RELOADED.iso on a blank DVD using a program such as ImgBurn or Nero Burning ROM.
    6. -
    7. Open the virtual drive or insert the DVD into your device's disc drive. This will launch an autorun menu with options such as Install Game, Play Game, View Readme, and Exit.
    8. -
    9. Select Install Game and follow the instructions on screen. You will need to enter a serial number or use a keygen to activate the game. You can find the serial number or keygen in a folder named Crack inside the folder Cars_2_The_Video_Game_RELOADED.
    10. -
    11. After installing the game, you can launch it from the autorun menu or from the desktop shortcut. You can also play the game from the DVD or the virtual drive without needing to install it.
    12. -
    -

    How to play Cars 2 The Video Game?

    -

    The modes and the missions of the game

    -

    Cars 2 The Video Game has four main modes: C.H.R.O.M.E., Free Play, Multiplayer, and Rewards.

    -
      -
    • C.H.R.O.M.E. is the story mode where you can play as different characters and complete various missions and challenges. You can earn badges and crests for your performance and unlock new cars, tracks, weapons, gadgets, and modes.
    • -
    • Free Play is where you can customize your own races and battles with any car, track, weapon, gadget, or mode you have unlocked. You can also set the difficulty, number of laps, number of opponents, and other options.
    • -
    • Multiplayer is where you can play with up to four players on split-screen or online. You can choose from any mode or track you have unlocked and compete with or against your friends.
    • -
    • Rewards is where you can view your statistics, achievements, trophies, badges, crests, unlocked cars, tracks, weapons, gadgets, and modes. You can also view concept art and videos from the game and the film.

      -
    -

    The tips and tricks to master the game

    -

    Cars 2 The Video Game is a game that requires both speed and skill to win. Here are some tips and tricks to help you master the game:

    -
      -
    • Use your turbo boost wisely. You can fill up your turbo meter by performing tricks, drifting, drafting, side-bashing, or using weapons. You can also use your turbo to jump over obstacles or gaps. To activate your turbo boost, press .
    • -
    • Use your weapons effectively. You can equip two weapons at a time and switch between them by pressing . You can find weapons on the track or by driving through weapon boxes. Some weapons are offensive, such as machine guns, rockets, oil slicks, or mines. Some weapons are defensive, such as shields, EMPs, or leeches. Some weapons are special, such as spy vision or holograms. To use your weapon, press .
    • -
    • Use your gadgets smartly. You can equip one gadget at a time and use it by pressing . You can find gadgets on the track or by driving through gadget boxes. Some gadgets are passive, such as magnets or turbo chargers. Some gadgets are active, such as cloaks or disguises. Some gadgets are unique, such as tow cables or spy flies.
    • -
    • Use your spy skills cleverly. You can perform various spy skills by using different combinations of buttons. For example:
    • -
        -
      • To drive on two wheels (left), press + . To drive on two wheels (right), press + . Driving on two wheels lets you squeeze through narrow spaces or avoid obstacles.
      • -
      • To drive backwards (reverse), press + . Driving backwards lets you shoot behind you or confuse your opponents.
      • -
      • To jump (hop), press + . Jumping lets you avoid obstacles or perform tricks in mid-air.
      • -
      • To side-bash (left), press + . To side-bash (right), press + . Side-bashing lets you knock out your opponents or make them spin out.
      • -
      -
    • Practice on different tracks and modes. Each track has its own layout, shortcuts, hazards, and secrets. Each mode has its own objectives, rules, and strategies. The more you play on different tracks and modes, the more you will learn how to adapt to different situations and challenges.
    • -
    -

    Conclusion

    -

    Cars 2 The Video Game PC RELOADED Serial Number.rar is a file that lets you play Cars 2 The Video Game for free on your PC. It is a fun and exciting game that combines racing and action-adventure with spy elements and humor. It is based on the Disney Pixar animated film Cars 2 but also adds some new features and twists. It has four main modes: C.H.R.O.M.E., Free Play, Multiplayer, and Rewards. It has over 30 playable characters, each with their own abilities and customization options. It has six different locations from the film, each with their own unique tracks, landmarks, scenery, and challenges. It has various weapons, gadgets, and spy skills that you can use to enhance your gameplay. It has easy drop-in and drop-out local multiplayer for up to four players or online multiplayer for up to eight players. It has stunning graphics, smooth animations, and authentic voice-overs from the film's cast.

    -

    If you want to play Cars 2 The Video Game PC RELOADED Serial Number.rar, you need to download it from a trusted source, extract it using WinRAR or 7-Zip, mount it on a virtual drive or burn it on a DVD, install it using a serial number or a keygen, and launch it from the autorun menu or from the desktop shortcut. You also need to make sure that your device meets the minimum system requirements for running the game and that you scan the file for any malware or viruses before opening it. You also need to be aware of the legal risks and ethical issues of using pirated games.

    -

    If you want to master Cars 2 The Video Game PC RELOADED Serial Number.rar, you need to practice on different tracks and modes, use your turbo boost wisely, use your weapons effectively, use your gadgets smartly, use your spy skills cleverly, and have fun with your friends.

    -

    We hope this article has helped you understand what Cars 2 The Video Game PC RELOADED Serial Number.rar is, how to download and install it, and how to play it. We hope you enjoy playing this game as much as we did writing this article. Happy gaming!

    -

    FAQs

    -
      -
1. Q: Is Cars 2 The Video Game PC RELOADED Serial Number.rar safe to download?
A: It depends on where you download it from. Some sources may contain malware or viruses that can harm your device or steal your data. Always scan the file before opening it and use a reliable antivirus program.
2. Q: Is Cars 2 The Video Game PC RELOADED Serial Number.rar legal to download?
A: No. Downloading pirated games is illegal in most countries and can result in fines or jail time if you are caught. It also violates the intellectual property rights of the developers and publishers of the game.
3. Q: Is Cars 2 The Video Game PC RELOADED Serial Number.rar compatible with Windows 10?
A: Yes. According to some users who have tried it, Cars 2 The Video Game PC RELOADED Serial Number.rar works fine on Windows 10.
4. Q: How do I unlock all the cars in Cars 2 The Video Game PC RELOADED Serial Number.rar?
A: You can unlock all the cars by playing through C.H.R.O.M.E. mode and earning badges and crests for your performance. Alternatively, you can use a cheat code that unlocks all the cars instantly: from the main menu go to Options, choose "Enter Codes", and enter the code 959595.
5. Q: How do I get a perfect start in Cars 2 The Video Game PC RELOADED Serial Number.rar?
A: To get a perfect start in any race or battle event, press the button as fast as possible from the moment the countdown starts until it reaches zero. Tip the controller up and use your index finger to rapidly tap the button. Alternatively, you can use a turbo controller to do this with ease.
    -

    -
    -
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Chassis Design Principles And Analysis Milliken Pdf Download Insights and Examples from a Master of Chassis Technology.md b/spaces/raedeXanto/academic-chatgpt-beta/Chassis Design Principles And Analysis Milliken Pdf Download Insights and Examples from a Master of Chassis Technology.md
deleted file mode 100644
index f96d51c3dcd8339b23c638d1cd5af12e2cd24194..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Chassis Design Principles And Analysis Milliken Pdf Download Insights and Examples from a Master of Chassis Technology.md
+++ /dev/null
@@ -1,72 +0,0 @@
-<br />
    -

    Chassis Design Principles And Analysis Milliken Pdf Download

    -

    If you are interested in learning more about chassis design and analysis, you may want to check out this book by William F. Milliken and Douglas L. Milliken. The book is based on the technical writings of Maurice Olley, one of the great automotive engineers of the 20th century, who had a career that spanned two continents. Olley is perhaps best known for his systematic approach to ride and handling, which is still used in the auto industry today. The book is the first complete presentation of his life and work, and provides insight into the development of chassis technology and its practical application by a master.

    -

    Chassis Design Principles And Analysis Milliken Pdf Download


    Downloadhttps://tinourl.com/2uL1Oe



    -

    Who was Maurice Olley and why is his work important?

    -

    Maurice Olley was born in England in 1889 and graduated from Cambridge University with a degree in mechanical engineering. He worked for several companies, including Rolls-Royce, where he designed aircraft engines during World War I. He moved to the United States in 1927 and joined General Motors, where he became chief engineer of Cadillac. He was responsible for many innovations, such as independent front suspension, hydraulic shock absorbers, torsion bars, anti-roll bars, power steering, power brakes, automatic transmission, air conditioning, etc. He also conducted extensive research on vehicle dynamics, especially on cornering behavior, ride quality, stability control, etc. He published many technical papers and reports on these topics, which are now considered classics in the field. He retired from GM in 1955 and continued to work as a consultant until his death in 1972.

    -

    Olley's work is important because he was one of the pioneers who applied scientific methods and principles to chassis design and analysis. He developed mathematical models and equations that describe how vehicles behave under various conditions. He also devised test procedures and evaluation techniques that measure vehicle performance objectively. He used his engineering experience and intuition to interpret the results and suggest improvements. His work was comprehensive and rigorous, covering both theoretical aspects and practical applications. His work influenced many generations of engineers who followed his footsteps.

    -

    What are the main topics covered in the book?

    -

    The book consists of 10 chapters that cover various aspects of chassis design and analysis. The chapters are based on Olley's technical writings, which are reproduced in full or in part in the book. The authors also provide additional explanations, comments, examples, exercises, appendices, figures and tables to supplement Olley's original material. The main topics covered in the book are:

    -

    Steady-state cornering

    -

    This chapter explains how slip angle and steer effects influence vehicle handling during steady-state cornering. Slip angle is the angle between the direction of travel of a wheel and its plane of rotation. Steer effects are caused by changes in wheel angles due to steering input or suspension geometry. The chapter discusses how these factors affect lateral force, yaw moment, understeer gradient, steering ratio, steering effort, etc.
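To give a concrete flavour of the kind of relation this chapter works with, the steady-state cornering equation of the linear single-track ("bicycle") model can be written as below. This is the standard textbook form rather than a quotation of Olley's own notation:

$$\delta = \frac{L}{R} + \left(\frac{W_f}{C_{\alpha f}} - \frac{W_r}{C_{\alpha r}}\right)\frac{a_y}{g}$$

Here $\delta$ is the road-wheel steer angle, $L$ the wheelbase, $R$ the turn radius, $W_f$ and $W_r$ the front and rear axle loads, $C_{\alpha f}$ and $C_{\alpha r}$ the corresponding cornering stiffnesses, and $a_y$ the lateral acceleration. The bracketed term is the understeer gradient: positive values mean the car understeers, negative values mean it oversteers, and zero corresponds to a neutral car.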

    -

    Transient cornering

    -

    This chapter explains how vehicle dynamics change during transient cornering situations, such as acceleration, braking or steering maneuvers. The chapter discusses how these factors affect lateral acceleration, yaw rate, sideslip angle, roll angle, etc. The chapter also introduces concepts such as natural frequency, damping ratio, phase lag, etc., which describe how vehicles respond to disturbances.
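For readers meeting these terms for the first time, they carry their usual meaning for a linear second-order system; a minimal reference form (a generic model, not a formula quoted from the book) is:

$$s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2} = 0, \qquad \omega_{d} = \omega_{n}\sqrt{1-\zeta^{2}}$$

where $\omega_n$ is the undamped natural frequency of the yaw or lateral response, $\zeta$ the damping ratio, and $\omega_d$ the damped frequency actually seen in an oscillatory transient. The phase lag then describes how far the vehicle's response trails a sinusoidal steering input at a given frequency.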

    -

    Ride oscillations

    -

    In conclusion, Chassis Design Principles And Analysis Milliken Pdf Download is a great book for anyone who wants to learn more about chassis design and analysis. The book is based on the technical writings of Maurice Olley, one of the pioneers who applied scientific methods and principles to chassis design and analysis. The book covers various topics such as steady-state cornering, transient cornering, ride oscillations, linkages, roll, roll moments and skew rates, fore-and-aft forces, leaf springs, etc. The book also provides many benefits such as examples and exercises, analytical developments, historical perspective, etc. The book can be downloaded online from this website: https://www.millikenresearch.com/olleybak.pdf. We hope you enjoyed reading this article and found it informative and useful.

    -

    -

    FAQs

    -

    Here are some frequently asked questions about chassis design and analysis and the book:

    -
      -
    • Q: What is chassis design and analysis?
    • -
    • A: Chassis design and analysis is the study of how vehicles behave under various conditions due to the interaction of their components such as wheels, tires, suspension systems, brakes, steering systems, etc.
    • -
    • Q: Who is Maurice Olley and why is his work important?
    • -
    • A: Maurice Olley was one of the great automotive engineers of the 20th century who worked for Rolls-Royce and General Motors. He was responsible for many innovations in chassis design and analysis and published many technical papers and reports on these topics. His work is important because he was one of the pioneers who applied scientific methods and principles to chassis design and analysis.
    • -
    • Q: What are the main topics covered in the book?
    • -
    • A: The book covers various topics such as steady-state cornering, transient cornering, ride oscillations, linkages, roll, roll moments and skew rates, fore-and-aft forces, leaf springs, etc.
    • -
    • Q: What are the benefits of reading the book?
    • -
    • A: The book provides many benefits such as examples and exercises, analytical developments, historical perspective, etc.
    • -
    • Q: How to download the book?
    • -
    • A: The book can be downloaded online from this website: https://www.millikenresearch.com/olleybak.pdf.
    • -
    -

    -
    -
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Dora en francais saison 1 torrent Comment regarder les aventures de Dora lexploratrice en ligne.md b/spaces/raedeXanto/academic-chatgpt-beta/Dora en francais saison 1 torrent Comment regarder les aventures de Dora lexploratrice en ligne.md
deleted file mode 100644
index 0bd9c14546d92a1e0d9566deabd84f77d018b56c..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Dora en francais saison 1 torrent Comment regarder les aventures de Dora lexploratrice en ligne.md
+++ /dev/null
@@ -1,172 +0,0 @@
-<br />
    -

Dora en francais saison 1 torrent: how to download and watch the animated children's series

-

Are you a fan of the animated series Dora the Explorer, or would you like to introduce it to your children? Are you looking for a quick and easy way to access every episode of the first season in French? Have you heard about torrent downloading, but don't know how it works or what the risks are? This article is for you! We will explain what Dora en francais saison 1 torrent is, why you might want to download and watch it, and how to do so safely.

    -

    dora en francais saison 1 torrent


    Download ———>>> https://tinourl.com/2uL4sT



    -

    Introduction

    -

What is Dora en francais saison 1 torrent?

-

Dora en francais saison 1 torrent is the name given to a file that contains the information needed to download and watch the 26 episodes of the first season of the animated series Dora the Explorer in its French version. A torrent file is a type of file used to share data over the internet through a peer-to-peer (P2P) network, that is, a network in which users exchange files directly with each other rather than through a centralized server. To use a torrent file, you need a specialized program called a torrent client, which connects to the other users who have the same file and downloads it in pieces until it is complete. Once the file has been downloaded, you simply open it with a compatible media player to watch it.
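To make that structure concrete, the sketch below is a minimal, illustrative Python reader for the metadata stored in a .torrent file, which uses the bencode format (integers, byte strings, lists and dictionaries). The file name example.torrent is a hypothetical placeholder, and this is only a toy decoder, not a torrent client.

```python
# Minimal bencode decoder for .torrent metadata (illustrative only).
def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at index i; return (value, next_index)."""
    c = data[i:i + 1]
    if c == b"i":                                  # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":                                  # list: l<items>e
        i += 1
        items = []
        while data[i:i + 1] != b"e":
            item, i = bdecode(data, i)
            items.append(item)
        return items, i + 1
    if c == b"d":                                  # dictionary: d<key><value>...e
        i += 1
        d = {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            value, i = bdecode(data, i)
            d[key] = value
        return d, i + 1
    colon = data.index(b":", i)                    # byte string: <length>:<bytes>
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

if __name__ == "__main__":
    with open("example.torrent", "rb") as f:       # hypothetical file name
        meta, _ = bdecode(f.read())
    info = meta[b"info"]
    print("Name:        ", info[b"name"].decode())
    print("Piece length:", info[b"piece length"])
    print("Tracker:     ", meta.get(b"announce", b"(none)").decode())
```

Run against a real .torrent file, this prints the content name, the piece size used for hashing, and the tracker address, which is essentially the information a torrent client needs before it can start finding peers.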

    -

Why download and watch Dora en francais saison 1 torrent?

-

Downloading and watching Dora en francais saison 1 torrent has several advantages, but also a few drawbacks. Here are the main ones:

-

How to download and watch Dora en francais saison 1 torrent?

-

Downloading and watching Dora en francais saison 1 torrent is not very complicated, but you need to follow certain steps and take certain precautions. Here is how to do it:

    -

What is Dora en francais saison 1 torrent?

-

Overview of the animated series Dora the Explorer

-

Synopsis

-

Dora the Explorer is an American animated series created by Chris Gifford, Valerie Walsh Valdes and Eric Weiner, broadcast on Nickelodeon since 2000. It follows the adventures of Dora, a seven-year-old Hispanic-American girl who loves exploring the world with her friend Babouche, a blue monkey. In each episode, Dora has to solve riddles, learn Spanish words, sing songs and avoid the tricks of the cunning fox Chipeur. She can count on the help of the magic backpack Sac à Dos, the talking map Carte, and the viewers, who are invited to take an active part in the story.

    -

Main characters

-

• Dora: the heroine of the series. She is curious, brave, optimistic and generous, loves learning new things and helping others, and always wears a pink t-shirt, orange shorts, yellow sneakers and a magic bracelet that lets her call her friends.
• Babouche: Dora's best friend, a blue monkey who wears red boots. He is clever, funny, loyal and greedy, and always follows Dora on her adventures, even if he sometimes gets scared or gets into mischief.
• Chipeur: the main antagonist of the series, a red fox who wears a blue mask and yellow gloves. He likes to swipe Dora and her friends' belongings to annoy them, and can be stopped by saying "Chipeur, arrête de chiper!" three times.
• Sac à Dos: the magic backpack Dora carries. It holds everything she needs for her adventures, can talk, sings a song to present its contents, and often asks viewers to pick the useful item from the ones it offers.
• Carte: a talking map kept inside Sac à Dos. It shows Dora the route to follow to reach her goal, often repeats the three main places Dora has to cross, such as "the forest, the lake, the mountain", and asks viewers to repeat them along with it.
    -

Season 1 episodes

-

| Number | Title | Air date |
| --- | --- | --- |
| 01 | La légende du grand poulet rouge | 14 August 2000 |
| 02 | Perdus | 15 August 2000 |
| 03 | La plage | 16 August 2000 |
| 04 | Les trois petits cochons | 17 August 2000 |
| 05 | Nous sommes tous des musiciens | 18 August 2000 |
| 06 | Babouche à la rescousse | 21 August 2000 |
| 07 | L'arbre à chocolat | 22 August 2000 |
| 08 | Tico et les noisettes | 23 August 2000 |
| 09 | Babouche le pompier | 24 August 2000 |
| 10 | Berry Hunt | 25 August 2000 |
| 11 | Dora fait de la gymnastique rythmique | 28 August 2000 |
| 12 | L'anniversaire de Tico | 29 August 2000 |
| 13 | Dora et le trésor des pirates | |

The remaining season 1 episode titles listed are L'île aux étoiles, Dora sauve le prince, Babouche perd ses bottes, Dora et le petit cochon, Dora et Diego à la rescousse, Dora et le singe magicien, Dora et le vœu de Babouche, Dora et la flûte enchantée, Dora et le pont de singes, Dora et le bébé oiseau bleu, Dora et la chasse aux œufs de Pâques, Dora et le petit train bleu and Dora et la maison de poupée géante; their air dates are not given.
    -

Overview of the French version of Dora the Explorer

-

Broadcast and dubbing

-

The French version of Dora the Explorer has been broadcast since 19 March 2001 on TF1 within the Tfou block, and later on Nickelodeon Junior. The French dub is produced by the Dubbing Brothers studios under the artistic direction of Valérie Siclay. The main voices are:

-

• Céline Melloul: Dora (seasons 1 to 4), Carte (seasons 1 to 4), Tico (seasons 1 to 4)
• Alexia Papineschi: Dora (seasons 5 to 8), Carte (seasons 5 to 8), Tico (seasons 5 to 8)
• Bernard Alane: Sac à Dos (seasons 1 to 4), Chipeur (seasons 1 to 8), Babouche (seasons 1 to 8)
• Marie-Eugénie Maréchal: Babouche (singing voice, seasons 1 to 8)
• Donald Reignoux: Diego (seasons 3 to 8)
    -

Specific features and differences from the original version

-

The French version of Dora the Explorer has a few specific features and differences compared with the original version. Here are the main ones:

-

• In the original version, Dora teaches Spanish words to English-speaking viewers. In the French version, she teaches English words to French-speaking viewers.
• In the original version, Dora and her friends sing in English and Spanish. In the French version, they sing in French and English.
• In the original version, some characters have different names. For example, Babouche is called Boots, Sac à Dos is called Backpack, Carte is called Map, and Tico is called Tico.
• In the original version, the closing credits are different: they show Dora and Babouche dancing to a song called "We Did It!". In the French version, the closing credits reuse the opening sequence, but with different lyrics.
    -

Why download and watch Dora en francais saison 1 torrent?

-

The advantages of downloading and watching Dora en francais saison 1 torrent

-

Access the series without an internet connection

-

One of the main advantages of downloading and watching Dora en francais saison 1 torrent is being able to access the series without needing an internet connection. Once the file has been downloaded to your device, you simply open it with a media player to watch it. This can be handy if you have no internet access, or if you want to avoid using up too much mobile data.

-

Choose the video quality and format

-

Another advantage of downloading and watching Dora en francais saison 1 torrent is being able to choose the quality and format of the video. Different torrent files offer different resolutions (SD, HD, 4K), different containers (MP4, MKV, AVI) and different codecs (H264, H265), so you can match your preferences and your device's capabilities. For example, if you have a high-definition screen, you can choose an HD or 4K file for better picture quality; if your device has little storage space, you can choose a lighter format or codec to save room.

    -

    -

Avoid ads and interruptions

-

A final advantage of downloading and watching Dora en francais saison 1 torrent is avoiding ads and interruptions. When you watch the series online or on television, you may run into ads that cut into the stream or slow down loading, and you depend on the schedule of the channel or site broadcasting the series, without being able to choose the order or the time at which you watch the episodes. By downloading and watching Dora en francais saison 1 torrent, you avoid these annoyances and can enjoy the series fully, without interruptions or constraints.

    -

The drawbacks of downloading and watching Dora en francais saison 1 torrent

-

Risk of downloading malicious or illegal files

-

The first drawback is the risk of downloading malicious or illegal files. Many torrent sites offer files that are of poor quality, fake, infected with viruses or spyware, or that contain inappropriate or shocking content. It is also worth knowing that downloading and sharing torrents that do not respect copyright and intellectual property is considered illegal in most countries and can lead to criminal or civil penalties. It is therefore important to check the legality and quality of a torrent file before downloading it, and to protect yourself with an antivirus and a VPN.

-

Infringing copyright and intellectual property

-

Another drawback of downloading and watching Dora en francais saison 1 torrent is that it infringes copyright and intellectual property. The animated series Dora the Explorer is a copyrighted work that belongs to its creators and producers. Downloading and sharing torrents that reproduce the series without their permission violates their rights and undermines their income and recognition. It is therefore better to respect copyright rules and to favour legal ways of accessing the series, such as streaming platforms or DVDs.

-

Taking up storage space on your device

-

A final drawback of downloading and watching Dora en francais saison 1 torrent is that it takes up storage space on your device. The downloaded video files can be quite large, especially in high definition or 4K, so they can use a lot of space on the hard drive or internal memory of your computer, tablet or smartphone. This can reduce the device's performance or prevent you from storing other files on it. It is therefore advisable to delete the files once you have watched them, or to move them to external storage such as a USB stick or an external hard drive.

    -

How to download and watch Dora en francais saison 1 torrent?

-

The steps to download and watch Dora en francais saison 1 torrent

-

Install a torrent download program

-

The first step to downloading and watching Dora en francais saison 1 torrent is to install a torrent client on your device. There are several free and easy-to-use programs, such as uTorrent, BitTorrent, qBittorrent or Vuze. Simply go to the official website of the program you have chosen, download the installer, and follow the instructions to install it on your device.

    -

Find a reliable and secure torrent site

-

The second step to downloading and watching Dora en francais saison 1 torrent is to find a reliable and secure torrent site. There are thousands of torrent sites on the internet, but not all of them are trustworthy. Some may contain malicious or illegal files, or be blocked by the authorities. You should therefore take care to choose a site that offers good-quality torrent files verified by its users and that respects copyright. Among the most popular and safest sites are OxTorrent, Torrent911, The Pirate Bay and YggTorrent.

-

Search for and select the Dora en francais saison 1 torrent file

-

The third step is to search for and select the Dora en francais saison 1 torrent file. To do this, go to the torrent site you have chosen and type the name of the series into the search bar. You can then sort the results by date, size, number of seeders (users sharing the file) or number of leechers (users downloading the file). Choose the torrent file that best matches your criteria, checking the quality, format, codec and language of the video, and read other users' comments to make sure the file is safe and free of defects. Once the torrent file is selected, click the download button to open it with your torrent client.

    -

Start the download and wait for the transfer to finish

-

The fourth step to downloading and watching Dora en francais saison 1 torrent is to start the download and wait for the transfer to finish. Once the torrent file is open in your torrent client, choose where you want to save the video file on your device, then start the download by clicking the start button. The client connects to the other users who have the same file and begins downloading it in pieces. How long this takes depends on the size of the file, the number of seeders and leechers, and the speed of your internet connection. You can follow the progress in the client's interface, which shows the percentage completed, the speed, the time remaining and the ratio (the proportion between what you have downloaded and what you have shared). When the download is finished, the video file is ready to watch.

-

Open the video file with a compatible media player

-

The fifth and final step is to open the video file with a compatible media player. Go to the folder where you saved the video file, right-click on it, choose the "Open with" option and select your preferred media player. There are several free and capable media players, such as VLC Media Player, Media Player Classic or KMPlayer. They can play most video formats and codecs, and offer settings for sound, picture, subtitles and chapters. Once the media player is open, simply click the play button to start watching Dora en francais saison 1 torrent.

    -

Precautions to take when downloading and watching Dora en francais saison 1 torrent

-

Check the legality and quality of the torrent file before downloading it

-

As we saw above, downloading and watching Dora en francais saison 1 torrent can carry risks for your device and for your legal standing. It is therefore important to check the legality and quality of the torrent file before downloading it. To do this, find out about the copyright and intellectual property rules that apply to the animated series Dora the Explorer in your country, and respect the rules on sharing and downloading torrents. Also read carefully the information and comments that accompany the torrent file on the torrent site, and make sure there is no alert or warning about its quality or safety.

-

Use an antivirus and a VPN to protect yourself from viruses and hackers

-

In short: always check the legality and quality of the torrent file before downloading it, use an antivirus and a VPN to protect yourself from viruses and hackers, and respect the rules for sharing and downloading torrents.

Personal recommendation on downloading and watching Dora en francais saison 1 torrent

-

Finally, here is my personal recommendation on downloading and watching Dora en francais saison 1 torrent. I think it is a good option if you are a fan of the animated series Dora the Explorer, or if you want to introduce it to your children. It is an educational, entertaining series aimed at young viewers, which teaches them English words, notions of geography, mathematics and general knowledge, and values such as friendship, solidarity, courage and curiosity. By downloading and watching Dora en francais saison 1 torrent, you can enjoy the series at your own pace, without depending on an internet connection or a broadcast schedule, and you can choose the video quality and format that suit you best. However, be careful about the risks that downloading and sharing torrents can involve: always check the legality and quality of the torrent file before downloading it, and use an antivirus and a VPN to protect yourself from viruses and hackers. You must also respect the copyright and intellectual property of the series' creators and producers, and the rules for sharing and downloading torrents. If you follow this advice, you will be able to download and watch Dora en francais saison 1 torrent safely and with peace of mind.

    -

    FAQs

    -
      -
• What is a torrent file?

A torrent file is a type of file used to share data over the internet through a peer-to-peer (P2P) network, that is, a network in which users exchange files directly without going through a centralized server. A torrent file contains the information needed to download and view a video file, such as its name, size, format, codec, language, quality, number of seeders and leechers, and so on.

• What is a torrent download program?

A torrent download program (torrent client) is specialized software that uses torrent files to download and share files on the internet. It connects to the other users who have the same torrent file and downloads it in pieces until it is complete. It also lets you manage download settings such as the speed, the ratio, the destination folder, and so on.

• What is a torrent site?

A torrent site is a website that offers torrent files for download. It lets users search for, sort, select and download the torrent files they are interested in. It also lets users comment on, rate or report the torrent files they have used.

• What is an antivirus?

An antivirus is software that protects your device by detecting, blocking and removing viruses and other malicious programs that could damage it or lead to the theft of data.

• What is a VPN?

A VPN, or virtual private network, is a service that creates a secure and anonymous connection on the internet. A VPN encrypts the data travelling between your device and the P2P network and routes it through a server located in another country. It thus prevents hackers or the authorities from monitoring or tracing your online activity, and it also lets you access torrent sites that are blocked or censored in your country.

      -
    -

    -
    -
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Auto Mouse 1.3 Keygen Serial [EXCLUSIVE].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Auto Mouse 1.3 Keygen Serial [EXCLUSIVE].md
deleted file mode 100644
index 5600ca9aca7764de8b400e97a582eee24315284e..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Auto Mouse 1.3 Keygen Serial [EXCLUSIVE].md
+++ /dev/null
@@ -1,8 +0,0 @@
-<br />

    Auto mouse 1.3 keygen serial


    Download Zip · https://urlgoal.com/2uCJtx



    -
-Auto Mouse 1.3 Keygen Serial auto mouse mover keygen, auto mouse clicker murgee. Murgee Auto Keyboard 1.2 Crack - Serial Key..
-
-Auto Mouse 1.3 Key 4fefd39f24
    -
    -
    -

diff --git a/spaces/rewoo/ReWOO-Demo/algos/notool.py b/spaces/rewoo/ReWOO-Demo/algos/notool.py
deleted file mode 100644
index d1d6c9bb215c2e0d6199b2a8e5b58d4b6a0392ff..0000000000000000000000000000000000000000
--- a/spaces/rewoo/ReWOO-Demo/algos/notool.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from nodes.LLMNode import *
-import time
-from utils.util import *
-
-
-class IO:
-    def __init__(self, fewshot="\n", model_name="text-davinci-003"):
-        self.fewshot = fewshot
-        self.model_name = model_name
-        self.llm = LLMNode("CoT", model_name, input_type=str, output_type=str)
-        self.context_prompt = "Answer following questions. Respond directly with no extra words.\n"
-        self.token_unit_price = get_token_unit_price(model_name)
-
-    def run(self, input):
-        result = {}
-        st = time.time()
-        prompt = self.context_prompt + self.fewshot + input + '\n'
-        response = self.llm.run(prompt, log=True)
-        result["wall_time"] = time.time() - st
-        result["input"] = response["input"]
-        result["output"] = response["output"]
-        result["prompt_tokens"] = response["prompt_tokens"]
-        result["completion_tokens"] = response["completion_tokens"]
-        result["total_tokens"] = response["prompt_tokens"] + response["completion_tokens"]
-        result["token_cost"] = result["total_tokens"] * self.token_unit_price
-        result["tool_cost"] = 0
-        result["total_cost"] = result["token_cost"] + result["tool_cost"]
-        result["steps"] = 1
-        return result
-
-
-class CoT:
-    def __init__(self, fewshot="\n", model_name="text-davinci-003"):
-        self.fewshot = fewshot
-        self.model_name = model_name
-        self.llm = LLMNode("CoT", model_name, input_type=str, output_type=str)
-        self.context_prompt = "Answer following questions. Let's think step by step. Give your reasoning process, and then answer the " \
-                              "question in a new line directly with no extra words.\n"
-        self.token_unit_price = get_token_unit_price(model_name)
-
-    def run(self, input):
-        result = {}
-        st = time.time()
-        prompt = self.context_prompt + self.fewshot + input + '\n'
-        response = self.llm.run(prompt, log=True)
-        result["wall_time"] = time.time() - st
-        result["input"] = response["input"]
-        result["output"] = response["output"]
-        result["prompt_tokens"] = response["prompt_tokens"]
-        result["completion_tokens"] = response["completion_tokens"]
-        result["total_tokens"] = response["prompt_tokens"] + response["completion_tokens"]
-        result["token_cost"] = result["total_tokens"] * self.token_unit_price
-        result["tool_cost"] = 0
-        result["total_cost"] = result["token_cost"] + result["tool_cost"]
-        result["steps"] = response["output"].count("Step")
-        return result
-
diff --git a/spaces/ronvolutional/sk-node/app/src/routes/sverdle/words.server.ts b/spaces/ronvolutional/sk-node/app/src/routes/sverdle/words.server.ts
deleted file mode 100644
index 56082a33115d5ecd1263f5ffa4e733ee7f90e286..0000000000000000000000000000000000000000
--- a/spaces/ronvolutional/sk-node/app/src/routes/sverdle/words.server.ts
+++ /dev/null
@@ -1,12980 +0,0 @@
-/** The list of possible words */
-export const words = [
- 'aback', - 'abase', - 'abate', - 'abbey', - 'abbot', - 'abhor', - 'abide', - 'abled', - 'abode', - 'abort', - 'about', - 'above', - 'abuse', - 'abyss', - 'acorn', - 'acrid', - 'actor', - 'acute', - 'adage', - 'adapt', - 'adept', - 'admin', - 'admit', - 'adobe', - 'adopt', - 'adore', - 'adorn', - 'adult', - 'affix', - 'afire', - 'afoot', - 'afoul', - 'after', - 'again', - 'agape', - 'agate', - 'agent', - 'agile', - 'aging', - 'aglow', - 'agony', - 'agora', - 'agree', - 'ahead', - 'aider', - 'aisle', - 'alarm', - 'album', -
'alert', - 'algae', - 'alibi', - 'alien', - 'align', - 'alike', - 'alive', - 'allay', - 'alley', - 'allot', - 'allow', - 'alloy', - 'aloft', - 'alone', - 'along', - 'aloof', - 'aloud', - 'alpha', - 'altar', - 'alter', - 'amass', - 'amaze', - 'amber', - 'amble', - 'amend', - 'amiss', - 'amity', - 'among', - 'ample', - 'amply', - 'amuse', - 'angel', - 'anger', - 'angle', - 'angry', - 'angst', - 'anime', - 'ankle', - 'annex', - 'annoy', - 'annul', - 'anode', - 'antic', - 'anvil', - 'aorta', - 'apart', - 'aphid', - 'aping', - 'apnea', - 'apple', - 'apply', - 'apron', - 'aptly', - 'arbor', - 'ardor', - 'arena', - 'argue', - 'arise', - 'armor', - 'aroma', - 'arose', - 'array', - 'arrow', - 'arson', - 'artsy', - 'ascot', - 'ashen', - 'aside', - 'askew', - 'assay', - 'asset', - 'atoll', - 'atone', - 'attic', - 'audio', - 'audit', - 'augur', - 'aunty', - 'avail', - 'avert', - 'avian', - 'avoid', - 'await', - 'awake', - 'award', - 'aware', - 'awash', - 'awful', - 'awoke', - 'axial', - 'axiom', - 'axion', - 'azure', - 'bacon', - 'badge', - 'badly', - 'bagel', - 'baggy', - 'baker', - 'baler', - 'balmy', - 'banal', - 'banjo', - 'barge', - 'baron', - 'basal', - 'basic', - 'basil', - 'basin', - 'basis', - 'baste', - 'batch', - 'bathe', - 'baton', - 'batty', - 'bawdy', - 'bayou', - 'beach', - 'beady', - 'beard', - 'beast', - 'beech', - 'beefy', - 'befit', - 'began', - 'begat', - 'beget', - 'begin', - 'begun', - 'being', - 'belch', - 'belie', - 'belle', - 'belly', - 'below', - 'bench', - 'beret', - 'berry', - 'berth', - 'beset', - 'betel', - 'bevel', - 'bezel', - 'bible', - 'bicep', - 'biddy', - 'bigot', - 'bilge', - 'billy', - 'binge', - 'bingo', - 'biome', - 'birch', - 'birth', - 'bison', - 'bitty', - 'black', - 'blade', - 'blame', - 'bland', - 'blank', - 'blare', - 'blast', - 'blaze', - 'bleak', - 'bleat', - 'bleed', - 'bleep', - 'blend', - 'bless', - 'blimp', - 'blind', - 'blink', - 'bliss', - 'blitz', - 'bloat', - 'block', - 'bloke', - 'blond', - 'blood', - 'bloom', - 'blown', - 'bluer', - 'bluff', - 'blunt', - 'blurb', - 'blurt', - 'blush', - 'board', - 'boast', - 'bobby', - 'boney', - 'bongo', - 'bonus', - 'booby', - 'boost', - 'booth', - 'booty', - 'booze', - 'boozy', - 'borax', - 'borne', - 'bosom', - 'bossy', - 'botch', - 'bough', - 'boule', - 'bound', - 'bowel', - 'boxer', - 'brace', - 'braid', - 'brain', - 'brake', - 'brand', - 'brash', - 'brass', - 'brave', - 'bravo', - 'brawl', - 'brawn', - 'bread', - 'break', - 'breed', - 'briar', - 'bribe', - 'brick', - 'bride', - 'brief', - 'brine', - 'bring', - 'brink', - 'briny', - 'brisk', - 'broad', - 'broil', - 'broke', - 'brood', - 'brook', - 'broom', - 'broth', - 'brown', - 'brunt', - 'brush', - 'brute', - 'buddy', - 'budge', - 'buggy', - 'bugle', - 'build', - 'built', - 'bulge', - 'bulky', - 'bully', - 'bunch', - 'bunny', - 'burly', - 'burnt', - 'burst', - 'bused', - 'bushy', - 'butch', - 'butte', - 'buxom', - 'buyer', - 'bylaw', - 'cabal', - 'cabby', - 'cabin', - 'cable', - 'cacao', - 'cache', - 'cacti', - 'caddy', - 'cadet', - 'cagey', - 'cairn', - 'camel', - 'cameo', - 'canal', - 'candy', - 'canny', - 'canoe', - 'canon', - 'caper', - 'caput', - 'carat', - 'cargo', - 'carol', - 'carry', - 'carve', - 'caste', - 'catch', - 'cater', - 'catty', - 'caulk', - 'cause', - 'cavil', - 'cease', - 'cedar', - 'cello', - 'chafe', - 'chaff', - 'chain', - 'chair', - 'chalk', - 'champ', - 'chant', - 'chaos', - 'chard', - 'charm', - 'chart', - 'chase', - 'chasm', - 'cheap', - 'cheat', - 'check', - 'cheek', - 'cheer', - 'chess', - 'chest', - 'chick', - 'chide', - 
'chief', - 'child', - 'chili', - 'chill', - 'chime', - 'china', - 'chirp', - 'chock', - 'choir', - 'choke', - 'chord', - 'chore', - 'chose', - 'chuck', - 'chump', - 'chunk', - 'churn', - 'chute', - 'cider', - 'cigar', - 'cinch', - 'circa', - 'civic', - 'civil', - 'clack', - 'claim', - 'clamp', - 'clang', - 'clank', - 'clash', - 'clasp', - 'class', - 'clean', - 'clear', - 'cleat', - 'cleft', - 'clerk', - 'click', - 'cliff', - 'climb', - 'cling', - 'clink', - 'cloak', - 'clock', - 'clone', - 'close', - 'cloth', - 'cloud', - 'clout', - 'clove', - 'clown', - 'cluck', - 'clued', - 'clump', - 'clung', - 'coach', - 'coast', - 'cobra', - 'cocoa', - 'colon', - 'color', - 'comet', - 'comfy', - 'comic', - 'comma', - 'conch', - 'condo', - 'conic', - 'copse', - 'coral', - 'corer', - 'corny', - 'couch', - 'cough', - 'could', - 'count', - 'coupe', - 'court', - 'coven', - 'cover', - 'covet', - 'covey', - 'cower', - 'coyly', - 'crack', - 'craft', - 'cramp', - 'crane', - 'crank', - 'crash', - 'crass', - 'crate', - 'crave', - 'crawl', - 'craze', - 'crazy', - 'creak', - 'cream', - 'credo', - 'creed', - 'creek', - 'creep', - 'creme', - 'crepe', - 'crept', - 'cress', - 'crest', - 'crick', - 'cried', - 'crier', - 'crime', - 'crimp', - 'crisp', - 'croak', - 'crock', - 'crone', - 'crony', - 'crook', - 'cross', - 'croup', - 'crowd', - 'crown', - 'crude', - 'cruel', - 'crumb', - 'crump', - 'crush', - 'crust', - 'crypt', - 'cubic', - 'cumin', - 'curio', - 'curly', - 'curry', - 'curse', - 'curve', - 'curvy', - 'cutie', - 'cyber', - 'cycle', - 'cynic', - 'daddy', - 'daily', - 'dairy', - 'daisy', - 'dally', - 'dance', - 'dandy', - 'datum', - 'daunt', - 'dealt', - 'death', - 'debar', - 'debit', - 'debug', - 'debut', - 'decal', - 'decay', - 'decor', - 'decoy', - 'decry', - 'defer', - 'deign', - 'deity', - 'delay', - 'delta', - 'delve', - 'demon', - 'demur', - 'denim', - 'dense', - 'depot', - 'depth', - 'derby', - 'deter', - 'detox', - 'deuce', - 'devil', - 'diary', - 'dicey', - 'digit', - 'dilly', - 'dimly', - 'diner', - 'dingo', - 'dingy', - 'diode', - 'dirge', - 'dirty', - 'disco', - 'ditch', - 'ditto', - 'ditty', - 'diver', - 'dizzy', - 'dodge', - 'dodgy', - 'dogma', - 'doing', - 'dolly', - 'donor', - 'donut', - 'dopey', - 'doubt', - 'dough', - 'dowdy', - 'dowel', - 'downy', - 'dowry', - 'dozen', - 'draft', - 'drain', - 'drake', - 'drama', - 'drank', - 'drape', - 'drawl', - 'drawn', - 'dread', - 'dream', - 'dress', - 'dried', - 'drier', - 'drift', - 'drill', - 'drink', - 'drive', - 'droit', - 'droll', - 'drone', - 'drool', - 'droop', - 'dross', - 'drove', - 'drown', - 'druid', - 'drunk', - 'dryer', - 'dryly', - 'duchy', - 'dully', - 'dummy', - 'dumpy', - 'dunce', - 'dusky', - 'dusty', - 'dutch', - 'duvet', - 'dwarf', - 'dwell', - 'dwelt', - 'dying', - 'eager', - 'eagle', - 'early', - 'earth', - 'easel', - 'eaten', - 'eater', - 'ebony', - 'eclat', - 'edict', - 'edify', - 'eerie', - 'egret', - 'eight', - 'eject', - 'eking', - 'elate', - 'elbow', - 'elder', - 'elect', - 'elegy', - 'elfin', - 'elide', - 'elite', - 'elope', - 'elude', - 'email', - 'embed', - 'ember', - 'emcee', - 'empty', - 'enact', - 'endow', - 'enema', - 'enemy', - 'enjoy', - 'ennui', - 'ensue', - 'enter', - 'entry', - 'envoy', - 'epoch', - 'epoxy', - 'equal', - 'equip', - 'erase', - 'erect', - 'erode', - 'error', - 'erupt', - 'essay', - 'ester', - 'ether', - 'ethic', - 'ethos', - 'etude', - 'evade', - 'event', - 'every', - 'evict', - 'evoke', - 'exact', - 'exalt', - 'excel', - 'exert', - 'exile', - 'exist', - 'expel', - 'extol', - 'extra', - 'exult', - 
'eying', - 'fable', - 'facet', - 'faint', - 'fairy', - 'faith', - 'false', - 'fancy', - 'fanny', - 'farce', - 'fatal', - 'fatty', - 'fault', - 'fauna', - 'favor', - 'feast', - 'fecal', - 'feign', - 'fella', - 'felon', - 'femme', - 'femur', - 'fence', - 'feral', - 'ferry', - 'fetal', - 'fetch', - 'fetid', - 'fetus', - 'fever', - 'fewer', - 'fiber', - 'fibre', - 'ficus', - 'field', - 'fiend', - 'fiery', - 'fifth', - 'fifty', - 'fight', - 'filer', - 'filet', - 'filly', - 'filmy', - 'filth', - 'final', - 'finch', - 'finer', - 'first', - 'fishy', - 'fixer', - 'fizzy', - 'fjord', - 'flack', - 'flail', - 'flair', - 'flake', - 'flaky', - 'flame', - 'flank', - 'flare', - 'flash', - 'flask', - 'fleck', - 'fleet', - 'flesh', - 'flick', - 'flier', - 'fling', - 'flint', - 'flirt', - 'float', - 'flock', - 'flood', - 'floor', - 'flora', - 'floss', - 'flour', - 'flout', - 'flown', - 'fluff', - 'fluid', - 'fluke', - 'flume', - 'flung', - 'flunk', - 'flush', - 'flute', - 'flyer', - 'foamy', - 'focal', - 'focus', - 'foggy', - 'foist', - 'folio', - 'folly', - 'foray', - 'force', - 'forge', - 'forgo', - 'forte', - 'forth', - 'forty', - 'forum', - 'found', - 'foyer', - 'frail', - 'frame', - 'frank', - 'fraud', - 'freak', - 'freed', - 'freer', - 'fresh', - 'friar', - 'fried', - 'frill', - 'frisk', - 'fritz', - 'frock', - 'frond', - 'front', - 'frost', - 'froth', - 'frown', - 'froze', - 'fruit', - 'fudge', - 'fugue', - 'fully', - 'fungi', - 'funky', - 'funny', - 'furor', - 'furry', - 'fussy', - 'fuzzy', - 'gaffe', - 'gaily', - 'gamer', - 'gamma', - 'gamut', - 'gassy', - 'gaudy', - 'gauge', - 'gaunt', - 'gauze', - 'gavel', - 'gawky', - 'gayer', - 'gayly', - 'gazer', - 'gecko', - 'geeky', - 'geese', - 'genie', - 'genre', - 'ghost', - 'ghoul', - 'giant', - 'giddy', - 'gipsy', - 'girly', - 'girth', - 'given', - 'giver', - 'glade', - 'gland', - 'glare', - 'glass', - 'glaze', - 'gleam', - 'glean', - 'glide', - 'glint', - 'gloat', - 'globe', - 'gloom', - 'glory', - 'gloss', - 'glove', - 'glyph', - 'gnash', - 'gnome', - 'godly', - 'going', - 'golem', - 'golly', - 'gonad', - 'goner', - 'goody', - 'gooey', - 'goofy', - 'goose', - 'gorge', - 'gouge', - 'gourd', - 'grace', - 'grade', - 'graft', - 'grail', - 'grain', - 'grand', - 'grant', - 'grape', - 'graph', - 'grasp', - 'grass', - 'grate', - 'grave', - 'gravy', - 'graze', - 'great', - 'greed', - 'green', - 'greet', - 'grief', - 'grill', - 'grime', - 'grimy', - 'grind', - 'gripe', - 'groan', - 'groin', - 'groom', - 'grope', - 'gross', - 'group', - 'grout', - 'grove', - 'growl', - 'grown', - 'gruel', - 'gruff', - 'grunt', - 'guard', - 'guava', - 'guess', - 'guest', - 'guide', - 'guild', - 'guile', - 'guilt', - 'guise', - 'gulch', - 'gully', - 'gumbo', - 'gummy', - 'guppy', - 'gusto', - 'gusty', - 'gypsy', - 'habit', - 'hairy', - 'halve', - 'handy', - 'happy', - 'hardy', - 'harem', - 'harpy', - 'harry', - 'harsh', - 'haste', - 'hasty', - 'hatch', - 'hater', - 'haunt', - 'haute', - 'haven', - 'havoc', - 'hazel', - 'heady', - 'heard', - 'heart', - 'heath', - 'heave', - 'heavy', - 'hedge', - 'hefty', - 'heist', - 'helix', - 'hello', - 'hence', - 'heron', - 'hilly', - 'hinge', - 'hippo', - 'hippy', - 'hitch', - 'hoard', - 'hobby', - 'hoist', - 'holly', - 'homer', - 'honey', - 'honor', - 'horde', - 'horny', - 'horse', - 'hotel', - 'hotly', - 'hound', - 'house', - 'hovel', - 'hover', - 'howdy', - 'human', - 'humid', - 'humor', - 'humph', - 'humus', - 'hunch', - 'hunky', - 'hurry', - 'husky', - 'hussy', - 'hutch', - 'hydro', - 'hyena', - 'hymen', - 'hyper', - 'icily', - 'icing', - 
'ideal', - 'idiom', - 'idiot', - 'idler', - 'idyll', - 'igloo', - 'iliac', - 'image', - 'imbue', - 'impel', - 'imply', - 'inane', - 'inbox', - 'incur', - 'index', - 'inept', - 'inert', - 'infer', - 'ingot', - 'inlay', - 'inlet', - 'inner', - 'input', - 'inter', - 'intro', - 'ionic', - 'irate', - 'irony', - 'islet', - 'issue', - 'itchy', - 'ivory', - 'jaunt', - 'jazzy', - 'jelly', - 'jerky', - 'jetty', - 'jewel', - 'jiffy', - 'joint', - 'joist', - 'joker', - 'jolly', - 'joust', - 'judge', - 'juice', - 'juicy', - 'jumbo', - 'jumpy', - 'junta', - 'junto', - 'juror', - 'kappa', - 'karma', - 'kayak', - 'kebab', - 'khaki', - 'kinky', - 'kiosk', - 'kitty', - 'knack', - 'knave', - 'knead', - 'kneed', - 'kneel', - 'knelt', - 'knife', - 'knock', - 'knoll', - 'known', - 'koala', - 'krill', - 'label', - 'labor', - 'laden', - 'ladle', - 'lager', - 'lance', - 'lanky', - 'lapel', - 'lapse', - 'large', - 'larva', - 'lasso', - 'latch', - 'later', - 'lathe', - 'latte', - 'laugh', - 'layer', - 'leach', - 'leafy', - 'leaky', - 'leant', - 'leapt', - 'learn', - 'lease', - 'leash', - 'least', - 'leave', - 'ledge', - 'leech', - 'leery', - 'lefty', - 'legal', - 'leggy', - 'lemon', - 'lemur', - 'leper', - 'level', - 'lever', - 'libel', - 'liege', - 'light', - 'liken', - 'lilac', - 'limbo', - 'limit', - 'linen', - 'liner', - 'lingo', - 'lipid', - 'lithe', - 'liver', - 'livid', - 'llama', - 'loamy', - 'loath', - 'lobby', - 'local', - 'locus', - 'lodge', - 'lofty', - 'logic', - 'login', - 'loopy', - 'loose', - 'lorry', - 'loser', - 'louse', - 'lousy', - 'lover', - 'lower', - 'lowly', - 'loyal', - 'lucid', - 'lucky', - 'lumen', - 'lumpy', - 'lunar', - 'lunch', - 'lunge', - 'lupus', - 'lurch', - 'lurid', - 'lusty', - 'lying', - 'lymph', - 'lynch', - 'lyric', - 'macaw', - 'macho', - 'macro', - 'madam', - 'madly', - 'mafia', - 'magic', - 'magma', - 'maize', - 'major', - 'maker', - 'mambo', - 'mamma', - 'mammy', - 'manga', - 'mange', - 'mango', - 'mangy', - 'mania', - 'manic', - 'manly', - 'manor', - 'maple', - 'march', - 'marry', - 'marsh', - 'mason', - 'masse', - 'match', - 'matey', - 'mauve', - 'maxim', - 'maybe', - 'mayor', - 'mealy', - 'meant', - 'meaty', - 'mecca', - 'medal', - 'media', - 'medic', - 'melee', - 'melon', - 'mercy', - 'merge', - 'merit', - 'merry', - 'metal', - 'meter', - 'metro', - 'micro', - 'midge', - 'midst', - 'might', - 'milky', - 'mimic', - 'mince', - 'miner', - 'minim', - 'minor', - 'minty', - 'minus', - 'mirth', - 'miser', - 'missy', - 'mocha', - 'modal', - 'model', - 'modem', - 'mogul', - 'moist', - 'molar', - 'moldy', - 'money', - 'month', - 'moody', - 'moose', - 'moral', - 'moron', - 'morph', - 'mossy', - 'motel', - 'motif', - 'motor', - 'motto', - 'moult', - 'mound', - 'mount', - 'mourn', - 'mouse', - 'mouth', - 'mover', - 'movie', - 'mower', - 'mucky', - 'mucus', - 'muddy', - 'mulch', - 'mummy', - 'munch', - 'mural', - 'murky', - 'mushy', - 'music', - 'musky', - 'musty', - 'myrrh', - 'nadir', - 'naive', - 'nanny', - 'nasal', - 'nasty', - 'natal', - 'naval', - 'navel', - 'needy', - 'neigh', - 'nerdy', - 'nerve', - 'never', - 'newer', - 'newly', - 'nicer', - 'niche', - 'niece', - 'night', - 'ninja', - 'ninny', - 'ninth', - 'noble', - 'nobly', - 'noise', - 'noisy', - 'nomad', - 'noose', - 'north', - 'nosey', - 'notch', - 'novel', - 'nudge', - 'nurse', - 'nutty', - 'nylon', - 'nymph', - 'oaken', - 'obese', - 'occur', - 'ocean', - 'octal', - 'octet', - 'odder', - 'oddly', - 'offal', - 'offer', - 'often', - 'olden', - 'older', - 'olive', - 'ombre', - 'omega', - 'onion', - 'onset', - 'opera', - 
'opine', - 'opium', - 'optic', - 'orbit', - 'order', - 'organ', - 'other', - 'otter', - 'ought', - 'ounce', - 'outdo', - 'outer', - 'outgo', - 'ovary', - 'ovate', - 'overt', - 'ovine', - 'ovoid', - 'owing', - 'owner', - 'oxide', - 'ozone', - 'paddy', - 'pagan', - 'paint', - 'paler', - 'palsy', - 'panel', - 'panic', - 'pansy', - 'papal', - 'paper', - 'parer', - 'parka', - 'parry', - 'parse', - 'party', - 'pasta', - 'paste', - 'pasty', - 'patch', - 'patio', - 'patsy', - 'patty', - 'pause', - 'payee', - 'payer', - 'peace', - 'peach', - 'pearl', - 'pecan', - 'pedal', - 'penal', - 'pence', - 'penne', - 'penny', - 'perch', - 'peril', - 'perky', - 'pesky', - 'pesto', - 'petal', - 'petty', - 'phase', - 'phone', - 'phony', - 'photo', - 'piano', - 'picky', - 'piece', - 'piety', - 'piggy', - 'pilot', - 'pinch', - 'piney', - 'pinky', - 'pinto', - 'piper', - 'pique', - 'pitch', - 'pithy', - 'pivot', - 'pixel', - 'pixie', - 'pizza', - 'place', - 'plaid', - 'plain', - 'plait', - 'plane', - 'plank', - 'plant', - 'plate', - 'plaza', - 'plead', - 'pleat', - 'plied', - 'plier', - 'pluck', - 'plumb', - 'plume', - 'plump', - 'plunk', - 'plush', - 'poesy', - 'point', - 'poise', - 'poker', - 'polar', - 'polka', - 'polyp', - 'pooch', - 'poppy', - 'porch', - 'poser', - 'posit', - 'posse', - 'pouch', - 'pound', - 'pouty', - 'power', - 'prank', - 'prawn', - 'preen', - 'press', - 'price', - 'prick', - 'pride', - 'pried', - 'prime', - 'primo', - 'print', - 'prior', - 'prism', - 'privy', - 'prize', - 'probe', - 'prone', - 'prong', - 'proof', - 'prose', - 'proud', - 'prove', - 'prowl', - 'proxy', - 'prude', - 'prune', - 'psalm', - 'pubic', - 'pudgy', - 'puffy', - 'pulpy', - 'pulse', - 'punch', - 'pupal', - 'pupil', - 'puppy', - 'puree', - 'purer', - 'purge', - 'purse', - 'pushy', - 'putty', - 'pygmy', - 'quack', - 'quail', - 'quake', - 'qualm', - 'quark', - 'quart', - 'quash', - 'quasi', - 'queen', - 'queer', - 'quell', - 'query', - 'quest', - 'queue', - 'quick', - 'quiet', - 'quill', - 'quilt', - 'quirk', - 'quite', - 'quota', - 'quote', - 'quoth', - 'rabbi', - 'rabid', - 'racer', - 'radar', - 'radii', - 'radio', - 'rainy', - 'raise', - 'rajah', - 'rally', - 'ralph', - 'ramen', - 'ranch', - 'randy', - 'range', - 'rapid', - 'rarer', - 'raspy', - 'ratio', - 'ratty', - 'raven', - 'rayon', - 'razor', - 'reach', - 'react', - 'ready', - 'realm', - 'rearm', - 'rebar', - 'rebel', - 'rebus', - 'rebut', - 'recap', - 'recur', - 'recut', - 'reedy', - 'refer', - 'refit', - 'regal', - 'rehab', - 'reign', - 'relax', - 'relay', - 'relic', - 'remit', - 'renal', - 'renew', - 'repay', - 'repel', - 'reply', - 'rerun', - 'reset', - 'resin', - 'retch', - 'retro', - 'retry', - 'reuse', - 'revel', - 'revue', - 'rhino', - 'rhyme', - 'rider', - 'ridge', - 'rifle', - 'right', - 'rigid', - 'rigor', - 'rinse', - 'ripen', - 'riper', - 'risen', - 'riser', - 'risky', - 'rival', - 'river', - 'rivet', - 'roach', - 'roast', - 'robin', - 'robot', - 'rocky', - 'rodeo', - 'roger', - 'rogue', - 'roomy', - 'roost', - 'rotor', - 'rouge', - 'rough', - 'round', - 'rouse', - 'route', - 'rover', - 'rowdy', - 'rower', - 'royal', - 'ruddy', - 'ruder', - 'rugby', - 'ruler', - 'rumba', - 'rumor', - 'rupee', - 'rural', - 'rusty', - 'sadly', - 'safer', - 'saint', - 'salad', - 'sally', - 'salon', - 'salsa', - 'salty', - 'salve', - 'salvo', - 'sandy', - 'saner', - 'sappy', - 'sassy', - 'satin', - 'satyr', - 'sauce', - 'saucy', - 'sauna', - 'saute', - 'savor', - 'savoy', - 'savvy', - 'scald', - 'scale', - 'scalp', - 'scaly', - 'scamp', - 'scant', - 'scare', - 'scarf', - 
'scary', - 'scene', - 'scent', - 'scion', - 'scoff', - 'scold', - 'scone', - 'scoop', - 'scope', - 'score', - 'scorn', - 'scour', - 'scout', - 'scowl', - 'scram', - 'scrap', - 'scree', - 'screw', - 'scrub', - 'scrum', - 'scuba', - 'sedan', - 'seedy', - 'segue', - 'seize', - 'semen', - 'sense', - 'sepia', - 'serif', - 'serum', - 'serve', - 'setup', - 'seven', - 'sever', - 'sewer', - 'shack', - 'shade', - 'shady', - 'shaft', - 'shake', - 'shaky', - 'shale', - 'shall', - 'shalt', - 'shame', - 'shank', - 'shape', - 'shard', - 'share', - 'shark', - 'sharp', - 'shave', - 'shawl', - 'shear', - 'sheen', - 'sheep', - 'sheer', - 'sheet', - 'sheik', - 'shelf', - 'shell', - 'shied', - 'shift', - 'shine', - 'shiny', - 'shire', - 'shirk', - 'shirt', - 'shoal', - 'shock', - 'shone', - 'shook', - 'shoot', - 'shore', - 'shorn', - 'short', - 'shout', - 'shove', - 'shown', - 'showy', - 'shrew', - 'shrub', - 'shrug', - 'shuck', - 'shunt', - 'shush', - 'shyly', - 'siege', - 'sieve', - 'sight', - 'sigma', - 'silky', - 'silly', - 'since', - 'sinew', - 'singe', - 'siren', - 'sissy', - 'sixth', - 'sixty', - 'skate', - 'skier', - 'skiff', - 'skill', - 'skimp', - 'skirt', - 'skulk', - 'skull', - 'skunk', - 'slack', - 'slain', - 'slang', - 'slant', - 'slash', - 'slate', - 'slave', - 'sleek', - 'sleep', - 'sleet', - 'slept', - 'slice', - 'slick', - 'slide', - 'slime', - 'slimy', - 'sling', - 'slink', - 'sloop', - 'slope', - 'slosh', - 'sloth', - 'slump', - 'slung', - 'slunk', - 'slurp', - 'slush', - 'slyly', - 'smack', - 'small', - 'smart', - 'smash', - 'smear', - 'smell', - 'smelt', - 'smile', - 'smirk', - 'smite', - 'smith', - 'smock', - 'smoke', - 'smoky', - 'smote', - 'snack', - 'snail', - 'snake', - 'snaky', - 'snare', - 'snarl', - 'sneak', - 'sneer', - 'snide', - 'sniff', - 'snipe', - 'snoop', - 'snore', - 'snort', - 'snout', - 'snowy', - 'snuck', - 'snuff', - 'soapy', - 'sober', - 'soggy', - 'solar', - 'solid', - 'solve', - 'sonar', - 'sonic', - 'sooth', - 'sooty', - 'sorry', - 'sound', - 'south', - 'sower', - 'space', - 'spade', - 'spank', - 'spare', - 'spark', - 'spasm', - 'spawn', - 'speak', - 'spear', - 'speck', - 'speed', - 'spell', - 'spelt', - 'spend', - 'spent', - 'sperm', - 'spice', - 'spicy', - 'spied', - 'spiel', - 'spike', - 'spiky', - 'spill', - 'spilt', - 'spine', - 'spiny', - 'spire', - 'spite', - 'splat', - 'split', - 'spoil', - 'spoke', - 'spoof', - 'spook', - 'spool', - 'spoon', - 'spore', - 'sport', - 'spout', - 'spray', - 'spree', - 'sprig', - 'spunk', - 'spurn', - 'spurt', - 'squad', - 'squat', - 'squib', - 'stack', - 'staff', - 'stage', - 'staid', - 'stain', - 'stair', - 'stake', - 'stale', - 'stalk', - 'stall', - 'stamp', - 'stand', - 'stank', - 'stare', - 'stark', - 'start', - 'stash', - 'state', - 'stave', - 'stead', - 'steak', - 'steal', - 'steam', - 'steed', - 'steel', - 'steep', - 'steer', - 'stein', - 'stern', - 'stick', - 'stiff', - 'still', - 'stilt', - 'sting', - 'stink', - 'stint', - 'stock', - 'stoic', - 'stoke', - 'stole', - 'stomp', - 'stone', - 'stony', - 'stood', - 'stool', - 'stoop', - 'store', - 'stork', - 'storm', - 'story', - 'stout', - 'stove', - 'strap', - 'straw', - 'stray', - 'strip', - 'strut', - 'stuck', - 'study', - 'stuff', - 'stump', - 'stung', - 'stunk', - 'stunt', - 'style', - 'suave', - 'sugar', - 'suing', - 'suite', - 'sulky', - 'sully', - 'sumac', - 'sunny', - 'super', - 'surer', - 'surge', - 'surly', - 'sushi', - 'swami', - 'swamp', - 'swarm', - 'swash', - 'swath', - 'swear', - 'sweat', - 'sweep', - 'sweet', - 'swell', - 'swept', - 'swift', - 'swill', - 
'swine', - 'swing', - 'swirl', - 'swish', - 'swoon', - 'swoop', - 'sword', - 'swore', - 'sworn', - 'swung', - 'synod', - 'syrup', - 'tabby', - 'table', - 'taboo', - 'tacit', - 'tacky', - 'taffy', - 'taint', - 'taken', - 'taker', - 'tally', - 'talon', - 'tamer', - 'tango', - 'tangy', - 'taper', - 'tapir', - 'tardy', - 'tarot', - 'taste', - 'tasty', - 'tatty', - 'taunt', - 'tawny', - 'teach', - 'teary', - 'tease', - 'teddy', - 'teeth', - 'tempo', - 'tenet', - 'tenor', - 'tense', - 'tenth', - 'tepee', - 'tepid', - 'terra', - 'terse', - 'testy', - 'thank', - 'theft', - 'their', - 'theme', - 'there', - 'these', - 'theta', - 'thick', - 'thief', - 'thigh', - 'thing', - 'think', - 'third', - 'thong', - 'thorn', - 'those', - 'three', - 'threw', - 'throb', - 'throw', - 'thrum', - 'thumb', - 'thump', - 'thyme', - 'tiara', - 'tibia', - 'tidal', - 'tiger', - 'tight', - 'tilde', - 'timer', - 'timid', - 'tipsy', - 'titan', - 'tithe', - 'title', - 'toast', - 'today', - 'toddy', - 'token', - 'tonal', - 'tonga', - 'tonic', - 'tooth', - 'topaz', - 'topic', - 'torch', - 'torso', - 'torus', - 'total', - 'totem', - 'touch', - 'tough', - 'towel', - 'tower', - 'toxic', - 'toxin', - 'trace', - 'track', - 'tract', - 'trade', - 'trail', - 'train', - 'trait', - 'tramp', - 'trash', - 'trawl', - 'tread', - 'treat', - 'trend', - 'triad', - 'trial', - 'tribe', - 'trice', - 'trick', - 'tried', - 'tripe', - 'trite', - 'troll', - 'troop', - 'trope', - 'trout', - 'trove', - 'truce', - 'truck', - 'truer', - 'truly', - 'trump', - 'trunk', - 'truss', - 'trust', - 'truth', - 'tryst', - 'tubal', - 'tuber', - 'tulip', - 'tulle', - 'tumor', - 'tunic', - 'turbo', - 'tutor', - 'twang', - 'tweak', - 'tweed', - 'tweet', - 'twice', - 'twine', - 'twirl', - 'twist', - 'twixt', - 'tying', - 'udder', - 'ulcer', - 'ultra', - 'umbra', - 'uncle', - 'uncut', - 'under', - 'undid', - 'undue', - 'unfed', - 'unfit', - 'unify', - 'union', - 'unite', - 'unity', - 'unlit', - 'unmet', - 'unset', - 'untie', - 'until', - 'unwed', - 'unzip', - 'upper', - 'upset', - 'urban', - 'urine', - 'usage', - 'usher', - 'using', - 'usual', - 'usurp', - 'utile', - 'utter', - 'vague', - 'valet', - 'valid', - 'valor', - 'value', - 'valve', - 'vapid', - 'vapor', - 'vault', - 'vaunt', - 'vegan', - 'venom', - 'venue', - 'verge', - 'verse', - 'verso', - 'verve', - 'vicar', - 'video', - 'vigil', - 'vigor', - 'villa', - 'vinyl', - 'viola', - 'viper', - 'viral', - 'virus', - 'visit', - 'visor', - 'vista', - 'vital', - 'vivid', - 'vixen', - 'vocal', - 'vodka', - 'vogue', - 'voice', - 'voila', - 'vomit', - 'voter', - 'vouch', - 'vowel', - 'vying', - 'wacky', - 'wafer', - 'wager', - 'wagon', - 'waist', - 'waive', - 'waltz', - 'warty', - 'waste', - 'watch', - 'water', - 'waver', - 'waxen', - 'weary', - 'weave', - 'wedge', - 'weedy', - 'weigh', - 'weird', - 'welch', - 'welsh', - 'wench', - 'whack', - 'whale', - 'wharf', - 'wheat', - 'wheel', - 'whelp', - 'where', - 'which', - 'whiff', - 'while', - 'whine', - 'whiny', - 'whirl', - 'whisk', - 'white', - 'whole', - 'whoop', - 'whose', - 'widen', - 'wider', - 'widow', - 'width', - 'wield', - 'wight', - 'willy', - 'wimpy', - 'wince', - 'winch', - 'windy', - 'wiser', - 'wispy', - 'witch', - 'witty', - 'woken', - 'woman', - 'women', - 'woody', - 'wooer', - 'wooly', - 'woozy', - 'wordy', - 'world', - 'worry', - 'worse', - 'worst', - 'worth', - 'would', - 'wound', - 'woven', - 'wrack', - 'wrath', - 'wreak', - 'wreck', - 'wrest', - 'wring', - 'wrist', - 'write', - 'wrong', - 'wrote', - 'wrung', - 'wryly', - 'yacht', - 'yearn', - 'yeast', - 
'yield', - 'young', - 'youth', - 'zebra', - 'zesty', - 'zonal' -]; - -/** The list of valid guesses, of which the list of possible words is a subset */ -export const allowed = new Set([ - ...words, - 'aahed', - 'aalii', - 'aargh', - 'aarti', - 'abaca', - 'abaci', - 'abacs', - 'abaft', - 'abaka', - 'abamp', - 'aband', - 'abash', - 'abask', - 'abaya', - 'abbas', - 'abbed', - 'abbes', - 'abcee', - 'abeam', - 'abear', - 'abele', - 'abers', - 'abets', - 'abies', - 'abler', - 'ables', - 'ablet', - 'ablow', - 'abmho', - 'abohm', - 'aboil', - 'aboma', - 'aboon', - 'abord', - 'abore', - 'abram', - 'abray', - 'abrim', - 'abrin', - 'abris', - 'absey', - 'absit', - 'abuna', - 'abune', - 'abuts', - 'abuzz', - 'abyes', - 'abysm', - 'acais', - 'acari', - 'accas', - 'accoy', - 'acerb', - 'acers', - 'aceta', - 'achar', - 'ached', - 'aches', - 'achoo', - 'acids', - 'acidy', - 'acing', - 'acini', - 'ackee', - 'acker', - 'acmes', - 'acmic', - 'acned', - 'acnes', - 'acock', - 'acold', - 'acred', - 'acres', - 'acros', - 'acted', - 'actin', - 'acton', - 'acyls', - 'adaws', - 'adays', - 'adbot', - 'addax', - 'added', - 'adder', - 'addio', - 'addle', - 'adeem', - 'adhan', - 'adieu', - 'adios', - 'adits', - 'adman', - 'admen', - 'admix', - 'adobo', - 'adown', - 'adoze', - 'adrad', - 'adred', - 'adsum', - 'aduki', - 'adunc', - 'adust', - 'advew', - 'adyta', - 'adzed', - 'adzes', - 'aecia', - 'aedes', - 'aegis', - 'aeons', - 'aerie', - 'aeros', - 'aesir', - 'afald', - 'afara', - 'afars', - 'afear', - 'aflaj', - 'afore', - 'afrit', - 'afros', - 'agama', - 'agami', - 'agars', - 'agast', - 'agave', - 'agaze', - 'agene', - 'agers', - 'agger', - 'aggie', - 'aggri', - 'aggro', - 'aggry', - 'aghas', - 'agila', - 'agios', - 'agism', - 'agist', - 'agita', - 'aglee', - 'aglet', - 'agley', - 'agloo', - 'aglus', - 'agmas', - 'agoge', - 'agone', - 'agons', - 'agood', - 'agria', - 'agrin', - 'agros', - 'agued', - 'agues', - 'aguna', - 'aguti', - 'aheap', - 'ahent', - 'ahigh', - 'ahind', - 'ahing', - 'ahint', - 'ahold', - 'ahull', - 'ahuru', - 'aidas', - 'aided', - 'aides', - 'aidoi', - 'aidos', - 'aiery', - 'aigas', - 'aight', - 'ailed', - 'aimed', - 'aimer', - 'ainee', - 'ainga', - 'aioli', - 'aired', - 'airer', - 'airns', - 'airth', - 'airts', - 'aitch', - 'aitus', - 'aiver', - 'aiyee', - 'aizle', - 'ajies', - 'ajiva', - 'ajuga', - 'ajwan', - 'akees', - 'akela', - 'akene', - 'aking', - 'akita', - 'akkas', - 'alaap', - 'alack', - 'alamo', - 'aland', - 'alane', - 'alang', - 'alans', - 'alant', - 'alapa', - 'alaps', - 'alary', - 'alate', - 'alays', - 'albas', - 'albee', - 'alcid', - 'alcos', - 'aldea', - 'alder', - 'aldol', - 'aleck', - 'alecs', - 'alefs', - 'aleft', - 'aleph', - 'alews', - 'aleye', - 'alfas', - 'algal', - 'algas', - 'algid', - 'algin', - 'algor', - 'algum', - 'alias', - 'alifs', - 'aline', - 'alist', - 'aliya', - 'alkie', - 'alkos', - 'alkyd', - 'alkyl', - 'allee', - 'allel', - 'allis', - 'allod', - 'allyl', - 'almah', - 'almas', - 'almeh', - 'almes', - 'almud', - 'almug', - 'alods', - 'aloed', - 'aloes', - 'aloha', - 'aloin', - 'aloos', - 'alowe', - 'altho', - 'altos', - 'alula', - 'alums', - 'alure', - 'alvar', - 'alway', - 'amahs', - 'amain', - 'amate', - 'amaut', - 'amban', - 'ambit', - 'ambos', - 'ambry', - 'ameba', - 'ameer', - 'amene', - 'amens', - 'ament', - 'amias', - 'amice', - 'amici', - 'amide', - 'amido', - 'amids', - 'amies', - 'amiga', - 'amigo', - 'amine', - 'amino', - 'amins', - 'amirs', - 'amlas', - 'amman', - 'ammon', - 'ammos', - 'amnia', - 'amnic', - 'amnio', - 'amoks', - 'amole', - 'amort', - 
'amour', - 'amove', - 'amowt', - 'amped', - 'ampul', - 'amrit', - 'amuck', - 'amyls', - 'anana', - 'anata', - 'ancho', - 'ancle', - 'ancon', - 'andro', - 'anear', - 'anele', - 'anent', - 'angas', - 'anglo', - 'anigh', - 'anile', - 'anils', - 'anima', - 'animi', - 'anion', - 'anise', - 'anker', - 'ankhs', - 'ankus', - 'anlas', - 'annal', - 'annas', - 'annat', - 'anoas', - 'anole', - 'anomy', - 'ansae', - 'antae', - 'antar', - 'antas', - 'anted', - 'antes', - 'antis', - 'antra', - 'antre', - 'antsy', - 'anura', - 'anyon', - 'apace', - 'apage', - 'apaid', - 'apayd', - 'apays', - 'apeak', - 'apeek', - 'apers', - 'apert', - 'apery', - 'apgar', - 'aphis', - 'apian', - 'apiol', - 'apish', - 'apism', - 'apode', - 'apods', - 'apoop', - 'aport', - 'appal', - 'appay', - 'appel', - 'appro', - 'appui', - 'appuy', - 'apres', - 'apses', - 'apsis', - 'apsos', - 'apted', - 'apter', - 'aquae', - 'aquas', - 'araba', - 'araks', - 'arame', - 'arars', - 'arbas', - 'arced', - 'archi', - 'arcos', - 'arcus', - 'ardeb', - 'ardri', - 'aread', - 'areae', - 'areal', - 'arear', - 'areas', - 'areca', - 'aredd', - 'arede', - 'arefy', - 'areic', - 'arene', - 'arepa', - 'arere', - 'arete', - 'arets', - 'arett', - 'argal', - 'argan', - 'argil', - 'argle', - 'argol', - 'argon', - 'argot', - 'argus', - 'arhat', - 'arias', - 'ariel', - 'ariki', - 'arils', - 'ariot', - 'arish', - 'arked', - 'arled', - 'arles', - 'armed', - 'armer', - 'armet', - 'armil', - 'arnas', - 'arnut', - 'aroba', - 'aroha', - 'aroid', - 'arpas', - 'arpen', - 'arrah', - 'arras', - 'arret', - 'arris', - 'arroz', - 'arsed', - 'arses', - 'arsey', - 'arsis', - 'artal', - 'artel', - 'artic', - 'artis', - 'aruhe', - 'arums', - 'arval', - 'arvee', - 'arvos', - 'aryls', - 'asana', - 'ascon', - 'ascus', - 'asdic', - 'ashed', - 'ashes', - 'ashet', - 'asked', - 'asker', - 'askoi', - 'askos', - 'aspen', - 'asper', - 'aspic', - 'aspie', - 'aspis', - 'aspro', - 'assai', - 'assam', - 'asses', - 'assez', - 'assot', - 'aster', - 'astir', - 'astun', - 'asura', - 'asway', - 'aswim', - 'asyla', - 'ataps', - 'ataxy', - 'atigi', - 'atilt', - 'atimy', - 'atlas', - 'atman', - 'atmas', - 'atmos', - 'atocs', - 'atoke', - 'atoks', - 'atoms', - 'atomy', - 'atony', - 'atopy', - 'atria', - 'atrip', - 'attap', - 'attar', - 'atuas', - 'audad', - 'auger', - 'aught', - 'aulas', - 'aulic', - 'auloi', - 'aulos', - 'aumil', - 'aunes', - 'aunts', - 'aurae', - 'aural', - 'aurar', - 'auras', - 'aurei', - 'aures', - 'auric', - 'auris', - 'aurum', - 'autos', - 'auxin', - 'avale', - 'avant', - 'avast', - 'avels', - 'avens', - 'avers', - 'avgas', - 'avine', - 'avion', - 'avise', - 'aviso', - 'avize', - 'avows', - 'avyze', - 'awarn', - 'awato', - 'awave', - 'aways', - 'awdls', - 'aweel', - 'aweto', - 'awing', - 'awmry', - 'awned', - 'awner', - 'awols', - 'awork', - 'axels', - 'axile', - 'axils', - 'axing', - 'axite', - 'axled', - 'axles', - 'axman', - 'axmen', - 'axoid', - 'axone', - 'axons', - 'ayahs', - 'ayaya', - 'ayelp', - 'aygre', - 'ayins', - 'ayont', - 'ayres', - 'ayrie', - 'azans', - 'azide', - 'azido', - 'azine', - 'azlon', - 'azoic', - 'azole', - 'azons', - 'azote', - 'azoth', - 'azuki', - 'azurn', - 'azury', - 'azygy', - 'azyme', - 'azyms', - 'baaed', - 'baals', - 'babas', - 'babel', - 'babes', - 'babka', - 'baboo', - 'babul', - 'babus', - 'bacca', - 'bacco', - 'baccy', - 'bacha', - 'bachs', - 'backs', - 'baddy', - 'baels', - 'baffs', - 'baffy', - 'bafts', - 'baghs', - 'bagie', - 'bahts', - 'bahus', - 'bahut', - 'bails', - 'bairn', - 'baisa', - 'baith', - 'baits', - 'baiza', - 'baize', - 
'bajan', - 'bajra', - 'bajri', - 'bajus', - 'baked', - 'baken', - 'bakes', - 'bakra', - 'balas', - 'balds', - 'baldy', - 'baled', - 'bales', - 'balks', - 'balky', - 'balls', - 'bally', - 'balms', - 'baloo', - 'balsa', - 'balti', - 'balun', - 'balus', - 'bambi', - 'banak', - 'banco', - 'bancs', - 'banda', - 'bandh', - 'bands', - 'bandy', - 'baned', - 'banes', - 'bangs', - 'bania', - 'banks', - 'banns', - 'bants', - 'bantu', - 'banty', - 'banya', - 'bapus', - 'barbe', - 'barbs', - 'barby', - 'barca', - 'barde', - 'bardo', - 'bards', - 'bardy', - 'bared', - 'barer', - 'bares', - 'barfi', - 'barfs', - 'baric', - 'barks', - 'barky', - 'barms', - 'barmy', - 'barns', - 'barny', - 'barps', - 'barra', - 'barre', - 'barro', - 'barry', - 'barye', - 'basan', - 'based', - 'basen', - 'baser', - 'bases', - 'basho', - 'basij', - 'basks', - 'bason', - 'basse', - 'bassi', - 'basso', - 'bassy', - 'basta', - 'basti', - 'basto', - 'basts', - 'bated', - 'bates', - 'baths', - 'batik', - 'batta', - 'batts', - 'battu', - 'bauds', - 'bauks', - 'baulk', - 'baurs', - 'bavin', - 'bawds', - 'bawks', - 'bawls', - 'bawns', - 'bawrs', - 'bawty', - 'bayed', - 'bayer', - 'bayes', - 'bayle', - 'bayts', - 'bazar', - 'bazoo', - 'beads', - 'beaks', - 'beaky', - 'beals', - 'beams', - 'beamy', - 'beano', - 'beans', - 'beany', - 'beare', - 'bears', - 'beath', - 'beats', - 'beaty', - 'beaus', - 'beaut', - 'beaux', - 'bebop', - 'becap', - 'becke', - 'becks', - 'bedad', - 'bedel', - 'bedes', - 'bedew', - 'bedim', - 'bedye', - 'beedi', - 'beefs', - 'beeps', - 'beers', - 'beery', - 'beets', - 'befog', - 'begad', - 'begar', - 'begem', - 'begot', - 'begum', - 'beige', - 'beigy', - 'beins', - 'bekah', - 'belah', - 'belar', - 'belay', - 'belee', - 'belga', - 'bells', - 'belon', - 'belts', - 'bemad', - 'bemas', - 'bemix', - 'bemud', - 'bends', - 'bendy', - 'benes', - 'benet', - 'benga', - 'benis', - 'benne', - 'benni', - 'benny', - 'bento', - 'bents', - 'benty', - 'bepat', - 'beray', - 'beres', - 'bergs', - 'berko', - 'berks', - 'berme', - 'berms', - 'berob', - 'beryl', - 'besat', - 'besaw', - 'besee', - 'beses', - 'besit', - 'besom', - 'besot', - 'besti', - 'bests', - 'betas', - 'beted', - 'betes', - 'beths', - 'betid', - 'beton', - 'betta', - 'betty', - 'bever', - 'bevor', - 'bevue', - 'bevvy', - 'bewet', - 'bewig', - 'bezes', - 'bezil', - 'bezzy', - 'bhais', - 'bhaji', - 'bhang', - 'bhats', - 'bhels', - 'bhoot', - 'bhuna', - 'bhuts', - 'biach', - 'biali', - 'bialy', - 'bibbs', - 'bibes', - 'biccy', - 'bices', - 'bided', - 'bider', - 'bides', - 'bidet', - 'bidis', - 'bidon', - 'bield', - 'biers', - 'biffo', - 'biffs', - 'biffy', - 'bifid', - 'bigae', - 'biggs', - 'biggy', - 'bigha', - 'bight', - 'bigly', - 'bigos', - 'bijou', - 'biked', - 'biker', - 'bikes', - 'bikie', - 'bilbo', - 'bilby', - 'biled', - 'biles', - 'bilgy', - 'bilks', - 'bills', - 'bimah', - 'bimas', - 'bimbo', - 'binal', - 'bindi', - 'binds', - 'biner', - 'bines', - 'bings', - 'bingy', - 'binit', - 'binks', - 'bints', - 'biogs', - 'biont', - 'biota', - 'biped', - 'bipod', - 'birds', - 'birks', - 'birle', - 'birls', - 'biros', - 'birrs', - 'birse', - 'birsy', - 'bises', - 'bisks', - 'bisom', - 'bitch', - 'biter', - 'bites', - 'bitos', - 'bitou', - 'bitsy', - 'bitte', - 'bitts', - 'bivia', - 'bivvy', - 'bizes', - 'bizzo', - 'bizzy', - 'blabs', - 'blads', - 'blady', - 'blaer', - 'blaes', - 'blaff', - 'blags', - 'blahs', - 'blain', - 'blams', - 'blart', - 'blase', - 'blash', - 'blate', - 'blats', - 'blatt', - 'blaud', - 'blawn', - 'blaws', - 'blays', - 'blear', - 'blebs', - 
'blech', - 'blees', - 'blent', - 'blert', - 'blest', - 'blets', - 'bleys', - 'blimy', - 'bling', - 'blini', - 'blins', - 'bliny', - 'blips', - 'blist', - 'blite', - 'blits', - 'blive', - 'blobs', - 'blocs', - 'blogs', - 'blook', - 'bloop', - 'blore', - 'blots', - 'blows', - 'blowy', - 'blubs', - 'blude', - 'bluds', - 'bludy', - 'blued', - 'blues', - 'bluet', - 'bluey', - 'bluid', - 'blume', - 'blunk', - 'blurs', - 'blype', - 'boabs', - 'boaks', - 'boars', - 'boart', - 'boats', - 'bobac', - 'bobak', - 'bobas', - 'bobol', - 'bobos', - 'bocca', - 'bocce', - 'bocci', - 'boche', - 'bocks', - 'boded', - 'bodes', - 'bodge', - 'bodhi', - 'bodle', - 'boeps', - 'boets', - 'boeuf', - 'boffo', - 'boffs', - 'bogan', - 'bogey', - 'boggy', - 'bogie', - 'bogle', - 'bogue', - 'bogus', - 'bohea', - 'bohos', - 'boils', - 'boing', - 'boink', - 'boite', - 'boked', - 'bokeh', - 'bokes', - 'bokos', - 'bolar', - 'bolas', - 'bolds', - 'boles', - 'bolix', - 'bolls', - 'bolos', - 'bolts', - 'bolus', - 'bomas', - 'bombe', - 'bombo', - 'bombs', - 'bonce', - 'bonds', - 'boned', - 'boner', - 'bones', - 'bongs', - 'bonie', - 'bonks', - 'bonne', - 'bonny', - 'bonza', - 'bonze', - 'booai', - 'booay', - 'boobs', - 'boody', - 'booed', - 'boofy', - 'boogy', - 'boohs', - 'books', - 'booky', - 'bools', - 'booms', - 'boomy', - 'boong', - 'boons', - 'boord', - 'boors', - 'boose', - 'boots', - 'boppy', - 'borak', - 'boral', - 'boras', - 'borde', - 'bords', - 'bored', - 'boree', - 'borel', - 'borer', - 'bores', - 'borgo', - 'boric', - 'borks', - 'borms', - 'borna', - 'boron', - 'borts', - 'borty', - 'bortz', - 'bosie', - 'bosks', - 'bosky', - 'boson', - 'bosun', - 'botas', - 'botel', - 'botes', - 'bothy', - 'botte', - 'botts', - 'botty', - 'bouge', - 'bouks', - 'boult', - 'bouns', - 'bourd', - 'bourg', - 'bourn', - 'bouse', - 'bousy', - 'bouts', - 'bovid', - 'bowat', - 'bowed', - 'bower', - 'bowes', - 'bowet', - 'bowie', - 'bowls', - 'bowne', - 'bowrs', - 'bowse', - 'boxed', - 'boxen', - 'boxes', - 'boxla', - 'boxty', - 'boyar', - 'boyau', - 'boyed', - 'boyfs', - 'boygs', - 'boyla', - 'boyos', - 'boysy', - 'bozos', - 'braai', - 'brach', - 'brack', - 'bract', - 'brads', - 'braes', - 'brags', - 'brail', - 'braks', - 'braky', - 'brame', - 'brane', - 'brank', - 'brans', - 'brant', - 'brast', - 'brats', - 'brava', - 'bravi', - 'braws', - 'braxy', - 'brays', - 'braza', - 'braze', - 'bream', - 'brede', - 'breds', - 'breem', - 'breer', - 'brees', - 'breid', - 'breis', - 'breme', - 'brens', - 'brent', - 'brere', - 'brers', - 'breve', - 'brews', - 'breys', - 'brier', - 'bries', - 'brigs', - 'briki', - 'briks', - 'brill', - 'brims', - 'brins', - 'brios', - 'brise', - 'briss', - 'brith', - 'brits', - 'britt', - 'brize', - 'broch', - 'brock', - 'brods', - 'brogh', - 'brogs', - 'brome', - 'bromo', - 'bronc', - 'brond', - 'brool', - 'broos', - 'brose', - 'brosy', - 'brows', - 'brugh', - 'bruin', - 'bruit', - 'brule', - 'brume', - 'brung', - 'brusk', - 'brust', - 'bruts', - 'buats', - 'buaze', - 'bubal', - 'bubas', - 'bubba', - 'bubbe', - 'bubby', - 'bubus', - 'buchu', - 'bucko', - 'bucks', - 'bucku', - 'budas', - 'budis', - 'budos', - 'buffa', - 'buffe', - 'buffi', - 'buffo', - 'buffs', - 'buffy', - 'bufos', - 'bufty', - 'buhls', - 'buhrs', - 'buiks', - 'buist', - 'bukes', - 'bulbs', - 'bulgy', - 'bulks', - 'bulla', - 'bulls', - 'bulse', - 'bumbo', - 'bumfs', - 'bumph', - 'bumps', - 'bumpy', - 'bunas', - 'bunce', - 'bunco', - 'bunde', - 'bundh', - 'bunds', - 'bundt', - 'bundu', - 'bundy', - 'bungs', - 'bungy', - 'bunia', - 'bunje', - 'bunjy', - 
'bunko', - 'bunks', - 'bunns', - 'bunts', - 'bunty', - 'bunya', - 'buoys', - 'buppy', - 'buran', - 'buras', - 'burbs', - 'burds', - 'buret', - 'burfi', - 'burgh', - 'burgs', - 'burin', - 'burka', - 'burke', - 'burks', - 'burls', - 'burns', - 'buroo', - 'burps', - 'burqa', - 'burro', - 'burrs', - 'burry', - 'bursa', - 'burse', - 'busby', - 'buses', - 'busks', - 'busky', - 'bussu', - 'busti', - 'busts', - 'busty', - 'buteo', - 'butes', - 'butle', - 'butoh', - 'butts', - 'butty', - 'butut', - 'butyl', - 'buzzy', - 'bwana', - 'bwazi', - 'byded', - 'bydes', - 'byked', - 'bykes', - 'byres', - 'byrls', - 'byssi', - 'bytes', - 'byway', - 'caaed', - 'cabas', - 'caber', - 'cabob', - 'caboc', - 'cabre', - 'cacas', - 'cacks', - 'cacky', - 'cadee', - 'cades', - 'cadge', - 'cadgy', - 'cadie', - 'cadis', - 'cadre', - 'caeca', - 'caese', - 'cafes', - 'caffs', - 'caged', - 'cager', - 'cages', - 'cagot', - 'cahow', - 'caids', - 'cains', - 'caird', - 'cajon', - 'cajun', - 'caked', - 'cakes', - 'cakey', - 'calfs', - 'calid', - 'calif', - 'calix', - 'calks', - 'calla', - 'calls', - 'calms', - 'calmy', - 'calos', - 'calpa', - 'calps', - 'calve', - 'calyx', - 'caman', - 'camas', - 'cames', - 'camis', - 'camos', - 'campi', - 'campo', - 'camps', - 'campy', - 'camus', - 'caned', - 'caneh', - 'caner', - 'canes', - 'cangs', - 'canid', - 'canna', - 'canns', - 'canso', - 'canst', - 'canto', - 'cants', - 'canty', - 'capas', - 'caped', - 'capes', - 'capex', - 'caphs', - 'capiz', - 'caple', - 'capon', - 'capos', - 'capot', - 'capri', - 'capul', - 'carap', - 'carbo', - 'carbs', - 'carby', - 'cardi', - 'cards', - 'cardy', - 'cared', - 'carer', - 'cares', - 'caret', - 'carex', - 'carks', - 'carle', - 'carls', - 'carns', - 'carny', - 'carob', - 'carom', - 'caron', - 'carpi', - 'carps', - 'carrs', - 'carse', - 'carta', - 'carte', - 'carts', - 'carvy', - 'casas', - 'casco', - 'cased', - 'cases', - 'casks', - 'casky', - 'casts', - 'casus', - 'cates', - 'cauda', - 'cauks', - 'cauld', - 'cauls', - 'caums', - 'caups', - 'cauri', - 'causa', - 'cavas', - 'caved', - 'cavel', - 'caver', - 'caves', - 'cavie', - 'cawed', - 'cawks', - 'caxon', - 'ceaze', - 'cebid', - 'cecal', - 'cecum', - 'ceded', - 'ceder', - 'cedes', - 'cedis', - 'ceiba', - 'ceili', - 'ceils', - 'celeb', - 'cella', - 'celli', - 'cells', - 'celom', - 'celts', - 'cense', - 'cento', - 'cents', - 'centu', - 'ceorl', - 'cepes', - 'cerci', - 'cered', - 'ceres', - 'cerge', - 'ceria', - 'ceric', - 'cerne', - 'ceroc', - 'ceros', - 'certs', - 'certy', - 'cesse', - 'cesta', - 'cesti', - 'cetes', - 'cetyl', - 'cezve', - 'chace', - 'chack', - 'chaco', - 'chado', - 'chads', - 'chaft', - 'chais', - 'chals', - 'chams', - 'chana', - 'chang', - 'chank', - 'chape', - 'chaps', - 'chapt', - 'chara', - 'chare', - 'chark', - 'charr', - 'chars', - 'chary', - 'chats', - 'chave', - 'chavs', - 'chawk', - 'chaws', - 'chaya', - 'chays', - 'cheep', - 'chefs', - 'cheka', - 'chela', - 'chelp', - 'chemo', - 'chems', - 'chere', - 'chert', - 'cheth', - 'chevy', - 'chews', - 'chewy', - 'chiao', - 'chias', - 'chibs', - 'chica', - 'chich', - 'chico', - 'chics', - 'chiel', - 'chiks', - 'chile', - 'chimb', - 'chimo', - 'chimp', - 'chine', - 'ching', - 'chink', - 'chino', - 'chins', - 'chips', - 'chirk', - 'chirl', - 'chirm', - 'chiro', - 'chirr', - 'chirt', - 'chiru', - 'chits', - 'chive', - 'chivs', - 'chivy', - 'chizz', - 'choco', - 'chocs', - 'chode', - 'chogs', - 'choil', - 'choko', - 'choky', - 'chola', - 'choli', - 'cholo', - 'chomp', - 'chons', - 'choof', - 'chook', - 'choom', - 'choon', - 'chops', - 
'chota', - 'chott', - 'chout', - 'choux', - 'chowk', - 'chows', - 'chubs', - 'chufa', - 'chuff', - 'chugs', - 'chums', - 'churl', - 'churr', - 'chuse', - 'chuts', - 'chyle', - 'chyme', - 'chynd', - 'cibol', - 'cided', - 'cides', - 'ciels', - 'ciggy', - 'cilia', - 'cills', - 'cimar', - 'cimex', - 'cinct', - 'cines', - 'cinqs', - 'cions', - 'cippi', - 'circs', - 'cires', - 'cirls', - 'cirri', - 'cisco', - 'cissy', - 'cists', - 'cital', - 'cited', - 'citer', - 'cites', - 'cives', - 'civet', - 'civie', - 'civvy', - 'clach', - 'clade', - 'clads', - 'claes', - 'clags', - 'clame', - 'clams', - 'clans', - 'claps', - 'clapt', - 'claro', - 'clart', - 'clary', - 'clast', - 'clats', - 'claut', - 'clave', - 'clavi', - 'claws', - 'clays', - 'cleck', - 'cleek', - 'cleep', - 'clefs', - 'clegs', - 'cleik', - 'clems', - 'clepe', - 'clept', - 'cleve', - 'clews', - 'clied', - 'clies', - 'clift', - 'clime', - 'cline', - 'clint', - 'clipe', - 'clips', - 'clipt', - 'clits', - 'cloam', - 'clods', - 'cloff', - 'clogs', - 'cloke', - 'clomb', - 'clomp', - 'clonk', - 'clons', - 'cloop', - 'cloot', - 'clops', - 'clote', - 'clots', - 'clour', - 'clous', - 'clows', - 'cloye', - 'cloys', - 'cloze', - 'clubs', - 'clues', - 'cluey', - 'clunk', - 'clype', - 'cnida', - 'coact', - 'coady', - 'coala', - 'coals', - 'coaly', - 'coapt', - 'coarb', - 'coate', - 'coati', - 'coats', - 'cobbs', - 'cobby', - 'cobia', - 'coble', - 'cobza', - 'cocas', - 'cocci', - 'cocco', - 'cocks', - 'cocky', - 'cocos', - 'codas', - 'codec', - 'coded', - 'coden', - 'coder', - 'codes', - 'codex', - 'codon', - 'coeds', - 'coffs', - 'cogie', - 'cogon', - 'cogue', - 'cohab', - 'cohen', - 'cohoe', - 'cohog', - 'cohos', - 'coifs', - 'coign', - 'coils', - 'coins', - 'coirs', - 'coits', - 'coked', - 'cokes', - 'colas', - 'colby', - 'colds', - 'coled', - 'coles', - 'coley', - 'colic', - 'colin', - 'colls', - 'colly', - 'colog', - 'colts', - 'colza', - 'comae', - 'comal', - 'comas', - 'combe', - 'combi', - 'combo', - 'combs', - 'comby', - 'comer', - 'comes', - 'comix', - 'commo', - 'comms', - 'commy', - 'compo', - 'comps', - 'compt', - 'comte', - 'comus', - 'coned', - 'cones', - 'coney', - 'confs', - 'conga', - 'conge', - 'congo', - 'conia', - 'conin', - 'conks', - 'conky', - 'conne', - 'conns', - 'conte', - 'conto', - 'conus', - 'convo', - 'cooch', - 'cooed', - 'cooee', - 'cooer', - 'cooey', - 'coofs', - 'cooks', - 'cooky', - 'cools', - 'cooly', - 'coomb', - 'cooms', - 'coomy', - 'coons', - 'coops', - 'coopt', - 'coost', - 'coots', - 'cooze', - 'copal', - 'copay', - 'coped', - 'copen', - 'coper', - 'copes', - 'coppy', - 'copra', - 'copsy', - 'coqui', - 'coram', - 'corbe', - 'corby', - 'cords', - 'cored', - 'cores', - 'corey', - 'corgi', - 'coria', - 'corks', - 'corky', - 'corms', - 'corni', - 'corno', - 'corns', - 'cornu', - 'corps', - 'corse', - 'corso', - 'cosec', - 'cosed', - 'coses', - 'coset', - 'cosey', - 'cosie', - 'costa', - 'coste', - 'costs', - 'cotan', - 'coted', - 'cotes', - 'coths', - 'cotta', - 'cotts', - 'coude', - 'coups', - 'courb', - 'courd', - 'coure', - 'cours', - 'couta', - 'couth', - 'coved', - 'coves', - 'covin', - 'cowal', - 'cowan', - 'cowed', - 'cowks', - 'cowls', - 'cowps', - 'cowry', - 'coxae', - 'coxal', - 'coxed', - 'coxes', - 'coxib', - 'coyau', - 'coyed', - 'coyer', - 'coypu', - 'cozed', - 'cozen', - 'cozes', - 'cozey', - 'cozie', - 'craal', - 'crabs', - 'crags', - 'craic', - 'craig', - 'crake', - 'crame', - 'crams', - 'crans', - 'crape', - 'craps', - 'crapy', - 'crare', - 'craws', - 'crays', - 'creds', - 'creel', - 'crees', - 
'crems', - 'crena', - 'creps', - 'crepy', - 'crewe', - 'crews', - 'crias', - 'cribs', - 'cries', - 'crims', - 'crine', - 'crios', - 'cripe', - 'crips', - 'crise', - 'crith', - 'crits', - 'croci', - 'crocs', - 'croft', - 'crogs', - 'cromb', - 'crome', - 'cronk', - 'crons', - 'crool', - 'croon', - 'crops', - 'crore', - 'crost', - 'crout', - 'crows', - 'croze', - 'cruck', - 'crudo', - 'cruds', - 'crudy', - 'crues', - 'cruet', - 'cruft', - 'crunk', - 'cruor', - 'crura', - 'cruse', - 'crusy', - 'cruve', - 'crwth', - 'cryer', - 'ctene', - 'cubby', - 'cubeb', - 'cubed', - 'cuber', - 'cubes', - 'cubit', - 'cuddy', - 'cuffo', - 'cuffs', - 'cuifs', - 'cuing', - 'cuish', - 'cuits', - 'cukes', - 'culch', - 'culet', - 'culex', - 'culls', - 'cully', - 'culms', - 'culpa', - 'culti', - 'cults', - 'culty', - 'cumec', - 'cundy', - 'cunei', - 'cunit', - 'cunts', - 'cupel', - 'cupid', - 'cuppa', - 'cuppy', - 'curat', - 'curbs', - 'curch', - 'curds', - 'curdy', - 'cured', - 'curer', - 'cures', - 'curet', - 'curfs', - 'curia', - 'curie', - 'curli', - 'curls', - 'curns', - 'curny', - 'currs', - 'cursi', - 'curst', - 'cusec', - 'cushy', - 'cusks', - 'cusps', - 'cuspy', - 'cusso', - 'cusum', - 'cutch', - 'cuter', - 'cutes', - 'cutey', - 'cutin', - 'cutis', - 'cutto', - 'cutty', - 'cutup', - 'cuvee', - 'cuzes', - 'cwtch', - 'cyano', - 'cyans', - 'cycad', - 'cycas', - 'cyclo', - 'cyder', - 'cylix', - 'cymae', - 'cymar', - 'cymas', - 'cymes', - 'cymol', - 'cysts', - 'cytes', - 'cyton', - 'czars', - 'daals', - 'dabba', - 'daces', - 'dacha', - 'dacks', - 'dadah', - 'dadas', - 'dados', - 'daffs', - 'daffy', - 'dagga', - 'daggy', - 'dagos', - 'dahls', - 'daiko', - 'daine', - 'daint', - 'daker', - 'daled', - 'dales', - 'dalis', - 'dalle', - 'dalts', - 'daman', - 'damar', - 'dames', - 'damme', - 'damns', - 'damps', - 'dampy', - 'dancy', - 'dangs', - 'danio', - 'danks', - 'danny', - 'dants', - 'daraf', - 'darbs', - 'darcy', - 'dared', - 'darer', - 'dares', - 'darga', - 'dargs', - 'daric', - 'daris', - 'darks', - 'darky', - 'darns', - 'darre', - 'darts', - 'darzi', - 'dashi', - 'dashy', - 'datal', - 'dated', - 'dater', - 'dates', - 'datos', - 'datto', - 'daube', - 'daubs', - 'dauby', - 'dauds', - 'dault', - 'daurs', - 'dauts', - 'daven', - 'davit', - 'dawah', - 'dawds', - 'dawed', - 'dawen', - 'dawks', - 'dawns', - 'dawts', - 'dayan', - 'daych', - 'daynt', - 'dazed', - 'dazer', - 'dazes', - 'deads', - 'deair', - 'deals', - 'deans', - 'deare', - 'dearn', - 'dears', - 'deary', - 'deash', - 'deave', - 'deaws', - 'deawy', - 'debag', - 'debby', - 'debel', - 'debes', - 'debts', - 'debud', - 'debur', - 'debus', - 'debye', - 'decad', - 'decaf', - 'decan', - 'decko', - 'decks', - 'decos', - 'dedal', - 'deeds', - 'deedy', - 'deely', - 'deems', - 'deens', - 'deeps', - 'deere', - 'deers', - 'deets', - 'deeve', - 'deevs', - 'defat', - 'deffo', - 'defis', - 'defog', - 'degas', - 'degum', - 'degus', - 'deice', - 'deids', - 'deify', - 'deils', - 'deism', - 'deist', - 'deked', - 'dekes', - 'dekko', - 'deled', - 'deles', - 'delfs', - 'delft', - 'delis', - 'dells', - 'delly', - 'delos', - 'delph', - 'delts', - 'deman', - 'demes', - 'demic', - 'demit', - 'demob', - 'demoi', - 'demos', - 'dempt', - 'denar', - 'denay', - 'dench', - 'denes', - 'denet', - 'denis', - 'dents', - 'deoxy', - 'derat', - 'deray', - 'dered', - 'deres', - 'derig', - 'derma', - 'derms', - 'derns', - 'derny', - 'deros', - 'derro', - 'derry', - 'derth', - 'dervs', - 'desex', - 'deshi', - 'desis', - 'desks', - 'desse', - 'devas', - 'devel', - 'devis', - 'devon', - 'devos', - 
'devot', - 'dewan', - 'dewar', - 'dewax', - 'dewed', - 'dexes', - 'dexie', - 'dhaba', - 'dhaks', - 'dhals', - 'dhikr', - 'dhobi', - 'dhole', - 'dholl', - 'dhols', - 'dhoti', - 'dhows', - 'dhuti', - 'diact', - 'dials', - 'diane', - 'diazo', - 'dibbs', - 'diced', - 'dicer', - 'dices', - 'dicht', - 'dicks', - 'dicky', - 'dicot', - 'dicta', - 'dicts', - 'dicty', - 'diddy', - 'didie', - 'didos', - 'didst', - 'diebs', - 'diels', - 'diene', - 'diets', - 'diffs', - 'dight', - 'dikas', - 'diked', - 'diker', - 'dikes', - 'dikey', - 'dildo', - 'dilli', - 'dills', - 'dimbo', - 'dimer', - 'dimes', - 'dimps', - 'dinar', - 'dined', - 'dines', - 'dinge', - 'dings', - 'dinic', - 'dinks', - 'dinky', - 'dinna', - 'dinos', - 'dints', - 'diols', - 'diota', - 'dippy', - 'dipso', - 'diram', - 'direr', - 'dirke', - 'dirks', - 'dirls', - 'dirts', - 'disas', - 'disci', - 'discs', - 'dishy', - 'disks', - 'disme', - 'dital', - 'ditas', - 'dited', - 'dites', - 'ditsy', - 'ditts', - 'ditzy', - 'divan', - 'divas', - 'dived', - 'dives', - 'divis', - 'divna', - 'divos', - 'divot', - 'divvy', - 'diwan', - 'dixie', - 'dixit', - 'diyas', - 'dizen', - 'djinn', - 'djins', - 'doabs', - 'doats', - 'dobby', - 'dobes', - 'dobie', - 'dobla', - 'dobra', - 'dobro', - 'docht', - 'docks', - 'docos', - 'docus', - 'doddy', - 'dodos', - 'doeks', - 'doers', - 'doest', - 'doeth', - 'doffs', - 'dogan', - 'doges', - 'dogey', - 'doggo', - 'doggy', - 'dogie', - 'dohyo', - 'doilt', - 'doily', - 'doits', - 'dojos', - 'dolce', - 'dolci', - 'doled', - 'doles', - 'dolia', - 'dolls', - 'dolma', - 'dolor', - 'dolos', - 'dolts', - 'domal', - 'domed', - 'domes', - 'domic', - 'donah', - 'donas', - 'donee', - 'doner', - 'donga', - 'dongs', - 'donko', - 'donna', - 'donne', - 'donny', - 'donsy', - 'doobs', - 'dooce', - 'doody', - 'dooks', - 'doole', - 'dools', - 'dooly', - 'dooms', - 'doomy', - 'doona', - 'doorn', - 'doors', - 'doozy', - 'dopas', - 'doped', - 'doper', - 'dopes', - 'dorad', - 'dorba', - 'dorbs', - 'doree', - 'dores', - 'doric', - 'doris', - 'dorks', - 'dorky', - 'dorms', - 'dormy', - 'dorps', - 'dorrs', - 'dorsa', - 'dorse', - 'dorts', - 'dorty', - 'dosai', - 'dosas', - 'dosed', - 'doseh', - 'doser', - 'doses', - 'dosha', - 'dotal', - 'doted', - 'doter', - 'dotes', - 'dotty', - 'douar', - 'douce', - 'doucs', - 'douks', - 'doula', - 'douma', - 'doums', - 'doups', - 'doura', - 'douse', - 'douts', - 'doved', - 'doven', - 'dover', - 'doves', - 'dovie', - 'dowar', - 'dowds', - 'dowed', - 'dower', - 'dowie', - 'dowle', - 'dowls', - 'dowly', - 'downa', - 'downs', - 'dowps', - 'dowse', - 'dowts', - 'doxed', - 'doxes', - 'doxie', - 'doyen', - 'doyly', - 'dozed', - 'dozer', - 'dozes', - 'drabs', - 'drack', - 'draco', - 'draff', - 'drags', - 'drail', - 'drams', - 'drant', - 'draps', - 'drats', - 'drave', - 'draws', - 'drays', - 'drear', - 'dreck', - 'dreed', - 'dreer', - 'drees', - 'dregs', - 'dreks', - 'drent', - 'drere', - 'drest', - 'dreys', - 'dribs', - 'drice', - 'dries', - 'drily', - 'drips', - 'dript', - 'droid', - 'droil', - 'droke', - 'drole', - 'drome', - 'drony', - 'droob', - 'droog', - 'drook', - 'drops', - 'dropt', - 'drouk', - 'drows', - 'drubs', - 'drugs', - 'drums', - 'drupe', - 'druse', - 'drusy', - 'druxy', - 'dryad', - 'dryas', - 'dsobo', - 'dsomo', - 'duads', - 'duals', - 'duans', - 'duars', - 'dubbo', - 'ducal', - 'ducat', - 'duces', - 'ducks', - 'ducky', - 'ducts', - 'duddy', - 'duded', - 'dudes', - 'duels', - 'duets', - 'duett', - 'duffs', - 'dufus', - 'duing', - 'duits', - 'dukas', - 'duked', - 'dukes', - 'dukka', - 'dulce', - 
'dules', - 'dulia', - 'dulls', - 'dulse', - 'dumas', - 'dumbo', - 'dumbs', - 'dumka', - 'dumky', - 'dumps', - 'dunam', - 'dunch', - 'dunes', - 'dungs', - 'dungy', - 'dunks', - 'dunno', - 'dunny', - 'dunsh', - 'dunts', - 'duomi', - 'duomo', - 'duped', - 'duper', - 'dupes', - 'duple', - 'duply', - 'duppy', - 'dural', - 'duras', - 'dured', - 'dures', - 'durgy', - 'durns', - 'duroc', - 'duros', - 'duroy', - 'durra', - 'durrs', - 'durry', - 'durst', - 'durum', - 'durzi', - 'dusks', - 'dusts', - 'duxes', - 'dwaal', - 'dwale', - 'dwalm', - 'dwams', - 'dwang', - 'dwaum', - 'dweeb', - 'dwile', - 'dwine', - 'dyads', - 'dyers', - 'dyked', - 'dykes', - 'dykey', - 'dykon', - 'dynel', - 'dynes', - 'dzhos', - 'eagre', - 'ealed', - 'eales', - 'eaned', - 'eards', - 'eared', - 'earls', - 'earns', - 'earnt', - 'earst', - 'eased', - 'easer', - 'eases', - 'easle', - 'easts', - 'eathe', - 'eaved', - 'eaves', - 'ebbed', - 'ebbet', - 'ebons', - 'ebook', - 'ecads', - 'eched', - 'eches', - 'echos', - 'ecrus', - 'edema', - 'edged', - 'edger', - 'edges', - 'edile', - 'edits', - 'educe', - 'educt', - 'eejit', - 'eensy', - 'eeven', - 'eevns', - 'effed', - 'egads', - 'egers', - 'egest', - 'eggar', - 'egged', - 'egger', - 'egmas', - 'ehing', - 'eider', - 'eidos', - 'eigne', - 'eiked', - 'eikon', - 'eilds', - 'eisel', - 'ejido', - 'ekkas', - 'elain', - 'eland', - 'elans', - 'elchi', - 'eldin', - 'elemi', - 'elfed', - 'eliad', - 'elint', - 'elmen', - 'eloge', - 'elogy', - 'eloin', - 'elops', - 'elpee', - 'elsin', - 'elute', - 'elvan', - 'elven', - 'elver', - 'elves', - 'emacs', - 'embar', - 'embay', - 'embog', - 'embow', - 'embox', - 'embus', - 'emeer', - 'emend', - 'emerg', - 'emery', - 'emeus', - 'emics', - 'emirs', - 'emits', - 'emmas', - 'emmer', - 'emmet', - 'emmew', - 'emmys', - 'emoji', - 'emong', - 'emote', - 'emove', - 'empts', - 'emule', - 'emure', - 'emyde', - 'emyds', - 'enarm', - 'enate', - 'ended', - 'ender', - 'endew', - 'endue', - 'enews', - 'enfix', - 'eniac', - 'enlit', - 'enmew', - 'ennog', - 'enoki', - 'enols', - 'enorm', - 'enows', - 'enrol', - 'ensew', - 'ensky', - 'entia', - 'enure', - 'enurn', - 'envoi', - 'enzym', - 'eorls', - 'eosin', - 'epact', - 'epees', - 'ephah', - 'ephas', - 'ephod', - 'ephor', - 'epics', - 'epode', - 'epopt', - 'epris', - 'eques', - 'equid', - 'erbia', - 'erevs', - 'ergon', - 'ergos', - 'ergot', - 'erhus', - 'erica', - 'erick', - 'erics', - 'ering', - 'erned', - 'ernes', - 'erose', - 'erred', - 'erses', - 'eruct', - 'erugo', - 'eruvs', - 'erven', - 'ervil', - 'escar', - 'escot', - 'esile', - 'eskar', - 'esker', - 'esnes', - 'esses', - 'estoc', - 'estop', - 'estro', - 'etage', - 'etape', - 'etats', - 'etens', - 'ethal', - 'ethne', - 'ethyl', - 'etics', - 'etnas', - 'ettin', - 'ettle', - 'etuis', - 'etwee', - 'etyma', - 'eughs', - 'euked', - 'eupad', - 'euros', - 'eusol', - 'evens', - 'evert', - 'evets', - 'evhoe', - 'evils', - 'evite', - 'evohe', - 'ewers', - 'ewest', - 'ewhow', - 'ewked', - 'exams', - 'exeat', - 'execs', - 'exeem', - 'exeme', - 'exfil', - 'exies', - 'exine', - 'exing', - 'exits', - 'exode', - 'exome', - 'exons', - 'expat', - 'expos', - 'exude', - 'exuls', - 'exurb', - 'eyass', - 'eyers', - 'eyots', - 'eyras', - 'eyres', - 'eyrie', - 'eyrir', - 'ezine', - 'fabby', - 'faced', - 'facer', - 'faces', - 'facia', - 'facta', - 'facts', - 'faddy', - 'faded', - 'fader', - 'fades', - 'fadge', - 'fados', - 'faena', - 'faery', - 'faffs', - 'faffy', - 'faggy', - 'fagin', - 'fagot', - 'faiks', - 'fails', - 'faine', - 'fains', - 'fairs', - 'faked', - 'faker', - 'fakes', - 
'fakey', - 'fakie', - 'fakir', - 'falaj', - 'falls', - 'famed', - 'fames', - 'fanal', - 'fands', - 'fanes', - 'fanga', - 'fango', - 'fangs', - 'fanks', - 'fanon', - 'fanos', - 'fanum', - 'faqir', - 'farad', - 'farci', - 'farcy', - 'fards', - 'fared', - 'farer', - 'fares', - 'farle', - 'farls', - 'farms', - 'faros', - 'farro', - 'farse', - 'farts', - 'fasci', - 'fasti', - 'fasts', - 'fated', - 'fates', - 'fatly', - 'fatso', - 'fatwa', - 'faugh', - 'fauld', - 'fauns', - 'faurd', - 'fauts', - 'fauve', - 'favas', - 'favel', - 'faver', - 'faves', - 'favus', - 'fawns', - 'fawny', - 'faxed', - 'faxes', - 'fayed', - 'fayer', - 'fayne', - 'fayre', - 'fazed', - 'fazes', - 'feals', - 'feare', - 'fears', - 'feart', - 'fease', - 'feats', - 'feaze', - 'feces', - 'fecht', - 'fecit', - 'fecks', - 'fedex', - 'feebs', - 'feeds', - 'feels', - 'feens', - 'feers', - 'feese', - 'feeze', - 'fehme', - 'feint', - 'feist', - 'felch', - 'felid', - 'fells', - 'felly', - 'felts', - 'felty', - 'femal', - 'femes', - 'femmy', - 'fends', - 'fendy', - 'fenis', - 'fenks', - 'fenny', - 'fents', - 'feods', - 'feoff', - 'ferer', - 'feres', - 'feria', - 'ferly', - 'fermi', - 'ferms', - 'ferns', - 'ferny', - 'fesse', - 'festa', - 'fests', - 'festy', - 'fetas', - 'feted', - 'fetes', - 'fetor', - 'fetta', - 'fetts', - 'fetwa', - 'feuar', - 'feuds', - 'feued', - 'feyed', - 'feyer', - 'feyly', - 'fezes', - 'fezzy', - 'fiars', - 'fiats', - 'fibro', - 'fices', - 'fiche', - 'fichu', - 'ficin', - 'ficos', - 'fides', - 'fidge', - 'fidos', - 'fiefs', - 'fient', - 'fiere', - 'fiers', - 'fiest', - 'fifed', - 'fifer', - 'fifes', - 'fifis', - 'figgy', - 'figos', - 'fiked', - 'fikes', - 'filar', - 'filch', - 'filed', - 'files', - 'filii', - 'filks', - 'fille', - 'fillo', - 'fills', - 'filmi', - 'films', - 'filos', - 'filum', - 'finca', - 'finds', - 'fined', - 'fines', - 'finis', - 'finks', - 'finny', - 'finos', - 'fiord', - 'fiqhs', - 'fique', - 'fired', - 'firer', - 'fires', - 'firie', - 'firks', - 'firms', - 'firns', - 'firry', - 'firth', - 'fiscs', - 'fisks', - 'fists', - 'fisty', - 'fitch', - 'fitly', - 'fitna', - 'fitte', - 'fitts', - 'fiver', - 'fives', - 'fixed', - 'fixes', - 'fixit', - 'fjeld', - 'flabs', - 'flaff', - 'flags', - 'flaks', - 'flamm', - 'flams', - 'flamy', - 'flane', - 'flans', - 'flaps', - 'flary', - 'flats', - 'flava', - 'flawn', - 'flaws', - 'flawy', - 'flaxy', - 'flays', - 'fleam', - 'fleas', - 'fleek', - 'fleer', - 'flees', - 'flegs', - 'fleme', - 'fleur', - 'flews', - 'flexi', - 'flexo', - 'fleys', - 'flics', - 'flied', - 'flies', - 'flimp', - 'flims', - 'flips', - 'flirs', - 'flisk', - 'flite', - 'flits', - 'flitt', - 'flobs', - 'flocs', - 'floes', - 'flogs', - 'flong', - 'flops', - 'flors', - 'flory', - 'flosh', - 'flota', - 'flote', - 'flows', - 'flubs', - 'flued', - 'flues', - 'fluey', - 'fluky', - 'flump', - 'fluor', - 'flurr', - 'fluty', - 'fluyt', - 'flyby', - 'flype', - 'flyte', - 'foals', - 'foams', - 'foehn', - 'fogey', - 'fogie', - 'fogle', - 'fogou', - 'fohns', - 'foids', - 'foils', - 'foins', - 'folds', - 'foley', - 'folia', - 'folic', - 'folie', - 'folks', - 'folky', - 'fomes', - 'fonda', - 'fonds', - 'fondu', - 'fones', - 'fonly', - 'fonts', - 'foods', - 'foody', - 'fools', - 'foots', - 'footy', - 'foram', - 'forbs', - 'forby', - 'fordo', - 'fords', - 'forel', - 'fores', - 'forex', - 'forks', - 'forky', - 'forme', - 'forms', - 'forts', - 'forza', - 'forze', - 'fossa', - 'fosse', - 'fouat', - 'fouds', - 'fouer', - 'fouet', - 'foule', - 'fouls', - 'fount', - 'fours', - 'fouth', - 'fovea', - 'fowls', - 
'fowth', - 'foxed', - 'foxes', - 'foxie', - 'foyle', - 'foyne', - 'frabs', - 'frack', - 'fract', - 'frags', - 'fraim', - 'franc', - 'frape', - 'fraps', - 'frass', - 'frate', - 'frati', - 'frats', - 'fraus', - 'frays', - 'frees', - 'freet', - 'freit', - 'fremd', - 'frena', - 'freon', - 'frere', - 'frets', - 'fribs', - 'frier', - 'fries', - 'frigs', - 'frise', - 'frist', - 'frith', - 'frits', - 'fritt', - 'frize', - 'frizz', - 'froes', - 'frogs', - 'frons', - 'frore', - 'frorn', - 'frory', - 'frosh', - 'frows', - 'frowy', - 'frugs', - 'frump', - 'frush', - 'frust', - 'fryer', - 'fubar', - 'fubby', - 'fubsy', - 'fucks', - 'fucus', - 'fuddy', - 'fudgy', - 'fuels', - 'fuero', - 'fuffs', - 'fuffy', - 'fugal', - 'fuggy', - 'fugie', - 'fugio', - 'fugle', - 'fugly', - 'fugus', - 'fujis', - 'fulls', - 'fumed', - 'fumer', - 'fumes', - 'fumet', - 'fundi', - 'funds', - 'fundy', - 'fungo', - 'fungs', - 'funks', - 'fural', - 'furan', - 'furca', - 'furls', - 'furol', - 'furrs', - 'furth', - 'furze', - 'furzy', - 'fused', - 'fusee', - 'fusel', - 'fuses', - 'fusil', - 'fusks', - 'fusts', - 'fusty', - 'futon', - 'fuzed', - 'fuzee', - 'fuzes', - 'fuzil', - 'fyces', - 'fyked', - 'fykes', - 'fyles', - 'fyrds', - 'fytte', - 'gabba', - 'gabby', - 'gable', - 'gaddi', - 'gades', - 'gadge', - 'gadid', - 'gadis', - 'gadje', - 'gadjo', - 'gadso', - 'gaffs', - 'gaged', - 'gager', - 'gages', - 'gaids', - 'gains', - 'gairs', - 'gaita', - 'gaits', - 'gaitt', - 'gajos', - 'galah', - 'galas', - 'galax', - 'galea', - 'galed', - 'gales', - 'galls', - 'gally', - 'galop', - 'galut', - 'galvo', - 'gamas', - 'gamay', - 'gamba', - 'gambe', - 'gambo', - 'gambs', - 'gamed', - 'games', - 'gamey', - 'gamic', - 'gamin', - 'gamme', - 'gammy', - 'gamps', - 'ganch', - 'gandy', - 'ganef', - 'ganev', - 'gangs', - 'ganja', - 'ganof', - 'gants', - 'gaols', - 'gaped', - 'gaper', - 'gapes', - 'gapos', - 'gappy', - 'garbe', - 'garbo', - 'garbs', - 'garda', - 'gares', - 'garis', - 'garms', - 'garni', - 'garre', - 'garth', - 'garum', - 'gases', - 'gasps', - 'gaspy', - 'gasts', - 'gatch', - 'gated', - 'gater', - 'gates', - 'gaths', - 'gator', - 'gauch', - 'gaucy', - 'gauds', - 'gauje', - 'gault', - 'gaums', - 'gaumy', - 'gaups', - 'gaurs', - 'gauss', - 'gauzy', - 'gavot', - 'gawcy', - 'gawds', - 'gawks', - 'gawps', - 'gawsy', - 'gayal', - 'gazal', - 'gazar', - 'gazed', - 'gazes', - 'gazon', - 'gazoo', - 'geals', - 'geans', - 'geare', - 'gears', - 'geats', - 'gebur', - 'gecks', - 'geeks', - 'geeps', - 'geest', - 'geist', - 'geits', - 'gelds', - 'gelee', - 'gelid', - 'gelly', - 'gelts', - 'gemel', - 'gemma', - 'gemmy', - 'gemot', - 'genal', - 'genas', - 'genes', - 'genet', - 'genic', - 'genii', - 'genip', - 'genny', - 'genoa', - 'genom', - 'genro', - 'gents', - 'genty', - 'genua', - 'genus', - 'geode', - 'geoid', - 'gerah', - 'gerbe', - 'geres', - 'gerle', - 'germs', - 'germy', - 'gerne', - 'gesse', - 'gesso', - 'geste', - 'gests', - 'getas', - 'getup', - 'geums', - 'geyan', - 'geyer', - 'ghast', - 'ghats', - 'ghaut', - 'ghazi', - 'ghees', - 'ghest', - 'ghyll', - 'gibed', - 'gibel', - 'giber', - 'gibes', - 'gibli', - 'gibus', - 'gifts', - 'gigas', - 'gighe', - 'gigot', - 'gigue', - 'gilas', - 'gilds', - 'gilet', - 'gills', - 'gilly', - 'gilpy', - 'gilts', - 'gimel', - 'gimme', - 'gimps', - 'gimpy', - 'ginch', - 'ginge', - 'gings', - 'ginks', - 'ginny', - 'ginzo', - 'gipon', - 'gippo', - 'gippy', - 'girds', - 'girls', - 'girns', - 'giron', - 'giros', - 'girrs', - 'girsh', - 'girts', - 'gismo', - 'gisms', - 'gists', - 'gitch', - 'gites', - 'giust', - 
'gived', - 'gives', - 'gizmo', - 'glace', - 'glads', - 'glady', - 'glaik', - 'glair', - 'glams', - 'glans', - 'glary', - 'glaum', - 'glaur', - 'glazy', - 'gleba', - 'glebe', - 'gleby', - 'glede', - 'gleds', - 'gleed', - 'gleek', - 'glees', - 'gleet', - 'gleis', - 'glens', - 'glent', - 'gleys', - 'glial', - 'glias', - 'glibs', - 'gliff', - 'glift', - 'glike', - 'glime', - 'glims', - 'glisk', - 'glits', - 'glitz', - 'gloam', - 'globi', - 'globs', - 'globy', - 'glode', - 'glogg', - 'gloms', - 'gloop', - 'glops', - 'glost', - 'glout', - 'glows', - 'gloze', - 'glued', - 'gluer', - 'glues', - 'gluey', - 'glugs', - 'glume', - 'glums', - 'gluon', - 'glute', - 'gluts', - 'gnarl', - 'gnarr', - 'gnars', - 'gnats', - 'gnawn', - 'gnaws', - 'gnows', - 'goads', - 'goafs', - 'goals', - 'goary', - 'goats', - 'goaty', - 'goban', - 'gobar', - 'gobbi', - 'gobbo', - 'gobby', - 'gobis', - 'gobos', - 'godet', - 'godso', - 'goels', - 'goers', - 'goest', - 'goeth', - 'goety', - 'gofer', - 'goffs', - 'gogga', - 'gogos', - 'goier', - 'gojis', - 'golds', - 'goldy', - 'goles', - 'golfs', - 'golpe', - 'golps', - 'gombo', - 'gomer', - 'gompa', - 'gonch', - 'gonef', - 'gongs', - 'gonia', - 'gonif', - 'gonks', - 'gonna', - 'gonof', - 'gonys', - 'gonzo', - 'gooby', - 'goods', - 'goofs', - 'googs', - 'gooks', - 'gooky', - 'goold', - 'gools', - 'gooly', - 'goons', - 'goony', - 'goops', - 'goopy', - 'goors', - 'goory', - 'goosy', - 'gopak', - 'gopik', - 'goral', - 'goras', - 'gored', - 'gores', - 'goris', - 'gorms', - 'gormy', - 'gorps', - 'gorse', - 'gorsy', - 'gosht', - 'gosse', - 'gotch', - 'goths', - 'gothy', - 'gotta', - 'gouch', - 'gouks', - 'goura', - 'gouts', - 'gouty', - 'gowan', - 'gowds', - 'gowfs', - 'gowks', - 'gowls', - 'gowns', - 'goxes', - 'goyim', - 'goyle', - 'graal', - 'grabs', - 'grads', - 'graff', - 'graip', - 'grama', - 'grame', - 'gramp', - 'grams', - 'grana', - 'grans', - 'grapy', - 'gravs', - 'grays', - 'grebe', - 'grebo', - 'grece', - 'greek', - 'grees', - 'grege', - 'grego', - 'grein', - 'grens', - 'grese', - 'greve', - 'grews', - 'greys', - 'grice', - 'gride', - 'grids', - 'griff', - 'grift', - 'grigs', - 'grike', - 'grins', - 'griot', - 'grips', - 'gript', - 'gripy', - 'grise', - 'grist', - 'grisy', - 'grith', - 'grits', - 'grize', - 'groat', - 'grody', - 'grogs', - 'groks', - 'groma', - 'grone', - 'groof', - 'grosz', - 'grots', - 'grouf', - 'grovy', - 'grows', - 'grrls', - 'grrrl', - 'grubs', - 'grued', - 'grues', - 'grufe', - 'grume', - 'grump', - 'grund', - 'gryce', - 'gryde', - 'gryke', - 'grype', - 'grypt', - 'guaco', - 'guana', - 'guano', - 'guans', - 'guars', - 'gucks', - 'gucky', - 'gudes', - 'guffs', - 'gugas', - 'guids', - 'guimp', - 'guiro', - 'gulag', - 'gular', - 'gulas', - 'gules', - 'gulet', - 'gulfs', - 'gulfy', - 'gulls', - 'gulph', - 'gulps', - 'gulpy', - 'gumma', - 'gummi', - 'gumps', - 'gundy', - 'gunge', - 'gungy', - 'gunks', - 'gunky', - 'gunny', - 'guqin', - 'gurdy', - 'gurge', - 'gurls', - 'gurly', - 'gurns', - 'gurry', - 'gursh', - 'gurus', - 'gushy', - 'gusla', - 'gusle', - 'gusli', - 'gussy', - 'gusts', - 'gutsy', - 'gutta', - 'gutty', - 'guyed', - 'guyle', - 'guyot', - 'guyse', - 'gwine', - 'gyals', - 'gyans', - 'gybed', - 'gybes', - 'gyeld', - 'gymps', - 'gynae', - 'gynie', - 'gynny', - 'gynos', - 'gyoza', - 'gypos', - 'gyppo', - 'gyppy', - 'gyral', - 'gyred', - 'gyres', - 'gyron', - 'gyros', - 'gyrus', - 'gytes', - 'gyved', - 'gyves', - 'haafs', - 'haars', - 'hable', - 'habus', - 'hacek', - 'hacks', - 'hadal', - 'haded', - 'hades', - 'hadji', - 'hadst', - 'haems', - 
'haets', - 'haffs', - 'hafiz', - 'hafts', - 'haggs', - 'hahas', - 'haick', - 'haika', - 'haiks', - 'haiku', - 'hails', - 'haily', - 'hains', - 'haint', - 'hairs', - 'haith', - 'hajes', - 'hajis', - 'hajji', - 'hakam', - 'hakas', - 'hakea', - 'hakes', - 'hakim', - 'hakus', - 'halal', - 'haled', - 'haler', - 'hales', - 'halfa', - 'halfs', - 'halid', - 'hallo', - 'halls', - 'halma', - 'halms', - 'halon', - 'halos', - 'halse', - 'halts', - 'halva', - 'halwa', - 'hamal', - 'hamba', - 'hamed', - 'hames', - 'hammy', - 'hamza', - 'hanap', - 'hance', - 'hanch', - 'hands', - 'hangi', - 'hangs', - 'hanks', - 'hanky', - 'hansa', - 'hanse', - 'hants', - 'haole', - 'haoma', - 'hapax', - 'haply', - 'happi', - 'hapus', - 'haram', - 'hards', - 'hared', - 'hares', - 'harim', - 'harks', - 'harls', - 'harms', - 'harns', - 'haros', - 'harps', - 'harts', - 'hashy', - 'hasks', - 'hasps', - 'hasta', - 'hated', - 'hates', - 'hatha', - 'hauds', - 'haufs', - 'haugh', - 'hauld', - 'haulm', - 'hauls', - 'hault', - 'hauns', - 'hause', - 'haver', - 'haves', - 'hawed', - 'hawks', - 'hawms', - 'hawse', - 'hayed', - 'hayer', - 'hayey', - 'hayle', - 'hazan', - 'hazed', - 'hazer', - 'hazes', - 'heads', - 'heald', - 'heals', - 'heame', - 'heaps', - 'heapy', - 'heare', - 'hears', - 'heast', - 'heats', - 'heben', - 'hebes', - 'hecht', - 'hecks', - 'heder', - 'hedgy', - 'heeds', - 'heedy', - 'heels', - 'heeze', - 'hefte', - 'hefts', - 'heids', - 'heigh', - 'heils', - 'heirs', - 'hejab', - 'hejra', - 'heled', - 'heles', - 'helio', - 'hells', - 'helms', - 'helos', - 'helot', - 'helps', - 'helve', - 'hemal', - 'hemes', - 'hemic', - 'hemin', - 'hemps', - 'hempy', - 'hench', - 'hends', - 'henge', - 'henna', - 'henny', - 'henry', - 'hents', - 'hepar', - 'herbs', - 'herby', - 'herds', - 'heres', - 'herls', - 'herma', - 'herms', - 'herns', - 'heros', - 'herry', - 'herse', - 'hertz', - 'herye', - 'hesps', - 'hests', - 'hetes', - 'heths', - 'heuch', - 'heugh', - 'hevea', - 'hewed', - 'hewer', - 'hewgh', - 'hexad', - 'hexed', - 'hexer', - 'hexes', - 'hexyl', - 'heyed', - 'hiant', - 'hicks', - 'hided', - 'hider', - 'hides', - 'hiems', - 'highs', - 'hight', - 'hijab', - 'hijra', - 'hiked', - 'hiker', - 'hikes', - 'hikoi', - 'hilar', - 'hilch', - 'hillo', - 'hills', - 'hilts', - 'hilum', - 'hilus', - 'himbo', - 'hinau', - 'hinds', - 'hings', - 'hinky', - 'hinny', - 'hints', - 'hiois', - 'hiply', - 'hired', - 'hiree', - 'hirer', - 'hires', - 'hissy', - 'hists', - 'hithe', - 'hived', - 'hiver', - 'hives', - 'hizen', - 'hoaed', - 'hoagy', - 'hoars', - 'hoary', - 'hoast', - 'hobos', - 'hocks', - 'hocus', - 'hodad', - 'hodja', - 'hoers', - 'hogan', - 'hogen', - 'hoggs', - 'hoghs', - 'hohed', - 'hoick', - 'hoied', - 'hoiks', - 'hoing', - 'hoise', - 'hokas', - 'hoked', - 'hokes', - 'hokey', - 'hokis', - 'hokku', - 'hokum', - 'holds', - 'holed', - 'holes', - 'holey', - 'holks', - 'holla', - 'hollo', - 'holme', - 'holms', - 'holon', - 'holos', - 'holts', - 'homas', - 'homed', - 'homes', - 'homey', - 'homie', - 'homme', - 'homos', - 'honan', - 'honda', - 'honds', - 'honed', - 'honer', - 'hones', - 'hongi', - 'hongs', - 'honks', - 'honky', - 'hooch', - 'hoods', - 'hoody', - 'hooey', - 'hoofs', - 'hooka', - 'hooks', - 'hooky', - 'hooly', - 'hoons', - 'hoops', - 'hoord', - 'hoors', - 'hoosh', - 'hoots', - 'hooty', - 'hoove', - 'hopak', - 'hoped', - 'hoper', - 'hopes', - 'hoppy', - 'horah', - 'horal', - 'horas', - 'horis', - 'horks', - 'horme', - 'horns', - 'horst', - 'horsy', - 'hosed', - 'hosel', - 'hosen', - 'hoser', - 'hoses', - 'hosey', - 'hosta', - 
'hosts', - 'hotch', - 'hoten', - 'hotty', - 'houff', - 'houfs', - 'hough', - 'houri', - 'hours', - 'houts', - 'hovea', - 'hoved', - 'hoven', - 'hoves', - 'howbe', - 'howes', - 'howff', - 'howfs', - 'howks', - 'howls', - 'howre', - 'howso', - 'hoxed', - 'hoxes', - 'hoyas', - 'hoyed', - 'hoyle', - 'hubby', - 'hucks', - 'hudna', - 'hudud', - 'huers', - 'huffs', - 'huffy', - 'huger', - 'huggy', - 'huhus', - 'huias', - 'hulas', - 'hules', - 'hulks', - 'hulky', - 'hullo', - 'hulls', - 'hully', - 'humas', - 'humfs', - 'humic', - 'humps', - 'humpy', - 'hunks', - 'hunts', - 'hurds', - 'hurls', - 'hurly', - 'hurra', - 'hurst', - 'hurts', - 'hushy', - 'husks', - 'husos', - 'hutia', - 'huzza', - 'huzzy', - 'hwyls', - 'hydra', - 'hyens', - 'hygge', - 'hying', - 'hykes', - 'hylas', - 'hyleg', - 'hyles', - 'hylic', - 'hymns', - 'hynde', - 'hyoid', - 'hyped', - 'hypes', - 'hypha', - 'hyphy', - 'hypos', - 'hyrax', - 'hyson', - 'hythe', - 'iambi', - 'iambs', - 'ibrik', - 'icers', - 'iched', - 'iches', - 'ichor', - 'icier', - 'icker', - 'ickle', - 'icons', - 'ictal', - 'ictic', - 'ictus', - 'idant', - 'ideas', - 'idees', - 'ident', - 'idled', - 'idles', - 'idola', - 'idols', - 'idyls', - 'iftar', - 'igapo', - 'igged', - 'iglus', - 'ihram', - 'ikans', - 'ikats', - 'ikons', - 'ileac', - 'ileal', - 'ileum', - 'ileus', - 'iliad', - 'ilial', - 'ilium', - 'iller', - 'illth', - 'imago', - 'imams', - 'imari', - 'imaum', - 'imbar', - 'imbed', - 'imide', - 'imido', - 'imids', - 'imine', - 'imino', - 'immew', - 'immit', - 'immix', - 'imped', - 'impis', - 'impot', - 'impro', - 'imshi', - 'imshy', - 'inapt', - 'inarm', - 'inbye', - 'incel', - 'incle', - 'incog', - 'incus', - 'incut', - 'indew', - 'india', - 'indie', - 'indol', - 'indow', - 'indri', - 'indue', - 'inerm', - 'infix', - 'infos', - 'infra', - 'ingan', - 'ingle', - 'inion', - 'inked', - 'inker', - 'inkle', - 'inned', - 'innit', - 'inorb', - 'inrun', - 'inset', - 'inspo', - 'intel', - 'intil', - 'intis', - 'intra', - 'inula', - 'inure', - 'inurn', - 'inust', - 'invar', - 'inwit', - 'iodic', - 'iodid', - 'iodin', - 'iotas', - 'ippon', - 'irade', - 'irids', - 'iring', - 'irked', - 'iroko', - 'irone', - 'irons', - 'isbas', - 'ishes', - 'isled', - 'isles', - 'isnae', - 'issei', - 'istle', - 'items', - 'ither', - 'ivied', - 'ivies', - 'ixias', - 'ixnay', - 'ixora', - 'ixtle', - 'izard', - 'izars', - 'izzat', - 'jaaps', - 'jabot', - 'jacal', - 'jacks', - 'jacky', - 'jaded', - 'jades', - 'jafas', - 'jaffa', - 'jagas', - 'jager', - 'jaggs', - 'jaggy', - 'jagir', - 'jagra', - 'jails', - 'jaker', - 'jakes', - 'jakey', - 'jalap', - 'jalop', - 'jambe', - 'jambo', - 'jambs', - 'jambu', - 'james', - 'jammy', - 'jamon', - 'janes', - 'janns', - 'janny', - 'janty', - 'japan', - 'japed', - 'japer', - 'japes', - 'jarks', - 'jarls', - 'jarps', - 'jarta', - 'jarul', - 'jasey', - 'jaspe', - 'jasps', - 'jatos', - 'jauks', - 'jaups', - 'javas', - 'javel', - 'jawan', - 'jawed', - 'jaxie', - 'jeans', - 'jeats', - 'jebel', - 'jedis', - 'jeels', - 'jeely', - 'jeeps', - 'jeers', - 'jeeze', - 'jefes', - 'jeffs', - 'jehad', - 'jehus', - 'jelab', - 'jello', - 'jells', - 'jembe', - 'jemmy', - 'jenny', - 'jeons', - 'jerid', - 'jerks', - 'jerry', - 'jesse', - 'jests', - 'jesus', - 'jetes', - 'jeton', - 'jeune', - 'jewed', - 'jewie', - 'jhala', - 'jiaos', - 'jibba', - 'jibbs', - 'jibed', - 'jiber', - 'jibes', - 'jiffs', - 'jiggy', - 'jigot', - 'jihad', - 'jills', - 'jilts', - 'jimmy', - 'jimpy', - 'jingo', - 'jinks', - 'jinne', - 'jinni', - 'jinns', - 'jirds', - 'jirga', - 'jirre', - 'jisms', - 
'jived', - 'jiver', - 'jives', - 'jivey', - 'jnana', - 'jobed', - 'jobes', - 'jocko', - 'jocks', - 'jocky', - 'jocos', - 'jodel', - 'joeys', - 'johns', - 'joins', - 'joked', - 'jokes', - 'jokey', - 'jokol', - 'joled', - 'joles', - 'jolls', - 'jolts', - 'jolty', - 'jomon', - 'jomos', - 'jones', - 'jongs', - 'jonty', - 'jooks', - 'joram', - 'jorum', - 'jotas', - 'jotty', - 'jotun', - 'joual', - 'jougs', - 'jouks', - 'joule', - 'jours', - 'jowar', - 'jowed', - 'jowls', - 'jowly', - 'joyed', - 'jubas', - 'jubes', - 'jucos', - 'judas', - 'judgy', - 'judos', - 'jugal', - 'jugum', - 'jujus', - 'juked', - 'jukes', - 'jukus', - 'julep', - 'jumar', - 'jumby', - 'jumps', - 'junco', - 'junks', - 'junky', - 'jupes', - 'jupon', - 'jural', - 'jurat', - 'jurel', - 'jures', - 'justs', - 'jutes', - 'jutty', - 'juves', - 'juvie', - 'kaama', - 'kabab', - 'kabar', - 'kabob', - 'kacha', - 'kacks', - 'kadai', - 'kades', - 'kadis', - 'kafir', - 'kagos', - 'kagus', - 'kahal', - 'kaiak', - 'kaids', - 'kaies', - 'kaifs', - 'kaika', - 'kaiks', - 'kails', - 'kaims', - 'kaing', - 'kains', - 'kakas', - 'kakis', - 'kalam', - 'kales', - 'kalif', - 'kalis', - 'kalpa', - 'kamas', - 'kames', - 'kamik', - 'kamis', - 'kamme', - 'kanae', - 'kanas', - 'kandy', - 'kaneh', - 'kanes', - 'kanga', - 'kangs', - 'kanji', - 'kants', - 'kanzu', - 'kaons', - 'kapas', - 'kaphs', - 'kapok', - 'kapow', - 'kapus', - 'kaput', - 'karas', - 'karat', - 'karks', - 'karns', - 'karoo', - 'karos', - 'karri', - 'karst', - 'karsy', - 'karts', - 'karzy', - 'kasha', - 'kasme', - 'katal', - 'katas', - 'katis', - 'katti', - 'kaugh', - 'kauri', - 'kauru', - 'kaury', - 'kaval', - 'kavas', - 'kawas', - 'kawau', - 'kawed', - 'kayle', - 'kayos', - 'kazis', - 'kazoo', - 'kbars', - 'kebar', - 'kebob', - 'kecks', - 'kedge', - 'kedgy', - 'keech', - 'keefs', - 'keeks', - 'keels', - 'keema', - 'keeno', - 'keens', - 'keeps', - 'keets', - 'keeve', - 'kefir', - 'kehua', - 'keirs', - 'kelep', - 'kelim', - 'kells', - 'kelly', - 'kelps', - 'kelpy', - 'kelts', - 'kelty', - 'kembo', - 'kembs', - 'kemps', - 'kempt', - 'kempy', - 'kenaf', - 'kench', - 'kendo', - 'kenos', - 'kente', - 'kents', - 'kepis', - 'kerbs', - 'kerel', - 'kerfs', - 'kerky', - 'kerma', - 'kerne', - 'kerns', - 'keros', - 'kerry', - 'kerve', - 'kesar', - 'kests', - 'ketas', - 'ketch', - 'ketes', - 'ketol', - 'kevel', - 'kevil', - 'kexes', - 'keyed', - 'keyer', - 'khadi', - 'khafs', - 'khans', - 'khaph', - 'khats', - 'khaya', - 'khazi', - 'kheda', - 'kheth', - 'khets', - 'khoja', - 'khors', - 'khoum', - 'khuds', - 'kiaat', - 'kiack', - 'kiang', - 'kibbe', - 'kibbi', - 'kibei', - 'kibes', - 'kibla', - 'kicks', - 'kicky', - 'kiddo', - 'kiddy', - 'kidel', - 'kidge', - 'kiefs', - 'kiers', - 'kieve', - 'kievs', - 'kight', - 'kikes', - 'kikoi', - 'kiley', - 'kilim', - 'kills', - 'kilns', - 'kilos', - 'kilps', - 'kilts', - 'kilty', - 'kimbo', - 'kinas', - 'kinda', - 'kinds', - 'kindy', - 'kines', - 'kings', - 'kinin', - 'kinks', - 'kinos', - 'kiore', - 'kipes', - 'kippa', - 'kipps', - 'kirby', - 'kirks', - 'kirns', - 'kirri', - 'kisan', - 'kissy', - 'kists', - 'kited', - 'kiter', - 'kites', - 'kithe', - 'kiths', - 'kitul', - 'kivas', - 'kiwis', - 'klang', - 'klaps', - 'klett', - 'klick', - 'klieg', - 'kliks', - 'klong', - 'kloof', - 'kluge', - 'klutz', - 'knags', - 'knaps', - 'knarl', - 'knars', - 'knaur', - 'knawe', - 'knees', - 'knell', - 'knish', - 'knits', - 'knive', - 'knobs', - 'knops', - 'knosp', - 'knots', - 'knout', - 'knowe', - 'knows', - 'knubs', - 'knurl', - 'knurr', - 'knurs', - 'knuts', - 'koans', - 
'koaps', - 'koban', - 'kobos', - 'koels', - 'koffs', - 'kofta', - 'kogal', - 'kohas', - 'kohen', - 'kohls', - 'koine', - 'kojis', - 'kokam', - 'kokas', - 'koker', - 'kokra', - 'kokum', - 'kolas', - 'kolos', - 'kombu', - 'konbu', - 'kondo', - 'konks', - 'kooks', - 'kooky', - 'koori', - 'kopek', - 'kophs', - 'kopje', - 'koppa', - 'korai', - 'koras', - 'korat', - 'kores', - 'korma', - 'koros', - 'korun', - 'korus', - 'koses', - 'kotch', - 'kotos', - 'kotow', - 'koura', - 'kraal', - 'krabs', - 'kraft', - 'krais', - 'krait', - 'krang', - 'krans', - 'kranz', - 'kraut', - 'krays', - 'kreep', - 'kreng', - 'krewe', - 'krona', - 'krone', - 'kroon', - 'krubi', - 'krunk', - 'ksars', - 'kubie', - 'kudos', - 'kudus', - 'kudzu', - 'kufis', - 'kugel', - 'kuias', - 'kukri', - 'kukus', - 'kulak', - 'kulan', - 'kulas', - 'kulfi', - 'kumis', - 'kumys', - 'kuris', - 'kurre', - 'kurta', - 'kurus', - 'kusso', - 'kutas', - 'kutch', - 'kutis', - 'kutus', - 'kuzus', - 'kvass', - 'kvell', - 'kwela', - 'kyack', - 'kyaks', - 'kyang', - 'kyars', - 'kyats', - 'kybos', - 'kydst', - 'kyles', - 'kylie', - 'kylin', - 'kylix', - 'kyloe', - 'kynde', - 'kynds', - 'kypes', - 'kyrie', - 'kytes', - 'kythe', - 'laari', - 'labda', - 'labia', - 'labis', - 'labra', - 'laced', - 'lacer', - 'laces', - 'lacet', - 'lacey', - 'lacks', - 'laddy', - 'laded', - 'lader', - 'lades', - 'laers', - 'laevo', - 'lagan', - 'lahal', - 'lahar', - 'laich', - 'laics', - 'laids', - 'laigh', - 'laika', - 'laiks', - 'laird', - 'lairs', - 'lairy', - 'laith', - 'laity', - 'laked', - 'laker', - 'lakes', - 'lakhs', - 'lakin', - 'laksa', - 'laldy', - 'lalls', - 'lamas', - 'lambs', - 'lamby', - 'lamed', - 'lamer', - 'lames', - 'lamia', - 'lammy', - 'lamps', - 'lanai', - 'lanas', - 'lanch', - 'lande', - 'lands', - 'lanes', - 'lanks', - 'lants', - 'lapin', - 'lapis', - 'lapje', - 'larch', - 'lards', - 'lardy', - 'laree', - 'lares', - 'largo', - 'laris', - 'larks', - 'larky', - 'larns', - 'larnt', - 'larum', - 'lased', - 'laser', - 'lases', - 'lassi', - 'lassu', - 'lassy', - 'lasts', - 'latah', - 'lated', - 'laten', - 'latex', - 'lathi', - 'laths', - 'lathy', - 'latke', - 'latus', - 'lauan', - 'lauch', - 'lauds', - 'laufs', - 'laund', - 'laura', - 'laval', - 'lavas', - 'laved', - 'laver', - 'laves', - 'lavra', - 'lavvy', - 'lawed', - 'lawer', - 'lawin', - 'lawks', - 'lawns', - 'lawny', - 'laxed', - 'laxer', - 'laxes', - 'laxly', - 'layed', - 'layin', - 'layup', - 'lazar', - 'lazed', - 'lazes', - 'lazos', - 'lazzi', - 'lazzo', - 'leads', - 'leady', - 'leafs', - 'leaks', - 'leams', - 'leans', - 'leany', - 'leaps', - 'leare', - 'lears', - 'leary', - 'leats', - 'leavy', - 'leaze', - 'leben', - 'leccy', - 'ledes', - 'ledgy', - 'ledum', - 'leear', - 'leeks', - 'leeps', - 'leers', - 'leese', - 'leets', - 'leeze', - 'lefte', - 'lefts', - 'leger', - 'leges', - 'legge', - 'leggo', - 'legit', - 'lehrs', - 'lehua', - 'leirs', - 'leish', - 'leman', - 'lemed', - 'lemel', - 'lemes', - 'lemma', - 'lemme', - 'lends', - 'lenes', - 'lengs', - 'lenis', - 'lenos', - 'lense', - 'lenti', - 'lento', - 'leone', - 'lepid', - 'lepra', - 'lepta', - 'lered', - 'leres', - 'lerps', - 'lesbo', - 'leses', - 'lests', - 'letch', - 'lethe', - 'letup', - 'leuch', - 'leuco', - 'leuds', - 'leugh', - 'levas', - 'levee', - 'leves', - 'levin', - 'levis', - 'lewis', - 'lexes', - 'lexis', - 'lezes', - 'lezza', - 'lezzy', - 'liana', - 'liane', - 'liang', - 'liard', - 'liars', - 'liart', - 'liber', - 'libra', - 'libri', - 'lichi', - 'licht', - 'licit', - 'licks', - 'lidar', - 'lidos', - 'liefs', - 'liens', - 
'liers', - 'lieus', - 'lieve', - 'lifer', - 'lifes', - 'lifts', - 'ligan', - 'liger', - 'ligge', - 'ligne', - 'liked', - 'liker', - 'likes', - 'likin', - 'lills', - 'lilos', - 'lilts', - 'liman', - 'limas', - 'limax', - 'limba', - 'limbi', - 'limbs', - 'limby', - 'limed', - 'limen', - 'limes', - 'limey', - 'limma', - 'limns', - 'limos', - 'limpa', - 'limps', - 'linac', - 'linch', - 'linds', - 'lindy', - 'lined', - 'lines', - 'liney', - 'linga', - 'lings', - 'lingy', - 'linin', - 'links', - 'linky', - 'linns', - 'linny', - 'linos', - 'lints', - 'linty', - 'linum', - 'linux', - 'lions', - 'lipas', - 'lipes', - 'lipin', - 'lipos', - 'lippy', - 'liras', - 'lirks', - 'lirot', - 'lisks', - 'lisle', - 'lisps', - 'lists', - 'litai', - 'litas', - 'lited', - 'liter', - 'lites', - 'litho', - 'liths', - 'litre', - 'lived', - 'liven', - 'lives', - 'livor', - 'livre', - 'llano', - 'loach', - 'loads', - 'loafs', - 'loams', - 'loans', - 'loast', - 'loave', - 'lobar', - 'lobed', - 'lobes', - 'lobos', - 'lobus', - 'loche', - 'lochs', - 'locie', - 'locis', - 'locks', - 'locos', - 'locum', - 'loden', - 'lodes', - 'loess', - 'lofts', - 'logan', - 'loges', - 'loggy', - 'logia', - 'logie', - 'logoi', - 'logon', - 'logos', - 'lohan', - 'loids', - 'loins', - 'loipe', - 'loirs', - 'lokes', - 'lolls', - 'lolly', - 'lolog', - 'lomas', - 'lomed', - 'lomes', - 'loner', - 'longa', - 'longe', - 'longs', - 'looby', - 'looed', - 'looey', - 'loofa', - 'loofs', - 'looie', - 'looks', - 'looky', - 'looms', - 'loons', - 'loony', - 'loops', - 'loord', - 'loots', - 'loped', - 'loper', - 'lopes', - 'loppy', - 'loral', - 'loran', - 'lords', - 'lordy', - 'lorel', - 'lores', - 'loric', - 'loris', - 'losed', - 'losel', - 'losen', - 'loses', - 'lossy', - 'lotah', - 'lotas', - 'lotes', - 'lotic', - 'lotos', - 'lotsa', - 'lotta', - 'lotte', - 'lotto', - 'lotus', - 'loued', - 'lough', - 'louie', - 'louis', - 'louma', - 'lound', - 'louns', - 'loupe', - 'loups', - 'loure', - 'lours', - 'loury', - 'louts', - 'lovat', - 'loved', - 'loves', - 'lovey', - 'lovie', - 'lowan', - 'lowed', - 'lowes', - 'lownd', - 'lowne', - 'lowns', - 'lowps', - 'lowry', - 'lowse', - 'lowts', - 'loxed', - 'loxes', - 'lozen', - 'luach', - 'luaus', - 'lubed', - 'lubes', - 'lubra', - 'luces', - 'lucks', - 'lucre', - 'ludes', - 'ludic', - 'ludos', - 'luffa', - 'luffs', - 'luged', - 'luger', - 'luges', - 'lulls', - 'lulus', - 'lumas', - 'lumbi', - 'lumme', - 'lummy', - 'lumps', - 'lunas', - 'lunes', - 'lunet', - 'lungi', - 'lungs', - 'lunks', - 'lunts', - 'lupin', - 'lured', - 'lurer', - 'lures', - 'lurex', - 'lurgi', - 'lurgy', - 'lurks', - 'lurry', - 'lurve', - 'luser', - 'lushy', - 'lusks', - 'lusts', - 'lusus', - 'lutea', - 'luted', - 'luter', - 'lutes', - 'luvvy', - 'luxed', - 'luxer', - 'luxes', - 'lweis', - 'lyams', - 'lyard', - 'lyart', - 'lyase', - 'lycea', - 'lycee', - 'lycra', - 'lymes', - 'lynes', - 'lyres', - 'lysed', - 'lyses', - 'lysin', - 'lysis', - 'lysol', - 'lyssa', - 'lyted', - 'lytes', - 'lythe', - 'lytic', - 'lytta', - 'maaed', - 'maare', - 'maars', - 'mabes', - 'macas', - 'maced', - 'macer', - 'maces', - 'mache', - 'machi', - 'machs', - 'macks', - 'macle', - 'macon', - 'madge', - 'madid', - 'madre', - 'maerl', - 'mafic', - 'mages', - 'maggs', - 'magot', - 'magus', - 'mahoe', - 'mahua', - 'mahwa', - 'maids', - 'maiko', - 'maiks', - 'maile', - 'maill', - 'mails', - 'maims', - 'mains', - 'maire', - 'mairs', - 'maise', - 'maist', - 'makar', - 'makes', - 'makis', - 'makos', - 'malam', - 'malar', - 'malas', - 'malax', - 'males', - 'malic', - 'malik', - 
'malis', - 'malls', - 'malms', - 'malmy', - 'malts', - 'malty', - 'malus', - 'malva', - 'malwa', - 'mamas', - 'mamba', - 'mamee', - 'mamey', - 'mamie', - 'manas', - 'manat', - 'mandi', - 'maneb', - 'maned', - 'maneh', - 'manes', - 'manet', - 'mangs', - 'manis', - 'manky', - 'manna', - 'manos', - 'manse', - 'manta', - 'manto', - 'manty', - 'manul', - 'manus', - 'mapau', - 'maqui', - 'marae', - 'marah', - 'maras', - 'marcs', - 'mardy', - 'mares', - 'marge', - 'margs', - 'maria', - 'marid', - 'marka', - 'marks', - 'marle', - 'marls', - 'marly', - 'marms', - 'maron', - 'maror', - 'marra', - 'marri', - 'marse', - 'marts', - 'marvy', - 'masas', - 'mased', - 'maser', - 'mases', - 'mashy', - 'masks', - 'massa', - 'massy', - 'masts', - 'masty', - 'masus', - 'matai', - 'mated', - 'mater', - 'mates', - 'maths', - 'matin', - 'matlo', - 'matte', - 'matts', - 'matza', - 'matzo', - 'mauby', - 'mauds', - 'mauls', - 'maund', - 'mauri', - 'mausy', - 'mauts', - 'mauzy', - 'maven', - 'mavie', - 'mavin', - 'mavis', - 'mawed', - 'mawks', - 'mawky', - 'mawns', - 'mawrs', - 'maxed', - 'maxes', - 'maxis', - 'mayan', - 'mayas', - 'mayed', - 'mayos', - 'mayst', - 'mazed', - 'mazer', - 'mazes', - 'mazey', - 'mazut', - 'mbira', - 'meads', - 'meals', - 'meane', - 'means', - 'meany', - 'meare', - 'mease', - 'meath', - 'meats', - 'mebos', - 'mechs', - 'mecks', - 'medii', - 'medle', - 'meeds', - 'meers', - 'meets', - 'meffs', - 'meins', - 'meint', - 'meiny', - 'meith', - 'mekka', - 'melas', - 'melba', - 'melds', - 'melic', - 'melik', - 'mells', - 'melts', - 'melty', - 'memes', - 'memos', - 'menad', - 'mends', - 'mened', - 'menes', - 'menge', - 'mengs', - 'mensa', - 'mense', - 'mensh', - 'menta', - 'mento', - 'menus', - 'meous', - 'meows', - 'merch', - 'mercs', - 'merde', - 'mered', - 'merel', - 'merer', - 'meres', - 'meril', - 'meris', - 'merks', - 'merle', - 'merls', - 'merse', - 'mesal', - 'mesas', - 'mesel', - 'meses', - 'meshy', - 'mesic', - 'mesne', - 'meson', - 'messy', - 'mesto', - 'meted', - 'metes', - 'metho', - 'meths', - 'metic', - 'metif', - 'metis', - 'metol', - 'metre', - 'meuse', - 'meved', - 'meves', - 'mewed', - 'mewls', - 'meynt', - 'mezes', - 'mezze', - 'mezzo', - 'mhorr', - 'miaou', - 'miaow', - 'miasm', - 'miaul', - 'micas', - 'miche', - 'micht', - 'micks', - 'micky', - 'micos', - 'micra', - 'middy', - 'midgy', - 'midis', - 'miens', - 'mieve', - 'miffs', - 'miffy', - 'mifty', - 'miggs', - 'mihas', - 'mihis', - 'miked', - 'mikes', - 'mikra', - 'mikva', - 'milch', - 'milds', - 'miler', - 'miles', - 'milfs', - 'milia', - 'milko', - 'milks', - 'mille', - 'mills', - 'milor', - 'milos', - 'milpa', - 'milts', - 'milty', - 'miltz', - 'mimed', - 'mimeo', - 'mimer', - 'mimes', - 'mimsy', - 'minae', - 'minar', - 'minas', - 'mincy', - 'minds', - 'mined', - 'mines', - 'minge', - 'mings', - 'mingy', - 'minis', - 'minke', - 'minks', - 'minny', - 'minos', - 'mints', - 'mired', - 'mires', - 'mirex', - 'mirid', - 'mirin', - 'mirks', - 'mirky', - 'mirly', - 'miros', - 'mirvs', - 'mirza', - 'misch', - 'misdo', - 'mises', - 'misgo', - 'misos', - 'missa', - 'mists', - 'misty', - 'mitch', - 'miter', - 'mites', - 'mitis', - 'mitre', - 'mitts', - 'mixed', - 'mixen', - 'mixer', - 'mixes', - 'mixte', - 'mixup', - 'mizen', - 'mizzy', - 'mneme', - 'moans', - 'moats', - 'mobby', - 'mobes', - 'mobey', - 'mobie', - 'moble', - 'mochi', - 'mochs', - 'mochy', - 'mocks', - 'moder', - 'modes', - 'modge', - 'modii', - 'modus', - 'moers', - 'mofos', - 'moggy', - 'mohel', - 'mohos', - 'mohrs', - 'mohua', - 'mohur', - 'moile', - 'moils', - 
'moira', - 'moire', - 'moits', - 'mojos', - 'mokes', - 'mokis', - 'mokos', - 'molal', - 'molas', - 'molds', - 'moled', - 'moles', - 'molla', - 'molls', - 'molly', - 'molto', - 'molts', - 'molys', - 'momes', - 'momma', - 'mommy', - 'momus', - 'monad', - 'monal', - 'monas', - 'monde', - 'mondo', - 'moner', - 'mongo', - 'mongs', - 'monic', - 'monie', - 'monks', - 'monos', - 'monte', - 'monty', - 'moobs', - 'mooch', - 'moods', - 'mooed', - 'mooks', - 'moola', - 'mooli', - 'mools', - 'mooly', - 'moong', - 'moons', - 'moony', - 'moops', - 'moors', - 'moory', - 'moots', - 'moove', - 'moped', - 'moper', - 'mopes', - 'mopey', - 'moppy', - 'mopsy', - 'mopus', - 'morae', - 'moras', - 'morat', - 'moray', - 'morel', - 'mores', - 'moria', - 'morne', - 'morns', - 'morra', - 'morro', - 'morse', - 'morts', - 'mosed', - 'moses', - 'mosey', - 'mosks', - 'mosso', - 'moste', - 'mosts', - 'moted', - 'moten', - 'motes', - 'motet', - 'motey', - 'moths', - 'mothy', - 'motis', - 'motte', - 'motts', - 'motty', - 'motus', - 'motza', - 'mouch', - 'moues', - 'mould', - 'mouls', - 'moups', - 'moust', - 'mousy', - 'moved', - 'moves', - 'mowas', - 'mowed', - 'mowra', - 'moxas', - 'moxie', - 'moyas', - 'moyle', - 'moyls', - 'mozed', - 'mozes', - 'mozos', - 'mpret', - 'mucho', - 'mucic', - 'mucid', - 'mucin', - 'mucks', - 'mucor', - 'mucro', - 'mudge', - 'mudir', - 'mudra', - 'muffs', - 'mufti', - 'mugga', - 'muggs', - 'muggy', - 'muhly', - 'muids', - 'muils', - 'muirs', - 'muist', - 'mujik', - 'mulct', - 'muled', - 'mules', - 'muley', - 'mulga', - 'mulie', - 'mulla', - 'mulls', - 'mulse', - 'mulsh', - 'mumms', - 'mumps', - 'mumsy', - 'mumus', - 'munga', - 'munge', - 'mungo', - 'mungs', - 'munis', - 'munts', - 'muntu', - 'muons', - 'muras', - 'mured', - 'mures', - 'murex', - 'murid', - 'murks', - 'murls', - 'murly', - 'murra', - 'murre', - 'murri', - 'murrs', - 'murry', - 'murti', - 'murva', - 'musar', - 'musca', - 'mused', - 'muser', - 'muses', - 'muset', - 'musha', - 'musit', - 'musks', - 'musos', - 'musse', - 'mussy', - 'musth', - 'musts', - 'mutch', - 'muted', - 'muter', - 'mutes', - 'mutha', - 'mutis', - 'muton', - 'mutts', - 'muxed', - 'muxes', - 'muzak', - 'muzzy', - 'mvule', - 'myall', - 'mylar', - 'mynah', - 'mynas', - 'myoid', - 'myoma', - 'myope', - 'myops', - 'myopy', - 'mysid', - 'mythi', - 'myths', - 'mythy', - 'myxos', - 'mzees', - 'naams', - 'naans', - 'nabes', - 'nabis', - 'nabks', - 'nabla', - 'nabob', - 'nache', - 'nacho', - 'nacre', - 'nadas', - 'naeve', - 'naevi', - 'naffs', - 'nagas', - 'naggy', - 'nagor', - 'nahal', - 'naiad', - 'naifs', - 'naiks', - 'nails', - 'naira', - 'nairu', - 'naked', - 'naker', - 'nakfa', - 'nalas', - 'naled', - 'nalla', - 'named', - 'namer', - 'names', - 'namma', - 'namus', - 'nanas', - 'nance', - 'nancy', - 'nandu', - 'nanna', - 'nanos', - 'nanua', - 'napas', - 'naped', - 'napes', - 'napoo', - 'nappa', - 'nappe', - 'nappy', - 'naras', - 'narco', - 'narcs', - 'nards', - 'nares', - 'naric', - 'naris', - 'narks', - 'narky', - 'narre', - 'nashi', - 'natch', - 'nates', - 'natis', - 'natty', - 'nauch', - 'naunt', - 'navar', - 'naves', - 'navew', - 'navvy', - 'nawab', - 'nazes', - 'nazir', - 'nazis', - 'nduja', - 'neafe', - 'neals', - 'neaps', - 'nears', - 'neath', - 'neats', - 'nebek', - 'nebel', - 'necks', - 'neddy', - 'needs', - 'neeld', - 'neele', - 'neemb', - 'neems', - 'neeps', - 'neese', - 'neeze', - 'negro', - 'negus', - 'neifs', - 'neist', - 'neive', - 'nelis', - 'nelly', - 'nemas', - 'nemns', - 'nempt', - 'nenes', - 'neons', - 'neper', - 'nepit', - 'neral', - 'nerds', - 
'nerka', - 'nerks', - 'nerol', - 'nerts', - 'nertz', - 'nervy', - 'nests', - 'netes', - 'netop', - 'netts', - 'netty', - 'neuks', - 'neume', - 'neums', - 'nevel', - 'neves', - 'nevus', - 'newbs', - 'newed', - 'newel', - 'newie', - 'newsy', - 'newts', - 'nexts', - 'nexus', - 'ngaio', - 'ngana', - 'ngati', - 'ngoma', - 'ngwee', - 'nicad', - 'nicht', - 'nicks', - 'nicol', - 'nidal', - 'nided', - 'nides', - 'nidor', - 'nidus', - 'niefs', - 'nieve', - 'nifes', - 'niffs', - 'niffy', - 'nifty', - 'niger', - 'nighs', - 'nihil', - 'nikab', - 'nikah', - 'nikau', - 'nills', - 'nimbi', - 'nimbs', - 'nimps', - 'niner', - 'nines', - 'ninon', - 'nipas', - 'nippy', - 'niqab', - 'nirls', - 'nirly', - 'nisei', - 'nisse', - 'nisus', - 'niter', - 'nites', - 'nitid', - 'niton', - 'nitre', - 'nitro', - 'nitry', - 'nitty', - 'nival', - 'nixed', - 'nixer', - 'nixes', - 'nixie', - 'nizam', - 'nkosi', - 'noahs', - 'nobby', - 'nocks', - 'nodal', - 'noddy', - 'nodes', - 'nodus', - 'noels', - 'noggs', - 'nohow', - 'noils', - 'noily', - 'noint', - 'noirs', - 'noles', - 'nolls', - 'nolos', - 'nomas', - 'nomen', - 'nomes', - 'nomic', - 'nomoi', - 'nomos', - 'nonas', - 'nonce', - 'nones', - 'nonet', - 'nongs', - 'nonis', - 'nonny', - 'nonyl', - 'noobs', - 'nooit', - 'nooks', - 'nooky', - 'noons', - 'noops', - 'nopal', - 'noria', - 'noris', - 'norks', - 'norma', - 'norms', - 'nosed', - 'noser', - 'noses', - 'notal', - 'noted', - 'noter', - 'notes', - 'notum', - 'nould', - 'noule', - 'nouls', - 'nouns', - 'nouny', - 'noups', - 'novae', - 'novas', - 'novum', - 'noway', - 'nowed', - 'nowls', - 'nowts', - 'nowty', - 'noxal', - 'noxes', - 'noyau', - 'noyed', - 'noyes', - 'nubby', - 'nubia', - 'nucha', - 'nuddy', - 'nuder', - 'nudes', - 'nudie', - 'nudzh', - 'nuffs', - 'nugae', - 'nuked', - 'nukes', - 'nulla', - 'nulls', - 'numbs', - 'numen', - 'nummy', - 'nunny', - 'nurds', - 'nurdy', - 'nurls', - 'nurrs', - 'nutso', - 'nutsy', - 'nyaff', - 'nyala', - 'nying', - 'nyssa', - 'oaked', - 'oaker', - 'oakum', - 'oared', - 'oases', - 'oasis', - 'oasts', - 'oaten', - 'oater', - 'oaths', - 'oaves', - 'obang', - 'obeah', - 'obeli', - 'obeys', - 'obias', - 'obied', - 'obiit', - 'obits', - 'objet', - 'oboes', - 'obole', - 'oboli', - 'obols', - 'occam', - 'ocher', - 'oches', - 'ochre', - 'ochry', - 'ocker', - 'ocrea', - 'octad', - 'octan', - 'octas', - 'octyl', - 'oculi', - 'odahs', - 'odals', - 'odeon', - 'odeum', - 'odism', - 'odist', - 'odium', - 'odors', - 'odour', - 'odyle', - 'odyls', - 'ofays', - 'offed', - 'offie', - 'oflag', - 'ofter', - 'ogams', - 'ogeed', - 'ogees', - 'oggin', - 'ogham', - 'ogive', - 'ogled', - 'ogler', - 'ogles', - 'ogmic', - 'ogres', - 'ohias', - 'ohing', - 'ohmic', - 'ohone', - 'oidia', - 'oiled', - 'oiler', - 'oinks', - 'oints', - 'ojime', - 'okapi', - 'okays', - 'okehs', - 'okras', - 'oktas', - 'oldie', - 'oleic', - 'olein', - 'olent', - 'oleos', - 'oleum', - 'olios', - 'ollas', - 'ollav', - 'oller', - 'ollie', - 'ology', - 'olpae', - 'olpes', - 'omasa', - 'omber', - 'ombus', - 'omens', - 'omers', - 'omits', - 'omlah', - 'omovs', - 'omrah', - 'oncer', - 'onces', - 'oncet', - 'oncus', - 'onely', - 'oners', - 'onery', - 'onium', - 'onkus', - 'onlay', - 'onned', - 'ontic', - 'oobit', - 'oohed', - 'oomph', - 'oonts', - 'ooped', - 'oorie', - 'ooses', - 'ootid', - 'oozed', - 'oozes', - 'opahs', - 'opals', - 'opens', - 'opepe', - 'oping', - 'oppos', - 'opsin', - 'opted', - 'opter', - 'orach', - 'oracy', - 'orals', - 'orang', - 'orant', - 'orate', - 'orbed', - 'orcas', - 'orcin', - 'ordos', - 'oread', - 'orfes', - 
'orgia', - 'orgic', - 'orgue', - 'oribi', - 'oriel', - 'orixa', - 'orles', - 'orlon', - 'orlop', - 'ormer', - 'ornis', - 'orpin', - 'orris', - 'ortho', - 'orval', - 'orzos', - 'oscar', - 'oshac', - 'osier', - 'osmic', - 'osmol', - 'ossia', - 'ostia', - 'otaku', - 'otary', - 'ottar', - 'ottos', - 'oubit', - 'oucht', - 'ouens', - 'ouija', - 'oulks', - 'oumas', - 'oundy', - 'oupas', - 'ouped', - 'ouphe', - 'ouphs', - 'ourie', - 'ousel', - 'ousts', - 'outby', - 'outed', - 'outre', - 'outro', - 'outta', - 'ouzel', - 'ouzos', - 'ovals', - 'ovels', - 'ovens', - 'overs', - 'ovist', - 'ovoli', - 'ovolo', - 'ovule', - 'owche', - 'owies', - 'owled', - 'owler', - 'owlet', - 'owned', - 'owres', - 'owrie', - 'owsen', - 'oxbow', - 'oxers', - 'oxeye', - 'oxids', - 'oxies', - 'oxime', - 'oxims', - 'oxlip', - 'oxter', - 'oyers', - 'ozeki', - 'ozzie', - 'paals', - 'paans', - 'pacas', - 'paced', - 'pacer', - 'paces', - 'pacey', - 'pacha', - 'packs', - 'pacos', - 'pacta', - 'pacts', - 'padis', - 'padle', - 'padma', - 'padre', - 'padri', - 'paean', - 'paedo', - 'paeon', - 'paged', - 'pager', - 'pages', - 'pagle', - 'pagod', - 'pagri', - 'paiks', - 'pails', - 'pains', - 'paire', - 'pairs', - 'paisa', - 'paise', - 'pakka', - 'palas', - 'palay', - 'palea', - 'paled', - 'pales', - 'palet', - 'palis', - 'palki', - 'palla', - 'palls', - 'pally', - 'palms', - 'palmy', - 'palpi', - 'palps', - 'palsa', - 'pampa', - 'panax', - 'pance', - 'panda', - 'pands', - 'pandy', - 'paned', - 'panes', - 'panga', - 'pangs', - 'panim', - 'panko', - 'panne', - 'panni', - 'panto', - 'pants', - 'panty', - 'paoli', - 'paolo', - 'papas', - 'papaw', - 'papes', - 'pappi', - 'pappy', - 'parae', - 'paras', - 'parch', - 'pardi', - 'pards', - 'pardy', - 'pared', - 'paren', - 'pareo', - 'pares', - 'pareu', - 'parev', - 'parge', - 'pargo', - 'paris', - 'parki', - 'parks', - 'parky', - 'parle', - 'parly', - 'parma', - 'parol', - 'parps', - 'parra', - 'parrs', - 'parti', - 'parts', - 'parve', - 'parvo', - 'paseo', - 'pases', - 'pasha', - 'pashm', - 'paska', - 'paspy', - 'passe', - 'pasts', - 'pated', - 'paten', - 'pater', - 'pates', - 'paths', - 'patin', - 'patka', - 'patly', - 'patte', - 'patus', - 'pauas', - 'pauls', - 'pavan', - 'paved', - 'paven', - 'paver', - 'paves', - 'pavid', - 'pavin', - 'pavis', - 'pawas', - 'pawaw', - 'pawed', - 'pawer', - 'pawks', - 'pawky', - 'pawls', - 'pawns', - 'paxes', - 'payed', - 'payor', - 'paysd', - 'peage', - 'peags', - 'peaks', - 'peaky', - 'peals', - 'peans', - 'peare', - 'pears', - 'peart', - 'pease', - 'peats', - 'peaty', - 'peavy', - 'peaze', - 'pebas', - 'pechs', - 'pecke', - 'pecks', - 'pecky', - 'pedes', - 'pedis', - 'pedro', - 'peece', - 'peeks', - 'peels', - 'peens', - 'peeoy', - 'peepe', - 'peeps', - 'peers', - 'peery', - 'peeve', - 'peggy', - 'peghs', - 'peins', - 'peise', - 'peize', - 'pekan', - 'pekes', - 'pekin', - 'pekoe', - 'pelas', - 'pelau', - 'peles', - 'pelfs', - 'pells', - 'pelma', - 'pelon', - 'pelta', - 'pelts', - 'pends', - 'pendu', - 'pened', - 'penes', - 'pengo', - 'penie', - 'penis', - 'penks', - 'penna', - 'penni', - 'pents', - 'peons', - 'peony', - 'pepla', - 'pepos', - 'peppy', - 'pepsi', - 'perai', - 'perce', - 'percs', - 'perdu', - 'perdy', - 'perea', - 'peres', - 'peris', - 'perks', - 'perms', - 'perns', - 'perog', - 'perps', - 'perry', - 'perse', - 'perst', - 'perts', - 'perve', - 'pervo', - 'pervs', - 'pervy', - 'pesos', - 'pests', - 'pesty', - 'petar', - 'peter', - 'petit', - 'petre', - 'petri', - 'petti', - 'petto', - 'pewee', - 'pewit', - 'peyse', - 'phage', - 'phang', - 
'phare', - 'pharm', - 'pheer', - 'phene', - 'pheon', - 'phese', - 'phial', - 'phish', - 'phizz', - 'phlox', - 'phoca', - 'phono', - 'phons', - 'phots', - 'phpht', - 'phuts', - 'phyla', - 'phyle', - 'piani', - 'pians', - 'pibal', - 'pical', - 'picas', - 'piccy', - 'picks', - 'picot', - 'picra', - 'picul', - 'piend', - 'piers', - 'piert', - 'pieta', - 'piets', - 'piezo', - 'pight', - 'pigmy', - 'piing', - 'pikas', - 'pikau', - 'piked', - 'piker', - 'pikes', - 'pikey', - 'pikis', - 'pikul', - 'pilae', - 'pilaf', - 'pilao', - 'pilar', - 'pilau', - 'pilaw', - 'pilch', - 'pilea', - 'piled', - 'pilei', - 'piler', - 'piles', - 'pilis', - 'pills', - 'pilow', - 'pilum', - 'pilus', - 'pimas', - 'pimps', - 'pinas', - 'pined', - 'pines', - 'pingo', - 'pings', - 'pinko', - 'pinks', - 'pinna', - 'pinny', - 'pinon', - 'pinot', - 'pinta', - 'pints', - 'pinup', - 'pions', - 'piony', - 'pious', - 'pioye', - 'pioys', - 'pipal', - 'pipas', - 'piped', - 'pipes', - 'pipet', - 'pipis', - 'pipit', - 'pippy', - 'pipul', - 'pirai', - 'pirls', - 'pirns', - 'pirog', - 'pisco', - 'pises', - 'pisky', - 'pisos', - 'pissy', - 'piste', - 'pitas', - 'piths', - 'piton', - 'pitot', - 'pitta', - 'piums', - 'pixes', - 'pized', - 'pizes', - 'plaas', - 'plack', - 'plage', - 'plans', - 'plaps', - 'plash', - 'plasm', - 'plast', - 'plats', - 'platt', - 'platy', - 'playa', - 'plays', - 'pleas', - 'plebe', - 'plebs', - 'plena', - 'pleon', - 'plesh', - 'plews', - 'plica', - 'plies', - 'plims', - 'pling', - 'plink', - 'ploat', - 'plods', - 'plong', - 'plonk', - 'plook', - 'plops', - 'plots', - 'plotz', - 'plouk', - 'plows', - 'ploye', - 'ploys', - 'plues', - 'pluff', - 'plugs', - 'plums', - 'plumy', - 'pluot', - 'pluto', - 'plyer', - 'poach', - 'poaka', - 'poake', - 'poboy', - 'pocks', - 'pocky', - 'podal', - 'poddy', - 'podex', - 'podge', - 'podgy', - 'podia', - 'poems', - 'poeps', - 'poets', - 'pogey', - 'pogge', - 'pogos', - 'pohed', - 'poilu', - 'poind', - 'pokal', - 'poked', - 'pokes', - 'pokey', - 'pokie', - 'poled', - 'poler', - 'poles', - 'poley', - 'polio', - 'polis', - 'polje', - 'polks', - 'polls', - 'polly', - 'polos', - 'polts', - 'polys', - 'pombe', - 'pomes', - 'pommy', - 'pomos', - 'pomps', - 'ponce', - 'poncy', - 'ponds', - 'pones', - 'poney', - 'ponga', - 'pongo', - 'pongs', - 'pongy', - 'ponks', - 'ponts', - 'ponty', - 'ponzu', - 'poods', - 'pooed', - 'poofs', - 'poofy', - 'poohs', - 'pooja', - 'pooka', - 'pooks', - 'pools', - 'poons', - 'poops', - 'poopy', - 'poori', - 'poort', - 'poots', - 'poove', - 'poovy', - 'popes', - 'poppa', - 'popsy', - 'porae', - 'poral', - 'pored', - 'porer', - 'pores', - 'porge', - 'porgy', - 'porin', - 'porks', - 'porky', - 'porno', - 'porns', - 'porny', - 'porta', - 'ports', - 'porty', - 'posed', - 'poses', - 'posey', - 'posho', - 'posts', - 'potae', - 'potch', - 'poted', - 'potes', - 'potin', - 'potoo', - 'potsy', - 'potto', - 'potts', - 'potty', - 'pouff', - 'poufs', - 'pouke', - 'pouks', - 'poule', - 'poulp', - 'poult', - 'poupe', - 'poupt', - 'pours', - 'pouts', - 'powan', - 'powin', - 'pownd', - 'powns', - 'powny', - 'powre', - 'poxed', - 'poxes', - 'poynt', - 'poyou', - 'poyse', - 'pozzy', - 'praam', - 'prads', - 'prahu', - 'prams', - 'prana', - 'prang', - 'praos', - 'prase', - 'prate', - 'prats', - 'pratt', - 'praty', - 'praus', - 'prays', - 'predy', - 'preed', - 'prees', - 'preif', - 'prems', - 'premy', - 'prent', - 'preon', - 'preop', - 'preps', - 'presa', - 'prese', - 'prest', - 'preve', - 'prexy', - 'preys', - 'prial', - 'pricy', - 'prief', - 'prier', - 'pries', - 'prigs', - 
'prill', - 'prima', - 'primi', - 'primp', - 'prims', - 'primy', - 'prink', - 'prion', - 'prise', - 'priss', - 'proas', - 'probs', - 'prods', - 'proem', - 'profs', - 'progs', - 'proin', - 'proke', - 'prole', - 'proll', - 'promo', - 'proms', - 'pronk', - 'props', - 'prore', - 'proso', - 'pross', - 'prost', - 'prosy', - 'proto', - 'proul', - 'prows', - 'proyn', - 'prunt', - 'pruta', - 'pryer', - 'pryse', - 'pseud', - 'pshaw', - 'psion', - 'psoae', - 'psoai', - 'psoas', - 'psora', - 'psych', - 'psyop', - 'pubco', - 'pubes', - 'pubis', - 'pucan', - 'pucer', - 'puces', - 'pucka', - 'pucks', - 'puddy', - 'pudge', - 'pudic', - 'pudor', - 'pudsy', - 'pudus', - 'puers', - 'puffa', - 'puffs', - 'puggy', - 'pugil', - 'puhas', - 'pujah', - 'pujas', - 'pukas', - 'puked', - 'puker', - 'pukes', - 'pukey', - 'pukka', - 'pukus', - 'pulao', - 'pulas', - 'puled', - 'puler', - 'pules', - 'pulik', - 'pulis', - 'pulka', - 'pulks', - 'pulli', - 'pulls', - 'pully', - 'pulmo', - 'pulps', - 'pulus', - 'pumas', - 'pumie', - 'pumps', - 'punas', - 'punce', - 'punga', - 'pungs', - 'punji', - 'punka', - 'punks', - 'punky', - 'punny', - 'punto', - 'punts', - 'punty', - 'pupae', - 'pupas', - 'pupus', - 'purda', - 'pured', - 'pures', - 'purin', - 'puris', - 'purls', - 'purpy', - 'purrs', - 'pursy', - 'purty', - 'puses', - 'pusle', - 'pussy', - 'putid', - 'puton', - 'putti', - 'putto', - 'putts', - 'puzel', - 'pwned', - 'pyats', - 'pyets', - 'pygal', - 'pyins', - 'pylon', - 'pyned', - 'pynes', - 'pyoid', - 'pyots', - 'pyral', - 'pyran', - 'pyres', - 'pyrex', - 'pyric', - 'pyros', - 'pyxed', - 'pyxes', - 'pyxie', - 'pyxis', - 'pzazz', - 'qadis', - 'qaids', - 'qajaq', - 'qanat', - 'qapik', - 'qibla', - 'qophs', - 'qorma', - 'quads', - 'quaff', - 'quags', - 'quair', - 'quais', - 'quaky', - 'quale', - 'quant', - 'quare', - 'quass', - 'quate', - 'quats', - 'quayd', - 'quays', - 'qubit', - 'quean', - 'queme', - 'quena', - 'quern', - 'queyn', - 'queys', - 'quich', - 'quids', - 'quiff', - 'quims', - 'quina', - 'quine', - 'quino', - 'quins', - 'quint', - 'quipo', - 'quips', - 'quipu', - 'quire', - 'quirt', - 'quist', - 'quits', - 'quoad', - 'quods', - 'quoif', - 'quoin', - 'quoit', - 'quoll', - 'quonk', - 'quops', - 'qursh', - 'quyte', - 'rabat', - 'rabic', - 'rabis', - 'raced', - 'races', - 'rache', - 'racks', - 'racon', - 'radge', - 'radix', - 'radon', - 'raffs', - 'rafts', - 'ragas', - 'ragde', - 'raged', - 'ragee', - 'rager', - 'rages', - 'ragga', - 'raggs', - 'raggy', - 'ragis', - 'ragus', - 'rahed', - 'rahui', - 'raias', - 'raids', - 'raiks', - 'raile', - 'rails', - 'raine', - 'rains', - 'raird', - 'raita', - 'raits', - 'rajas', - 'rajes', - 'raked', - 'rakee', - 'raker', - 'rakes', - 'rakia', - 'rakis', - 'rakus', - 'rales', - 'ramal', - 'ramee', - 'ramet', - 'ramie', - 'ramin', - 'ramis', - 'rammy', - 'ramps', - 'ramus', - 'ranas', - 'rance', - 'rands', - 'ranee', - 'ranga', - 'rangi', - 'rangs', - 'rangy', - 'ranid', - 'ranis', - 'ranke', - 'ranks', - 'rants', - 'raped', - 'raper', - 'rapes', - 'raphe', - 'rappe', - 'rared', - 'raree', - 'rares', - 'rarks', - 'rased', - 'raser', - 'rases', - 'rasps', - 'rasse', - 'rasta', - 'ratal', - 'ratan', - 'ratas', - 'ratch', - 'rated', - 'ratel', - 'rater', - 'rates', - 'ratha', - 'rathe', - 'raths', - 'ratoo', - 'ratos', - 'ratus', - 'rauns', - 'raupo', - 'raved', - 'ravel', - 'raver', - 'raves', - 'ravey', - 'ravin', - 'rawer', - 'rawin', - 'rawly', - 'rawns', - 'raxed', - 'raxes', - 'rayah', - 'rayas', - 'rayed', - 'rayle', - 'rayne', - 'razed', - 'razee', - 'razer', - 'razes', - 
'razoo', - 'readd', - 'reads', - 'reais', - 'reaks', - 'realo', - 'reals', - 'reame', - 'reams', - 'reamy', - 'reans', - 'reaps', - 'rears', - 'reast', - 'reata', - 'reate', - 'reave', - 'rebbe', - 'rebec', - 'rebid', - 'rebit', - 'rebop', - 'rebuy', - 'recal', - 'recce', - 'recco', - 'reccy', - 'recit', - 'recks', - 'recon', - 'recta', - 'recti', - 'recto', - 'redan', - 'redds', - 'reddy', - 'reded', - 'redes', - 'redia', - 'redid', - 'redip', - 'redly', - 'redon', - 'redos', - 'redox', - 'redry', - 'redub', - 'redux', - 'redye', - 'reech', - 'reede', - 'reeds', - 'reefs', - 'reefy', - 'reeks', - 'reeky', - 'reels', - 'reens', - 'reest', - 'reeve', - 'refed', - 'refel', - 'reffo', - 'refis', - 'refix', - 'refly', - 'refry', - 'regar', - 'reges', - 'reggo', - 'regie', - 'regma', - 'regna', - 'regos', - 'regur', - 'rehem', - 'reifs', - 'reify', - 'reiki', - 'reiks', - 'reink', - 'reins', - 'reird', - 'reist', - 'reive', - 'rejig', - 'rejon', - 'reked', - 'rekes', - 'rekey', - 'relet', - 'relie', - 'relit', - 'rello', - 'reman', - 'remap', - 'remen', - 'remet', - 'remex', - 'remix', - 'renay', - 'rends', - 'reney', - 'renga', - 'renig', - 'renin', - 'renne', - 'renos', - 'rente', - 'rents', - 'reoil', - 'reorg', - 'repeg', - 'repin', - 'repla', - 'repos', - 'repot', - 'repps', - 'repro', - 'reran', - 'rerig', - 'resat', - 'resaw', - 'resay', - 'resee', - 'reses', - 'resew', - 'resid', - 'resit', - 'resod', - 'resow', - 'resto', - 'rests', - 'resty', - 'resus', - 'retag', - 'retax', - 'retem', - 'retia', - 'retie', - 'retox', - 'revet', - 'revie', - 'rewan', - 'rewax', - 'rewed', - 'rewet', - 'rewin', - 'rewon', - 'rewth', - 'rexes', - 'rezes', - 'rheas', - 'rheme', - 'rheum', - 'rhies', - 'rhime', - 'rhine', - 'rhody', - 'rhomb', - 'rhone', - 'rhumb', - 'rhyne', - 'rhyta', - 'riads', - 'rials', - 'riant', - 'riata', - 'ribas', - 'ribby', - 'ribes', - 'riced', - 'ricer', - 'rices', - 'ricey', - 'richt', - 'ricin', - 'ricks', - 'rides', - 'ridgy', - 'ridic', - 'riels', - 'riems', - 'rieve', - 'rifer', - 'riffs', - 'rifte', - 'rifts', - 'rifty', - 'riggs', - 'rigol', - 'riled', - 'riles', - 'riley', - 'rille', - 'rills', - 'rimae', - 'rimed', - 'rimer', - 'rimes', - 'rimus', - 'rinds', - 'rindy', - 'rines', - 'rings', - 'rinks', - 'rioja', - 'riots', - 'riped', - 'ripes', - 'ripps', - 'rises', - 'rishi', - 'risks', - 'risps', - 'risus', - 'rites', - 'ritts', - 'ritzy', - 'rivas', - 'rived', - 'rivel', - 'riven', - 'rives', - 'riyal', - 'rizas', - 'roads', - 'roams', - 'roans', - 'roars', - 'roary', - 'roate', - 'robed', - 'robes', - 'roble', - 'rocks', - 'roded', - 'rodes', - 'roguy', - 'rohes', - 'roids', - 'roils', - 'roily', - 'roins', - 'roist', - 'rojak', - 'rojis', - 'roked', - 'roker', - 'rokes', - 'rolag', - 'roles', - 'rolfs', - 'rolls', - 'romal', - 'roman', - 'romeo', - 'romps', - 'ronde', - 'rondo', - 'roneo', - 'rones', - 'ronin', - 'ronne', - 'ronte', - 'ronts', - 'roods', - 'roofs', - 'roofy', - 'rooks', - 'rooky', - 'rooms', - 'roons', - 'roops', - 'roopy', - 'roosa', - 'roose', - 'roots', - 'rooty', - 'roped', - 'roper', - 'ropes', - 'ropey', - 'roque', - 'roral', - 'rores', - 'roric', - 'rorid', - 'rorie', - 'rorts', - 'rorty', - 'rosed', - 'roses', - 'roset', - 'roshi', - 'rosin', - 'rosit', - 'rosti', - 'rosts', - 'rotal', - 'rotan', - 'rotas', - 'rotch', - 'roted', - 'rotes', - 'rotis', - 'rotls', - 'roton', - 'rotos', - 'rotte', - 'rouen', - 'roues', - 'roule', - 'rouls', - 'roums', - 'roups', - 'roupy', - 'roust', - 'routh', - 'routs', - 'roved', - 'roven', - 'roves', - 
'rowan', - 'rowed', - 'rowel', - 'rowen', - 'rowie', - 'rowme', - 'rownd', - 'rowth', - 'rowts', - 'royne', - 'royst', - 'rozet', - 'rozit', - 'ruana', - 'rubai', - 'rubby', - 'rubel', - 'rubes', - 'rubin', - 'ruble', - 'rubli', - 'rubus', - 'ruche', - 'rucks', - 'rudas', - 'rudds', - 'rudes', - 'rudie', - 'rudis', - 'rueda', - 'ruers', - 'ruffe', - 'ruffs', - 'rugae', - 'rugal', - 'ruggy', - 'ruing', - 'ruins', - 'rukhs', - 'ruled', - 'rules', - 'rumal', - 'rumbo', - 'rumen', - 'rumes', - 'rumly', - 'rummy', - 'rumpo', - 'rumps', - 'rumpy', - 'runch', - 'runds', - 'runed', - 'runes', - 'rungs', - 'runic', - 'runny', - 'runts', - 'runty', - 'rupia', - 'rurps', - 'rurus', - 'rusas', - 'ruses', - 'rushy', - 'rusks', - 'rusma', - 'russe', - 'rusts', - 'ruths', - 'rutin', - 'rutty', - 'ryals', - 'rybat', - 'ryked', - 'rykes', - 'rymme', - 'rynds', - 'ryots', - 'ryper', - 'saags', - 'sabal', - 'sabed', - 'saber', - 'sabes', - 'sabha', - 'sabin', - 'sabir', - 'sable', - 'sabot', - 'sabra', - 'sabre', - 'sacks', - 'sacra', - 'saddo', - 'sades', - 'sadhe', - 'sadhu', - 'sadis', - 'sados', - 'sadza', - 'safed', - 'safes', - 'sagas', - 'sager', - 'sages', - 'saggy', - 'sagos', - 'sagum', - 'saheb', - 'sahib', - 'saice', - 'saick', - 'saics', - 'saids', - 'saiga', - 'sails', - 'saims', - 'saine', - 'sains', - 'sairs', - 'saist', - 'saith', - 'sajou', - 'sakai', - 'saker', - 'sakes', - 'sakia', - 'sakis', - 'sakti', - 'salal', - 'salat', - 'salep', - 'sales', - 'salet', - 'salic', - 'salix', - 'salle', - 'salmi', - 'salol', - 'salop', - 'salpa', - 'salps', - 'salse', - 'salto', - 'salts', - 'salue', - 'salut', - 'saman', - 'samas', - 'samba', - 'sambo', - 'samek', - 'samel', - 'samen', - 'sames', - 'samey', - 'samfu', - 'sammy', - 'sampi', - 'samps', - 'sands', - 'saned', - 'sanes', - 'sanga', - 'sangh', - 'sango', - 'sangs', - 'sanko', - 'sansa', - 'santo', - 'sants', - 'saola', - 'sapan', - 'sapid', - 'sapor', - 'saran', - 'sards', - 'sared', - 'saree', - 'sarge', - 'sargo', - 'sarin', - 'saris', - 'sarks', - 'sarky', - 'sarod', - 'saros', - 'sarus', - 'saser', - 'sasin', - 'sasse', - 'satai', - 'satay', - 'sated', - 'satem', - 'sates', - 'satis', - 'sauba', - 'sauch', - 'saugh', - 'sauls', - 'sault', - 'saunt', - 'saury', - 'sauts', - 'saved', - 'saver', - 'saves', - 'savey', - 'savin', - 'sawah', - 'sawed', - 'sawer', - 'saxes', - 'sayed', - 'sayer', - 'sayid', - 'sayne', - 'sayon', - 'sayst', - 'sazes', - 'scabs', - 'scads', - 'scaff', - 'scags', - 'scail', - 'scala', - 'scall', - 'scams', - 'scand', - 'scans', - 'scapa', - 'scape', - 'scapi', - 'scarp', - 'scars', - 'scart', - 'scath', - 'scats', - 'scatt', - 'scaud', - 'scaup', - 'scaur', - 'scaws', - 'sceat', - 'scena', - 'scend', - 'schav', - 'schmo', - 'schul', - 'schwa', - 'sclim', - 'scody', - 'scogs', - 'scoog', - 'scoot', - 'scopa', - 'scops', - 'scots', - 'scoug', - 'scoup', - 'scowp', - 'scows', - 'scrab', - 'scrae', - 'scrag', - 'scran', - 'scrat', - 'scraw', - 'scray', - 'scrim', - 'scrip', - 'scrob', - 'scrod', - 'scrog', - 'scrow', - 'scudi', - 'scudo', - 'scuds', - 'scuff', - 'scuft', - 'scugs', - 'sculk', - 'scull', - 'sculp', - 'sculs', - 'scums', - 'scups', - 'scurf', - 'scurs', - 'scuse', - 'scuta', - 'scute', - 'scuts', - 'scuzz', - 'scyes', - 'sdayn', - 'sdein', - 'seals', - 'seame', - 'seams', - 'seamy', - 'seans', - 'seare', - 'sears', - 'sease', - 'seats', - 'seaze', - 'sebum', - 'secco', - 'sechs', - 'sects', - 'seder', - 'sedes', - 'sedge', - 'sedgy', - 'sedum', - 'seeds', - 'seeks', - 'seeld', - 'seels', - 'seely', - 
'seems', - 'seeps', - 'seepy', - 'seers', - 'sefer', - 'segar', - 'segni', - 'segno', - 'segol', - 'segos', - 'sehri', - 'seifs', - 'seils', - 'seine', - 'seirs', - 'seise', - 'seism', - 'seity', - 'seiza', - 'sekos', - 'sekts', - 'selah', - 'seles', - 'selfs', - 'sella', - 'selle', - 'sells', - 'selva', - 'semee', - 'semes', - 'semie', - 'semis', - 'senas', - 'sends', - 'senes', - 'sengi', - 'senna', - 'senor', - 'sensa', - 'sensi', - 'sente', - 'senti', - 'sents', - 'senvy', - 'senza', - 'sepad', - 'sepal', - 'sepic', - 'sepoy', - 'septa', - 'septs', - 'serac', - 'serai', - 'seral', - 'sered', - 'serer', - 'seres', - 'serfs', - 'serge', - 'seric', - 'serin', - 'serks', - 'seron', - 'serow', - 'serra', - 'serre', - 'serrs', - 'serry', - 'servo', - 'sesey', - 'sessa', - 'setae', - 'setal', - 'seton', - 'setts', - 'sewan', - 'sewar', - 'sewed', - 'sewel', - 'sewen', - 'sewin', - 'sexed', - 'sexer', - 'sexes', - 'sexto', - 'sexts', - 'seyen', - 'shads', - 'shags', - 'shahs', - 'shako', - 'shakt', - 'shalm', - 'shaly', - 'shama', - 'shams', - 'shand', - 'shans', - 'shaps', - 'sharn', - 'shash', - 'shaul', - 'shawm', - 'shawn', - 'shaws', - 'shaya', - 'shays', - 'shchi', - 'sheaf', - 'sheal', - 'sheas', - 'sheds', - 'sheel', - 'shend', - 'shent', - 'sheol', - 'sherd', - 'shere', - 'shero', - 'shets', - 'sheva', - 'shewn', - 'shews', - 'shiai', - 'shiel', - 'shier', - 'shies', - 'shill', - 'shily', - 'shims', - 'shins', - 'ships', - 'shirr', - 'shirs', - 'shish', - 'shiso', - 'shist', - 'shite', - 'shits', - 'shiur', - 'shiva', - 'shive', - 'shivs', - 'shlep', - 'shlub', - 'shmek', - 'shmoe', - 'shoat', - 'shoed', - 'shoer', - 'shoes', - 'shogi', - 'shogs', - 'shoji', - 'shojo', - 'shola', - 'shool', - 'shoon', - 'shoos', - 'shope', - 'shops', - 'shorl', - 'shote', - 'shots', - 'shott', - 'showd', - 'shows', - 'shoyu', - 'shred', - 'shris', - 'shrow', - 'shtik', - 'shtum', - 'shtup', - 'shule', - 'shuln', - 'shuls', - 'shuns', - 'shura', - 'shute', - 'shuts', - 'shwas', - 'shyer', - 'sials', - 'sibbs', - 'sibyl', - 'sices', - 'sicht', - 'sicko', - 'sicks', - 'sicky', - 'sidas', - 'sided', - 'sider', - 'sides', - 'sidha', - 'sidhe', - 'sidle', - 'sield', - 'siens', - 'sient', - 'sieth', - 'sieur', - 'sifts', - 'sighs', - 'sigil', - 'sigla', - 'signa', - 'signs', - 'sijos', - 'sikas', - 'siker', - 'sikes', - 'silds', - 'siled', - 'silen', - 'siler', - 'siles', - 'silex', - 'silks', - 'sills', - 'silos', - 'silts', - 'silty', - 'silva', - 'simar', - 'simas', - 'simba', - 'simis', - 'simps', - 'simul', - 'sinds', - 'sined', - 'sines', - 'sings', - 'sinhs', - 'sinks', - 'sinky', - 'sinus', - 'siped', - 'sipes', - 'sippy', - 'sired', - 'siree', - 'sires', - 'sirih', - 'siris', - 'siroc', - 'sirra', - 'sirup', - 'sisal', - 'sises', - 'sista', - 'sists', - 'sitar', - 'sited', - 'sites', - 'sithe', - 'sitka', - 'situp', - 'situs', - 'siver', - 'sixer', - 'sixes', - 'sixmo', - 'sixte', - 'sizar', - 'sized', - 'sizel', - 'sizer', - 'sizes', - 'skags', - 'skail', - 'skald', - 'skank', - 'skart', - 'skats', - 'skatt', - 'skaws', - 'skean', - 'skear', - 'skeds', - 'skeed', - 'skeef', - 'skeen', - 'skeer', - 'skees', - 'skeet', - 'skegg', - 'skegs', - 'skein', - 'skelf', - 'skell', - 'skelm', - 'skelp', - 'skene', - 'skens', - 'skeos', - 'skeps', - 'skers', - 'skets', - 'skews', - 'skids', - 'skied', - 'skies', - 'skiey', - 'skimo', - 'skims', - 'skink', - 'skins', - 'skint', - 'skios', - 'skips', - 'skirl', - 'skirr', - 'skite', - 'skits', - 'skive', - 'skivy', - 'sklim', - 'skoal', - 'skody', - 'skoff', - 
'skogs', - 'skols', - 'skool', - 'skort', - 'skosh', - 'skran', - 'skrik', - 'skuas', - 'skugs', - 'skyed', - 'skyer', - 'skyey', - 'skyfs', - 'skyre', - 'skyrs', - 'skyte', - 'slabs', - 'slade', - 'slaes', - 'slags', - 'slaid', - 'slake', - 'slams', - 'slane', - 'slank', - 'slaps', - 'slart', - 'slats', - 'slaty', - 'slaws', - 'slays', - 'slebs', - 'sleds', - 'sleer', - 'slews', - 'sleys', - 'slier', - 'slily', - 'slims', - 'slipe', - 'slips', - 'slipt', - 'slish', - 'slits', - 'slive', - 'sloan', - 'slobs', - 'sloes', - 'slogs', - 'sloid', - 'slojd', - 'slomo', - 'sloom', - 'sloot', - 'slops', - 'slopy', - 'slorm', - 'slots', - 'slove', - 'slows', - 'sloyd', - 'slubb', - 'slubs', - 'slued', - 'slues', - 'sluff', - 'slugs', - 'sluit', - 'slums', - 'slurb', - 'slurs', - 'sluse', - 'sluts', - 'slyer', - 'slype', - 'smaak', - 'smaik', - 'smalm', - 'smalt', - 'smarm', - 'smaze', - 'smeek', - 'smees', - 'smeik', - 'smeke', - 'smerk', - 'smews', - 'smirr', - 'smirs', - 'smits', - 'smogs', - 'smoko', - 'smolt', - 'smoor', - 'smoot', - 'smore', - 'smorg', - 'smout', - 'smowt', - 'smugs', - 'smurs', - 'smush', - 'smuts', - 'snabs', - 'snafu', - 'snags', - 'snaps', - 'snarf', - 'snark', - 'snars', - 'snary', - 'snash', - 'snath', - 'snaws', - 'snead', - 'sneap', - 'snebs', - 'sneck', - 'sneds', - 'sneed', - 'snees', - 'snell', - 'snibs', - 'snick', - 'snies', - 'snift', - 'snigs', - 'snips', - 'snipy', - 'snirt', - 'snits', - 'snobs', - 'snods', - 'snoek', - 'snoep', - 'snogs', - 'snoke', - 'snood', - 'snook', - 'snool', - 'snoot', - 'snots', - 'snowk', - 'snows', - 'snubs', - 'snugs', - 'snush', - 'snyes', - 'soaks', - 'soaps', - 'soare', - 'soars', - 'soave', - 'sobas', - 'socas', - 'soces', - 'socko', - 'socks', - 'socle', - 'sodas', - 'soddy', - 'sodic', - 'sodom', - 'sofar', - 'sofas', - 'softa', - 'softs', - 'softy', - 'soger', - 'sohur', - 'soils', - 'soily', - 'sojas', - 'sojus', - 'sokah', - 'soken', - 'sokes', - 'sokol', - 'solah', - 'solan', - 'solas', - 'solde', - 'soldi', - 'soldo', - 'solds', - 'soled', - 'solei', - 'soler', - 'soles', - 'solon', - 'solos', - 'solum', - 'solus', - 'soman', - 'somas', - 'sonce', - 'sonde', - 'sones', - 'songs', - 'sonly', - 'sonne', - 'sonny', - 'sonse', - 'sonsy', - 'sooey', - 'sooks', - 'sooky', - 'soole', - 'sools', - 'sooms', - 'soops', - 'soote', - 'soots', - 'sophs', - 'sophy', - 'sopor', - 'soppy', - 'sopra', - 'soral', - 'soras', - 'sorbo', - 'sorbs', - 'sorda', - 'sordo', - 'sords', - 'sored', - 'soree', - 'sorel', - 'sorer', - 'sores', - 'sorex', - 'sorgo', - 'sorns', - 'sorra', - 'sorta', - 'sorts', - 'sorus', - 'soths', - 'sotol', - 'souce', - 'souct', - 'sough', - 'souks', - 'souls', - 'soums', - 'soups', - 'soupy', - 'sours', - 'souse', - 'souts', - 'sowar', - 'sowce', - 'sowed', - 'sowff', - 'sowfs', - 'sowle', - 'sowls', - 'sowms', - 'sownd', - 'sowne', - 'sowps', - 'sowse', - 'sowth', - 'soyas', - 'soyle', - 'soyuz', - 'sozin', - 'spacy', - 'spado', - 'spaed', - 'spaer', - 'spaes', - 'spags', - 'spahi', - 'spail', - 'spain', - 'spait', - 'spake', - 'spald', - 'spale', - 'spall', - 'spalt', - 'spams', - 'spane', - 'spang', - 'spans', - 'spard', - 'spars', - 'spart', - 'spate', - 'spats', - 'spaul', - 'spawl', - 'spaws', - 'spayd', - 'spays', - 'spaza', - 'spazz', - 'speal', - 'spean', - 'speat', - 'specs', - 'spect', - 'speel', - 'speer', - 'speil', - 'speir', - 'speks', - 'speld', - 'spelk', - 'speos', - 'spets', - 'speug', - 'spews', - 'spewy', - 'spial', - 'spica', - 'spick', - 'spics', - 'spide', - 'spier', - 'spies', - 'spiff', - 
'spifs', - 'spiks', - 'spile', - 'spims', - 'spina', - 'spink', - 'spins', - 'spirt', - 'spiry', - 'spits', - 'spitz', - 'spivs', - 'splay', - 'splog', - 'spode', - 'spods', - 'spoom', - 'spoor', - 'spoot', - 'spork', - 'sposh', - 'spots', - 'sprad', - 'sprag', - 'sprat', - 'spred', - 'sprew', - 'sprit', - 'sprod', - 'sprog', - 'sprue', - 'sprug', - 'spuds', - 'spued', - 'spuer', - 'spues', - 'spugs', - 'spule', - 'spume', - 'spumy', - 'spurs', - 'sputa', - 'spyal', - 'spyre', - 'squab', - 'squaw', - 'squeg', - 'squid', - 'squit', - 'squiz', - 'stabs', - 'stade', - 'stags', - 'stagy', - 'staig', - 'stane', - 'stang', - 'staph', - 'staps', - 'starn', - 'starr', - 'stars', - 'stats', - 'staun', - 'staws', - 'stays', - 'stean', - 'stear', - 'stedd', - 'stede', - 'steds', - 'steek', - 'steem', - 'steen', - 'steil', - 'stela', - 'stele', - 'stell', - 'steme', - 'stems', - 'stend', - 'steno', - 'stens', - 'stent', - 'steps', - 'stept', - 'stere', - 'stets', - 'stews', - 'stewy', - 'steys', - 'stich', - 'stied', - 'sties', - 'stilb', - 'stile', - 'stime', - 'stims', - 'stimy', - 'stipa', - 'stipe', - 'stire', - 'stirk', - 'stirp', - 'stirs', - 'stive', - 'stivy', - 'stoae', - 'stoai', - 'stoas', - 'stoat', - 'stobs', - 'stoep', - 'stogy', - 'stoit', - 'stoln', - 'stoma', - 'stond', - 'stong', - 'stonk', - 'stonn', - 'stook', - 'stoor', - 'stope', - 'stops', - 'stopt', - 'stoss', - 'stots', - 'stott', - 'stoun', - 'stoup', - 'stour', - 'stown', - 'stowp', - 'stows', - 'strad', - 'strae', - 'strag', - 'strak', - 'strep', - 'strew', - 'stria', - 'strig', - 'strim', - 'strop', - 'strow', - 'stroy', - 'strum', - 'stubs', - 'stude', - 'studs', - 'stull', - 'stulm', - 'stumm', - 'stums', - 'stuns', - 'stupa', - 'stupe', - 'sture', - 'sturt', - 'styed', - 'styes', - 'styli', - 'stylo', - 'styme', - 'stymy', - 'styre', - 'styte', - 'subah', - 'subas', - 'subby', - 'suber', - 'subha', - 'succi', - 'sucks', - 'sucky', - 'sucre', - 'sudds', - 'sudor', - 'sudsy', - 'suede', - 'suent', - 'suers', - 'suete', - 'suets', - 'suety', - 'sugan', - 'sughs', - 'sugos', - 'suhur', - 'suids', - 'suint', - 'suits', - 'sujee', - 'sukhs', - 'sukuk', - 'sulci', - 'sulfa', - 'sulfo', - 'sulks', - 'sulph', - 'sulus', - 'sumis', - 'summa', - 'sumos', - 'sumph', - 'sumps', - 'sunis', - 'sunks', - 'sunna', - 'sunns', - 'sunup', - 'supes', - 'supra', - 'surah', - 'sural', - 'suras', - 'surat', - 'surds', - 'sured', - 'sures', - 'surfs', - 'surfy', - 'surgy', - 'surra', - 'sused', - 'suses', - 'susus', - 'sutor', - 'sutra', - 'sutta', - 'swabs', - 'swack', - 'swads', - 'swage', - 'swags', - 'swail', - 'swain', - 'swale', - 'swaly', - 'swamy', - 'swang', - 'swank', - 'swans', - 'swaps', - 'swapt', - 'sward', - 'sware', - 'swarf', - 'swart', - 'swats', - 'swayl', - 'sways', - 'sweal', - 'swede', - 'sweed', - 'sweel', - 'sweer', - 'swees', - 'sweir', - 'swelt', - 'swerf', - 'sweys', - 'swies', - 'swigs', - 'swile', - 'swims', - 'swink', - 'swipe', - 'swire', - 'swiss', - 'swith', - 'swits', - 'swive', - 'swizz', - 'swobs', - 'swole', - 'swoln', - 'swops', - 'swopt', - 'swots', - 'swoun', - 'sybbe', - 'sybil', - 'syboe', - 'sybow', - 'sycee', - 'syces', - 'sycon', - 'syens', - 'syker', - 'sykes', - 'sylis', - 'sylph', - 'sylva', - 'symar', - 'synch', - 'syncs', - 'synds', - 'syned', - 'synes', - 'synth', - 'syped', - 'sypes', - 'syphs', - 'syrah', - 'syren', - 'sysop', - 'sythe', - 'syver', - 'taals', - 'taata', - 'taber', - 'tabes', - 'tabid', - 'tabis', - 'tabla', - 'tabor', - 'tabun', - 'tabus', - 'tacan', - 'taces', - 'tacet', - 
'tache', - 'tacho', - 'tachs', - 'tacks', - 'tacos', - 'tacts', - 'taels', - 'tafia', - 'taggy', - 'tagma', - 'tahas', - 'tahrs', - 'taiga', - 'taigs', - 'taiko', - 'tails', - 'tains', - 'taira', - 'taish', - 'taits', - 'tajes', - 'takas', - 'takes', - 'takhi', - 'takin', - 'takis', - 'takky', - 'talak', - 'talaq', - 'talar', - 'talas', - 'talcs', - 'talcy', - 'talea', - 'taler', - 'tales', - 'talks', - 'talky', - 'talls', - 'talma', - 'talpa', - 'taluk', - 'talus', - 'tamal', - 'tamed', - 'tames', - 'tamin', - 'tamis', - 'tammy', - 'tamps', - 'tanas', - 'tanga', - 'tangi', - 'tangs', - 'tanhs', - 'tanka', - 'tanks', - 'tanky', - 'tanna', - 'tansy', - 'tanti', - 'tanto', - 'tanty', - 'tapas', - 'taped', - 'tapen', - 'tapes', - 'tapet', - 'tapis', - 'tappa', - 'tapus', - 'taras', - 'tardo', - 'tared', - 'tares', - 'targa', - 'targe', - 'tarns', - 'taroc', - 'tarok', - 'taros', - 'tarps', - 'tarre', - 'tarry', - 'tarsi', - 'tarts', - 'tarty', - 'tasar', - 'tased', - 'taser', - 'tases', - 'tasks', - 'tassa', - 'tasse', - 'tasso', - 'tatar', - 'tater', - 'tates', - 'taths', - 'tatie', - 'tatou', - 'tatts', - 'tatus', - 'taube', - 'tauld', - 'tauon', - 'taupe', - 'tauts', - 'tavah', - 'tavas', - 'taver', - 'tawai', - 'tawas', - 'tawed', - 'tawer', - 'tawie', - 'tawse', - 'tawts', - 'taxed', - 'taxer', - 'taxes', - 'taxis', - 'taxol', - 'taxon', - 'taxor', - 'taxus', - 'tayra', - 'tazza', - 'tazze', - 'teade', - 'teads', - 'teaed', - 'teaks', - 'teals', - 'teams', - 'tears', - 'teats', - 'teaze', - 'techs', - 'techy', - 'tecta', - 'teels', - 'teems', - 'teend', - 'teene', - 'teens', - 'teeny', - 'teers', - 'teffs', - 'teggs', - 'tegua', - 'tegus', - 'tehrs', - 'teiid', - 'teils', - 'teind', - 'teins', - 'telae', - 'telco', - 'teles', - 'telex', - 'telia', - 'telic', - 'tells', - 'telly', - 'teloi', - 'telos', - 'temed', - 'temes', - 'tempi', - 'temps', - 'tempt', - 'temse', - 'tench', - 'tends', - 'tendu', - 'tenes', - 'tenge', - 'tenia', - 'tenne', - 'tenno', - 'tenny', - 'tenon', - 'tents', - 'tenty', - 'tenue', - 'tepal', - 'tepas', - 'tepoy', - 'terai', - 'teras', - 'terce', - 'terek', - 'teres', - 'terfe', - 'terfs', - 'terga', - 'terms', - 'terne', - 'terns', - 'terry', - 'terts', - 'tesla', - 'testa', - 'teste', - 'tests', - 'tetes', - 'teths', - 'tetra', - 'tetri', - 'teuch', - 'teugh', - 'tewed', - 'tewel', - 'tewit', - 'texas', - 'texes', - 'texts', - 'thack', - 'thagi', - 'thaim', - 'thale', - 'thali', - 'thana', - 'thane', - 'thang', - 'thans', - 'thanx', - 'tharm', - 'thars', - 'thaws', - 'thawy', - 'thebe', - 'theca', - 'theed', - 'theek', - 'thees', - 'thegn', - 'theic', - 'thein', - 'thelf', - 'thema', - 'thens', - 'theow', - 'therm', - 'thesp', - 'thete', - 'thews', - 'thewy', - 'thigs', - 'thilk', - 'thill', - 'thine', - 'thins', - 'thiol', - 'thirl', - 'thoft', - 'thole', - 'tholi', - 'thoro', - 'thorp', - 'thous', - 'thowl', - 'thrae', - 'thraw', - 'thrid', - 'thrip', - 'throe', - 'thuds', - 'thugs', - 'thuja', - 'thunk', - 'thurl', - 'thuya', - 'thymi', - 'thymy', - 'tians', - 'tiars', - 'tical', - 'ticca', - 'ticed', - 'tices', - 'tichy', - 'ticks', - 'ticky', - 'tiddy', - 'tided', - 'tides', - 'tiers', - 'tiffs', - 'tifos', - 'tifts', - 'tiges', - 'tigon', - 'tikas', - 'tikes', - 'tikis', - 'tikka', - 'tilak', - 'tiled', - 'tiler', - 'tiles', - 'tills', - 'tilly', - 'tilth', - 'tilts', - 'timbo', - 'timed', - 'times', - 'timon', - 'timps', - 'tinas', - 'tinct', - 'tinds', - 'tinea', - 'tined', - 'tines', - 'tinge', - 'tings', - 'tinks', - 'tinny', - 'tints', - 'tinty', - 
'tipis', - 'tippy', - 'tired', - 'tires', - 'tirls', - 'tiros', - 'tirrs', - 'titch', - 'titer', - 'titis', - 'titre', - 'titty', - 'titup', - 'tiyin', - 'tiyns', - 'tizes', - 'tizzy', - 'toads', - 'toady', - 'toaze', - 'tocks', - 'tocky', - 'tocos', - 'todde', - 'toeas', - 'toffs', - 'toffy', - 'tofts', - 'tofus', - 'togae', - 'togas', - 'toged', - 'toges', - 'togue', - 'tohos', - 'toile', - 'toils', - 'toing', - 'toise', - 'toits', - 'tokay', - 'toked', - 'toker', - 'tokes', - 'tokos', - 'tolan', - 'tolar', - 'tolas', - 'toled', - 'toles', - 'tolls', - 'tolly', - 'tolts', - 'tolus', - 'tolyl', - 'toman', - 'tombs', - 'tomes', - 'tomia', - 'tommy', - 'tomos', - 'tondi', - 'tondo', - 'toned', - 'toner', - 'tones', - 'toney', - 'tongs', - 'tonka', - 'tonks', - 'tonne', - 'tonus', - 'tools', - 'tooms', - 'toons', - 'toots', - 'toped', - 'topee', - 'topek', - 'toper', - 'topes', - 'tophe', - 'tophi', - 'tophs', - 'topis', - 'topoi', - 'topos', - 'toppy', - 'toque', - 'torah', - 'toran', - 'toras', - 'torcs', - 'tores', - 'toric', - 'torii', - 'toros', - 'torot', - 'torrs', - 'torse', - 'torsi', - 'torsk', - 'torta', - 'torte', - 'torts', - 'tosas', - 'tosed', - 'toses', - 'toshy', - 'tossy', - 'toted', - 'toter', - 'totes', - 'totty', - 'touks', - 'touns', - 'tours', - 'touse', - 'tousy', - 'touts', - 'touze', - 'touzy', - 'towed', - 'towie', - 'towns', - 'towny', - 'towse', - 'towsy', - 'towts', - 'towze', - 'towzy', - 'toyed', - 'toyer', - 'toyon', - 'toyos', - 'tozed', - 'tozes', - 'tozie', - 'trabs', - 'trads', - 'tragi', - 'traik', - 'trams', - 'trank', - 'tranq', - 'trans', - 'trant', - 'trape', - 'traps', - 'trapt', - 'trass', - 'trats', - 'tratt', - 'trave', - 'trayf', - 'trays', - 'treck', - 'treed', - 'treen', - 'trees', - 'trefa', - 'treif', - 'treks', - 'trema', - 'trems', - 'tress', - 'trest', - 'trets', - 'trews', - 'treyf', - 'treys', - 'triac', - 'tride', - 'trier', - 'tries', - 'triff', - 'trigo', - 'trigs', - 'trike', - 'trild', - 'trill', - 'trims', - 'trine', - 'trins', - 'triol', - 'trior', - 'trios', - 'trips', - 'tripy', - 'trist', - 'troad', - 'troak', - 'troat', - 'trock', - 'trode', - 'trods', - 'trogs', - 'trois', - 'troke', - 'tromp', - 'trona', - 'tronc', - 'trone', - 'tronk', - 'trons', - 'trooz', - 'troth', - 'trots', - 'trows', - 'troys', - 'trued', - 'trues', - 'trugo', - 'trugs', - 'trull', - 'tryer', - 'tryke', - 'tryma', - 'tryps', - 'tsade', - 'tsadi', - 'tsars', - 'tsked', - 'tsuba', - 'tsubo', - 'tuans', - 'tuart', - 'tuath', - 'tubae', - 'tubar', - 'tubas', - 'tubby', - 'tubed', - 'tubes', - 'tucks', - 'tufas', - 'tuffe', - 'tuffs', - 'tufts', - 'tufty', - 'tugra', - 'tuile', - 'tuina', - 'tuism', - 'tuktu', - 'tules', - 'tulpa', - 'tulsi', - 'tumid', - 'tummy', - 'tumps', - 'tumpy', - 'tunas', - 'tunds', - 'tuned', - 'tuner', - 'tunes', - 'tungs', - 'tunny', - 'tupek', - 'tupik', - 'tuple', - 'tuque', - 'turds', - 'turfs', - 'turfy', - 'turks', - 'turme', - 'turms', - 'turns', - 'turnt', - 'turps', - 'turrs', - 'tushy', - 'tusks', - 'tusky', - 'tutee', - 'tutti', - 'tutty', - 'tutus', - 'tuxes', - 'tuyer', - 'twaes', - 'twain', - 'twals', - 'twank', - 'twats', - 'tways', - 'tweel', - 'tween', - 'tweep', - 'tweer', - 'twerk', - 'twerp', - 'twier', - 'twigs', - 'twill', - 'twilt', - 'twink', - 'twins', - 'twiny', - 'twire', - 'twirp', - 'twite', - 'twits', - 'twoer', - 'twyer', - 'tyees', - 'tyers', - 'tyiyn', - 'tykes', - 'tyler', - 'tymps', - 'tynde', - 'tyned', - 'tynes', - 'typal', - 'typed', - 'types', - 'typey', - 'typic', - 'typos', - 'typps', - 
'typto', - 'tyran', - 'tyred', - 'tyres', - 'tyros', - 'tythe', - 'tzars', - 'udals', - 'udons', - 'ugali', - 'ugged', - 'uhlan', - 'uhuru', - 'ukase', - 'ulama', - 'ulans', - 'ulema', - 'ulmin', - 'ulnad', - 'ulnae', - 'ulnar', - 'ulnas', - 'ulpan', - 'ulvas', - 'ulyie', - 'ulzie', - 'umami', - 'umbel', - 'umber', - 'umble', - 'umbos', - 'umbre', - 'umiac', - 'umiak', - 'umiaq', - 'ummah', - 'ummas', - 'ummed', - 'umped', - 'umphs', - 'umpie', - 'umpty', - 'umrah', - 'umras', - 'unais', - 'unapt', - 'unarm', - 'unary', - 'unaus', - 'unbag', - 'unban', - 'unbar', - 'unbed', - 'unbid', - 'unbox', - 'uncap', - 'unces', - 'uncia', - 'uncos', - 'uncoy', - 'uncus', - 'undam', - 'undee', - 'undos', - 'undug', - 'uneth', - 'unfix', - 'ungag', - 'unget', - 'ungod', - 'ungot', - 'ungum', - 'unhat', - 'unhip', - 'unica', - 'units', - 'unjam', - 'unked', - 'unket', - 'unkid', - 'unlaw', - 'unlay', - 'unled', - 'unlet', - 'unlid', - 'unman', - 'unmew', - 'unmix', - 'unpay', - 'unpeg', - 'unpen', - 'unpin', - 'unred', - 'unrid', - 'unrig', - 'unrip', - 'unsaw', - 'unsay', - 'unsee', - 'unsew', - 'unsex', - 'unsod', - 'untax', - 'untin', - 'unwet', - 'unwit', - 'unwon', - 'upbow', - 'upbye', - 'updos', - 'updry', - 'upend', - 'upjet', - 'uplay', - 'upled', - 'uplit', - 'upped', - 'upran', - 'uprun', - 'upsee', - 'upsey', - 'uptak', - 'upter', - 'uptie', - 'uraei', - 'urali', - 'uraos', - 'urare', - 'urari', - 'urase', - 'urate', - 'urbex', - 'urbia', - 'urdee', - 'ureal', - 'ureas', - 'uredo', - 'ureic', - 'urena', - 'urent', - 'urged', - 'urger', - 'urges', - 'urial', - 'urite', - 'urman', - 'urnal', - 'urned', - 'urped', - 'ursae', - 'ursid', - 'urson', - 'urubu', - 'urvas', - 'users', - 'usnea', - 'usque', - 'usure', - 'usury', - 'uteri', - 'uveal', - 'uveas', - 'uvula', - 'vacua', - 'vaded', - 'vades', - 'vagal', - 'vagus', - 'vails', - 'vaire', - 'vairs', - 'vairy', - 'vakas', - 'vakil', - 'vales', - 'valis', - 'valse', - 'vamps', - 'vampy', - 'vanda', - 'vaned', - 'vanes', - 'vangs', - 'vants', - 'vaped', - 'vaper', - 'vapes', - 'varan', - 'varas', - 'vardy', - 'varec', - 'vares', - 'varia', - 'varix', - 'varna', - 'varus', - 'varve', - 'vasal', - 'vases', - 'vasts', - 'vasty', - 'vatic', - 'vatus', - 'vauch', - 'vaute', - 'vauts', - 'vawte', - 'vaxes', - 'veale', - 'veals', - 'vealy', - 'veena', - 'veeps', - 'veers', - 'veery', - 'vegas', - 'veges', - 'vegie', - 'vegos', - 'vehme', - 'veils', - 'veily', - 'veins', - 'veiny', - 'velar', - 'velds', - 'veldt', - 'veles', - 'vells', - 'velum', - 'venae', - 'venal', - 'vends', - 'vendu', - 'veney', - 'venge', - 'venin', - 'vents', - 'venus', - 'verbs', - 'verra', - 'verry', - 'verst', - 'verts', - 'vertu', - 'vespa', - 'vesta', - 'vests', - 'vetch', - 'vexed', - 'vexer', - 'vexes', - 'vexil', - 'vezir', - 'vials', - 'viand', - 'vibes', - 'vibex', - 'vibey', - 'viced', - 'vices', - 'vichy', - 'viers', - 'views', - 'viewy', - 'vifda', - 'viffs', - 'vigas', - 'vigia', - 'vilde', - 'viler', - 'villi', - 'vills', - 'vimen', - 'vinal', - 'vinas', - 'vinca', - 'vined', - 'viner', - 'vines', - 'vinew', - 'vinic', - 'vinos', - 'vints', - 'viold', - 'viols', - 'vired', - 'vireo', - 'vires', - 'virga', - 'virge', - 'virid', - 'virls', - 'virtu', - 'visas', - 'vised', - 'vises', - 'visie', - 'visne', - 'vison', - 'visto', - 'vitae', - 'vitas', - 'vitex', - 'vitro', - 'vitta', - 'vivas', - 'vivat', - 'vivda', - 'viver', - 'vives', - 'vizir', - 'vizor', - 'vleis', - 'vlies', - 'vlogs', - 'voars', - 'vocab', - 'voces', - 'voddy', - 'vodou', - 'vodun', - 'voema', - 
'vogie', - 'voids', - 'voile', - 'voips', - 'volae', - 'volar', - 'voled', - 'voles', - 'volet', - 'volks', - 'volta', - 'volte', - 'volti', - 'volts', - 'volva', - 'volve', - 'vomer', - 'voted', - 'votes', - 'vouge', - 'voulu', - 'vowed', - 'vower', - 'voxel', - 'vozhd', - 'vraic', - 'vrils', - 'vroom', - 'vrous', - 'vrouw', - 'vrows', - 'vuggs', - 'vuggy', - 'vughs', - 'vughy', - 'vulgo', - 'vulns', - 'vulva', - 'vutty', - 'waacs', - 'wacke', - 'wacko', - 'wacks', - 'wadds', - 'waddy', - 'waded', - 'wader', - 'wades', - 'wadge', - 'wadis', - 'wadts', - 'waffs', - 'wafts', - 'waged', - 'wages', - 'wagga', - 'wagyu', - 'wahoo', - 'waide', - 'waifs', - 'waift', - 'wails', - 'wains', - 'wairs', - 'waite', - 'waits', - 'wakas', - 'waked', - 'waken', - 'waker', - 'wakes', - 'wakfs', - 'waldo', - 'walds', - 'waled', - 'waler', - 'wales', - 'walie', - 'walis', - 'walks', - 'walla', - 'walls', - 'wally', - 'walty', - 'wamed', - 'wames', - 'wamus', - 'wands', - 'waned', - 'wanes', - 'waney', - 'wangs', - 'wanks', - 'wanky', - 'wanle', - 'wanly', - 'wanna', - 'wants', - 'wanty', - 'wanze', - 'waqfs', - 'warbs', - 'warby', - 'wards', - 'wared', - 'wares', - 'warez', - 'warks', - 'warms', - 'warns', - 'warps', - 'warre', - 'warst', - 'warts', - 'wases', - 'washy', - 'wasms', - 'wasps', - 'waspy', - 'wasts', - 'watap', - 'watts', - 'wauff', - 'waugh', - 'wauks', - 'waulk', - 'wauls', - 'waurs', - 'waved', - 'waves', - 'wavey', - 'wawas', - 'wawes', - 'wawls', - 'waxed', - 'waxer', - 'waxes', - 'wayed', - 'wazir', - 'wazoo', - 'weald', - 'weals', - 'weamb', - 'weans', - 'wears', - 'webby', - 'weber', - 'wecht', - 'wedel', - 'wedgy', - 'weeds', - 'weeke', - 'weeks', - 'weels', - 'weems', - 'weens', - 'weeny', - 'weeps', - 'weepy', - 'weest', - 'weete', - 'weets', - 'wefte', - 'wefts', - 'weids', - 'weils', - 'weirs', - 'weise', - 'weize', - 'wekas', - 'welds', - 'welke', - 'welks', - 'welkt', - 'wells', - 'welly', - 'welts', - 'wembs', - 'wends', - 'wenge', - 'wenny', - 'wents', - 'weros', - 'wersh', - 'wests', - 'wetas', - 'wetly', - 'wexed', - 'wexes', - 'whamo', - 'whams', - 'whang', - 'whaps', - 'whare', - 'whata', - 'whats', - 'whaup', - 'whaur', - 'wheal', - 'whear', - 'wheen', - 'wheep', - 'wheft', - 'whelk', - 'whelm', - 'whens', - 'whets', - 'whews', - 'wheys', - 'whids', - 'whift', - 'whigs', - 'whilk', - 'whims', - 'whins', - 'whios', - 'whips', - 'whipt', - 'whirr', - 'whirs', - 'whish', - 'whiss', - 'whist', - 'whits', - 'whity', - 'whizz', - 'whomp', - 'whoof', - 'whoot', - 'whops', - 'whore', - 'whorl', - 'whort', - 'whoso', - 'whows', - 'whump', - 'whups', - 'whyda', - 'wicca', - 'wicks', - 'wicky', - 'widdy', - 'wides', - 'wiels', - 'wifed', - 'wifes', - 'wifey', - 'wifie', - 'wifty', - 'wigan', - 'wigga', - 'wiggy', - 'wikis', - 'wilco', - 'wilds', - 'wiled', - 'wiles', - 'wilga', - 'wilis', - 'wilja', - 'wills', - 'wilts', - 'wimps', - 'winds', - 'wined', - 'wines', - 'winey', - 'winge', - 'wings', - 'wingy', - 'winks', - 'winna', - 'winns', - 'winos', - 'winze', - 'wiped', - 'wiper', - 'wipes', - 'wired', - 'wirer', - 'wires', - 'wirra', - 'wised', - 'wises', - 'wisha', - 'wisht', - 'wisps', - 'wists', - 'witan', - 'wited', - 'wites', - 'withe', - 'withs', - 'withy', - 'wived', - 'wiver', - 'wives', - 'wizen', - 'wizes', - 'woads', - 'woald', - 'wocks', - 'wodge', - 'woful', - 'wojus', - 'woker', - 'wokka', - 'wolds', - 'wolfs', - 'wolly', - 'wolve', - 'wombs', - 'womby', - 'womyn', - 'wonga', - 'wongi', - 'wonks', - 'wonky', - 'wonts', - 'woods', - 'wooed', - 'woofs', - 'woofy', - 
'woold', - 'wools', - 'woons', - 'woops', - 'woopy', - 'woose', - 'woosh', - 'wootz', - 'words', - 'works', - 'worms', - 'wormy', - 'worts', - 'wowed', - 'wowee', - 'woxen', - 'wrang', - 'wraps', - 'wrapt', - 'wrast', - 'wrate', - 'wrawl', - 'wrens', - 'wrick', - 'wried', - 'wrier', - 'wries', - 'writs', - 'wroke', - 'wroot', - 'wroth', - 'wryer', - 'wuddy', - 'wudus', - 'wulls', - 'wurst', - 'wuses', - 'wushu', - 'wussy', - 'wuxia', - 'wyled', - 'wyles', - 'wynds', - 'wynns', - 'wyted', - 'wytes', - 'xebec', - 'xenia', - 'xenic', - 'xenon', - 'xeric', - 'xerox', - 'xerus', - 'xoana', - 'xrays', - 'xylan', - 'xylem', - 'xylic', - 'xylol', - 'xylyl', - 'xysti', - 'xysts', - 'yaars', - 'yabas', - 'yabba', - 'yabby', - 'yacca', - 'yacka', - 'yacks', - 'yaffs', - 'yager', - 'yages', - 'yagis', - 'yahoo', - 'yaird', - 'yakka', - 'yakow', - 'yales', - 'yamen', - 'yampy', - 'yamun', - 'yangs', - 'yanks', - 'yapok', - 'yapon', - 'yapps', - 'yappy', - 'yarak', - 'yarco', - 'yards', - 'yarer', - 'yarfa', - 'yarks', - 'yarns', - 'yarrs', - 'yarta', - 'yarto', - 'yates', - 'yauds', - 'yauld', - 'yaups', - 'yawed', - 'yawey', - 'yawls', - 'yawns', - 'yawny', - 'yawps', - 'ybore', - 'yclad', - 'ycled', - 'ycond', - 'ydrad', - 'ydred', - 'yeads', - 'yeahs', - 'yealm', - 'yeans', - 'yeard', - 'years', - 'yecch', - 'yechs', - 'yechy', - 'yedes', - 'yeeds', - 'yeesh', - 'yeggs', - 'yelks', - 'yells', - 'yelms', - 'yelps', - 'yelts', - 'yenta', - 'yente', - 'yerba', - 'yerds', - 'yerks', - 'yeses', - 'yesks', - 'yests', - 'yesty', - 'yetis', - 'yetts', - 'yeuks', - 'yeuky', - 'yeven', - 'yeves', - 'yewen', - 'yexed', - 'yexes', - 'yfere', - 'yiked', - 'yikes', - 'yills', - 'yince', - 'yipes', - 'yippy', - 'yirds', - 'yirks', - 'yirrs', - 'yirth', - 'yites', - 'yitie', - 'ylems', - 'ylike', - 'ylkes', - 'ymolt', - 'ympes', - 'yobbo', - 'yobby', - 'yocks', - 'yodel', - 'yodhs', - 'yodle', - 'yogas', - 'yogee', - 'yoghs', - 'yogic', - 'yogin', - 'yogis', - 'yoick', - 'yojan', - 'yoked', - 'yokel', - 'yoker', - 'yokes', - 'yokul', - 'yolks', - 'yolky', - 'yomim', - 'yomps', - 'yonic', - 'yonis', - 'yonks', - 'yoofs', - 'yoops', - 'yores', - 'yorks', - 'yorps', - 'youks', - 'yourn', - 'yours', - 'yourt', - 'youse', - 'yowed', - 'yowes', - 'yowie', - 'yowls', - 'yowza', - 'yrapt', - 'yrent', - 'yrivd', - 'yrneh', - 'ysame', - 'ytost', - 'yuans', - 'yucas', - 'yucca', - 'yucch', - 'yucko', - 'yucks', - 'yucky', - 'yufts', - 'yugas', - 'yuked', - 'yukes', - 'yukky', - 'yukos', - 'yulan', - 'yules', - 'yummo', - 'yummy', - 'yumps', - 'yupon', - 'yuppy', - 'yurta', - 'yurts', - 'yuzus', - 'zabra', - 'zacks', - 'zaida', - 'zaidy', - 'zaire', - 'zakat', - 'zaman', - 'zambo', - 'zamia', - 'zanja', - 'zante', - 'zanza', - 'zanze', - 'zappy', - 'zarfs', - 'zaris', - 'zatis', - 'zaxes', - 'zayin', - 'zazen', - 'zeals', - 'zebec', - 'zebub', - 'zebus', - 'zedas', - 'zeins', - 'zendo', - 'zerda', - 'zerks', - 'zeros', - 'zests', - 'zetas', - 'zexes', - 'zezes', - 'zhomo', - 'zibet', - 'ziffs', - 'zigan', - 'zilas', - 'zilch', - 'zilla', - 'zills', - 'zimbi', - 'zimbs', - 'zinco', - 'zincs', - 'zincy', - 'zineb', - 'zines', - 'zings', - 'zingy', - 'zinke', - 'zinky', - 'zippo', - 'zippy', - 'ziram', - 'zitis', - 'zizel', - 'zizit', - 'zlote', - 'zloty', - 'zoaea', - 'zobos', - 'zobus', - 'zocco', - 'zoeae', - 'zoeal', - 'zoeas', - 'zoism', - 'zoist', - 'zombi', - 'zonae', - 'zonda', - 'zoned', - 'zoner', - 'zones', - 'zonks', - 'zooea', - 'zooey', - 'zooid', - 'zooks', - 'zooms', - 'zoons', - 'zooty', - 'zoppa', - 'zoppo', - 
'zoril', - 'zoris', - 'zorro', - 'zouks', - 'zowee', - 'zowie', - 'zulus', - 'zupan', - 'zupas', - 'zuppa', - 'zurfs', - 'zuzim', - 'zygal', - 'zygon', - 'zymes', - 'zymic' -]); diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Vista Win7 Style Builder Full Version.md b/spaces/rorallitri/biomedical-language-models/logs/Download Vista Win7 Style Builder Full Version.md deleted file mode 100644 index d254e9a81d808ed641afceec645a08d70ced494f..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Download Vista Win7 Style Builder Full Version.md +++ /dev/null @@ -1,34 +0,0 @@ -

    Download Vista Win7 Style Builder Full Version


Download Zip https://tinurll.com/2uznVf



- -GIMP 2.6.4 Release Notes - The GIMP Development Team - -in Preview: support for the latest FFmpeg libav* libraries for playback and encoding; a new GEGL backend for XMP metadata; support for S3TC encoding in -bicubic export mode; improved tiling and GEGL memory usage; improved patch handling and detecting of the installed - -What is GIMP? - -Image editing is fast becoming a multi-app affair. If you’re more of a designer than a traditional photo editor, you’re unlikely to want to settle for what comes bundled with your operating system. - - https://tinurll.com/2uznRW



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/rorallitri/biomedical-language-models/logs/English Pirati Del Cielo Free Download.md b/spaces/rorallitri/biomedical-language-models/logs/English Pirati Del Cielo Free Download.md deleted file mode 100644 index b1de553875c30f8b2a43aa669b7c48db7c747e30..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/English Pirati Del Cielo Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

English Pirati del cielo free download


    Download File ––– https://tinurll.com/2uzouk



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/rorallitri/biomedical-language-models/logs/K To 12 Pc Hardware Servicing Learning Module Free Download.md b/spaces/rorallitri/biomedical-language-models/logs/K To 12 Pc Hardware Servicing Learning Module Free Download.md deleted file mode 100644 index f1013112feed0aaa87f6631026ae49290417a1b9..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/K To 12 Pc Hardware Servicing Learning Module Free Download.md +++ /dev/null @@ -1,17 +0,0 @@ - -

    How to Download K to 12 PC Hardware Servicing Learning Module for Free

    -

    If you are a student or a teacher of Information and Communications Technology (ICT) in the Philippines, you might be looking for a learning module on PC hardware servicing. PC hardware servicing is the skill of doing repairs and maintenance on the physical components of a computer and its peripherals, such as fans, hard drives, keyboards and printers[^2^]. It is an important skill to have in today's digital world.

    -

    k to 12 pc hardware servicing learning module free download


    DOWNLOAD »»» https://tinurll.com/2uznlC



    -

    Fortunately, there is a free learning module available for you to download. It is part of the K to 12 Basic Education Curriculum Technology and Livelihood Education (TLE) program of the Department of Education (DepEd). The K to 12 curriculum aims to provide learners with sufficient knowledge and skills to prepare them for life and work in the 21st century[^3^]. The ICT Computer Hardware Servicing learning module covers topics such as safety practices, tools and equipment, troubleshooting techniques, preventive maintenance and upgrading of computer systems.

    -

    To download the learning module, you can visit the website of Listph.com[^1^], which provides various educational resources for Filipino learners. You can find the link to the learning module under the Exploratory Course: Grade 7 and 8 section. The learning module is in PDF format and has 128 pages. You can save it on your computer or print it out for your convenience.

    -

    By downloading this learning module, you can enhance your knowledge and skills in PC hardware servicing. You can also use it as a reference or a guide for your studies or teaching. We hope you find this learning module useful and informative.

    -

    - -

    PC hardware servicing is not only a useful skill for ICT students and teachers, but also for anyone who owns or uses a computer. By learning how to service your own computer, you can save money on repairs, improve its performance and extend its lifespan. You can also prevent common problems such as overheating, slow booting, blue screen errors and data loss.

    -

    The K to 12 PC Hardware Servicing learning module will teach you the basics of PC hardware servicing, such as identifying the parts and functions of a computer system, assembling and disassembling a computer, installing and configuring operating systems and applications, and testing and evaluating computer performance. You will also learn how to apply safety measures and ethical standards when working with computers.

    -

    The learning module is designed to be interactive and engaging, with activities, exercises, quizzes and projects that will help you apply what you have learned. You will also find tips, reminders, examples and illustrations that will make the learning process easier and more fun. The learning module is aligned with the K to 12 curriculum standards and competencies, so you can be assured that you are getting quality education.

    - -

    If you want to learn more about PC hardware servicing, you can also check out other online resources that are available for free. For example, you can watch video tutorials on YouTube, read blogs and articles on various websites, or join online forums and communities where you can ask questions and share your experiences. You can also enroll in online courses or certifications that will enhance your skills and credentials.

    -

    PC hardware servicing is a valuable skill that can help you in your personal and professional life. By downloading the K to 12 PC Hardware Servicing learning module, you can start your journey of becoming a competent and confident PC hardware technician. You can also share this learning module with your friends, classmates or colleagues who might be interested in learning PC hardware servicing. Together, you can make the most of this free and accessible educational resource.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Khiladi 786 Movie English Subtitle Download.md b/spaces/rorallitri/biomedical-language-models/logs/Khiladi 786 Movie English Subtitle Download.md deleted file mode 100644 index c193c6d0221ec7b92083a972ae9ca7ab2df26bae..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Khiladi 786 Movie English Subtitle Download.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

To download our subtitles, install the Chrome extension; click on

    1. "Add to Chrome"
    2. "Add Extension"

If you install our extension, you will remove all ads and waiting time on this website.

Thank you!

    -

To download our subtitles, install the Firefox add-on; click on

    1. "Add to Firefox"
    2. "Add"

If you install our extension, you will remove all ads and waiting time on this website.

Thank you!

    -

    Khiladi 786 Movie English Subtitle Download


    Download ===> https://tinurll.com/2uzmfz



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/rosenthal/chess/chessfenbot/helper_image_loading.py b/spaces/rosenthal/chess/chessfenbot/helper_image_loading.py deleted file mode 100644 index 1ba319bb7e4bf9d349912fedf0f54261f3068714..0000000000000000000000000000000000000000 --- a/spaces/rosenthal/chess/chessfenbot/helper_image_loading.py +++ /dev/null @@ -1,109 +0,0 @@ -import numpy as np - -# Imports for visualization -import PIL.Image -from io import BytesIO -try: - # Python 3 - from urllib.request import urlopen, Request - from urllib.parse import quote -except ImportError: - # Python 2 - from urllib2 import urlopen, Request - from urllib2 import quote - - -# Imports for pulling metadata from imgur url -import requests -from bs4 import BeautifulSoup - -# All images are returned as PIL images, not numpy arrays -def loadImageGrayscale(img_file): - """Load image from file, convert to grayscale float32 numpy array""" - img = PIL.Image.open(img_file) - - # Convert to grayscale and return - return img.convert("L") - -def loadImageFromURL(url, max_size_bytes=4000000): - """Load image from url. - - If the url has more data than max_size_bytes, fail out - Try and update with metadata url link if an imgur link""" - - # If imgur try to load from metadata - url = tryUpdateImgurURL(url) - - # Try loading image from url directly - try: - req = Request(url, headers={'User-Agent' : "TensorFlow Chessbot"}) - con = urlopen(req) - # Load up to max_size_bytes of data from url - data = con.read(max_size_bytes) - # If there is more, image is too big, skip - if len(con.read(1)) != 0: - print("Skipping, url data larger than %d bytes" % max_size_bytes) - return None, url - - # Process into PIL image - img = PIL.Image.open(BytesIO(data)) - # Return PIL image and url used - return img, url - except IOError as e: - # Return None on failure to load image from url - return None, url - -def tryUpdateImgurURL(url): - """Try to get actual image url from imgur metadata""" - if 'imgur' not in url: # Only attempt on urls that have imgur in it - return url - - soup = BeautifulSoup(requests.get(url).content, "lxml") - - # Get metadata tags - meta = soup.find_all('meta') - # Get the specific tag, ex. 
- # - tags = list(filter(lambda tag: 'name' in tag.attrs and tag.attrs['name'] == "twitter:image", meta)) - - if tags: - # Replace url with metadata url - url = tags[0]['content'] - - return url - -def loadImageFromPath(img_path): - """Load PIL image from image filepath, keep as color""" - return PIL.Image.open(open(img_path,'rb')) - - -def resizeAsNeeded(img, max_size=(2000,2000), max_fail_size=(2000,2000)): - if not PIL.Image.isImageType(img): - img = PIL.Image.fromarray(img) # Convert to PIL Image if not already - - # If image is larger than fail size, don't try resizing and give up - if img.size[0] > max_fail_size[0] or img.size[1] > max_fail_size[1]: - return None - - """Resize if image larger than max size""" - if img.size[0] > max_size[0] or img.size[1] > max_size[1]: - print("Image too big (%d x %d)" % (img.size[0], img.size[1])) - new_size = np.min(max_size) # px - if img.size[0] > img.size[1]: - # resize by width to new limit - ratio = np.float(new_size) / img.size[0] - else: - # resize by height - ratio = np.float(new_size) / img.size[1] - print("Reducing by factor of %.2g" % (1./ratio)) - new_size = (np.array(img.size) * ratio).astype(int) - print("New size: (%d x %d)" % (new_size[0], new_size[1])) - img = img.resize(new_size, PIL.Image.BILINEAR) - return img - -def getVisualizeLink(corners, url): - """Return online link to visualize found corners for url""" - encoded_url = quote(url, safe='') - - return ("http://tetration.xyz/tensorflow_chessbot/overlay_chessboard.html?%d,%d,%d,%d,%s" % - (corners[0], corners[1], corners[2], corners[3], encoded_url)) \ No newline at end of file diff --git a/spaces/safi842/FashionGen/netdissect/upsegmodel/prroi_pool/build.py b/spaces/safi842/FashionGen/netdissect/upsegmodel/prroi_pool/build.py deleted file mode 100644 index b198790817a2d11d65d6211b011f9408d9d34270..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/netdissect/upsegmodel/prroi_pool/build.py +++ /dev/null @@ -1,50 +0,0 @@ -#! /usr/bin/env python3 -# -*- coding: utf-8 -*- -# File : build.py -# Author : Jiayuan Mao, Tete Xiao -# Email : maojiayuan@gmail.com, jasonhsiao97@gmail.com -# Date : 07/13/2018 -# -# This file is part of PreciseRoIPooling. -# Distributed under terms of the MIT license. -# Copyright (c) 2017 Megvii Technology Limited. - -import os -import torch - -from torch.utils.ffi import create_extension - -headers = [] -sources = [] -defines = [] -extra_objects = [] -with_cuda = False - -if torch.cuda.is_available(): - with_cuda = True - - headers+= ['src/prroi_pooling_gpu.h'] - sources += ['src/prroi_pooling_gpu.c'] - defines += [('WITH_CUDA', None)] - - this_file = os.path.dirname(os.path.realpath(__file__)) - extra_objects_cuda = ['src/prroi_pooling_gpu_impl.cu.o'] - extra_objects_cuda = [os.path.join(this_file, fname) for fname in extra_objects_cuda] - extra_objects.extend(extra_objects_cuda) -else: - # TODO(Jiayuan Mao @ 07/13): remove this restriction after we support the cpu implementation. 
- raise NotImplementedError('Precise RoI Pooling only supports GPU (cuda) implememtations.') - -ffi = create_extension( - '_prroi_pooling', - headers=headers, - sources=sources, - define_macros=defines, - relative_to=__file__, - with_cuda=with_cuda, - extra_objects=extra_objects -) - -if __name__ == '__main__': - ffi.build() - diff --git a/spaces/sagu7/sagu7-dating-avatar-model/README.md b/spaces/sagu7/sagu7-dating-avatar-model/README.md deleted file mode 100644 index a66adcc7a2fffed723ed4dabe257bac1ee051a3c..0000000000000000000000000000000000000000 --- a/spaces/sagu7/sagu7-dating-avatar-model/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Sagu7 Dating Avatar Model -emoji: 📚 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sam-hq-team/sam-hq/sam-hq/segment_anything/utils/transforms.py b/spaces/sam-hq-team/sam-hq/sam-hq/segment_anything/utils/transforms.py deleted file mode 100644 index c08ba1e3db751f3a5483a003be38c69c2cf2df85..0000000000000000000000000000000000000000 --- a/spaces/sam-hq-team/sam-hq/sam-hq/segment_anything/utils/transforms.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from torch.nn import functional as F -from torchvision.transforms.functional import resize, to_pil_image # type: ignore - -from copy import deepcopy -from typing import Tuple - - -class ResizeLongestSide: - """ - Resizes images to the longest side 'target_length', as well as provides - methods for resizing coordinates and boxes. Provides methods for - transforming both numpy array and batched torch tensors. - """ - - def __init__(self, target_length: int) -> None: - self.target_length = target_length - - def apply_image(self, image: np.ndarray) -> np.ndarray: - """ - Expects a numpy array with shape HxWxC in uint8 format. - """ - target_size = self.get_preprocess_shape(image.shape[0], image.shape[1], self.target_length) - return np.array(resize(to_pil_image(image), target_size)) - - def apply_coords(self, coords: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array of length 2 in the final dimension. Requires the - original image size in (H, W) format. - """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape( - original_size[0], original_size[1], self.target_length - ) - coords = deepcopy(coords).astype(float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes(self, boxes: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array shape Bx4. Requires the original image size - in (H, W) format. - """ - boxes = self.apply_coords(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - def apply_image_torch(self, image: torch.Tensor) -> torch.Tensor: - """ - Expects batched images with shape BxCxHxW and float format. This - transformation may not exactly match apply_image. apply_image is - the transformation expected by the model. - """ - # Expects an image in BCHW format. May not exactly match apply_image. 
- target_size = self.get_preprocess_shape(image.shape[2], image.shape[3], self.target_length) - return F.interpolate( - image, target_size, mode="bilinear", align_corners=False, antialias=True - ) - - def apply_coords_torch( - self, coords: torch.Tensor, original_size: Tuple[int, ...] - ) -> torch.Tensor: - """ - Expects a torch tensor with length 2 in the last dimension. Requires the - original image size in (H, W) format. - """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape( - original_size[0], original_size[1], self.target_length - ) - coords = deepcopy(coords).to(torch.float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes_torch( - self, boxes: torch.Tensor, original_size: Tuple[int, ...] - ) -> torch.Tensor: - """ - Expects a torch tensor with shape Bx4. Requires the original image - size in (H, W) format. - """ - boxes = self.apply_coords_torch(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - @staticmethod - def get_preprocess_shape(oldh: int, oldw: int, long_side_length: int) -> Tuple[int, int]: - """ - Compute the output size given input size and target long side length. - """ - scale = long_side_length * 1.0 / max(oldh, oldw) - newh, neww = oldh * scale, oldw * scale - neww = int(neww + 0.5) - newh = int(newh + 0.5) - return (newh, neww) diff --git a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/priors/prior.py b/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/priors/prior.py deleted file mode 100644 index 64ef7ea7eeb8bf251a56e9dd5fac752ab46241b3..0000000000000000000000000000000000000000 --- a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/priors/prior.py +++ /dev/null @@ -1,12 +0,0 @@ -from torch.utils.data import DataLoader - - -class PriorDataLoader(DataLoader): - pass - # init accepts num_steps as first argument - - # has two attributes set on class or object level: - # num_features: int and - # num_outputs: int - # fuse_x_y: bool - # Optional: validate function that accepts a transformer model diff --git a/spaces/scedlatioru/img-to-music/example/CLAAS.Parts.Doc.v5.0.36.0.FULL.Version.rarl.md b/spaces/scedlatioru/img-to-music/example/CLAAS.Parts.Doc.v5.0.36.0.FULL.Version.rarl.md deleted file mode 100644 index 9f3d1ad69fe1c5a3be189df966a716b97a2b536b..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/CLAAS.Parts.Doc.v5.0.36.0.FULL.Version.rarl.md +++ /dev/null @@ -1,7 +0,0 @@ - -

It's not advisable to use the free version if you want all the features that the full version offers, as this may cause problems. Nonetheless, the free version does come with one or two more tools, allowing you to preview your files before archiving them. If you're using the full version and you find it inconvenient, you'll be able to convert it to the free version, which isn't difficult to do, as you can do it through the Menu option.

    -

    CLAAS.Parts.Doc.v5.0.36.0.FULL.Version.rarl


    Download File 🔗 https://gohhs.com/2uEyJs



    -

What you'll notice is that this software has been selected for the whole family to use. There's a free version available, with limited functionality, not all the features that the full version boasts of. Go for the full version if you value functionality, as it costs $29.90, but you can use the software without any issues until 1 June.
    Pros
    Supports all kinds of archives
    Smooth software interface
    Multi-platform compatible
    Cons
    No right click menu support

    -

The program is very good in giving the meaning of the parts used in a painting, as it is able to determine the signature of the painter. A good number of paintings in the Museum have detailed signatures of the artists. Fully automatic generation of an anti-aliased image from a non-AA font. Simple
You've designed your creative output in a commercial environment: it's time to take it to the next level.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Call Of Duty 4 - Modern Warfare Repack By R.G Catalyst [Monsters Keygen.md b/spaces/scedlatioru/img-to-music/example/Call Of Duty 4 - Modern Warfare Repack By R.G Catalyst [Monsters Keygen.md deleted file mode 100644 index 3f36f6e1b91dfafdc03c13779d78839c057377b3..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Call Of Duty 4 - Modern Warfare Repack By R.G Catalyst [Monsters Keygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Call of Duty 4 - Modern Warfare Repack By R.G Catalyst [Monsters keygen


Download https://gohhs.com/2uEAo2



    -
-Monster Energy, distributed by Coca-Cola Enterprises Ltd, is joining forces with Activision ... Key features and benefits: customizable speaker plates customize your Ear Force ... Download Call of Duty: Ghosts R.G Catalyst torrent, Call of Duty: Ghosts R.G repack, Call of ... The PC demo for Call of Duty 4: Modern Warfare is now available. 1fdad05405
    -
    -
    -

    diff --git a/spaces/scedlatioru/img-to-music/example/Osu Auto Aimbot Download.md b/spaces/scedlatioru/img-to-music/example/Osu Auto Aimbot Download.md deleted file mode 100644 index b0957932c272c98b5d7bcb0f86ae31b325bc73fe..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Osu Auto Aimbot Download.md +++ /dev/null @@ -1,14 +0,0 @@ -

    osu auto aimbot download


DOWNLOAD https://gohhs.com/2uEABu



    -
-Osu! Aim Assist (UNDETECTED) - Forum on hacking and cheating in other games. ... Download link for aimbot (updated, no password): ... aimbot sight for CS:GO No1 - Forum. -The site contains the most complete list of cheats and codes for the game ... -aimbot sight for CS:GO No1 ... Download link. -aimbot scope for CS:GO No1 ... -aimbot scope for CS:GO No1 - Forum. -The site contains the most complete list of cheats and codes for the game Counter-Strike: Global Offensive. -You can download trainers and saved games from us ... -Download aimbot for CS:GO for free and without viruses! -Cheat aimbot for CS:GO from ... 8a78ff9644
    -
    -
    -

    diff --git a/spaces/sczhou/ProPainter/RAFT/demo.py b/spaces/sczhou/ProPainter/RAFT/demo.py deleted file mode 100644 index 096963bdbb36aed3df673f131d6e044d8c6f95ea..0000000000000000000000000000000000000000 --- a/spaces/sczhou/ProPainter/RAFT/demo.py +++ /dev/null @@ -1,79 +0,0 @@ -import sys -import argparse -import os -import cv2 -import glob -import numpy as np -import torch -from PIL import Image - -from .raft import RAFT -from .utils import flow_viz -from .utils.utils import InputPadder - - - -DEVICE = 'cuda' - -def load_image(imfile): - img = np.array(Image.open(imfile)).astype(np.uint8) - img = torch.from_numpy(img).permute(2, 0, 1).float() - return img - - -def load_image_list(image_files): - images = [] - for imfile in sorted(image_files): - images.append(load_image(imfile)) - - images = torch.stack(images, dim=0) - images = images.to(DEVICE) - - padder = InputPadder(images.shape) - return padder.pad(images)[0] - - -def viz(img, flo): - img = img[0].permute(1,2,0).cpu().numpy() - flo = flo[0].permute(1,2,0).cpu().numpy() - - # map flow to rgb image - flo = flow_viz.flow_to_image(flo) - # img_flo = np.concatenate([img, flo], axis=0) - img_flo = flo - - cv2.imwrite('/home/chengao/test/flow.png', img_flo[:, :, [2,1,0]]) - # cv2.imshow('image', img_flo[:, :, [2,1,0]]/255.0) - # cv2.waitKey() - - -def demo(args): - model = torch.nn.DataParallel(RAFT(args)) - model.load_state_dict(torch.load(args.model)) - - model = model.module - model.to(DEVICE) - model.eval() - - with torch.no_grad(): - images = glob.glob(os.path.join(args.path, '*.png')) + \ - glob.glob(os.path.join(args.path, '*.jpg')) - - images = load_image_list(images) - for i in range(images.shape[0]-1): - image1 = images[i,None] - image2 = images[i+1,None] - - flow_low, flow_up = model(image1, image2, iters=20, test_mode=True) - viz(image1, flow_up) - - -def RAFT_infer(args): - model = torch.nn.DataParallel(RAFT(args)) - model.load_state_dict(torch.load(args.model)) - - model = model.module - model.to(DEVICE) - model.eval() - - return model diff --git a/spaces/seanghay/KLEA/attentions.py b/spaces/seanghay/KLEA/attentions.py deleted file mode 100644 index 55642494c391652b72a12542ca99d43e973770a6..0000000000000000000000000000000000000000 --- a/spaces/seanghay/KLEA/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F -import commons -import modules -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = 
x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - 
nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. 
- pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/segments-tobias/conex/espnet/asr/pytorch_backend/asr_init.py b/spaces/segments-tobias/conex/espnet/asr/pytorch_backend/asr_init.py deleted file mode 100644 index 5831abde090c17967d80cc0047d3c719ba1a0b51..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/asr/pytorch_backend/asr_init.py +++ /dev/null @@ -1,282 +0,0 @@ -"""Finetuning methods.""" - -import logging -import os -import torch - -from collections import OrderedDict - -from espnet.asr.asr_utils import get_model_conf -from espnet.asr.asr_utils import torch_load -from espnet.nets.asr_interface import ASRInterface -from espnet.nets.mt_interface import MTInterface -from espnet.nets.pytorch_backend.transducer.utils import custom_torch_load -from espnet.nets.tts_interface import TTSInterface -from espnet.utils.dynamic_import import dynamic_import - - -def freeze_modules(model, modules): - """Freeze model parameters according to modules list. - - Args: - model (torch.nn.Module): main model to update - modules (list): specified module list for freezing - - Return: - model (torch.nn.Module): updated model - model_params (filter): filtered model parameters - - """ - for mod, param in model.named_parameters(): - if any(mod.startswith(m) for m in modules): - logging.info(f"freezing {mod}, it will not be updated.") - param.requires_grad = False - - model_params = filter(lambda x: x.requires_grad, model.parameters()) - - return model, model_params - - -def transfer_verification(model_state_dict, partial_state_dict, modules): - """Verify tuples (key, shape) for input model modules match specified modules. 
- - Args: - model_state_dict (OrderedDict): the initial model state_dict - partial_state_dict (OrderedDict): the trained model state_dict - modules (list): specified module list for transfer - - Return: - (boolean): allow transfer - - """ - modules_model = [] - partial_modules = [] - - for key_p, value_p in partial_state_dict.items(): - if any(key_p.startswith(m) for m in modules): - partial_modules += [(key_p, value_p.shape)] - - for key_m, value_m in model_state_dict.items(): - if any(key_m.startswith(m) for m in modules): - modules_model += [(key_m, value_m.shape)] - - len_match = len(modules_model) == len(partial_modules) - - module_match = sorted(modules_model, key=lambda x: (x[0], x[1])) == sorted( - partial_modules, key=lambda x: (x[0], x[1]) - ) - - return len_match and module_match - - -def get_partial_state_dict(model_state_dict, modules): - """Create state_dict with specified modules matching input model modules. - - Note that get_partial_lm_state_dict is used if a LM specified. - - Args: - model_state_dict (OrderedDict): trained model state_dict - modules (list): specified module list for transfer - - Return: - new_state_dict (OrderedDict): the updated state_dict - - """ - new_state_dict = OrderedDict() - - for key, value in model_state_dict.items(): - if any(key.startswith(m) for m in modules): - new_state_dict[key] = value - - return new_state_dict - - -def get_lm_state_dict(lm_state_dict): - """Create compatible ASR decoder state dict from LM state dict. - - Args: - lm_state_dict (OrderedDict): pre-trained LM state_dict - - Return: - new_state_dict (OrderedDict): LM state_dict with updated keys - - """ - new_state_dict = OrderedDict() - - for key, value in list(lm_state_dict.items()): - if key == "predictor.embed.weight": - new_state_dict["dec.embed.weight"] = value - elif key.startswith("predictor.rnn."): - _split = key.split(".") - - new_key = "dec.decoder." + _split[2] + "." + _split[3] + "_l0" - new_state_dict[new_key] = value - - return new_state_dict - - -def filter_modules(model_state_dict, modules): - """Filter non-matched modules in module_state_dict. - - Args: - model_state_dict (OrderedDict): trained model state_dict - modules (list): specified module list for transfer - - Return: - new_mods (list): the update module list - - """ - new_mods = [] - incorrect_mods = [] - - mods_model = list(model_state_dict.keys()) - for mod in modules: - if any(key.startswith(mod) for key in mods_model): - new_mods += [mod] - else: - incorrect_mods += [mod] - - if incorrect_mods: - logging.warning( - "module(s) %s don't match or (partially match) " - "available modules in model.", - incorrect_mods, - ) - logging.warning("for information, the existing modules in model are:") - logging.warning("%s", mods_model) - - return new_mods - - -def load_trained_model(model_path, training=True): - """Load the trained model for recognition. 
- - Args: - model_path (str): Path to model.***.best - - """ - idim, odim, train_args = get_model_conf( - model_path, os.path.join(os.path.dirname(model_path), "model.json") - ) - - logging.warning("reading model parameters from " + model_path) - - if hasattr(train_args, "model_module"): - model_module = train_args.model_module - else: - model_module = "espnet.nets.pytorch_backend.e2e_asr:E2E" - # CTC Loss is not needed, default to builtin to prevent import errors - if hasattr(train_args, "ctc_type"): - train_args.ctc_type = "builtin" - - model_class = dynamic_import(model_module) - - if "transducer" in model_module: - model = model_class(idim, odim, train_args, training=training) - custom_torch_load(model_path, model, training=training) - else: - model = model_class(idim, odim, train_args) - torch_load(model_path, model) - - return model, train_args - - -def get_trained_model_state_dict(model_path): - """Extract the trained model state dict for pre-initialization. - - Args: - model_path (str): Path to model.***.best - - Return: - model.state_dict() (OrderedDict): the loaded model state_dict - (bool): Boolean defining whether the model is an LM - - """ - conf_path = os.path.join(os.path.dirname(model_path), "model.json") - if "rnnlm" in model_path: - logging.warning("reading model parameters from %s", model_path) - - return get_lm_state_dict(torch.load(model_path)) - - idim, odim, args = get_model_conf(model_path, conf_path) - - logging.warning("reading model parameters from " + model_path) - - if hasattr(args, "model_module"): - model_module = args.model_module - else: - model_module = "espnet.nets.pytorch_backend.e2e_asr:E2E" - - model_class = dynamic_import(model_module) - model = model_class(idim, odim, args) - torch_load(model_path, model) - assert ( - isinstance(model, MTInterface) - or isinstance(model, ASRInterface) - or isinstance(model, TTSInterface) - ) - - return model.state_dict() - - -def load_trained_modules(idim, odim, args, interface=ASRInterface): - """Load model encoder or/and decoder modules with ESPNET pre-trained model(s). - - Args: - idim (int): initial input dimension. - odim (int): initial output dimension. - args (Namespace): The initial model arguments. - interface (Interface): ASRInterface or STInterface or TTSInterface. - - Return: - model (torch.nn.Module): The model with pretrained modules. 
- - """ - - def print_new_keys(state_dict, modules, model_path): - logging.warning("loading %s from model: %s", modules, model_path) - - for k in state_dict.keys(): - logging.warning("override %s" % k) - - enc_model_path = args.enc_init - dec_model_path = args.dec_init - enc_modules = args.enc_init_mods - dec_modules = args.dec_init_mods - - model_class = dynamic_import(args.model_module) - main_model = model_class(idim, odim, args) - assert isinstance(main_model, interface) - - main_state_dict = main_model.state_dict() - - logging.warning("model(s) found for pre-initialization") - for model_path, modules in [ - (enc_model_path, enc_modules), - (dec_model_path, dec_modules), - ]: - if model_path is not None: - if os.path.isfile(model_path): - model_state_dict = get_trained_model_state_dict(model_path) - - modules = filter_modules(model_state_dict, modules) - - partial_state_dict = get_partial_state_dict(model_state_dict, modules) - - if partial_state_dict: - if transfer_verification( - main_state_dict, partial_state_dict, modules - ): - print_new_keys(partial_state_dict, modules, model_path) - main_state_dict.update(partial_state_dict) - else: - logging.warning( - f"modules {modules} in model {model_path} " - f"don't match your training config", - ) - else: - logging.warning("model was not found : %s", model_path) - - main_model.load_state_dict(main_state_dict) - - return main_model diff --git a/spaces/sgxz/bingo/src/components/chat-image.tsx b/spaces/sgxz/bingo/src/components/chat-image.tsx deleted file mode 100644 index 05ecc9771eada27a0f2d160bb01cba170d37bb09..0000000000000000000000000000000000000000 --- a/spaces/sgxz/bingo/src/components/chat-image.tsx +++ /dev/null @@ -1,170 +0,0 @@ -import { - useEffect, - useState, - useCallback, - ChangeEvent, - ClipboardEvent, - MouseEventHandler, - FormEvent, - useRef -} from "react" -import Image from 'next/image' -import PasteIcon from '@/assets/images/paste.svg' -import UploadIcon from '@/assets/images/upload.svg' -import CameraIcon from '@/assets/images/camera.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { cn } from '@/lib/utils' - -interface ChatImageProps extends Pick, 'uploadImage'> {} - -const preventDefault: MouseEventHandler = (event) => { - event.nativeEvent.stopImmediatePropagation() -} - -const toBase64 = (file: File): Promise => new Promise((resolve, reject) => { - const reader = new FileReader() - reader.readAsDataURL(file) - reader.onload = () => resolve(reader.result as string) - reader.onerror = reject -}) - -export function ChatImage({ children, uploadImage }: React.PropsWithChildren) { - const videoRef = useRef(null) - const canvasRef = useRef(null) - const mediaStream = useRef() - const [panel, setPanel] = useState('none') - - const upload = useCallback((url: string) => { - if (url) { - uploadImage(url) - } - setPanel('none') - }, [panel]) - - const onUpload = useCallback(async (event: ChangeEvent) => { - const file = event.target.files?.[0] - if (file) { - const fileDataUrl = await toBase64(file) - if (fileDataUrl) { - upload(fileDataUrl) - } - } - }, []) - - const onPaste = useCallback((event: ClipboardEvent) => { - const pasteUrl = event.clipboardData.getData('text') ?? 
'' - upload(pasteUrl) - }, []) - - const onEnter = useCallback((event: FormEvent) => { - event.preventDefault() - event.stopPropagation() - // @ts-ignore - const inputUrl = event.target.elements.image.value - if (inputUrl) { - upload(inputUrl) - } - }, []) - - const openVideo: MouseEventHandler = async (event) => { - event.stopPropagation() - setPanel('camera-mode') - } - - const onCapture = () => { - if (canvasRef.current && videoRef.current) { - const canvas = canvasRef.current - canvas.width = videoRef.current!.videoWidth - canvas.height = videoRef.current!.videoHeight - canvas.getContext('2d')?.drawImage(videoRef.current, 0, 0, canvas.width, canvas.height) - const cameraUrl = canvas.toDataURL('image/jpeg') - upload(cameraUrl) - } - } - - useEffect(() => { - const handleBlur = () => { - if (panel !== 'none') { - setPanel('none') - } - } - document.addEventListener('click', handleBlur) - return () => { - document.removeEventListener('click', handleBlur) - } - }, [panel]) - - useEffect(() => { - if (panel === 'camera-mode') { - navigator.mediaDevices.getUserMedia({ video: true, audio: false }) - .then(videoStream => { - mediaStream.current = videoStream - if (videoRef.current) { - videoRef.current.srcObject = videoStream - } - }) - } else { - if (mediaStream.current) { - mediaStream.current.getTracks().forEach(function(track) { - track.stop() - }) - mediaStream.current = undefined - } - } - }, [panel]) - - return ( -
    -
    panel === 'none' ? setPanel('normal') : setPanel('none')}>{children}
    -
    -
    -
    -

    添加图像

    -
    -
    - -
    - e.stopPropagation()} - /> - -
    -
    - - -
    -
    - {panel === 'camera-mode' &&
    -
    -
    -
    -
    -
    -
    -
    } -
    -
    - ) -} diff --git a/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/common/utils.py b/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/common/utils.py deleted file mode 100644 index 9979e0bc09de2bf3251c651434d7acd2f7305b96..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/common/utils.py +++ /dev/null @@ -1,292 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import numpy as np -import copy -import functools -import itertools - -import matplotlib.pyplot as plt - -######## -# unit # -######## - -def singleton(class_): - instances = {} - def getinstance(*args, **kwargs): - if class_ not in instances: - instances[class_] = class_(*args, **kwargs) - return instances[class_] - return getinstance - -def str2value(v): - v = v.strip() - try: - return int(v) - except: - pass - try: - return float(v) - except: - pass - if v in ('True', 'true'): - return True - elif v in ('False', 'false'): - return False - else: - return v - -@singleton -class get_unit(object): - def __init__(self): - self.unit = {} - self.register('none', None) - - # general convolution - self.register('conv' , nn.Conv2d) - self.register('bn' , nn.BatchNorm2d) - self.register('relu' , nn.ReLU) - self.register('relu6' , nn.ReLU6) - self.register('lrelu' , nn.LeakyReLU) - self.register('dropout' , nn.Dropout) - self.register('dropout2d', nn.Dropout2d) - self.register('sine', Sine) - self.register('relusine', ReLUSine) - - def register(self, - name, - unitf,): - - self.unit[name] = unitf - - def __call__(self, name): - if name is None: - return None - i = name.find('(') - i = len(name) if i==-1 else i - t = name[:i] - f = self.unit[t] - args = name[i:].strip('()') - if len(args) == 0: - args = {} - return f - else: - args = args.split('=') - args = [[','.join(i.split(',')[:-1]), i.split(',')[-1]] for i in args] - args = list(itertools.chain.from_iterable(args)) - args = [i.strip() for i in args if len(i)>0] - kwargs = {} - for k, v in zip(args[::2], args[1::2]): - if v[0]=='(' and v[-1]==')': - kwargs[k] = tuple([str2value(i) for i in v.strip('()').split(',')]) - elif v[0]=='[' and v[-1]==']': - kwargs[k] = [str2value(i) for i in v.strip('[]').split(',')] - else: - kwargs[k] = str2value(v) - return functools.partial(f, **kwargs) - -def register(name): - def wrapper(class_): - get_unit().register(name, class_) - return class_ - return wrapper - -class Sine(object): - def __init__(self, freq, gain=1): - self.freq = freq - self.gain = gain - self.repr = 'sine(freq={}, gain={})'.format(freq, gain) - - def __call__(self, x, gain=1): - act_gain = self.gain * gain - return torch.sin(self.freq * x) * act_gain - - def __repr__(self,): - return self.repr - -class ReLUSine(nn.Module): - def __init(self): - super().__init__() - - def forward(self, input): - a = torch.sin(30 * input) - b = nn.ReLU(inplace=False)(input) - return a+b - -@register('lrelu_agc') -# class lrelu_agc(nn.Module): -class lrelu_agc(object): - """ - The lrelu layer with alpha, gain and clamp - """ - def __init__(self, alpha=0.1, gain=1, clamp=None): - # super().__init__() - self.alpha = alpha - if gain == 'sqrt_2': - self.gain = np.sqrt(2) - else: - self.gain = gain - self.clamp = clamp - self.repr = 'lrelu_agc(alpha={}, gain={}, clamp={})'.format( - alpha, gain, clamp) - - # def forward(self, x, gain=1): - def __call__(self, x, gain=1): - x = F.leaky_relu(x, negative_slope=self.alpha, inplace=True) - act_gain = self.gain * gain - act_clamp = self.clamp * gain if self.clamp is not None else 
None - if act_gain != 1: - x = x * act_gain - if act_clamp is not None: - x = x.clamp(-act_clamp, act_clamp) - return x - - def __repr__(self,): - return self.repr - -#################### -# spatial encoding # -#################### - -@register('se') -class SpatialEncoding(nn.Module): - def __init__(self, - in_dim, - out_dim, - sigma = 6, - cat_input=True, - require_grad=False,): - - super().__init__() - assert out_dim % (2*in_dim) == 0, "dimension must be dividable" - - n = out_dim // 2 // in_dim - m = 2**np.linspace(0, sigma, n) - m = np.stack([m] + [np.zeros_like(m)]*(in_dim-1), axis=-1) - m = np.concatenate([np.roll(m, i, axis=-1) for i in range(in_dim)], axis=0) - self.emb = torch.FloatTensor(m) - if require_grad: - self.emb = nn.Parameter(self.emb, requires_grad=True) - self.in_dim = in_dim - self.out_dim = out_dim - self.sigma = sigma - self.cat_input = cat_input - self.require_grad = require_grad - - def forward(self, x, format='[n x c]'): - """ - Args: - x: [n x m1], - m1 usually is 2 - Outputs: - y: [n x m2] - m2 dimention number - """ - if format == '[bs x c x 2D]': - xshape = x.shape - x = x.permute(0, 2, 3, 1).contiguous() - x = x.view(-1, x.size(-1)) - elif format == '[n x c]': - pass - else: - raise ValueError - - if not self.require_grad: - self.emb = self.emb.to(x.device) - y = torch.mm(x, self.emb.T) - if self.cat_input: - z = torch.cat([x, torch.sin(y), torch.cos(y)], dim=-1) - else: - z = torch.cat([torch.sin(y), torch.cos(y)], dim=-1) - - if format == '[bs x c x 2D]': - z = z.view(xshape[0], xshape[2], xshape[3], -1) - z = z.permute(0, 3, 1, 2).contiguous() - return z - - def extra_repr(self): - outstr = 'SpatialEncoding (in={}, out={}, sigma={}, cat_input={}, require_grad={})'.format( - self.in_dim, self.out_dim, self.sigma, self.cat_input, self.require_grad) - return outstr - -@register('rffe') -class RFFEncoding(SpatialEncoding): - """ - Random Fourier Features - """ - def __init__(self, - in_dim, - out_dim, - sigma = 6, - cat_input=True, - require_grad=False,): - - super().__init__(in_dim, out_dim, sigma, cat_input, require_grad) - n = out_dim // 2 - m = np.random.normal(0, sigma, size=(n, in_dim)) - self.emb = torch.FloatTensor(m) - if require_grad: - self.emb = nn.Parameter(self.emb, requires_grad=True) - - def extra_repr(self): - outstr = 'RFFEncoding (in={}, out={}, sigma={}, cat_input={}, require_grad={})'.format( - self.in_dim, self.out_dim, self.sigma, self.cat_input, self.require_grad) - return outstr - -########## -# helper # -########## - -def freeze(net): - for m in net.modules(): - if isinstance(m, ( - nn.BatchNorm2d, - nn.SyncBatchNorm,)): - # inplace_abn not supported - m.eval() - for pi in net.parameters(): - pi.requires_grad = False - return net - -def common_init(m): - if isinstance(m, ( - nn.Conv2d, - nn.ConvTranspose2d,)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - if m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, ( - nn.BatchNorm2d, - nn.SyncBatchNorm,)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - else: - pass - -def init_module(module): - """ - Args: - module: [nn.module] list or nn.module - a list of module to be initialized. 
- """ - if isinstance(module, (list, tuple)): - module = list(module) - else: - module = [module] - - for mi in module: - for mii in mi.modules(): - common_init(mii) - -def get_total_param(net): - if getattr(net, 'parameters', None) is None: - return 0 - return sum(p.numel() for p in net.parameters()) - -def get_total_param_sum(net): - if getattr(net, 'parameters', None) is None: - return 0 - with torch.no_grad(): - s = sum(p.cpu().detach().numpy().sum().item() for p in net.parameters()) - return s diff --git a/spaces/shibing624/code-autocomplete/README.md b/spaces/shibing624/code-autocomplete/README.md deleted file mode 100644 index 5f136c52ac81cd7daf527099cd142341166c47b2..0000000000000000000000000000000000000000 --- a/spaces/shibing624/code-autocomplete/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Code Autocomplete -emoji: 📚 -colorFrom: blue -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/shikunl/prismer/prismer/experts/normal/models/submodules/decoder.py b/spaces/shikunl/prismer/prismer/experts/normal/models/submodules/decoder.py deleted file mode 100644 index 2a529a4495a04b87d01d9bfb677fdb5796085c2a..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/experts/normal/models/submodules/decoder.py +++ /dev/null @@ -1,202 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from experts.normal.models.submodules.submodules import UpSampleBN, UpSampleGN, norm_normalize, sample_points - - -class Decoder(nn.Module): - def __init__(self, args): - super(Decoder, self).__init__() - - # hyper-parameter for sampling - self.sampling_ratio = args.sampling_ratio - self.importance_ratio = args.importance_ratio - - # feature-map - self.conv2 = nn.Conv2d(2048, 2048, kernel_size=1, stride=1, padding=0) - if args.architecture == 'BN': - self.up1 = UpSampleBN(skip_input=2048 + 176, output_features=1024) - self.up2 = UpSampleBN(skip_input=1024 + 64, output_features=512) - self.up3 = UpSampleBN(skip_input=512 + 40, output_features=256) - self.up4 = UpSampleBN(skip_input=256 + 24, output_features=128) - - elif args.architecture == 'GN': - self.up1 = UpSampleGN(skip_input=2048 + 176, output_features=1024) - self.up2 = UpSampleGN(skip_input=1024 + 64, output_features=512) - self.up3 = UpSampleGN(skip_input=512 + 40, output_features=256) - self.up4 = UpSampleGN(skip_input=256 + 24, output_features=128) - - else: - raise Exception('invalid architecture') - - # produces 1/8 res output - self.out_conv_res8 = nn.Conv2d(512, 4, kernel_size=3, stride=1, padding=1) - - # produces 1/4 res output - self.out_conv_res4 = nn.Sequential( - nn.Conv1d(512 + 4, 128, kernel_size=1), nn.ReLU(), - nn.Conv1d(128, 128, kernel_size=1), nn.ReLU(), - nn.Conv1d(128, 128, kernel_size=1), nn.ReLU(), - nn.Conv1d(128, 4, kernel_size=1), - ) - - # produces 1/2 res output - self.out_conv_res2 = nn.Sequential( - nn.Conv1d(256 + 4, 128, kernel_size=1), nn.ReLU(), - nn.Conv1d(128, 128, kernel_size=1), nn.ReLU(), - nn.Conv1d(128, 128, kernel_size=1), nn.ReLU(), - nn.Conv1d(128, 4, kernel_size=1), - ) - - # produces 1/1 res output - self.out_conv_res1 = nn.Sequential( - nn.Conv1d(128 + 4, 128, kernel_size=1), nn.ReLU(), - nn.Conv1d(128, 128, kernel_size=1), nn.ReLU(), - nn.Conv1d(128, 128, kernel_size=1), nn.ReLU(), - nn.Conv1d(128, 4, kernel_size=1), - ) - - def forward(self, features, gt_norm_mask=None, mode='test'): - x_block0, 
x_block1, x_block2, x_block3, x_block4 = features[4], features[5], features[6], features[8], features[11] - - # generate feature-map - - x_d0 = self.conv2(x_block4) # x_d0 : [2, 2048, 15, 20] 1/32 res - x_d1 = self.up1(x_d0, x_block3) # x_d1 : [2, 1024, 30, 40] 1/16 res - x_d2 = self.up2(x_d1, x_block2) # x_d2 : [2, 512, 60, 80] 1/8 res - x_d3 = self.up3(x_d2, x_block1) # x_d3: [2, 256, 120, 160] 1/4 res - x_d4 = self.up4(x_d3, x_block0) # x_d4: [2, 128, 240, 320] 1/2 res - - # 1/8 res output - out_res8 = self.out_conv_res8(x_d2) # out_res8: [2, 4, 60, 80] 1/8 res output - out_res8 = norm_normalize(out_res8) # out_res8: [2, 4, 60, 80] 1/8 res output - - ################################################################################################################ - # out_res4 - ################################################################################################################ - - if mode == 'train': - # upsampling ... out_res8: [2, 4, 60, 80] -> out_res8_res4: [2, 4, 120, 160] - out_res8_res4 = F.interpolate(out_res8, scale_factor=2, mode='bilinear', align_corners=True) - B, _, H, W = out_res8_res4.shape - - # samples: [B, 1, N, 2] - point_coords_res4, rows_int, cols_int = sample_points(out_res8_res4.detach(), gt_norm_mask, - sampling_ratio=self.sampling_ratio, - beta=self.importance_ratio) - - # output (needed for evaluation / visualization) - out_res4 = out_res8_res4 - - # grid_sample feature-map - feat_res4 = F.grid_sample(x_d2, point_coords_res4, mode='bilinear', align_corners=True) # (B, 512, 1, N) - init_pred = F.grid_sample(out_res8, point_coords_res4, mode='bilinear', align_corners=True) # (B, 4, 1, N) - feat_res4 = torch.cat([feat_res4, init_pred], dim=1) # (B, 512+4, 1, N) - - # prediction (needed to compute loss) - samples_pred_res4 = self.out_conv_res4(feat_res4[:, :, 0, :]) # (B, 4, N) - samples_pred_res4 = norm_normalize(samples_pred_res4) # (B, 4, N) - normalized - - for i in range(B): - out_res4[i, :, rows_int[i, :], cols_int[i, :]] = samples_pred_res4[i, :, :] - - else: - # grid_sample feature-map - feat_map = F.interpolate(x_d2, scale_factor=2, mode='bilinear', align_corners=True) - init_pred = F.interpolate(out_res8, scale_factor=2, mode='bilinear', align_corners=True) - feat_map = torch.cat([feat_map, init_pred], dim=1) # (B, 512+4, H, W) - B, _, H, W = feat_map.shape - - # try all pixels - out_res4 = self.out_conv_res4(feat_map.view(B, 512 + 4, -1)) # (B, 4, N) - out_res4 = norm_normalize(out_res4) # (B, 4, N) - normalized - out_res4 = out_res4.view(B, 4, H, W) - samples_pred_res4 = point_coords_res4 = None - - ################################################################################################################ - # out_res2 - ################################################################################################################ - - if mode == 'train': - - # upsampling ... 
out_res4: [2, 4, 120, 160] -> out_res4_res2: [2, 4, 240, 320] - out_res4_res2 = F.interpolate(out_res4, scale_factor=2, mode='bilinear', align_corners=True) - B, _, H, W = out_res4_res2.shape - - # samples: [B, 1, N, 2] - point_coords_res2, rows_int, cols_int = sample_points(out_res4_res2.detach(), gt_norm_mask, - sampling_ratio=self.sampling_ratio, - beta=self.importance_ratio) - - # output (needed for evaluation / visualization) - out_res2 = out_res4_res2 - - # grid_sample feature-map - feat_res2 = F.grid_sample(x_d3, point_coords_res2, mode='bilinear', align_corners=True) # (B, 256, 1, N) - init_pred = F.grid_sample(out_res4, point_coords_res2, mode='bilinear', align_corners=True) # (B, 4, 1, N) - feat_res2 = torch.cat([feat_res2, init_pred], dim=1) # (B, 256+4, 1, N) - - # prediction (needed to compute loss) - samples_pred_res2 = self.out_conv_res2(feat_res2[:, :, 0, :]) # (B, 4, N) - samples_pred_res2 = norm_normalize(samples_pred_res2) # (B, 4, N) - normalized - - for i in range(B): - out_res2[i, :, rows_int[i, :], cols_int[i, :]] = samples_pred_res2[i, :, :] - - else: - # grid_sample feature-map - feat_map = F.interpolate(x_d3, scale_factor=2, mode='bilinear', align_corners=True) - init_pred = F.interpolate(out_res4, scale_factor=2, mode='bilinear', align_corners=True) - feat_map = torch.cat([feat_map, init_pred], dim=1) # (B, 512+4, H, W) - B, _, H, W = feat_map.shape - - out_res2 = self.out_conv_res2(feat_map.view(B, 256 + 4, -1)) # (B, 4, N) - out_res2 = norm_normalize(out_res2) # (B, 4, N) - normalized - out_res2 = out_res2.view(B, 4, H, W) - samples_pred_res2 = point_coords_res2 = None - - ################################################################################################################ - # out_res1 - ################################################################################################################ - - if mode == 'train': - # upsampling ... 
out_res4: [2, 4, 120, 160] -> out_res4_res2: [2, 4, 240, 320] - out_res2_res1 = F.interpolate(out_res2, scale_factor=2, mode='bilinear', align_corners=True) - B, _, H, W = out_res2_res1.shape - - # samples: [B, 1, N, 2] - point_coords_res1, rows_int, cols_int = sample_points(out_res2_res1.detach(), gt_norm_mask, - sampling_ratio=self.sampling_ratio, - beta=self.importance_ratio) - - # output (needed for evaluation / visualization) - out_res1 = out_res2_res1 - - # grid_sample feature-map - feat_res1 = F.grid_sample(x_d4, point_coords_res1, mode='bilinear', align_corners=True) # (B, 128, 1, N) - init_pred = F.grid_sample(out_res2, point_coords_res1, mode='bilinear', align_corners=True) # (B, 4, 1, N) - feat_res1 = torch.cat([feat_res1, init_pred], dim=1) # (B, 128+4, 1, N) - - # prediction (needed to compute loss) - samples_pred_res1 = self.out_conv_res1(feat_res1[:, :, 0, :]) # (B, 4, N) - samples_pred_res1 = norm_normalize(samples_pred_res1) # (B, 4, N) - normalized - - for i in range(B): - out_res1[i, :, rows_int[i, :], cols_int[i, :]] = samples_pred_res1[i, :, :] - - else: - # grid_sample feature-map - feat_map = F.interpolate(x_d4, scale_factor=2, mode='bilinear', align_corners=True) - init_pred = F.interpolate(out_res2, scale_factor=2, mode='bilinear', align_corners=True) - feat_map = torch.cat([feat_map, init_pred], dim=1) # (B, 512+4, H, W) - B, _, H, W = feat_map.shape - - out_res1 = self.out_conv_res1(feat_map.view(B, 128 + 4, -1)) # (B, 4, N) - out_res1 = norm_normalize(out_res1) # (B, 4, N) - normalized - out_res1 = out_res1.view(B, 4, H, W) - samples_pred_res1 = point_coords_res1 = None - - return [out_res8, out_res4, out_res2, out_res1], \ - [out_res8, samples_pred_res4, samples_pred_res2, samples_pred_res1], \ - [None, point_coords_res4, point_coords_res2, point_coords_res1] - diff --git a/spaces/shivkumarganesh/whisper-demo-hi/README.md b/spaces/shivkumarganesh/whisper-demo-hi/README.md deleted file mode 100644 index 7843cbfd85cf6bb73dfbef2f8863bb01aed27aa2..0000000000000000000000000000000000000000 --- a/spaces/shivkumarganesh/whisper-demo-hi/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Whisper Demo -emoji: 🤫 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -tags: -- whisper-event -duplicated_from: whisper-event/whisper-demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sidharthism/fashion-eye/models/stylegan/stylegan_tf/training/loss.py b/spaces/sidharthism/fashion-eye/models/stylegan/stylegan_tf/training/loss.py deleted file mode 100644 index aa59b61bf316f73f269849b54ec3bb35b6a0d61d..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/models/stylegan/stylegan_tf/training/loss.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Loss functions.""" - -import tensorflow as tf -import dnnlib.tflib as tflib -from dnnlib.tflib.autosummary import autosummary - -#---------------------------------------------------------------------------- -# Convenience func that casts all of its arguments to tf.float32. 
- -def fp32(*values): - if len(values) == 1 and isinstance(values[0], tuple): - values = values[0] - values = tuple(tf.cast(v, tf.float32) for v in values) - return values if len(values) >= 2 else values[0] - -#---------------------------------------------------------------------------- -# WGAN & WGAN-GP loss functions. - -def G_wgan(G, D, opt, training_set, minibatch_size): # pylint: disable=unused-argument - latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:]) - labels = training_set.get_random_labels_tf(minibatch_size) - fake_images_out = G.get_output_for(latents, labels, is_training=True) - fake_scores_out = fp32(D.get_output_for(fake_images_out, labels, is_training=True)) - loss = -fake_scores_out - return loss - -def D_wgan(G, D, opt, training_set, minibatch_size, reals, labels, # pylint: disable=unused-argument - wgan_epsilon = 0.001): # Weight for the epsilon term, \epsilon_{drift}. - - latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:]) - fake_images_out = G.get_output_for(latents, labels, is_training=True) - real_scores_out = fp32(D.get_output_for(reals, labels, is_training=True)) - fake_scores_out = fp32(D.get_output_for(fake_images_out, labels, is_training=True)) - real_scores_out = autosummary('Loss/scores/real', real_scores_out) - fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out) - loss = fake_scores_out - real_scores_out - - with tf.name_scope('EpsilonPenalty'): - epsilon_penalty = autosummary('Loss/epsilon_penalty', tf.square(real_scores_out)) - loss += epsilon_penalty * wgan_epsilon - return loss - -def D_wgan_gp(G, D, opt, training_set, minibatch_size, reals, labels, # pylint: disable=unused-argument - wgan_lambda = 10.0, # Weight for the gradient penalty term. - wgan_epsilon = 0.001, # Weight for the epsilon term, \epsilon_{drift}. - wgan_target = 1.0): # Target value for gradient magnitudes. - - latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:]) - fake_images_out = G.get_output_for(latents, labels, is_training=True) - real_scores_out = fp32(D.get_output_for(reals, labels, is_training=True)) - fake_scores_out = fp32(D.get_output_for(fake_images_out, labels, is_training=True)) - real_scores_out = autosummary('Loss/scores/real', real_scores_out) - fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out) - loss = fake_scores_out - real_scores_out - - with tf.name_scope('GradientPenalty'): - mixing_factors = tf.random_uniform([minibatch_size, 1, 1, 1], 0.0, 1.0, dtype=fake_images_out.dtype) - mixed_images_out = tflib.lerp(tf.cast(reals, fake_images_out.dtype), fake_images_out, mixing_factors) - mixed_scores_out = fp32(D.get_output_for(mixed_images_out, labels, is_training=True)) - mixed_scores_out = autosummary('Loss/scores/mixed', mixed_scores_out) - mixed_loss = opt.apply_loss_scaling(tf.reduce_sum(mixed_scores_out)) - mixed_grads = opt.undo_loss_scaling(fp32(tf.gradients(mixed_loss, [mixed_images_out])[0])) - mixed_norms = tf.sqrt(tf.reduce_sum(tf.square(mixed_grads), axis=[1,2,3])) - mixed_norms = autosummary('Loss/mixed_norms', mixed_norms) - gradient_penalty = tf.square(mixed_norms - wgan_target) - loss += gradient_penalty * (wgan_lambda / (wgan_target**2)) - - with tf.name_scope('EpsilonPenalty'): - epsilon_penalty = autosummary('Loss/epsilon_penalty', tf.square(real_scores_out)) - loss += epsilon_penalty * wgan_epsilon - return loss - -#---------------------------------------------------------------------------- -# Hinge loss functions. 
(Use G_wgan with these) - -def D_hinge(G, D, opt, training_set, minibatch_size, reals, labels): # pylint: disable=unused-argument - latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:]) - fake_images_out = G.get_output_for(latents, labels, is_training=True) - real_scores_out = fp32(D.get_output_for(reals, labels, is_training=True)) - fake_scores_out = fp32(D.get_output_for(fake_images_out, labels, is_training=True)) - real_scores_out = autosummary('Loss/scores/real', real_scores_out) - fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out) - loss = tf.maximum(0., 1.+fake_scores_out) + tf.maximum(0., 1.-real_scores_out) - return loss - -def D_hinge_gp(G, D, opt, training_set, minibatch_size, reals, labels, # pylint: disable=unused-argument - wgan_lambda = 10.0, # Weight for the gradient penalty term. - wgan_target = 1.0): # Target value for gradient magnitudes. - - latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:]) - fake_images_out = G.get_output_for(latents, labels, is_training=True) - real_scores_out = fp32(D.get_output_for(reals, labels, is_training=True)) - fake_scores_out = fp32(D.get_output_for(fake_images_out, labels, is_training=True)) - real_scores_out = autosummary('Loss/scores/real', real_scores_out) - fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out) - loss = tf.maximum(0., 1.+fake_scores_out) + tf.maximum(0., 1.-real_scores_out) - - with tf.name_scope('GradientPenalty'): - mixing_factors = tf.random_uniform([minibatch_size, 1, 1, 1], 0.0, 1.0, dtype=fake_images_out.dtype) - mixed_images_out = tflib.lerp(tf.cast(reals, fake_images_out.dtype), fake_images_out, mixing_factors) - mixed_scores_out = fp32(D.get_output_for(mixed_images_out, labels, is_training=True)) - mixed_scores_out = autosummary('Loss/scores/mixed', mixed_scores_out) - mixed_loss = opt.apply_loss_scaling(tf.reduce_sum(mixed_scores_out)) - mixed_grads = opt.undo_loss_scaling(fp32(tf.gradients(mixed_loss, [mixed_images_out])[0])) - mixed_norms = tf.sqrt(tf.reduce_sum(tf.square(mixed_grads), axis=[1,2,3])) - mixed_norms = autosummary('Loss/mixed_norms', mixed_norms) - gradient_penalty = tf.square(mixed_norms - wgan_target) - loss += gradient_penalty * (wgan_lambda / (wgan_target**2)) - return loss - - -#---------------------------------------------------------------------------- -# Loss functions advocated by the paper -# "Which Training Methods for GANs do actually Converge?" 
- -def G_logistic_saturating(G, D, opt, training_set, minibatch_size): # pylint: disable=unused-argument - latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:]) - labels = training_set.get_random_labels_tf(minibatch_size) - fake_images_out = G.get_output_for(latents, labels, is_training=True) - fake_scores_out = fp32(D.get_output_for(fake_images_out, labels, is_training=True)) - loss = -tf.nn.softplus(fake_scores_out) # log(1 - logistic(fake_scores_out)) - return loss - -def G_logistic_nonsaturating(G, D, opt, training_set, minibatch_size): # pylint: disable=unused-argument - latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:]) - labels = training_set.get_random_labels_tf(minibatch_size) - fake_images_out = G.get_output_for(latents, labels, is_training=True) - fake_scores_out = fp32(D.get_output_for(fake_images_out, labels, is_training=True)) - loss = tf.nn.softplus(-fake_scores_out) # -log(logistic(fake_scores_out)) - return loss - -def D_logistic(G, D, opt, training_set, minibatch_size, reals, labels): # pylint: disable=unused-argument - latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:]) - fake_images_out = G.get_output_for(latents, labels, is_training=True) - real_scores_out = fp32(D.get_output_for(reals, labels, is_training=True)) - fake_scores_out = fp32(D.get_output_for(fake_images_out, labels, is_training=True)) - real_scores_out = autosummary('Loss/scores/real', real_scores_out) - fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out) - loss = tf.nn.softplus(fake_scores_out) # -log(1 - logistic(fake_scores_out)) - loss += tf.nn.softplus(-real_scores_out) # -log(logistic(real_scores_out)) # temporary pylint workaround # pylint: disable=invalid-unary-operand-type - return loss - -def D_logistic_simplegp(G, D, opt, training_set, minibatch_size, reals, labels, r1_gamma=10.0, r2_gamma=0.0): # pylint: disable=unused-argument - latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:]) - fake_images_out = G.get_output_for(latents, labels, is_training=True) - real_scores_out = fp32(D.get_output_for(reals, labels, is_training=True)) - fake_scores_out = fp32(D.get_output_for(fake_images_out, labels, is_training=True)) - real_scores_out = autosummary('Loss/scores/real', real_scores_out) - fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out) - loss = tf.nn.softplus(fake_scores_out) # -log(1 - logistic(fake_scores_out)) - loss += tf.nn.softplus(-real_scores_out) # -log(logistic(real_scores_out)) # temporary pylint workaround # pylint: disable=invalid-unary-operand-type - - if r1_gamma != 0.0: - with tf.name_scope('R1Penalty'): - real_loss = opt.apply_loss_scaling(tf.reduce_sum(real_scores_out)) - real_grads = opt.undo_loss_scaling(fp32(tf.gradients(real_loss, [reals])[0])) - r1_penalty = tf.reduce_sum(tf.square(real_grads), axis=[1,2,3]) - r1_penalty = autosummary('Loss/r1_penalty', r1_penalty) - loss += r1_penalty * (r1_gamma * 0.5) - - if r2_gamma != 0.0: - with tf.name_scope('R2Penalty'): - fake_loss = opt.apply_loss_scaling(tf.reduce_sum(fake_scores_out)) - fake_grads = opt.undo_loss_scaling(fp32(tf.gradients(fake_loss, [fake_images_out])[0])) - r2_penalty = tf.reduce_sum(tf.square(fake_grads), axis=[1,2,3]) - r2_penalty = autosummary('Loss/r2_penalty', r2_penalty) - loss += r2_penalty * (r2_gamma * 0.5) - return loss - -#---------------------------------------------------------------------------- diff --git a/spaces/simsantonioii/MusicGen-Continuation/share_btn.py 
b/spaces/simsantonioii/MusicGen-Continuation/share_btn.py deleted file mode 100644 index 1535c153489aa0de3632dfe398aa6c662bcfcc46..0000000000000000000000000000000000000000 --- a/spaces/simsantonioii/MusicGen-Continuation/share_btn.py +++ /dev/null @@ -1,121 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -css = """ -/* share button */ -#share-btn-container { - display: flex; - padding-left: 0.5rem !important; - padding-right: 0.5rem !important; - background-color: #000000; - justify-content: center; - align-items: center; - border-radius: 9999px !important; - width: 13rem; - margin-top: 10px; - margin-left: auto; - flex: unset !important; -} -#share-btn { - all: initial; - color: #ffffff; - font-weight: 600; - cursor: pointer; - font-family: 'IBM Plex Sans', sans-serif; - margin-left: 0.5rem !important; - padding-top: 0.25rem !important; - padding-bottom: 0.25rem !important; - right:0; -} -#share-btn * { - all: unset !important; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} -""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - if(response.headers.get('Content-Type').includes('application/json')){ - const data = await response.json(); - return data; - } - const url = await response.text(); - return url; - } - async function getInputMediaFile(mediaEl){ - const res = await fetch(mediaEl.src); - const blob = await res.blob(); - const contentType = res.headers.get("content-type"); - const ext = contentType.split("/")[1]; - const videoId = Date.now() - const fileName = `MusicGen-${videoId}.${ext}`; - return new File([blob], fileName, { type: contentType }); - } - const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app'); - const prompt = gradioEl.querySelector('#text-input textarea').value; - const melody = gradioEl.querySelector('#melody-output audio'); - const generated = gradioEl.querySelector('#generated-video video'); - - const titleTxt = `MusicGen: ${prompt.slice(0, 50)}...`; - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!generated){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const generatedFile= await getInputMediaFile(generated) - const generatedURL = await uploadFile(generatedFile); - let melodyURL = null; - - if(melody){ - const melodyFile = await getInputMediaFile(melody) - melodyURL = await uploadFile(melodyFile); - } - - const descriptionMd = ` -### Text -${prompt} - -### Generated Song -${generatedURL} - -${(melodyURL && (typeof melodyURL === 'string'))? 
` -### Melody -` : ``} - -made with continuation: https://huggingface.co/spaces/radames/MusicGen-Continuation -`; - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/facebook/MusicGen/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/skyxx/skyxxChat/modules/__init__.py b/spaces/skyxx/skyxxChat/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/sohomghosh/FLUEnT/fin_readability_sustainability.py b/spaces/sohomghosh/FLUEnT/fin_readability_sustainability.py deleted file mode 100644 index 53ea0c60eab0dd27868f9bdc6d4652ea0ddc71b9..0000000000000000000000000000000000000000 --- a/spaces/sohomghosh/FLUEnT/fin_readability_sustainability.py +++ /dev/null @@ -1,110 +0,0 @@ -import torch -import transformers -from torch.utils.data import Dataset, DataLoader -from transformers import RobertaModel, RobertaTokenizer, BertModel, BertTokenizer -import pandas as pd - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -MAX_LEN = 128 -BATCH_SIZE = 20 -text_col_name = 'sentence' - -def scoring_data_prep(dataset): - out = [] - target = [] - mask = [] - - for i in range(len(dataset)): - rec = dataset[i] - out.append(rec['ids'].reshape(-1,MAX_LEN)) - mask.append(rec['mask'].reshape(-1,MAX_LEN)) - - out_stack = torch.cat(out, dim = 0) - mask_stack = torch.cat(mask, dim =0 ) - out_stack = out_stack.to(device, dtype = torch.long) - mask_stack = mask_stack.to(device, dtype = torch.long) - - return out_stack, mask_stack - -class Triage(Dataset): - """ - This is a subclass of torch packages Dataset class. It processes input to create ids, masks and targets required for model training. 
- """ - - def __init__(self, dataframe, tokenizer, max_len, text_col_name): - self.len = len(dataframe) - self.data = dataframe - self.tokenizer = tokenizer - self.max_len = max_len - self.text_col_name = text_col_name - - - def __getitem__(self, index): - title = str(self.data[self.text_col_name][index]) - title = " ".join(title.split()) - inputs = self.tokenizer.encode_plus( - title, - None, - add_special_tokens=True, - max_length=self.max_len, - pad_to_max_length=True, #padding='max_length' #For future version use `padding='max_length'` - return_token_type_ids=True, - truncation=True, - ) - ids = inputs["input_ids"] - mask = inputs["attention_mask"] - - return { - "ids": torch.tensor(ids, dtype=torch.long), - "mask": torch.tensor(mask, dtype=torch.long), - - } - - def __len__(self): - return self.len - -class BERTClass(torch.nn.Module): - def __init__(self, num_class, task): - super(BERTClass, self).__init__() - self.num_class = num_class - if task =="sustanability": - self.l1 = RobertaModel.from_pretrained("roberta-base") - else: - self.l1 = BertModel.from_pretrained("ProsusAI/finbert") - self.pre_classifier = torch.nn.Linear(768, 768) - self.dropout = torch.nn.Dropout(0.3) - self.classifier = torch.nn.Linear(768, self.num_class) - self.history = dict() - - def forward(self, input_ids, attention_mask): - output_1 = self.l1(input_ids=input_ids, attention_mask=attention_mask) - hidden_state = output_1[0] - pooler = hidden_state[:, 0] - pooler = self.pre_classifier(pooler) - pooler = torch.nn.ReLU()(pooler) - pooler = self.dropout(pooler) - output = self.classifier(pooler) - return output - -def do_predict(model, tokenizer, test_df): - test_set = Triage(test_df, tokenizer, MAX_LEN, text_col_name) - test_params = {'batch_size' : BATCH_SIZE, 'shuffle': False, 'num_workers':0} - test_loader = DataLoader(test_set, **test_params) - out_stack, mask_stack = scoring_data_prep(dataset = test_set) - n = 0 - combined_output = [] - model.eval() - with torch.no_grad(): - while n < test_df.shape[0]: - output = model(out_stack[n:n+BATCH_SIZE,:],mask_stack[n:n+BATCH_SIZE,:]) - n = n + BATCH_SIZE - combined_output.append(output) - combined_output = torch.cat(combined_output, dim = 0) - preds = torch.argsort(combined_output, axis = 1, descending = True) - preds = preds.to('cpu') - actual_predictions = [i[0] for i in preds.tolist()] - combined_output = combined_output.to('cpu') - prob_predictions= [i[1] for i in combined_output.tolist()] - return (actual_predictions, prob_predictions) - \ No newline at end of file diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/model_parallel/models/__init__.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/model_parallel/models/__init__.py deleted file mode 100644 index 3532479e52a0e1f1ba204c6f5d51c71c98ee5df0..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/model_parallel/models/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import importlib -import os - - -# automatically import any Python files in the models/ directory -models_dir = os.path.dirname(__file__) -for file in os.listdir(models_dir): - path = os.path.join(models_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - model_name = file[: file.find(".py")] if file.endswith(".py") else file - module = importlib.import_module("fairseq.model_parallel.models." + model_name) diff --git a/spaces/stomexserde/gpt4-ui/Examples/Dropgun Samples ? Vocal Trap (WAV MASSIVE SERUM SYLENTH1) ((HOT)).md b/spaces/stomexserde/gpt4-ui/Examples/Dropgun Samples ? Vocal Trap (WAV MASSIVE SERUM SYLENTH1) ((HOT)).md deleted file mode 100644 index 12fab939f71b27655d6b318d9729ae357031a90e..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Dropgun Samples ? Vocal Trap (WAV MASSIVE SERUM SYLENTH1) ((HOT)).md +++ /dev/null @@ -1,19 +0,0 @@ -
    -

    How to Make a Hit with Dropgun Samples – Vocal Trap

    -

    If you are looking for a sample pack that will help you produce your best hits, you should check out Dropgun Samples – Vocal Trap. This pack is a collection of five catchy vocal toplines and five amazing drops that you can use in a wide range of production styles. Whether you are making EDM, future bass, pop, or trap, this pack has something for you.

    -

    Dropgun Samples – Vocal Trap is a perfect blend of different genres and has a fresh and unique sound. The vocal toplines are written and recorded by Annie Sollange, a talented singer and songwriter who has a strong pop vocal hook that will make your tracks stand out. The drops are designed by RØGUENETHVN, a duo of producers who have a knack for creating hit-like melodies and sick drum parts.

    -

    Dropgun Samples – Vocal Trap (WAV, MASSIVE, SERUM, SYLENTH1)


    Download File · https://urlgoal.com/2uI7ys



    -

    The pack contains 233 audio files, 49 MIDI files, and 49 presets for Massive, Serum, and Sylenth1. You also get five construction kits that show you how to use the samples and presets in your own projects. You can mix and match the elements to create your own original tracks or use them as inspiration for your next banger.

    -

    Dropgun Samples – Vocal Trap is available on dropgunsamples.com for $27.99. You can also listen to the demo on SoundCloud to hear how the samples sound in action. If you want to make a hit with Dropgun Samples – Vocal Trap, don't miss this opportunity and grab your copy today!

    - -

    But wait, there's more! Dropgun Samples – Vocal Trap is not only a sample pack, but also a source of inspiration and learning. You can use the pack to improve your vocal production skills and learn some tips and tricks from the pros. Here are some examples of what you can do with the pack:

    -
      -
• Use parallel processing and reverb layering to create punchy, super-wide vocals that fill the stereo field. Plugins like R-Channel and Doubler from Waves can achieve this effect (a small code sketch of the parallel-processing idea appears at the end of this article).
• Use pitch-shifting and modulation effects to create vocal effects that add interest and variation to your tracks. Plugins like Vocal Bender from Waves let you manipulate the pitch and formant of your vocals in real time.
• Use pitch-correction tools to fix tuning issues and make your vocals sound more natural and professional. Plugins like Waves Tune or Waves Tune Real-Time can correct your vocals automatically or manually.
• Use vocal synthesis and vocoder effects to create bass lines, synth arpeggios, and other sounds with your voice. Plugins like OVox from Waves can transform your vocals into instruments and effects.
• Use de-essing and compression tools to tame and smooth your vocals so they sit on top of the mix. Plugins like Sibilance, R-Vox, and Vocal Rider from Waves control the dynamics and level of your vocals.
    -

    As you can see, Dropgun Samples – Vocal Trap is a versatile and powerful pack that will take your vocal production to the next level. Whether you want to make a hit song, a remix, or just have fun with your vocals, this pack is for you. Don't miss this chance and get your copy of Dropgun Samples – Vocal Trap today!
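As a rough illustration of the parallel-processing idea from the first tip above, here is a small NumPy sketch that blends a dry vocal with a heavily compressed copy of itself. It is a toy example with made-up parameter values and a synthetic test tone, not the workflow of the Waves plugins mentioned in the list.

```python
import numpy as np


def crude_compressor(signal, threshold=0.3, ratio=4.0):
    """Very crude compressor: scale back the portion of each sample above the threshold."""
    out = signal.copy()
    over = np.abs(out) > threshold
    out[over] = np.sign(out[over]) * (threshold + (np.abs(out[over]) - threshold) / ratio)
    return out


def parallel_compress(dry, wet_gain=0.5):
    """Classic parallel (New York) compression: sum the dry signal with a compressed copy."""
    wet = crude_compressor(dry)
    # In a real chain you would also normalise or limit the blended output.
    return dry + wet_gain * wet


# Toy input: one second of a fake "vocal" tone at 44.1 kHz.
sample_rate = 44100
t = np.linspace(0.0, 1.0, sample_rate, endpoint=False)
vocal = 0.8 * np.sin(2.0 * np.pi * 220.0 * t)
blended = parallel_compress(vocal, wet_gain=0.5)
```

The point of the technique is that the compressed copy raises the quiet detail while the untouched dry signal keeps the transients, which is what makes parallel-processed vocals sound both dense and punchy.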

    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Junkers Gas Calorimeter Pdf 14.md b/spaces/stomexserde/gpt4-ui/Examples/Junkers Gas Calorimeter Pdf 14.md deleted file mode 100644 index a4c7d1eb0a15a0491eca1cb4105351e287122cca..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Junkers Gas Calorimeter Pdf 14.md +++ /dev/null @@ -1,38 +0,0 @@ - -

    How to Use the Junkers Gas Calorimeter PDF 14 to Measure the Calorific Value of Gases

    -

    The Junkers gas calorimeter PDF 14 is a device that can measure the calorific value of gases, such as natural gas, biogas, or hydrogen. The calorific value of a gas is the amount of heat energy that is released when a unit volume of the gas is burned completely. Knowing the calorific value of a gas is important for various applications, such as combustion engineering, power generation, and gas quality control.

    -

    junkers gas calorimeter pdf 14


    DOWNLOAD === https://urlgoal.com/2uI8lF



    -

    In this article, we will explain how the Junkers gas calorimeter PDF 14 works and how to use it to measure the calorific value of gases. We will also provide some tips and precautions to ensure accurate and reliable results.

    -

    How the Junkers Gas Calorimeter PDF 14 Works

    -

    The Junkers gas calorimeter PDF 14 consists of three main parts: a burner, a water jacket, and a thermometer. The burner is where the gas sample is burned with air. The water jacket is a cylindrical container that surrounds the burner and contains a known volume of water. The thermometer measures the temperature of the water before and after the gas sample is burned.

    -

    The principle of the Junkers gas calorimeter PDF 14 is based on the fact that the heat energy released by the burning gas is transferred to the water in the jacket. By measuring the temperature rise of the water and knowing its mass and specific heat capacity, we can calculate the heat energy transferred to the water. By dividing this heat energy by the volume of gas burned, we can obtain the calorific value of the gas.

    -

    -

    How to Use the Junkers Gas Calorimeter PDF 14 to Measure the Calorific Value of Gases

    -

    To use the Junkers gas calorimeter PDF 14 to measure the calorific value of gases, you will need to follow these steps:

    -
      -
1. Fill the water jacket with a known volume (and therefore mass) of water and record its initial temperature.
2. Connect the gas sample to the burner and adjust the flow rate and air supply to obtain a steady flame.
3. Burn the gas sample for a fixed time interval (e.g., 10 minutes) and record the final temperature of the water.
4. Calculate the temperature rise of the water by subtracting the initial temperature from the final temperature.
5. Calculate the heat energy transferred to the water by multiplying its mass, specific heat capacity, and temperature rise.
6. Calculate the volume of gas burned by multiplying its flow rate by the time interval.
7. Calculate the calorific value of the gas by dividing the heat energy transferred to the water by the volume of gas burned (a worked example follows this list).
    -
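To make the arithmetic concrete, here is a minimal Python sketch of the calculation. The input values (2.0 kg of water, a 5.0 °C temperature rise, 0.001 m³ of gas burned) are illustrative assumptions rather than readings from the manual, and the specific heat capacity of water is taken as 4186 J/(kg·°C).

```python
# Minimal sketch of the Junkers calorimeter arithmetic.
# All numeric inputs below are illustrative assumptions, not real measurements.

WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*degC), specific heat capacity of water


def calorific_value(water_mass_kg, temp_rise_c, gas_volume_m3):
    """Return the calorific value of the gas in J per cubic metre."""
    heat_transferred = water_mass_kg * WATER_SPECIFIC_HEAT * temp_rise_c  # Q = m * c * dT
    return heat_transferred / gas_volume_m3                               # CV = Q / V


# Example: 2.0 kg of water warms by 5.0 degC while 0.001 m^3 of gas is burned.
cv = calorific_value(water_mass_kg=2.0, temp_rise_c=5.0, gas_volume_m3=0.001)
print(f"Calorific value: {cv / 1e6:.1f} MJ/m^3")  # about 41.9 MJ/m^3, in the range typical of natural gas
```

In practice the gas volume would also be corrected to standard temperature and pressure before dividing, but that refinement is omitted from this sketch.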

    Tips and Precautions for Using the Junkers Gas Calorimeter PDF 14

    -

    To ensure accurate and reliable results when using the Junkers gas calorimeter PDF 14, you should follow these tips and precautions:

    -
      -
• Use distilled or deionized water to fill the water jacket, as impurities in tap water may affect its specific heat capacity.
• Use a calibrated thermometer to measure the temperature of the water, as errors in temperature measurement affect the calculated heat energy.
• Use a calibrated flow meter to measure the flow rate of the gas sample, as errors in flow rate measurement affect the calculated volume of gas burned.
• Avoid drafts or external heat sources that may affect the temperature of the water or the flame.
• Avoid leaks or incomplete combustion, which may affect the amount or quality of gas burned.
• Repeat the experiment several times with different samples of the gas and average the results to reduce random errors.
    -

    Conclusion

    -

The Junkers gas calorimeter PDF 14 is a useful device that can measure the calorific value of gases. By following these steps, tips, and precautions, you can use it effectively and accurately. If you want to learn more about the Junkers gas calorimeter PDF 14, you can download its manual from https://www.junkers.com/pdf/14.

    -
    -
    \ No newline at end of file diff --git a/spaces/stratussox/yolov5_inference/simple_detection.py b/spaces/stratussox/yolov5_inference/simple_detection.py deleted file mode 100644 index 38d203516fb2a99957db1ef06a32fd6943c2048f..0000000000000000000000000000000000000000 --- a/spaces/stratussox/yolov5_inference/simple_detection.py +++ /dev/null @@ -1,235 +0,0 @@ -import argparse -import os -import platform -import sys -from pathlib import Path - -import torch - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[0] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -from models.common import DetectMultiBackend -from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams -from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2, - increment_path, non_max_suppression, print_args, scale_boxes, strip_optimizer, xyxy2xywh) -from utils.plots import Annotator, colors, save_one_box -from utils.torch_utils import select_device, smart_inference_mode - - -@smart_inference_mode() -def run( - weights=ROOT / 'yolov5s.pt', # model path or triton URL - source=ROOT / 'data/images', # file/dir/URL/glob/screen/0(webcam) - data=ROOT / 'data/coco128.yaml', # dataset.yaml path - imgsz=(640, 640), # inference size (height, width) - conf_thres=0.25, # confidence threshold - iou_thres=0.45, # NMS IOU threshold - max_det=1000, # maximum detections per image - device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu - view_img=False, # show results - save_txt=False, # save results to *.txt - save_conf=False, # save confidences in --save-txt labels - save_crop=False, # save cropped prediction boxes - nosave=False, # do not save images/videos - classes=None, # filter by class: --class 0, or --class 0 2 3 - agnostic_nms=False, # class-agnostic NMS - augment=False, # augmented inference - visualize=False, # visualize features - update=False, # update all models - project=ROOT / 'runs/detect', # save results to project/name - name='exp', # save results to project/name - exist_ok=False, # existing project/name ok, do not increment - line_thickness=3, # bounding box thickness (pixels) - hide_labels=False, # hide labels - hide_conf=False, # hide confidences - half=False, # use FP16 half-precision inference - dnn=False, # use OpenCV DNN for ONNX inference - vid_stride=1, # video frame-rate stride -): - source = str(source) - save_img = not nosave and not source.endswith('.txt') # save inference images - is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS) - is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://')) - webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file) - screenshot = source.lower().startswith('screen') - if is_url and is_file: - source = check_file(source) # download - - # Directories - save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run - (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir - - # Load model - device = select_device(device) - model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half) - stride, names, pt = model.stride, model.names, model.pt - imgsz = check_img_size(imgsz, s=stride) # check image size - - # Dataloader - bs = 1 # batch_size - if webcam: - view_img = check_imshow(warn=True) - dataset = LoadStreams(source, 
img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) - bs = len(dataset) - elif screenshot: - dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt) - else: - dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) - vid_path, vid_writer = [None] * bs, [None] * bs - - # Run inference - model.warmup(imgsz=(1 if pt or model.triton else bs, 3, *imgsz)) # warmup - seen, windows, dt = 0, [], (Profile(), Profile(), Profile()) - for path, im, im0s, vid_cap, s in dataset: - with dt[0]: - im = torch.from_numpy(im).to(model.device) - im = im.half() if model.fp16 else im.float() # uint8 to fp16/32 - im /= 255 # 0 - 255 to 0.0 - 1.0 - if len(im.shape) == 3: - im = im[None] # expand for batch dim - - # Inference - with dt[1]: - visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False - print("Img size: " + str(im.size)) - pred = model(im, augment=augment, visualize=visualize) - - # NMS - with dt[2]: - pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det) - - # Second-stage classifier (optional) - # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s) - - # Process predictions - for i, det in enumerate(pred): # per image - seen += 1 - if webcam: # batch_size >= 1 - p, im0, frame = path[i], im0s[i].copy(), dataset.count - s += f'{i}: ' - else: - p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0) - - p = Path(p) # to Path - save_path = str(save_dir / p.name) # im.jpg - txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # im.txt - s += '%gx%g ' % im.shape[2:] # print string - gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh - imc = im0.copy() if save_crop else im0 # for save_crop - annotator = Annotator(im0, line_width=line_thickness, example=str(names)) - if len(det): - # Rescale boxes from img_size to im0 size - det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round() - - # Print results - for c in det[:, 5].unique(): - n = (det[:, 5] == c).sum() # detections per class - s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string - - # Write results - for *xyxy, conf, cls in reversed(det): - if save_txt: # Write to file - xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh - line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format - with open(f'{txt_path}.txt', 'a') as f: - f.write(('%g ' * len(line)).rstrip() % line + '\n') - - if save_img or save_crop or view_img: # Add bbox to image - c = int(cls) # integer class - label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}') - annotator.box_label(xyxy, label, color=colors(c, True)) - if save_crop: - save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True) - - # Stream results - im0 = annotator.result() - if view_img: - if platform.system() == 'Linux' and p not in windows: - windows.append(p) - cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux) - cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0]) - cv2.imshow(str(p), im0) - cv2.waitKey(1) # 1 millisecond - - # Save results (image with detections) - if save_img: - if dataset.mode == 'image': - cv2.imwrite(save_path, im0) - else: # 'video' or 'stream' - if vid_path[i] != save_path: # new video - vid_path[i] = save_path - if isinstance(vid_writer[i], cv2.VideoWriter): - vid_writer[i].release() # release 
previous video writer - if vid_cap: # video - fps = vid_cap.get(cv2.CAP_PROP_FPS) - w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - else: # stream - fps, w, h = 30, im0.shape[1], im0.shape[0] - save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos - vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) - vid_writer[i].write(im0) - - # Print time (inference-only) - LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms") - - # Print results - t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image - LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t) - if save_txt or save_img: - s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' - LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}") - if update: - strip_optimizer(weights[0]) # update model (to fix SourceChangeWarning) - - -def parse_opt(): - parser = argparse.ArgumentParser() - parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path or triton URL') - parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob/screen/0(webcam)') - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path') - parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w') - parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold') - parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image') - parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--view-img', action='store_true', help='show results') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') - parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes') - parser.add_argument('--nosave', action='store_true', help='do not save images/videos') - parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3') - parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--visualize', action='store_true', help='visualize features') - parser.add_argument('--update', action='store_true', help='update all models') - parser.add_argument('--project', default=ROOT / 'runs/detect', help='save results to project/name') - parser.add_argument('--name', default='exp', help='save results to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)') - parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels') - parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences') - parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') - parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference') - parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride') - opt = parser.parse_args() - opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand - print_args(vars(opt)) - return opt - - -def main(opt): - check_requirements(exclude=('tensorboard', 'thop')) - run(**vars(opt)) - -def detect(): - opt = parse_opt() - main(opt) - - - - diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Aakrosh 720p Movies.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Aakrosh 720p Movies.md deleted file mode 100644 index b0668fa307500e95e6b091b2d54b26c6b354a952..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Aakrosh 720p Movies.md +++ /dev/null @@ -1,14 +0,0 @@ -

    Aakrosh 720p Movies


    Download Ziphttps://urluss.com/2uCEeV



    - -The film is an action thriller and family drama. It is a remake of the Hindi film Aakrosh, released in 1984. The film had its world premiere at the 2012 Toronto International Film Festival on September 8, 2012, and released in India on 27 October 2012.[Value of the prenatal diagnosis of the 24 cases of lethal dystrophies of the nervous system]. - -The clinical and genetic value of a prenatal diagnosis in 6 cases of autosomal recessive lethal neurogenesis and in 18 cases of autosomal recessive lethal dystrophies of the nervous system (Lazarus-like) is reviewed. The clinical data of the 6 cases of autosomal recessive lethal neurogenesis and of the 18 cases of autosomal recessive lethal dystrophies of the nervous system (Lazarus-like) are presented. The data concerning the 6 cases of autosomal recessive lethal neurogenesis and of the 18 cases of autosomal recessive lethal dystrophies of the nervous system (Lazarus-like) are presented. The data obtained in this study show that a prenatal diagnosis of autosomal recessive lethal dystrophies of the nervous system (Lazarus-like) is useful for genetic counselling and for ascertaining the absence of an accidental or intentional repetition of the disease in the offspring of the couple at risk.Founded by George A. Ruhlen in 1919, the Bureau of Ethnology of the Smithsonian is the world’s largest research institution in the study of human culture. This summer, the museum will open a major exhibition celebrating the bureau’s 75 years, featuring many original artifacts from the mid-1930s to present. It has the biggest collection of original anthropological materials in the world. - -Below, read a Q&A with Robert H. Lowie, director of the National Museum of Natural History and director of the Bureau of Ethnology, on what drew him to the collection, what the exhibit will highlight, how the acquisition of new materials has made the collection more vibrant, and more. - -What drew you to the collection when you were a student at the University of Michigan? - -When I came to the University of Michigan, the collection was very modest, and I remember having to find a clerk’s office at the museum and count my boxes to see if I had my research materials. But in the 1950s and 1960s, the collections were being expanded, and the collection 4fefd39f24
    -
    -
    -

    diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/fileio/handlers/base.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/fileio/handlers/base.py deleted file mode 100644 index 288878bc57282fbb2f12b32290152ca8e9d3cab0..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/fileio/handlers/base.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - - -class BaseFileHandler(metaclass=ABCMeta): - # `str_like` is a flag to indicate whether the type of file object is - # str-like object or bytes-like object. Pickle only processes bytes-like - # objects but json only processes str-like object. If it is str-like - # object, `StringIO` will be used to process the buffer. - str_like = True - - @abstractmethod - def load_from_fileobj(self, file, **kwargs): - pass - - @abstractmethod - def dump_to_fileobj(self, obj, file, **kwargs): - pass - - @abstractmethod - def dump_to_str(self, obj, **kwargs): - pass - - def load_from_path(self, filepath, mode='r', **kwargs): - with open(filepath, mode) as f: - return self.load_from_fileobj(f, **kwargs) - - def dump_to_path(self, obj, filepath, mode='w', **kwargs): - with open(filepath, mode) as f: - self.dump_to_fileobj(obj, f, **kwargs) diff --git a/spaces/t13718236382/web-ui/_next/static/chunks/698-f6bc8e9278737c93.js b/spaces/t13718236382/web-ui/_next/static/chunks/698-f6bc8e9278737c93.js deleted file mode 100644 index f8219f8c6d7cf299958256ed0d71b1f484a43b92..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/web-ui/_next/static/chunks/698-f6bc8e9278737c93.js +++ /dev/null @@ -1,25 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[698],{93644:function(){"trimStart"in String.prototype||(String.prototype.trimStart=String.prototype.trimLeft),"trimEnd"in String.prototype||(String.prototype.trimEnd=String.prototype.trimRight),"description"in Symbol.prototype||Object.defineProperty(Symbol.prototype,"description",{configurable:!0,get:function(){var e=/\((.*)\)/.exec(this.toString());return e?e[1]:void 0}}),Array.prototype.flat||(Array.prototype.flat=function(e,t){return t=this.concat.apply([],this),e>1&&t.some(Array.isArray)?t.flat(e-1):t},Array.prototype.flatMap=function(e,t){return this.map(e,t).flat()}),Promise.prototype.finally||(Promise.prototype.finally=function(e){if("function"!=typeof e)return this.then(e,e);var t=this.constructor||Promise;return this.then(function(r){return t.resolve(e()).then(function(){return r})},function(r){return t.resolve(e()).then(function(){throw r})})}),Object.fromEntries||(Object.fromEntries=function(e){return Array.from(e).reduce(function(e,t){return e[t[0]]=t[1],e},{})})},12409:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"addBasePath",{enumerable:!0,get:function(){return o}});let n=r(60150),u=r(75588);function o(e,t){return(0,u.normalizePathTrailingSlash)((0,n.addPathPrefix)(e,""))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},30930:function(e,t){"use strict";function r(e){var t,r;t=self.__next_s,r=()=>{e()},t&&t.length?t.reduce((e,t)=>{let[r,n]=t;return e.then(()=>new Promise((e,t)=>{let u=document.createElement("script");if(n)for(let e in 
n)"children"!==e&&u.setAttribute(e,n[e]);r?(u.src=r,u.onload=()=>e(),u.onerror=t):n&&(u.innerHTML=n.children,setTimeout(e)),document.head.appendChild(u)}))},Promise.resolve()).catch(e=>{console.error(e)}).then(()=>{r()}):r()}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"appBootstrap",{enumerable:!0,get:function(){return r}}),window.next={version:"13.4.9",appDir:!0},("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},303:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"callServer",{enumerable:!0,get:function(){return u}});let n=r(2353);async function u(e,t){let r=(0,n.getServerActionDispatcher)();if(!r)throw Error("Invariant: missing action dispatcher.");return new Promise((n,u)=>{r({actionId:e,actionArgs:t,resolve:n,reject:u})})}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},13426:function(e,t,r){"use strict";let n,u;Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"hydrate",{enumerable:!0,get:function(){return N}});let o=r(26927),l=r(25909);r(93644);let a=o._(r(93194)),i=l._(r(86006)),c=r(35456),s=r(27268);r(15456);let f=o._(r(59214)),d=r(303),p=r(45080),h=window.console.error;window.console.error=function(){for(var e=arguments.length,t=Array(e),r=0;r{if((0,p.isNextRouterError)(e.error)){e.preventDefault();return}});let _=e=>t=>e(t)+"",y=r.u,b={};r.u=_(e=>encodeURI(b[e]||y(e)));let v=r.k;r.k=_(v);let m=r.miniCssF;r.miniCssF=_(m),self.__next_require__=r,self.__next_chunk_load__=e=>{if(!e)return Promise.resolve();let[t,n]=e.split(":");return b[t]=n,r.e(t)};let g=document,O=()=>{let{pathname:e,search:t}=location;return e+t},P=new TextEncoder,E=!1,R=!1;function j(e){if(0===e[0])n=[];else{if(!n)throw Error("Unexpected server data: missing bootstrap script.");u?u.enqueue(P.encode(e[1])):n.push(e[1])}}let S=function(){u&&!R&&(u.close(),R=!0,n=void 0),E=!0};"loading"===document.readyState?document.addEventListener("DOMContentLoaded",S,!1):S();let T=self.__next_f=self.__next_f||[];T.forEach(j),T.push=j;let M=new Map;function w(e){let{cacheKey:t}=e;i.default.useEffect(()=>{M.delete(t)});let r=function(e){let t=M.get(e);if(t)return t;let r=new ReadableStream({start(e){n&&(n.forEach(t=>{e.enqueue(P.encode(t))}),E&&!R&&(e.close(),R=!0,n=void 0)),u=e}}),o=(0,c.createFromReadableStream)(r,{callServer:d.callServer});return M.set(e,o),o}(t),o=(0,i.use)(r);return o}let C=i.default.Fragment;function x(e){let{children:t}=e;return t}function A(e){return i.default.createElement(w,{...e,cacheKey:O()})}function N(){let e=i.default.createElement(C,null,i.default.createElement(s.HeadManagerContext.Provider,{value:{appDir:!0}},i.default.createElement(x,null,i.default.createElement(A,null)))),t={onRecoverableError:f.default},r="__next_error__"===document.documentElement.id;r?a.default.createRoot(g,t).render(e):i.default.startTransition(()=>a.default.hydrateRoot(g,e,t))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},53333:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0});let 
n=r(30930);(0,n.appBootstrap)(()=>{r(2353),r(49180);let{hydrate:e}=r(13426);e()}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},71002:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"AppRouterAnnouncer",{enumerable:!0,get:function(){return l}});let n=r(86006),u=r(8431),o="next-route-announcer";function l(e){let{tree:t}=e,[r,l]=(0,n.useState)(null);(0,n.useEffect)(()=>{let e=function(){var e;let t=document.getElementsByName(o)[0];if(null==t?void 0:null==(e=t.shadowRoot)?void 0:e.childNodes[0])return t.shadowRoot.childNodes[0];{let e=document.createElement(o);e.style.cssText="position:absolute";let t=document.createElement("div");t.ariaLive="assertive",t.id="__next-route-announcer__",t.role="alert",t.style.cssText="position:absolute;border:0;height:1px;margin:-1px;padding:0;width:1px;clip:rect(0 0 0 0);overflow:hidden;white-space:nowrap;word-wrap:normal";let r=e.attachShadow({mode:"open"});return r.appendChild(t),document.body.appendChild(e),t}}();return l(e),()=>{let e=document.getElementsByTagName(o)[0];(null==e?void 0:e.isConnected)&&document.body.removeChild(e)}},[]);let[a,i]=(0,n.useState)(""),c=(0,n.useRef)();return(0,n.useEffect)(()=>{let e="";if(document.title)e=document.title;else{let t=document.querySelector("h1");t&&(e=t.innerText||t.textContent||"")}void 0!==c.current&&i(e),c.current=e},[t]),r?(0,u.createPortal)(a,r):null}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},34852:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{RSC:function(){return r},ACTION:function(){return n},NEXT_ROUTER_STATE_TREE:function(){return u},NEXT_ROUTER_PREFETCH:function(){return o},NEXT_URL:function(){return l},FETCH_CACHE_HEADER:function(){return a},RSC_CONTENT_TYPE_HEADER:function(){return i},RSC_VARY_HEADER:function(){return c},FLIGHT_PARAMETERS:function(){return s},NEXT_RSC_UNION_QUERY:function(){return f}});let r="RSC",n="Next-Action",u="Next-Router-State-Tree",o="Next-Router-Prefetch",l="Next-Url",a="x-vercel-sc-headers",i="text/x-component",c=r+", "+u+", "+o,s=[[r],[u],[o]],f="_rsc";("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},2353:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{getServerActionDispatcher:function(){return E},urlToUrlWithoutFlightMarker:function(){return R},default:function(){return w}});let n=r(25909),u=n._(r(86006)),o=r(15456),l=r(85426),a=r(74741),i=r(8744),c=r(76173),s=r(18688),f=r(47330),d=r(89343),p=r(30753),h=r(12409),_=r(71002),y=r(22418),b=r(62484),v=r(68792),m=r(75238),g=r(34852),O=new Map,P=null;function E(){return P}function R(e){let t=new URL(e,location.origin);return t.searchParams.delete(g.NEXT_RSC_UNION_QUERY),t.pathname.endsWith("/index.txt")?t.pathname=t.pathname.slice(0,-10):t.pathname=t.pathname.slice(0,-4),t}function j(e){return e.origin!==window.location.origin}function 
S(e){let{tree:t,pushRef:r,canonicalUrl:n,sync:o}=e;return(0,u.useInsertionEffect)(()=>{let e={__NA:!0,tree:t};r.pendingPush&&(0,i.createHrefFromUrl)(new URL(window.location.href))!==n?(r.pendingPush=!1,window.history.pushState(e,"",n)):window.history.replaceState(e,"",n),o()},[t,r,n,o]),null}let T=()=>({status:o.CacheStates.LAZY_INITIALIZED,data:null,subTreeData:null,parallelRoutes:new Map});function M(e){let{buildId:t,initialHead:r,initialTree:n,initialCanonicalUrl:i,children:f,assetPrefix:g,notFound:E,notFoundStyles:R,asNotFound:M}=e,w=(0,u.useMemo)(()=>(0,d.createInitialRouterState)({buildId:t,children:f,initialCanonicalUrl:i,initialTree:n,initialParallelRoutes:O,isServer:!1,location:window.location,initialHead:r}),[t,f,i,n,r]),[{tree:C,cache:x,prefetchCache:A,pushRef:N,focusAndScrollRef:I,canonicalUrl:D,nextUrl:k},F,U]=(0,s.useReducerWithReduxDevtools)(l.reducer,w);(0,u.useEffect)(()=>{O=null},[]);let{searchParams:L,pathname:H}=(0,u.useMemo)(()=>{let e=new URL(D,window.location.href);return{searchParams:e.searchParams,pathname:e.pathname}},[D]),$=(0,u.useCallback)((e,t,r)=>{(0,u.startTransition)(()=>{F({type:a.ACTION_SERVER_PATCH,flightData:t,previousTree:e,overrideCanonicalUrl:r,cache:T(),mutable:{}})})},[F]),W=(0,u.useCallback)((e,t,r,n)=>{let u=new URL((0,h.addBasePath)(e),location.href);return F({type:a.ACTION_NAVIGATE,url:u,isExternalUrl:j(u),locationSearch:location.search,forceOptimisticNavigation:r,shouldScroll:null==n||n,navigateType:t,cache:T(),mutable:{}})},[F]);!function(e,t,r){let n=(0,u.useCallback)(n=>{(0,u.startTransition)(()=>{t({...n,type:a.ACTION_SERVER_ACTION,mutable:{},navigate:r,changeByServerResponse:e})})},[e,t,r]);P=n}($,F,W);let B=(0,u.useMemo)(()=>{let e={back:()=>window.history.back(),forward:()=>window.history.forward(),prefetch:(e,t)=>{if((0,p.isBot)(window.navigator.userAgent))return;let r=new URL((0,h.addBasePath)(e),location.href);j(r)||(0,u.startTransition)(()=>{var e;F({type:a.ACTION_PREFETCH,url:r,kind:null!=(e=null==t?void 0:t.kind)?e:a.PrefetchKind.FULL})})},replace:(e,t)=>{void 0===t&&(t={}),(0,u.startTransition)(()=>{var r;W(e,"replace",!!t.forceOptimisticNavigation,null==(r=t.scroll)||r)})},push:(e,t)=>{void 0===t&&(t={}),(0,u.startTransition)(()=>{var r;W(e,"push",!!t.forceOptimisticNavigation,null==(r=t.scroll)||r)})},refresh:()=>{(0,u.startTransition)(()=>{F({type:a.ACTION_REFRESH,cache:T(),mutable:{},origin:window.location.origin})})},fastRefresh:()=>{throw Error("fastRefresh can only be used in development mode. 
Please use refresh instead.")}};return e},[F,W]);if((0,u.useEffect)(()=>{window.next&&(window.next.router=B)},[B]),N.mpaNavigation){let e=window.location;N.pendingPush?e.assign(D):e.replace(D),(0,u.use)((0,m.createInfinitePromise)())}let Y=(0,u.useCallback)(e=>{let{state:t}=e;if(t){if(!t.__NA){window.location.reload();return}(0,u.startTransition)(()=>{F({type:a.ACTION_RESTORE,url:new URL(window.location.href),tree:t.tree})})}},[F]);(0,u.useEffect)(()=>(window.addEventListener("popstate",Y),()=>{window.removeEventListener("popstate",Y)}),[Y]);let V=(0,u.useMemo)(()=>(0,v.findHeadInCache)(x,C[1]),[x,C]),G=u.default.createElement(y.RedirectBoundary,null,V,x.subTreeData,u.default.createElement(_.AppRouterAnnouncer,{tree:C}));return u.default.createElement(u.default.Fragment,null,u.default.createElement(S,{tree:C,pushRef:N,canonicalUrl:D,sync:U}),u.default.createElement(c.PathnameContext.Provider,{value:H},u.default.createElement(c.SearchParamsContext.Provider,{value:L},u.default.createElement(o.GlobalLayoutRouterContext.Provider,{value:{buildId:t,changeByServerResponse:$,tree:C,focusAndScrollRef:I,nextUrl:k}},u.default.createElement(o.AppRouterContext.Provider,{value:B},u.default.createElement(o.LayoutRouterContext.Provider,{value:{childNodes:x.parallelRoutes,tree:C,url:D}},u.default.createElement(b.NotFoundBoundary,{notFound:E,notFoundStyles:R,asNotFound:M},G)))))))}function w(e){let{globalErrorComponent:t,...r}=e;return u.default.createElement(f.ErrorBoundary,{errorComponent:t},u.default.createElement(M,r))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},90259:function(e,t,r){"use strict";function n(e){}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"clientHookInServerComponentError",{enumerable:!0,get:function(){return n}}),r(26927),r(86006),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},47330:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{ErrorBoundaryHandler:function(){return a},default:function(){return i},ErrorBoundary:function(){return c}});let n=r(26927),u=n._(r(86006)),o=r(4e3),l={error:{fontFamily:'system-ui,"Segoe UI",Roboto,Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji"',height:"100vh",textAlign:"center",display:"flex",flexDirection:"column",alignItems:"center",justifyContent:"center"},text:{fontSize:"14px",fontWeight:400,lineHeight:"28px",margin:"0 8px"}};class a extends u.default.Component{static getDerivedStateFromError(e){return{error:e}}static getDerivedStateFromProps(e,t){return e.pathname!==t.previousPathname&&t.error?{error:null,previousPathname:e.pathname}:{error:t.error,previousPathname:e.pathname}}render(){return this.state.error?u.default.createElement(u.default.Fragment,null,this.props.errorStyles,u.default.createElement(this.props.errorComponent,{error:this.state.error,reset:this.reset})):this.props.children}constructor(e){super(e),this.reset=()=>{this.setState({error:null})},this.state={error:null,previousPathname:this.props.pathname}}}function i(e){let{error:t}=e,r=null==t?void 0:t.digest;return 
u.default.createElement("html",null,u.default.createElement("head",null),u.default.createElement("body",null,u.default.createElement("div",{style:l.error},u.default.createElement("div",null,u.default.createElement("h2",{style:l.text},"Application error: a "+(r?"server":"client")+"-side exception has occurred (see the "+(r?"server logs":"browser console")+" for more information)."),r?u.default.createElement("p",{style:l.text},"Digest: "+r):null))))}function c(e){let{errorComponent:t,errorStyles:r,children:n}=e,l=(0,o.usePathname)();return t?u.default.createElement(a,{pathname:l,errorComponent:t,errorStyles:r},n):u.default.createElement(u.default.Fragment,null,n)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},47308:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{DYNAMIC_ERROR_CODE:function(){return r},DynamicServerError:function(){return n}});let r="DYNAMIC_SERVER_USAGE";class n extends Error{constructor(e){super("Dynamic server usage: "+e),this.digest=r}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},75238:function(e,t){"use strict";let r;function n(){return r||(r=new Promise(()=>{})),r}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createInfinitePromise",{enumerable:!0,get:function(){return n}}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},45080:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"isNextRouterError",{enumerable:!0,get:function(){return o}});let n=r(62951),u=r(14024);function o(e){return e&&e.digest&&((0,u.isRedirectError)(e)||(0,n.isNotFoundError)(e))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},49180:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return E}});let n=r(26927),u=r(25909),o=u._(r(86006)),l=n._(r(8431)),a=r(15456),i=r(52368),c=r(75238),s=r(47330),f=r(50655),d=r(92998),p=r(22418),h=r(62484),_=r(65143),y=r(49101),b=["bottom","height","left","right","top","width","x","y"];function v(e,t){let r=e.getBoundingClientRect();return r.top>=0&&r.top<=t}class m extends o.default.Component{componentDidMount(){this.handlePotentialScroll()}componentDidUpdate(){this.props.focusAndScrollRef.apply&&this.handlePotentialScroll()}render(){return this.props.children}constructor(...e){super(...e),this.handlePotentialScroll=()=>{let{focusAndScrollRef:e,segmentPath:t}=this.props;if(e.apply){var r;if(0!==e.segmentPaths.length&&!e.segmentPaths.some(e=>t.every((t,r)=>(0,f.matchSegment)(t,e[r]))))return;let n=null,u=e.hashFragment;if(u&&(n="top"===u?document.body:null!=(r=document.getElementById(u))?r:document.getElementsByName(u)[0]),n||(n=l.default.findDOMNode(this)),!(n instanceof Element))return;for(;!(n instanceof 
HTMLElement)||function(e){let t=e.getBoundingClientRect();return b.every(e=>0===t[e])}(n);){if(null===n.nextElementSibling)return;n=n.nextElementSibling}e.apply=!1,e.hashFragment=null,e.segmentPaths=[],(0,d.handleSmoothScroll)(()=>{if(u){n.scrollIntoView();return}let e=document.documentElement,t=e.clientHeight;!v(n,t)&&(e.scrollTop=0,v(n,t)||n.scrollIntoView())},{dontForceLayout:!0}),n.focus()}}}}function g(e){let{segmentPath:t,children:r}=e,n=(0,o.useContext)(a.GlobalLayoutRouterContext);if(!n)throw Error("invariant global layout router not mounted");return o.default.createElement(m,{segmentPath:t,focusAndScrollRef:n.focusAndScrollRef},r)}function O(e){let{parallelRouterKey:t,url:r,childNodes:n,childProp:u,segmentPath:l,tree:s,cacheKey:d}=e,p=(0,o.useContext)(a.GlobalLayoutRouterContext);if(!p)throw Error("invariant global layout router not mounted");let{buildId:h,changeByServerResponse:_,tree:y}=p,b=n.get(d);if(u&&null!==u.current&&(b?b.status===a.CacheStates.LAZY_INITIALIZED&&(b.status=a.CacheStates.READY,b.subTreeData=u.current):(b={status:a.CacheStates.READY,data:null,subTreeData:u.current,parallelRoutes:new Map},n.set(d,b))),!b||b.status===a.CacheStates.LAZY_INITIALIZED){let e=function e(t,r){if(t){let[n,u]=t,o=2===t.length;if((0,f.matchSegment)(r[0],n)&&r[1].hasOwnProperty(u)){if(o){let t=e(void 0,r[1][u]);return[r[0],{...r[1],[u]:[t[0],t[1],t[2],"refetch"]}]}return[r[0],{...r[1],[u]:e(t.slice(2),r[1][u])}]}}return r}(["",...l],y);b={status:a.CacheStates.DATA_FETCH,data:(0,i.fetchServerResponse)(new URL(r,location.origin),e,p.nextUrl,h),subTreeData:null,head:b&&b.status===a.CacheStates.LAZY_INITIALIZED?b.head:void 0,parallelRoutes:b&&b.status===a.CacheStates.LAZY_INITIALIZED?b.parallelRoutes:new Map},n.set(d,b)}if(!b)throw Error("Child node should always exist");if(b.subTreeData&&b.data)throw Error("Child node should not have both subTreeData and data");if(b.data){let[e,t]=(0,o.use)(b.data);b.data=null,setTimeout(()=>{(0,o.startTransition)(()=>{_(y,e,t)})}),(0,o.use)((0,c.createInfinitePromise)())}b.subTreeData||(0,o.use)((0,c.createInfinitePromise)());let v=o.default.createElement(a.LayoutRouterContext.Provider,{value:{tree:s[1][t],childNodes:b.parallelRoutes,url:r}},b.subTreeData);return v}function P(e){let{children:t,loading:r,loadingStyles:n,hasLoading:u}=e;return u?o.default.createElement(o.Suspense,{fallback:o.default.createElement(o.default.Fragment,null,n,r)},t):o.default.createElement(o.default.Fragment,null,t)}function E(e){let{parallelRouterKey:t,segmentPath:r,childProp:n,error:u,errorStyles:l,templateStyles:i,loading:c,loadingStyles:d,hasLoading:b,template:v,notFound:m,notFoundStyles:E,asNotFound:R,styles:j}=e,S=(0,o.useContext)(a.LayoutRouterContext);if(!S)throw Error("invariant expected layout router to be mounted");let{childNodes:T,tree:M,url:w}=S,C=T.get(t);C||(C=new Map,T.set(t,C));let x=M[1][t][0],A=n.segment,N=(0,_.getSegmentValue)(x),I=[x];return o.default.createElement(o.default.Fragment,null,j,I.map(e=>{let j=(0,f.matchSegment)(e,A),S=(0,_.getSegmentValue)(e),T=(0,y.createRouterCacheKey)(e);return 
o.default.createElement(a.TemplateContext.Provider,{key:(0,y.createRouterCacheKey)(e,!0),value:o.default.createElement(g,{segmentPath:r},o.default.createElement(s.ErrorBoundary,{errorComponent:u,errorStyles:l},o.default.createElement(P,{hasLoading:b,loading:c,loadingStyles:d},o.default.createElement(h.NotFoundBoundary,{notFound:m,notFoundStyles:E,asNotFound:R},o.default.createElement(p.RedirectBoundary,null,o.default.createElement(O,{parallelRouterKey:t,url:w,tree:M,childNodes:C,childProp:j?n:null,segmentPath:r,cacheKey:T,isActive:N===S}))))))},i,v)}))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},50655:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{matchSegment:function(){return u},canSegmentBeOverridden:function(){return o}});let n=r(24778),u=(e,t)=>"string"==typeof e?"string"==typeof t&&e===t:"string"!=typeof t&&e[0]===t[0]&&e[1]===t[1],o=(e,t)=>{var r;return!Array.isArray(e)&&!!Array.isArray(t)&&(null==(r=(0,n.getSegmentParam)(e))?void 0:r.param)===t[0]};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},4e3:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{ReadonlyURLSearchParams:function(){return p},useSearchParams:function(){return h},usePathname:function(){return _},ServerInsertedHTMLContext:function(){return i.ServerInsertedHTMLContext},useServerInsertedHTML:function(){return i.useServerInsertedHTML},useRouter:function(){return y},useParams:function(){return b},useSelectedLayoutSegments:function(){return v},useSelectedLayoutSegment:function(){return m},redirect:function(){return c.redirect},notFound:function(){return s.notFound}});let n=r(86006),u=r(15456),o=r(76173),l=r(90259),a=r(65143),i=r(73476),c=r(14024),s=r(62951),f=Symbol("internal for urlsearchparams readonly");function d(){return Error("ReadonlyURLSearchParams cannot be modified")}class p{[Symbol.iterator](){return this[f][Symbol.iterator]()}append(){throw d()}delete(){throw d()}set(){throw d()}sort(){throw d()}constructor(e){this[f]=e,this.entries=e.entries.bind(e),this.forEach=e.forEach.bind(e),this.get=e.get.bind(e),this.getAll=e.getAll.bind(e),this.has=e.has.bind(e),this.keys=e.keys.bind(e),this.values=e.values.bind(e),this.toString=e.toString.bind(e)}}function h(){(0,l.clientHookInServerComponentError)("useSearchParams");let e=(0,n.useContext)(o.SearchParamsContext),t=(0,n.useMemo)(()=>e?new p(e):null,[e]);return t}function _(){return(0,l.clientHookInServerComponentError)("usePathname"),(0,n.useContext)(o.PathnameContext)}function y(){(0,l.clientHookInServerComponentError)("useRouter");let e=(0,n.useContext)(u.AppRouterContext);if(null===e)throw Error("invariant expected app router to be mounted");return e}function b(){(0,l.clientHookInServerComponentError)("useParams");let e=(0,n.useContext)(u.GlobalLayoutRouterContext);return e?function e(t,r){void 0===r&&(r={});let n=t[1];for(let t of Object.values(n)){let n=t[0],u=Array.isArray(n),o=u?n[1]:n;!o||o.startsWith("__PAGE__")||(u&&(r[n[0]]=n[1]),r=e(t,r))}return r}(e.tree):null}function v(e){void 
0===e&&(e="children"),(0,l.clientHookInServerComponentError)("useSelectedLayoutSegments");let{tree:t}=(0,n.useContext)(u.LayoutRouterContext);return function e(t,r,n,u){let o;if(void 0===n&&(n=!0),void 0===u&&(u=[]),n)o=t[1][r];else{var l;let e=t[1];o=null!=(l=e.children)?l:Object.values(e)[0]}if(!o)return u;let i=o[0],c=(0,a.getSegmentValue)(i);return!c||c.startsWith("__PAGE__")?u:(u.push(c),e(o,r,!1,u))}(t,e)}function m(e){void 0===e&&(e="children"),(0,l.clientHookInServerComponentError)("useSelectedLayoutSegment");let t=v(e);return 0===t.length?null:t[0]}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},62484:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"NotFoundBoundary",{enumerable:!0,get:function(){return a}});let n=r(26927),u=n._(r(86006)),o=r(4e3);class l extends u.default.Component{static getDerivedStateFromError(e){if((null==e?void 0:e.digest)==="NEXT_NOT_FOUND")return{notFoundTriggered:!0};throw e}static getDerivedStateFromProps(e,t){return e.pathname!==t.previousPathname&&t.notFoundTriggered?{notFoundTriggered:!1,previousPathname:e.pathname}:{notFoundTriggered:t.notFoundTriggered,previousPathname:e.pathname}}render(){return this.state.notFoundTriggered?u.default.createElement(u.default.Fragment,null,u.default.createElement("meta",{name:"robots",content:"noindex"}),this.props.notFoundStyles,this.props.notFound):this.props.children}constructor(e){super(e),this.state={notFoundTriggered:!!e.asNotFound,previousPathname:e.pathname}}}function a(e){let{notFound:t,notFoundStyles:r,asNotFound:n,children:a}=e,i=(0,o.usePathname)();return t?u.default.createElement(l,{pathname:i,notFound:t,notFoundStyles:r,asNotFound:n},a):u.default.createElement(u.default.Fragment,null,a)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},62951:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{notFound:function(){return n},isNotFoundError:function(){return u}});let r="NEXT_NOT_FOUND";function n(){let e=Error(r);throw e.digest=r,e}function u(e){return(null==e?void 0:e.digest)===r}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},22418:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{RedirectErrorBoundary:function(){return i},RedirectBoundary:function(){return c}});let n=r(25909),u=n._(r(86006)),o=r(4e3),l=r(14024);function a(e){let{redirect:t,reset:r,redirectType:n}=e,a=(0,o.useRouter)();return(0,u.useEffect)(()=>{u.default.startTransition(()=>{n===l.RedirectType.push?a.push(t,{}):a.replace(t,{}),r()})},[t,n,r,a]),null}class i extends u.default.Component{static getDerivedStateFromError(e){if((0,l.isRedirectError)(e)){let t=(0,l.getURLFromRedirectError)(e),r=(0,l.getRedirectTypeFromError)(e);return{redirect:t,redirectType:r}}throw e}render(){let{redirect:e,redirectType:t}=this.state;return 
null!==e&&null!==t?u.default.createElement(a,{redirect:e,redirectType:t,reset:()=>this.setState({redirect:null})}):this.props.children}constructor(e){super(e),this.state={redirect:null,redirectType:null}}}function c(e){let{children:t}=e,r=(0,o.useRouter)();return u.default.createElement(i,{router:r},t)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},14024:function(e,t,r){"use strict";var n,u;Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{RedirectType:function(){return n},getRedirectError:function(){return a},redirect:function(){return i},isRedirectError:function(){return c},getURLFromRedirectError:function(){return s},getRedirectTypeFromError:function(){return f}});let o=r(24437),l="NEXT_REDIRECT";function a(e,t){let r=Error(l);r.digest=l+";"+t+";"+e;let n=o.requestAsyncStorage.getStore();return n&&(r.mutableCookies=n.mutableCookies),r}function i(e,t){throw void 0===t&&(t="replace"),a(e,t)}function c(e){if("string"!=typeof(null==e?void 0:e.digest))return!1;let[t,r,n]=e.digest.split(";",3);return t===l&&("replace"===r||"push"===r)&&"string"==typeof n}function s(e){return c(e)?e.digest.split(";",3)[2]:null}function f(e){if(!c(e))throw Error("Not a redirect error");return e.digest.split(";",3)[1]}(u=n||(n={})).push="push",u.replace="replace",("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},92306:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return l}});let n=r(25909),u=n._(r(86006)),o=r(15456);function l(){let e=(0,u.useContext)(o.TemplateContext);return u.default.createElement(u.default.Fragment,null,e)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},68654:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"applyFlightData",{enumerable:!0,get:function(){return l}});let n=r(15456),u=r(90743),o=r(23033);function l(e,t,r,l){void 0===l&&(l=!1);let[a,i,c]=r.slice(-3);return null!==i&&(3===r.length?(t.status=n.CacheStates.READY,t.subTreeData=i,(0,u.fillLazyItemsTillLeafWithHead)(t,e,a,c,l)):(t.status=n.CacheStates.READY,t.subTreeData=e.subTreeData,t.parallelRoutes=new Map(e.parallelRoutes),(0,o.fillCacheWithNewSubTreeData)(t,e,r,l)),!0)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},76031:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"applyRouterStatePatchToTree",{enumerable:!0,get:function(){return function e(t,r,o){let l;let[a,i,,,c]=r;if(1===t.length){let e=u(r,o);return e}let[s,f]=t;if(!(0,n.matchSegment)(s,a))return null;let d=2===t.length;if(d)l=u(i[f],o);else if(null===(l=e(t.slice(2),i[f],o)))return null;let p=[t[0],{...i,[f]:l}];return c&&(p[4]=!0),p}}});let n=r(50655);function 
u(e,t){let[r,o]=e,[l,a]=t;if("__DEFAULT__"===l&&"__DEFAULT__"!==r)return e;if((0,n.matchSegment)(r,l)){let t={};for(let e in o){let r=void 0!==a[e];r?t[e]=u(o[e],a[e]):t[e]=o[e]}for(let e in a)t[e]||(t[e]=a[e]);let n=[r,t];return e[2]&&(n[2]=e[2]),e[3]&&(n[3]=e[3]),e[4]&&(n[4]=e[4]),n}return t}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},41781:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{extractPathFromFlightRouterState:function(){return a},computeChangedPath:function(){return i}});let n=r(47399),u=r(50655),o=e=>"string"==typeof e?e:e[1];function l(e){return e.split("/").reduce((e,t)=>""===t||t.startsWith("(")&&t.endsWith(")")?e:e+"/"+t,"")||"/"}function a(e){var t;let r=Array.isArray(e[0])?e[0][1]:e[0];if("__DEFAULT__"===r||n.INTERCEPTION_ROUTE_MARKERS.some(e=>r.startsWith(e)))return;if(r.startsWith("__PAGE__"))return"";let u=[r],o=null!=(t=e[1])?t:{},i=o.children?a(o.children):void 0;if(void 0!==i)u.push(i);else for(let[e,t]of Object.entries(o)){if("children"===e)continue;let r=a(t);void 0!==r&&u.push(r)}return l(u.join("/"))}function i(e,t){let r=function e(t,r){let[l,i]=t,[c,s]=r,f=o(l),d=o(c);if(n.INTERCEPTION_ROUTE_MARKERS.some(e=>f.startsWith(e)||d.startsWith(e)))return"";if(!(0,u.matchSegment)(l,c)){var p;return null!=(p=a(r))?p:""}for(let t in i)if(s[t]){let r=e(i[t],s[t]);if(null!==r)return o(c)+"/"+r}return null}(e,t);return null==r||"/"===r?r:l(r)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},8744:function(e,t){"use strict";function r(e,t){return void 0===t&&(t=!0),e.pathname+e.search+(t?e.hash:"")}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createHrefFromUrl",{enumerable:!0,get:function(){return r}}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},89343:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createInitialRouterState",{enumerable:!0,get:function(){return a}});let n=r(15456),u=r(8744),o=r(90743),l=r(41781);function a(e){var t;let{buildId:r,initialTree:a,children:i,initialCanonicalUrl:c,initialParallelRoutes:s,isServer:f,location:d,initialHead:p}=e,h={status:n.CacheStates.READY,data:null,subTreeData:i,parallelRoutes:f?new Map:s};return(null===s||0===s.size)&&(0,o.fillLazyItemsTillLeafWithHead)(h,void 0,a,p),{buildId:r,tree:a,cache:h,prefetchCache:new Map,pushRef:{pendingPush:!1,mpaNavigation:!1},focusAndScrollRef:{apply:!1,hashFragment:null,segmentPaths:[]},canonicalUrl:d?(0,u.createHrefFromUrl)(d):c,nextUrl:null!=(t=(0,l.extractPathFromFlightRouterState)(a)||(null==d?void 0:d.pathname))?t:null}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},76486:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createOptimisticTree",{enumerable:!0,get:function(){return 
function e(t,r,u){let o;let[l,a,i,c,s]=r||[null,{}],f=t[0],d=1===t.length,p=null!==l&&(0,n.matchSegment)(l,f),h=Object.keys(a).length>1,_=!r||!p||h,y={};if(null!==l&&p&&(y=a),!d&&!h){let r=e(t.slice(1),y?y.children:null,u||_);o=r}let b=[f,{...y,...o?{children:o}:{}}];return i&&(b[2]=i),!u&&_?b[3]="refetch":p&&c&&(b[3]=c),p&&s&&(b[4]=s),b}}});let n=r(50655);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},7718:function(e,t){"use strict";function r(e){return e.status="pending",e.then(t=>{"pending"===e.status&&(e.status="fulfilled",e.value=t)},t=>{"pending"===e.status&&(e.status="rejected",e.value=t)}),e}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createRecordFromThenable",{enumerable:!0,get:function(){return r}}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},49101:function(e,t){"use strict";function r(e,t){return void 0===t&&(t=!1),Array.isArray(e)?e[0]+"|"+e[1]+"|"+e[2]:t&&e.startsWith("__PAGE__")?"__PAGE__":e}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createRouterCacheKey",{enumerable:!0,get:function(){return r}}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},52368:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"fetchServerResponse",{enumerable:!0,get:function(){return s}});let n=r(35456),u=r(34852),o=r(2353),l=r(303),a=r(74741),i=r(77279);function c(e){return[(0,o.urlToUrlWithoutFlightMarker)(e).toString(),void 0]}async function s(e,t,r,s,f){let d={[u.RSC]:"1",[u.NEXT_ROUTER_STATE_TREE]:encodeURIComponent(JSON.stringify(t))};f===a.PrefetchKind.AUTO&&(d[u.NEXT_ROUTER_PREFETCH]="1"),r&&(d[u.NEXT_URL]=r);let p=(0,i.hexHash)([d[u.NEXT_ROUTER_PREFETCH]||"0",d[u.NEXT_ROUTER_STATE_TREE]].join(","));try{let t=new URL(e);t.pathname.endsWith("/")?t.pathname+="index.txt":t.pathname+=".txt",t.searchParams.set(u.NEXT_RSC_UNION_QUERY,p);let r=await fetch(t,{credentials:"same-origin",headers:d}),a=(0,o.urlToUrlWithoutFlightMarker)(r.url),i=r.redirected?a:void 0,f=r.headers.get("content-type")||"",h=f===u.RSC_CONTENT_TYPE_HEADER;if(h||(h=f.startsWith("text/plain")),!h||!r.ok)return c(a.toString());let[_,y]=await (0,n.createFromFetch)(Promise.resolve(r),{callServer:l.callServer});if(s!==_)return c(r.url);return[y,i]}catch(t){return console.error("Failed to fetch RSC payload. 
Falling back to browser navigation.",t),[e.toString(),void 0]}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},70155:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"fillCacheWithDataProperty",{enumerable:!0,get:function(){return function e(t,r,o,l,a){void 0===a&&(a=!1);let i=o.length<=2,[c,s]=o,f=(0,u.createRouterCacheKey)(s),d=r.parallelRoutes.get(c);if(!d||a&&r.parallelRoutes.size>1)return{bailOptimistic:!0};let p=t.parallelRoutes.get(c);p&&p!==d||(p=new Map(d),t.parallelRoutes.set(c,p));let h=d.get(f),_=p.get(f);if(i){_&&_.data&&_!==h||p.set(f,{status:n.CacheStates.DATA_FETCH,data:l(),subTreeData:null,parallelRoutes:new Map});return}if(!_||!h){_||p.set(f,{status:n.CacheStates.DATA_FETCH,data:l(),subTreeData:null,parallelRoutes:new Map});return}return _===h&&(_={status:_.status,data:_.data,subTreeData:_.subTreeData,parallelRoutes:new Map(_.parallelRoutes)},p.set(f,_)),e(_,h,o.slice(2),l)}}});let n=r(15456),u=r(49101);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},23033:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"fillCacheWithNewSubTreeData",{enumerable:!0,get:function(){return function e(t,r,a,i){let c=a.length<=5,[s,f]=a,d=(0,l.createRouterCacheKey)(f),p=r.parallelRoutes.get(s);if(!p)return;let h=t.parallelRoutes.get(s);h&&h!==p||(h=new Map(p),t.parallelRoutes.set(s,h));let _=p.get(d),y=h.get(d);if(c){y&&y.data&&y!==_||(y={status:n.CacheStates.READY,data:null,subTreeData:a[3],parallelRoutes:_?new Map(_.parallelRoutes):new Map},_&&(0,u.invalidateCacheByRouterState)(y,_,a[2]),(0,o.fillLazyItemsTillLeafWithHead)(y,_,a[2],a[4],i),h.set(d,y));return}y&&_&&(y===_&&(y={status:y.status,data:y.data,subTreeData:y.subTreeData,parallelRoutes:new Map(y.parallelRoutes)},h.set(d,y)),e(y,_,a.slice(2),i))}}});let n=r(15456),u=r(18179),o=r(90743),l=r(49101);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},90743:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"fillLazyItemsTillLeafWithHead",{enumerable:!0,get:function(){return function e(t,r,o,l,a){let i=0===Object.keys(o[1]).length;if(i){t.head=l;return}for(let i in o[1]){let c=o[1][i],s=c[0],f=(0,u.createRouterCacheKey)(s);if(r){let u=r.parallelRoutes.get(i);if(u){let r=new Map(u),o=r.get(f),s=a&&o?{status:o.status,data:o.data,subTreeData:o.subTreeData,parallelRoutes:new Map(o.parallelRoutes)}:{status:n.CacheStates.LAZY_INITIALIZED,data:null,subTreeData:null,parallelRoutes:new Map(null==o?void 0:o.parallelRoutes)};r.set(f,s),e(s,o,c,l,a),t.parallelRoutes.set(i,r);continue}}let d={status:n.CacheStates.LAZY_INITIALIZED,data:null,subTreeData:null,parallelRoutes:new Map},p=t.parallelRoutes.get(i);p?p.set(f,d):t.parallelRoutes.set(i,new Map([[f,d]])),e(d,void 0,c,l,a)}}}});let n=r(15456),u=r(49101);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 
0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},29231:function(e,t){"use strict";var r,n;function u(e){let{kind:t,prefetchTime:r,lastUsedTime:n}=e;return Date.now()<(null!=n?n:r)+3e4?n?"reusable":"fresh":"auto"===t&&Date.now()["children",e]).flat(),p=(0,c.fillCacheWithDataProperty)(f,e.cache,d,()=>(t||(t=(0,o.createRecordFromThenable)((0,u.fetchServerResponse)(r,i,e.nextUrl,e.buildId))),t),!0);if(!(null==p?void 0:p.bailOptimistic))return R.previousTree=e.tree,R.patchedTree=i,R.pendingPush=C,R.hashFragment=M,R.shouldScroll=S,R.scrollableSegments=[],R.cache=f,R.canonicalUrl=w,e.prefetchCache.set((0,a.createHrefFromUrl)(r,!1),{data:Promise.resolve(t),kind:h.PrefetchKind.TEMPORARY,prefetchTime:Date.now(),treeAtTimeOfPrefetch:e.tree,lastUsedTime:Date.now()}),(0,_.handleMutable)(e,R)}if(!A){let t=(0,o.createRecordFromThenable)((0,u.fetchServerResponse)(r,e.tree,e.nextUrl,e.buildId,void 0)),n={data:Promise.resolve(t),kind:h.PrefetchKind.TEMPORARY,prefetchTime:Date.now(),treeAtTimeOfPrefetch:e.tree,lastUsedTime:null};e.prefetchCache.set((0,a.createHrefFromUrl)(r,!1),n),A=n}let N=(0,b.getPrefetchEntryCacheStatus)(A),{treeAtTimeOfPrefetch:I,data:D}=A,[k,F]=(0,l.readRecordValue)(D);if(A.lastUsedTime=Date.now(),"string"==typeof k)return m(e,R,k,C);let U=e.tree,L=e.cache,H=[];for(let t of k){let o=t.slice(0,-4),l=t.slice(-3)[0],a=["",...o],s=(0,f.applyRouterStatePatchToTree)(a,U,l);if(null===s&&(s=(0,f.applyRouterStatePatchToTree)(a,I,l)),null!==s){if((0,p.isNavigatingToNewRootLayout)(U,s))return m(e,R,w,C);let f=(0,y.applyFlightData)(L,E,t,"auto"===A.kind&&N===b.PrefetchCacheEntryStatus.reusable);f||N!==b.PrefetchCacheEntryStatus.stale||(f=function(e,t,r,u,o){let l=!1;e.status=n.CacheStates.READY,e.subTreeData=t.subTreeData,e.parallelRoutes=new Map(t.parallelRoutes);let a=g(u).map(e=>[...r,...e]);for(let r of a){let n=(0,c.fillCacheWithDataProperty)(e,t,r,o);(null==n?void 0:n.bailOptimistic)||(l=!0)}return l}(E,L,o,l,()=>(0,u.fetchServerResponse)(r,U,e.nextUrl,e.buildId)));let h=(0,d.shouldHardNavigate)(a,U);for(let e of(h?(E.status=n.CacheStates.READY,E.subTreeData=L.subTreeData,(0,i.invalidateCacheBelowFlightSegmentPath)(E,L,o),R.cache=E):f&&(R.cache=E),L=E,U=s,g(l))){let t=[...o,...e];"__DEFAULT__"!==t[t.length-1]&&H.push(t)}}}return R.previousTree=e.tree,R.patchedTree=U,R.canonicalUrl=F?(0,a.createHrefFromUrl)(F):w,R.pendingPush=C,R.scrollableSegments=H,R.hashFragment=M,R.shouldScroll=S,(0,_.handleMutable)(e,R)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},72763:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"prefetchReducer",{enumerable:!0,get:function(){return c}});let n=r(8744),u=r(52368),o=r(74741),l=r(7718),a=r(62268),i=r(34852);function c(e,t){(0,a.prunePrefetchCache)(e.prefetchCache);let{url:r}=t;r.searchParams.delete(i.NEXT_RSC_UNION_QUERY);let c=(0,n.createHrefFromUrl)(r,!1),s=e.prefetchCache.get(c);if(s&&(s.kind===o.PrefetchKind.TEMPORARY&&e.prefetchCache.set(c,{...s,kind:t.kind}),!(s.kind===o.PrefetchKind.AUTO&&t.kind===o.PrefetchKind.FULL)))return e;let f=(0,l.createRecordFromThenable)((0,u.fetchServerResponse)(r,e.tree,e.nextUrl,e.buildId,t.kind));return 
e.prefetchCache.set(c,{treeAtTimeOfPrefetch:e.tree,data:f,kind:t.kind,prefetchTime:Date.now(),lastUsedTime:null}),e}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},62268:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"prunePrefetchCache",{enumerable:!0,get:function(){return u}});let n=r(29231);function u(e){for(let[t,r]of e)(0,n.getPrefetchEntryCacheStatus)(r)===n.PrefetchCacheEntryStatus.expired&&e.delete(t)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},49901:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"refreshReducer",{enumerable:!0,get:function(){return p}});let n=r(52368),u=r(7718),o=r(90168),l=r(8744),a=r(76031),i=r(58999),c=r(86664),s=r(14129),f=r(15456),d=r(90743);function p(e,t){let{cache:r,mutable:p,origin:h}=t,_=e.canonicalUrl,y=e.tree,b=JSON.stringify(p.previousTree)===JSON.stringify(y);if(b)return(0,s.handleMutable)(e,p);r.data||(r.data=(0,u.createRecordFromThenable)((0,n.fetchServerResponse)(new URL(_,h),[y[0],y[1],y[2],"refetch"],e.nextUrl,e.buildId)));let[v,m]=(0,o.readRecordValue)(r.data);if("string"==typeof v)return(0,c.handleExternalUrl)(e,p,v,e.pushRef.pendingPush);for(let t of(r.data=null,v)){if(3!==t.length)return console.log("REFRESH FAILED"),e;let[n]=t,u=(0,a.applyRouterStatePatchToTree)([""],y,n);if(null===u)throw Error("SEGMENT MISMATCH");if((0,i.isNavigatingToNewRootLayout)(y,u))return(0,c.handleExternalUrl)(e,p,_,e.pushRef.pendingPush);let o=m?(0,l.createHrefFromUrl)(m):void 0;m&&(p.canonicalUrl=o);let[s,h]=t.slice(-2);null!==s&&(r.status=f.CacheStates.READY,r.subTreeData=s,(0,d.fillLazyItemsTillLeafWithHead)(r,void 0,n,h),p.cache=r,p.prefetchCache=new Map),p.previousTree=y,p.patchedTree=u,p.canonicalUrl=_,y=u}return(0,s.handleMutable)(e,p)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},34520:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"restoreReducer",{enumerable:!0,get:function(){return u}});let n=r(8744);function u(e,t){let{url:r,tree:u}=t,o=(0,n.createHrefFromUrl)(r);return{buildId:e.buildId,canonicalUrl:o,pushRef:e.pushRef,focusAndScrollRef:e.focusAndScrollRef,cache:e.cache,prefetchCache:e.prefetchCache,tree:u,nextUrl:r.pathname}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},87366:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"serverActionReducer",{enumerable:!0,get:function(){return p}});let n=r(303),u=r(34852),o=r(7718),l=r(90168),a=r(35456),i=r(74741),c=r(12409),s=r(8744),f=r(14024);async function d(e,t){let r,{actionId:o,actionArgs:l}=t,i=await (0,a.encodeReply)(l),s=await 
fetch("",{method:"POST",headers:{Accept:u.RSC_CONTENT_TYPE_HEADER,"Next-Action":o,[u.NEXT_ROUTER_STATE_TREE]:JSON.stringify(e.tree),...e.nextUrl?{[u.NEXT_URL]:e.nextUrl}:{}},body:i}),f=s.headers.get("x-action-redirect");try{let e=JSON.parse(s.headers.get("x-action-revalidated")||"[[],0,0]");r={paths:e[0]||[],tag:!!e[1],cookie:e[2]}}catch(e){r={paths:[],tag:!1,cookie:!1}}let d=f?new URL((0,c.addBasePath)(f),window.location.origin):void 0;if(s.headers.get("content-type")===u.RSC_CONTENT_TYPE_HEADER){let e=await (0,a.createFromFetch)(Promise.resolve(s),{callServer:n.callServer});if(f){let[,t]=e;return{actionFlightData:null==t?void 0:t[1],redirectLocation:d,revalidatedParts:r}}{let[t,[,n]]=null!=e?e:[];return{actionResult:t,actionFlightData:n,redirectLocation:d,revalidatedParts:r}}}return{redirectLocation:d,revalidatedParts:r}}function p(e,t){if(t.mutable.serverActionApplied)return e;t.mutable.inFlightServerAction||(t.mutable.previousTree=e.tree,t.mutable.previousUrl=e.canonicalUrl,t.mutable.inFlightServerAction=(0,o.createRecordFromThenable)(d(e,t)));try{var r,n;let{actionResult:u,actionFlightData:a,redirectLocation:c,revalidatedParts:d}=(0,l.readRecordValue)(t.mutable.inFlightServerAction);if(d.tag||d.cookie?e.prefetchCache.clear():d.paths.length>0&&e.prefetchCache.clear(),c){if(a){let n=(0,s.createHrefFromUrl)(c,!1),u=e.prefetchCache.get(n);e.prefetchCache.set(n,{data:(0,o.createRecordFromThenable)(Promise.resolve([a,void 0])),kind:null!=(r=null==u?void 0:u.kind)?r:i.PrefetchKind.TEMPORARY,prefetchTime:Date.now(),treeAtTimeOfPrefetch:t.mutable.previousTree,lastUsedTime:null})}t.reject((0,f.getRedirectError)(c.toString(),f.RedirectType.push))}else{if(a){let r=(0,s.createHrefFromUrl)(new URL(t.mutable.previousUrl,window.location.origin),!1),u=e.prefetchCache.get(r);e.prefetchCache.set((0,s.createHrefFromUrl)(new URL(t.mutable.previousUrl,window.location.origin),!1),{data:(0,o.createRecordFromThenable)(Promise.resolve([a,void 0])),kind:null!=(n=null==u?void 0:u.kind)?n:i.PrefetchKind.TEMPORARY,prefetchTime:Date.now(),treeAtTimeOfPrefetch:t.mutable.previousTree,lastUsedTime:null}),setTimeout(()=>{t.changeByServerResponse(t.mutable.previousTree,a,void 0)})}t.resolve(u)}}catch(e){if("rejected"===e.status)t.reject(e.value);else throw e}return t.mutable.serverActionApplied=!0,e}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},77519:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"serverPatchReducer",{enumerable:!0,get:function(){return c}});let n=r(8744),u=r(76031),o=r(58999),l=r(86664),a=r(68654),i=r(14129);function c(e,t){let{flightData:r,previousTree:c,overrideCanonicalUrl:s,cache:f,mutable:d}=t,p=JSON.stringify(c)===JSON.stringify(e.tree);if(!p)return console.log("TREE MISMATCH"),e;if(d.previousTree)return(0,i.handleMutable)(e,d);if("string"==typeof r)return(0,l.handleExternalUrl)(e,d,r,e.pushRef.pendingPush);let h=e.tree,_=e.cache;for(let t of r){let r=t.slice(0,-4),[i]=t.slice(-3,-2),c=(0,u.applyRouterStatePatchToTree)(["",...r],h,i);if(null===c)throw Error("SEGMENT MISMATCH");if((0,o.isNavigatingToNewRootLayout)(h,c))return(0,l.handleExternalUrl)(e,d,e.canonicalUrl,e.pushRef.pendingPush);let p=s?(0,n.createHrefFromUrl)(s):void 
0;p&&(d.canonicalUrl=p),(0,a.applyFlightData)(_,f,t),d.previousTree=h,d.patchedTree=c,d.cache=f,_=f,h=c}return(0,i.handleMutable)(e,d)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},74741:function(e,t){"use strict";var r,n;Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{PrefetchKind:function(){return r},ACTION_REFRESH:function(){return u},ACTION_NAVIGATE:function(){return o},ACTION_RESTORE:function(){return l},ACTION_SERVER_PATCH:function(){return a},ACTION_PREFETCH:function(){return i},ACTION_FAST_REFRESH:function(){return c},ACTION_SERVER_ACTION:function(){return s}});let u="refresh",o="navigate",l="restore",a="server-patch",i="prefetch",c="fast-refresh",s="server-action";(n=r||(r={})).AUTO="auto",n.FULL="full",n.TEMPORARY="temporary",("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},85426:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"reducer",{enumerable:!0,get:function(){return f}});let n=r(74741),u=r(86664),o=r(77519),l=r(34520),a=r(49901),i=r(72763),c=r(73800),s=r(87366),f=function(e,t){switch(t.type){case n.ACTION_NAVIGATE:return(0,u.navigateReducer)(e,t);case n.ACTION_SERVER_PATCH:return(0,o.serverPatchReducer)(e,t);case n.ACTION_RESTORE:return(0,l.restoreReducer)(e,t);case n.ACTION_REFRESH:return(0,a.refreshReducer)(e,t);case n.ACTION_FAST_REFRESH:return(0,c.fastRefreshReducer)(e,t);case n.ACTION_PREFETCH:return(0,i.prefetchReducer)(e,t);case n.ACTION_SERVER_ACTION:return(0,s.serverActionReducer)(e,t);default:throw Error("Unknown action")}};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},34712:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"shouldHardNavigate",{enumerable:!0,get:function(){return function e(t,r){let[u,o]=r,[l,a]=t;if(!(0,n.matchSegment)(l,u))return!!Array.isArray(l);let i=t.length<=2;return!i&&e(t.slice(2),o[a])}}});let n=r(50655);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},98323:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createSearchParamsBailoutProxy",{enumerable:!0,get:function(){return u}});let n=r(62620);function u(){return new Proxy({},{get(e,t){"string"==typeof t&&(0,n.staticGenerationBailout)("searchParams."+t)}})}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},62620:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"staticGenerationBailout",{enumerable:!0,get:function(){return l}});let n=r(47308),u=r(30094);class o extends Error{constructor(...e){super(...e),this.code="NEXT_STATIC_GEN_BAILOUT"}}let l=(e,t)=>{let 
r=u.staticGenerationAsyncStorage.getStore();if(null==r?void 0:r.forceStatic)return!0;if(null==r?void 0:r.dynamicShouldError){let{dynamic:r="error",link:n}=t||{};throw new o('Page with `dynamic = "'+r+"\"` couldn't be rendered statically because it used `"+e+"`."+(n?" See more info here: "+n:""))}if(r&&(r.revalidate=0),null==r?void 0:r.isStaticGeneration){let t=new n.DynamicServerError(e);throw r.dynamicUsageDescription=e,r.dynamicUsageStack=t.stack,t}return!1};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},58531:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return l}});let n=r(26927),u=n._(r(86006)),o=r(98323);function l(e){let{Component:t,propsForComponent:r}=e,n=(0,o.createSearchParamsBailoutProxy)();return u.default.createElement(t,{searchParams:n,...r})}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},18688:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"useReducerWithReduxDevtools",{enumerable:!0,get:function(){return o}});let n=r(86006);function u(e){if(e instanceof Map){let t={};for(let[r,n]of e.entries()){if("function"==typeof n){t[r]="fn()";continue}if("object"==typeof n&&null!==n){if(n.$$typeof){t[r]=n.$$typeof.toString();continue}if(n._bundlerConfig){t[r]="FlightData";continue}}t[r]=u(n)}return t}if("object"==typeof e&&null!==e){let t={};for(let r in e){let n=e[r];if("function"==typeof n){t[r]="fn()";continue}if("object"==typeof n&&null!==n){if(n.$$typeof){t[r]=n.$$typeof.toString();continue}if(n.hasOwnProperty("_bundlerConfig")){t[r]="FlightData";continue}}t[r]=u(n)}return t}return Array.isArray(e)?e.map(u):e}let o=function(e,t){let r=(0,n.useRef)(),o=(0,n.useRef)();(0,n.useEffect)(()=>{if(!r.current&&!1!==o.current){if(void 0===o.current&&void 0===window.__REDUX_DEVTOOLS_EXTENSION__){o.current=!1;return}return r.current=window.__REDUX_DEVTOOLS_EXTENSION__.connect({instanceId:8e3,name:"next-router"}),r.current&&r.current.init(u(t)),()=>{r.current=void 0}}},[t]);let[l,a]=(0,n.useReducer)((t,n)=>{let o=e(t,n);return r.current&&r.current.send(n,u(o)),o},t),i=(0,n.useCallback)(()=>{r.current&&r.current.send({type:"RENDER_SYNC"},u(l))},[l]);return[l,a,i]};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},75588:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"normalizePathTrailingSlash",{enumerable:!0,get:function(){return o}});let n=r(61402),u=r(74035),o=e=>{if(!e.startsWith("/"))return e;let{pathname:t,query:r,hash:o}=(0,u.parsePath)(e);return""+(0,n.removeTrailingSlash)(t)+r+o};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},59214:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return u}});let n=r(98687);function u(e){let 
t="function"==typeof reportError?reportError:e=>{window.console.error(e)};e.digest!==n.NEXT_DYNAMIC_NO_SSR_CODE&&t(e)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},15456:function(e,t,r){"use strict";var n,u;Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{CacheStates:function(){return n},AppRouterContext:function(){return a},LayoutRouterContext:function(){return i},GlobalLayoutRouterContext:function(){return c},TemplateContext:function(){return s}});let o=r(26927),l=o._(r(86006));(u=n||(n={})).LAZY_INITIALIZED="LAZYINITIALIZED",u.DATA_FETCH="DATAFETCH",u.READY="READY";let a=l.default.createContext(null),i=l.default.createContext(null),c=l.default.createContext(null),s=l.default.createContext(null)},77279:function(e,t){"use strict";function r(e){let t=5381;for(let r=0;r!t||"("===t[0]&&t.endsWith(")")||"@"===t[0]||("page"===t||"route"===t)&&r===n.length-1?e:e+"/"+t,""))}function o(e,t){return t?e.replace(/\.rsc($|\?)/,"$1"):e}},92998:function(e,t){"use strict";function r(e,t){void 0===t&&(t={});let r=document.documentElement,n=r.style.scrollBehavior;r.style.scrollBehavior="auto",t.dontForceLayout||r.getClientRects(),e(),r.style.scrollBehavior=n}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"handleSmoothScroll",{enumerable:!0,get:function(){return r}})},30753:function(e,t){"use strict";function r(e){return/Googlebot|Mediapartners-Google|AdsBot-Google|googleweblight|Storebot-Google|Google-PageRenderer|Bingbot|BingPreview|Slurp|DuckDuckBot|baiduspider|yandex|sogou|LinkedInBot|bitlybot|tumblr|vkShare|quora link preview|facebookexternalhit|facebookcatalog|Twitterbot|applebot|redditbot|Slackbot|Discordbot|WhatsApp|SkypeUriPreview|ia_archiver/i.test(e)}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"isBot",{enumerable:!0,get:function(){return r}})},74035:function(e,t){"use strict";function r(e){let t=e.indexOf("#"),r=e.indexOf("?"),n=r>-1&&(t<0||r-1?{pathname:e.substring(0,n?r:t),query:n?e.substring(r,t>-1?t:void 0):"",hash:t>-1?e.slice(t):""}:{pathname:e,query:"",hash:""}}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"parsePath",{enumerable:!0,get:function(){return r}})},61402:function(e,t){"use strict";function r(e){return e.replace(/\/$/,"")||"/"}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"removeTrailingSlash",{enumerable:!0,get:function(){return r}})},73476:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{ServerInsertedHTMLContext:function(){return o},useServerInsertedHTML:function(){return l}});let n=r(25909),u=n._(r(86006)),o=u.default.createContext(null);function l(e){let t=(0,u.useContext)(o);t&&t(e)}},75862:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createAsyncLocalStorage",{enumerable:!0,get:function(){return o}});let r=Error("Invariant: AsyncLocalStorage accessed in runtime where it is not available");class n{disable(){throw r}getStore(){}run(){throw r}exit(){throw r}enterWith(){throw r}}let u=globalThis.AsyncLocalStorage;function o(){return u?new u:new n}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 
0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},24437:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"requestAsyncStorage",{enumerable:!0,get:function(){return u}});let n=r(75862),u=(0,n.createAsyncLocalStorage)();("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},30094:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"staticGenerationAsyncStorage",{enumerable:!0,get:function(){return u}});let n=r(75862),u=(0,n.createAsyncLocalStorage)();("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},93194:function(e,t,r){"use strict";var n=r(8431);t.createRoot=n.createRoot,t.hydrateRoot=n.hydrateRoot},8431:function(e,t,r){"use strict";!function e(){if("undefined"!=typeof __REACT_DEVTOOLS_GLOBAL_HOOK__&&"function"==typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE)try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(e)}catch(e){console.error(e)}}(),e.exports=r(42614)},82672:function(e,t,r){"use strict";/** - * @license React - * react-server-dom-webpack-client.browser.production.min.js - * - * Copyright (c) Meta Platforms, Inc. and affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var n=r(8431),u=r(86006),o={stream:!0},l=new Map;function a(e){var t=globalThis.__next_require__(e);return"function"!=typeof t.then||"fulfilled"===t.status?null:(t.then(function(e){t.status="fulfilled",t.value=e},function(e){t.status="rejected",t.reason=e}),t)}function i(){}var c=n.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.Dispatcher,s=Symbol.for("react.element"),f=Symbol.for("react.lazy"),d=Symbol.for("react.default_value"),p=Symbol.iterator,h=Array.isArray,_=new WeakMap,y=u.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.ContextRegistry;function b(e,t,r,n){this.status=e,this.value=t,this.reason=r,this._response=n}function v(e){switch(e.status){case"resolved_model":j(e);break;case"resolved_module":S(e)}switch(e.status){case"fulfilled":return e.value;case"pending":case"blocked":throw e;default:throw e.reason}}function m(e,t){for(var r=0;rd?(h=d,d=3,f++):(h=0,d=3);continue;case 2:44===(v=s[f++])?d=4:_=_<<4|(96s.length&&(v=-1)}var m=s.byteOffset+f;if(-1>>1,u=e[n];if(0>>1;no(i,r))co(s,i)?(e[n]=s,e[c]=r,n=c):(e[n]=i,e[a]=r,n=a);else if(co(s,r))e[n]=s,e[c]=r,n=c;else break}}return t}function o(e,t){var r=e.sortIndex-t.sortIndex;return 0!==r?r:e.id-t.id}if(t.unstable_now=void 0,"object"==typeof performance&&"function"==typeof performance.now){var l,a=performance;t.unstable_now=function(){return a.now()}}else{var i=Date,c=i.now();t.unstable_now=function(){return i.now()-c}}var s=[],f=[],d=1,p=null,h=3,_=!1,y=!1,b=!1,v="function"==typeof setTimeout?setTimeout:null,m="function"==typeof clearTimeout?clearTimeout:null,g="undefined"!=typeof setImmediate?setImmediate:null;function O(e){for(var t=n(f);null!==t;){if(null===t.callback)u(f);else if(t.startTime<=e)u(f),t.sortIndex=t.expirationTime,r(s,t);else break;t=n(f)}}function P(e){if(b=!1,O(e),!y){if(null!==n(s))y=!0,N(E);else{var t=n(f);null!==t&&I(P,t.startTime-e)}}}function 
E(e,r){y=!1,b&&(b=!1,m(S),S=-1),_=!0;var o=h;try{e:{for(O(r),p=n(s);null!==p&&(!(p.expirationTime>r)||e&&!w());){var l=p.callback;if("function"==typeof l){p.callback=null,h=p.priorityLevel;var a=l(p.expirationTime<=r);if(r=t.unstable_now(),"function"==typeof a){p.callback=a,O(r);var i=!0;break e}p===n(s)&&u(s),O(r)}else u(s);p=n(s)}if(null!==p)i=!0;else{var c=n(f);null!==c&&I(P,c.startTime-r),i=!1}}return i}finally{p=null,h=o,_=!1}}"undefined"!=typeof navigator&&void 0!==navigator.scheduling&&void 0!==navigator.scheduling.isInputPending&&navigator.scheduling.isInputPending.bind(navigator.scheduling);var R=!1,j=null,S=-1,T=5,M=-1;function w(){return!(t.unstable_now()-Me||125l?(e.sortIndex=o,r(f,e),null===n(s)&&e===n(f)&&(b?(m(S),S=-1):b=!0,I(P,o-l))):(e.sortIndex=a,r(s,e),y||_||(y=!0,N(E))),e},t.unstable_shouldYield=w,t.unstable_wrapCallback=function(e){var t=h;return function(){var r=h;h=t;try{return e.apply(this,arguments)}finally{h=r}}}},26183:function(e,t,r){"use strict";e.exports=r(24248)},24778:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"getSegmentParam",{enumerable:!0,get:function(){return u}});let n=r(47399);function u(e){let t=n.INTERCEPTION_ROUTE_MARKERS.find(t=>e.startsWith(t));return(t&&(e=e.slice(t.length)),e.startsWith("[[...")&&e.endsWith("]]"))?{type:"optional-catchall",param:e.slice(5,-2)}:e.startsWith("[...")&&e.endsWith("]")?{type:"catchall",param:e.slice(4,-1)}:e.startsWith("[")&&e.endsWith("]")?{type:"dynamic",param:e.slice(1,-1)}:null}},47399:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{INTERCEPTION_ROUTE_MARKERS:function(){return u},isInterceptionRouteAppPath:function(){return o},extractInterceptionRouteInformation:function(){return l}});let n=r(24241),u=["(..)(..)","(.)","(..)","(...)"];function o(e){return void 0!==e.split("/").find(e=>u.find(t=>e.startsWith(t)))}function l(e){let t,r,o;for(let n of e.split("/"))if(r=u.find(e=>n.startsWith(e))){[t,o]=e.split(r,2);break}if(!t||!r||!o)throw Error(`Invalid interception route: ${e}. Must be in the format //(..|...|..)(..)/`);switch(t=(0,n.normalizeAppPath)(t),r){case"(.)":o="/"===t?`/${o}`:t+"/"+o;break;case"(..)":if("/"===t)throw Error(`Invalid interception route: ${e}. Cannot use (..) marker at the root level, use (.) instead.`);o=t.split("/").slice(0,-1).concat(o).join("/");break;case"(...)":o="/"+o;break;case"(..)(..)":let l=t.split("/");if(l.length<=2)throw Error(`Invalid interception route: ${e}. Cannot use (..)(..) 
marker at the root level or one level up.`);o=l.slice(0,-2).concat(o).join("/");break;default:throw Error("Invariant: unexpected marker")}return{interceptingRoute:t,interceptedRoute:o}}},26927:function(e,t,r){"use strict";function n(e){return e&&e.__esModule?e:{default:e}}r.r(t),r.d(t,{_:function(){return n},_interop_require_default:function(){return n}})},25909:function(e,t,r){"use strict";function n(e){if("function"!=typeof WeakMap)return null;var t=new WeakMap,r=new WeakMap;return(n=function(e){return e?r:t})(e)}function u(e,t){if(!t&&e&&e.__esModule)return e;if(null===e||"object"!=typeof e&&"function"!=typeof e)return{default:e};var r=n(t);if(r&&r.has(e))return r.get(e);var u={},o=Object.defineProperty&&Object.getOwnPropertyDescriptor;for(var l in e)if("default"!==l&&Object.prototype.hasOwnProperty.call(e,l)){var a=o?Object.getOwnPropertyDescriptor(e,l):null;a&&(a.get||a.set)?Object.defineProperty(u,l,a):u[l]=e[l]}return u.default=e,r&&r.set(e,u),u}r.r(t),r.d(t,{_:function(){return u},_interop_require_wildcard:function(){return u}})}}]); \ No newline at end of file diff --git a/spaces/taesiri/ChatGPT-ImageCaptioner/tools/get_imagenet_21k_full_tar_json.py b/spaces/taesiri/ChatGPT-ImageCaptioner/tools/get_imagenet_21k_full_tar_json.py deleted file mode 100644 index e7127440030297812a9f4df38cfd6b4cba340c39..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ChatGPT-ImageCaptioner/tools/get_imagenet_21k_full_tar_json.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import argparse -import json -import numpy as np -import pickle -import io -import gzip -import sys -import time -from nltk.corpus import wordnet -from tqdm import tqdm -import operator -import torch - -sys.path.insert(0, 'third_party/CenterNet2/projects/CenterNet2/') -sys.path.insert(0, 'third_party/Deformable-DETR') -from detic.data.tar_dataset import DiskTarDataset, _TarDataset - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("--imagenet_dir", default='datasets/imagenet/ImageNet-21k/') - parser.add_argument("--tarfile_path", default='datasets/imagenet/metadata-22k/tar_files.npy') - parser.add_argument("--tar_index_dir", default='datasets/imagenet/metadata-22k/tarindex_npy') - parser.add_argument("--out_path", default='datasets/imagenet/annotations/imagenet-22k_image_info.json') - parser.add_argument("--workers", default=16, type=int) - args = parser.parse_args() - - - start_time = time.time() - print('Building dataset') - dataset = DiskTarDataset(args.tarfile_path, args.tar_index_dir) - end_time = time.time() - print(f"Took {end_time-start_time} seconds to make the dataset.") - print(f"Have {len(dataset)} samples.") - print('dataset', dataset) - - - tar_files = np.load(args.tarfile_path) - categories = [] - for i, tar_file in enumerate(tar_files): - wnid = tar_file[-13:-4] - synset = wordnet.synset_from_pos_and_offset('n', int(wnid[1:])) - synonyms = [x.name() for x in synset.lemmas()] - category = { - 'id': i + 1, - 'synset': synset.name(), - 'name': synonyms[0], - 'def': synset.definition(), - 'synonyms': synonyms, - } - categories.append(category) - print('categories', len(categories)) - - data_loader = torch.utils.data.DataLoader( - dataset, batch_size=1, shuffle=False, - num_workers=args.workers, - collate_fn=operator.itemgetter(0), - ) - images = [] - for img, label, index in tqdm(data_loader): - if label == -1: - continue - image = { - 'id': int(index) + 1, - 'pos_category_ids': [int(label) + 1], - 'height': int(img.height), - 
'width': int(img.width), - 'tar_index': int(index), - } - images.append(image) - - data = {'categories': categories, 'images': images, 'annotations': []} - try: - for k, v in data.items(): - print(k, len(v)) - print('Saving to ', args.out_path) - json.dump(data, open(args.out_path, 'w')) - except: - pass - import pdb; pdb.set_trace() - diff --git a/spaces/taneemishere/html-code-generation-from-images-with-deep-neural-networks/main_program.py b/spaces/taneemishere/html-code-generation-from-images-with-deep-neural-networks/main_program.py deleted file mode 100644 index bf9ec602b215f5125d44e7e6071989a2fefaae88..0000000000000000000000000000000000000000 --- a/spaces/taneemishere/html-code-generation-from-images-with-deep-neural-networks/main_program.py +++ /dev/null @@ -1,89 +0,0 @@ -from __future__ import absolute_import -from __future__ import print_function - -__author__ = 'Taneem Jan, taneemishere.github.io' - -import os.path -from os.path import basename - -from classes.Sampler import * -from classes.model.Main_Model import * - - -def dsl_code_generation(input_image): - trained_weights_path = "classes/model/bin" - trained_model_name = "Main_Model" - input_path = input_image - output_path = "data/output/" - search_method = "greedy" - meta_dataset = np.load("{}/meta_dataset.npy".format(trained_weights_path), allow_pickle=True) - input_shape = meta_dataset[0] - output_size = meta_dataset[1] - - model = Main_Model(input_shape, output_size, trained_weights_path) - model.load(trained_model_name) - - sampler = Sampler(trained_weights_path, input_shape, output_size, CONTEXT_LENGTH) - - file_name = 'input_image_from_interface.png' - file_name = basename(file_name)[:basename(file_name).find(".")] - evaluation_img = Utils.get_preprocessed_img(input_path, IMAGE_SIZE) - - if search_method == "greedy": - result, _ = sampler.predict_greedy(model, np.array([evaluation_img])) - print("Result greedy: \n {}".format(result)) - - with open("{}/{}.gui".format(output_path, file_name), 'w') as out_f: - out_f.write(result.replace(START_TOKEN, "").replace(END_TOKEN, "")) - - return file_name, output_path - - -def compile_gui(file_path, filename): - from os.path import basename - from compiler.Utils import Utils - from compiler.Compiler import Compiler - - input_path = (file_path + filename) - - # remove the path - file_ = os.path.basename(input_path) - # remove the extension - file_ = os.path.splitext(file_)[0] - # add the extension of gui - file_ = "data/output/" + file_ + ".gui" - - input_file = file_ - - FILL_WITH_RANDOM_TEXT = True - TEXT_PLACE_HOLDER = "[]" - - dsl_path = "compiler/assets/web-dsl-mapping.json" - compiler = Compiler(dsl_path) - - def render_content_with_text(key, value): - if FILL_WITH_RANDOM_TEXT: - if key.find("btn") != -1: - value = value.replace(TEXT_PLACE_HOLDER, Utils.get_random_text()) - elif key.find("title") != -1: - value = value.replace(TEXT_PLACE_HOLDER, Utils.get_random_text(length_text=5, space_number=0)) - elif key.find("text") != -1: - value = value.replace(TEXT_PLACE_HOLDER, - Utils.get_random_text(length_text=56, space_number=7, with_upper_case=False)) - return value - - file_uid = basename(input_file)[:basename(input_file).find(".")] - path = input_file[:input_file.find(file_uid)] - - input_file_path = "{}{}.gui".format(path, file_uid) - output_file_path = "{}{}.html".format(path, file_uid) - - html_code = compiler.compile(input_file_path, output_file_path, rendering_function=render_content_with_text) - print("Generated code is compiled..!!") - return html_code - - -def 
main_method(input_image_from_interface): - file_name, file_output_path = dsl_code_generation(input_image_from_interface) - result = compile_gui(file_output_path, file_name) - return result diff --git a/spaces/taurusduan/bing/Dockerfile b/spaces/taurusduan/bing/Dockerfile deleted file mode 100644 index c54f2bc9baaef5d55e4cb9fd02da5e6f011ccfc0..0000000000000000000000000000000000000000 --- a/spaces/taurusduan/bing/Dockerfile +++ /dev/null @@ -1,37 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app -# 切换到特定版本 -RUN cd /workspace/app && git reset --hard 922b8c47d2d5c6e77137f29b78c8da3de95be841 - -# 设置工作目录为之前克隆的项目目录 -WORKDIR /workspace/app - -# 编译 go 项目。-ldflags="-s -w" 是为了减少编译后的二进制大小 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像作为运行时的基础镜像 -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像中 -COPY --from=builder /workspace/app/go-proxy-bingai . - -# 设置环境变量,此处为随机字符 -ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQWYtX5rG6bE3fZ4iO" - - -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动时运行的命令 -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/tbdaox/roopUn/Dockerfile b/spaces/tbdaox/roopUn/Dockerfile deleted file mode 100644 index c63c5e05de4393e7e983584fb5957e62fb643019..0000000000000000000000000000000000000000 --- a/spaces/tbdaox/roopUn/Dockerfile +++ /dev/null @@ -1,43 +0,0 @@ -FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04 -# Fully WORKS -ARG SERVING_PORT=8080 -ENV SERVING_PORT=$SERVING_PORT -ARG DEBIAN_FRONTEND=noninteractive -ENV TZ=Etc/UTC -RUN apt update && \ - apt install -y python3-pip git libgl1 libglib2.0-0 python3-opencv ffmpeg nvidia-cuda-toolkit - -WORKDIR / - -RUN git clone https://github.com/Zebraslive/roop-unleashed.git - -WORKDIR /roop-unleashed - -RUN mv config_colab.yaml config.yaml -COPY requirements.txt . - -RUN pip install --no-cache-dir --upgrade pip -# RUN pip install --no-cache-dir -r requirements.txt -RUN pip3 install --extra-index-url https://download.pytorch.org/whl/cu118 \ - numpy==1.24.2 \ - gradio==3.44.2 \ - opencv-python==4.8.0.76 \ - onnx==1.14.1 \ - insightface==0.7.3 \ - psutil==5.9.5 \ - pillow==10.0.1 \ - torch==2.0.1+cu118 \ - torchvision==0.15.2+cu118 \ - onnxruntime-gpu==1.16.1 \ - protobuf==4.23.2 \ - tqdm==4.66.1 \ - ftfy \ - regex \ - pyvirtualcam -# RUN pip install torch - -# Expose the serving port. -EXPOSE $SERVING_PORT - -# Run the server to handle inference requests. -CMD python3 -u /roop-unleashed/run.py \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/3d Virtual Sex Simulator Android Applicationstrmdsf.md b/spaces/terfces0erbo/CollegeProjectV2/3d Virtual Sex Simulator Android Applicationstrmdsf.md deleted file mode 100644 index 39c1e22022dcb7ab101d96d1baf3bbe42dc466a3..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/3d Virtual Sex Simulator Android Applicationstrmdsf.md +++ /dev/null @@ -1,7 +0,0 @@ - -

    The 3d virtual sex simulator android applicationstrmdsf is a FREE and safe online dating sites hot sites erotic amp bdsm category. TubeNymphette - Only TubeNymphette is the worlds top Nymphethet Porn Tube, for only japanese girls and japanese teen girls. 16:28, 3 d 3d Virtual Sex Simulator Android Applicationstrmdsf.... The best of that stuff is kind of obvious, but there is more than one option to choose from, and you probably haven't tried enough to know which one is right. boblochness girllatinamore 9781365111358 3d3 nudiseshilpa... pov samodio xxxtammy freibierkrug 177092 cardiomind... cwto pussyroyal 178604 girlhardcorehdteencumshot gamehentai... essiefleming kotoki byfemdom applicationstrmdsf 2003xml mo4boy.... Moto Racer 1 Game ehaweblanq 31Jul2020... 3d Virtual Sex Simulator Android Applicationstrmdsf [EXCLUSIVE] 29Jul2020...

    -

    3d Virtual Sex Simulator Android Applicationstrmdsf. Virtual Sex Simulator android applicationstrmdsf i. Breitling Chrono X153706 manual training at Versailles on December 16, 2001. Bellerive F c10 SE Auto. Size: 955. 692x4e or 56 mg on APs for alpha-1-antitrypsin deficiency. Applicationstrmdsf. Convert Phone Projector to Air Display with Bluetooth ™ Free Software. Right click on the program....com/phelahcuson/post/3d-virtual-sex-simulator-android-applicationstrmdsf

    -

    3d Virtual Sex Simulator Android Applicationstrmdsf


    Download ››› https://bytlly.com/2uGk11



    -

    https://www.theguideus.com/drawing-apps-android/. https://coub.com/stories/3344896-__full__-3d-virtual-sex-simulator-android-applicationstrmdsf. This insta post simulator is only for educational purpose and for students to..com/phelahcuson/post/3d-virtual-sex-simulator-android-applicationstrmdsf

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Intericad T6 Full Cracked Internetinstmank UPD.md b/spaces/terfces0erbo/CollegeProjectV2/Intericad T6 Full Cracked Internetinstmank UPD.md deleted file mode 100644 index 91f19fda5dc2bd9f33ce0be1b158f69d4bacf7e1..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Intericad T6 Full Cracked Internetinstmank UPD.md +++ /dev/null @@ -1,40 +0,0 @@ - -

    Intericad T6 Full Cracked Internetinstmank: A Powerful Software for Interior Design

    -

    If you are looking for a software that can help you create stunning and realistic interior designs, you might want to check out Intericad T6 Full Cracked Internetinstmank. This software is a professional and comprehensive solution that allows you to design, render and visualize your projects in 3D. You can also use it to create floor plans, furniture layouts, lighting effects, materials, textures and more.

    -

    What is Intericad T6 Full Cracked Internetinstmank?

    -

Intericad T6 Full Cracked Internetinstmank is a cracked version of Intericad T6, a program developed by YFCAD Software Co., Ltd. Intericad T6 is one of the most advanced and user-friendly interior design packages on the market. It has a rich library of over 10,000 models and materials, as well as a cloud rendering service that can produce high-quality images in minutes.

    -

    intericad t6 full cracked internetinstmank


    Download Ziphttps://bytlly.com/2uGjQD



    -

However, Intericad T6 is not cheap. It costs around $3,000 for a single license, which may be too expensive for some users. That's why some people look for a cracked version of Intericad T6 on the internet, such as Intericad T6 Full Cracked Internetinstmank. This version claims to offer the same features and functions as the original Intericad T6, but without the need to pay for or activate the software.

    -

    How to Download and Install Intericad T6 Full Cracked Internetinstmank?

    -

    There are many websites that claim to offer Intericad T6 Full Cracked Internetinstmank for free download. However, you should be careful when downloading any cracked software from unknown sources, as they might contain viruses, malware or spyware that can harm your computer or steal your personal information.

    -

    One of the websites that claims to provide Intericad T6 Full Cracked Internetinstmank is AAC Itta. This website offers a direct download link for the software, as well as a step-by-step guide on how to install it. According to the website, you just need to follow these steps:

    -
      -
    • Download Intericad T6 Full Cracked Internetinstmank from the link provided.
    • -
    • Extract the zip file using WinRAR or any other file extractor.
    • -
    • Run the setup file and follow the instructions.
    • -
    • Copy the crack file from the crack folder and paste it into the installation directory.
    • -
    • Run the software and enjoy.
    • -
    -

    What are the Benefits of Using Intericad T6 Full Cracked Internetinstmank?

    -

    Using Intericad T6 Full Cracked Internetinstmank can have some benefits for users who want to try out Intericad T6 without spending money. Some of these benefits are:

    -
      -
    • You can access all the features and functions of Intericad T6 without any limitations or restrictions.
    • -
    • You can create professional and realistic interior designs with ease and speed.
    • -
    • You can save money on buying or renting Intericad T6.
    • -
    • You can use Intericad T6 offline without needing an internet connection or a cloud rendering service.
    • -
    -

    What are the Risks of Using Intericad T6 Full Cracked Internetinstmank?

    -

    However, using Intericad T6 Full Cracked Internetinstmank also comes with some risks and drawbacks that you should be aware of before downloading it. Some of these risks are:

    -
      -
    • You might violate the intellectual property rights of YFCAD Software Co., Ltd., which is illegal and unethical.
    • -
    • You might expose your computer to viruses, malware or spyware that can damage your system or compromise your security.
    • -
    • You might encounter errors, bugs or crashes that can affect your work or cause data loss.
    • -
    • You might not receive any updates, support or customer service from YFCAD Software Co., Ltd.
    • -
    • You might miss out on some features or functions that are only available in the latest version of Intericad T6.
    • -
    -

    Conclusion

    -

    Intericad T6 Full Cracked Internetinstmank is a software that can help you create amazing interior designs in 3D. However, it is also a cracked version of Intericad T6 that might have some legal, ethical and technical issues. Therefore, you should weigh the pros and cons of using Intericad T6 Full Cracked Internetinstmank before downloading it from any website. Alternatively, you can also try out the official trial version of Intericad T6 from YFCAD Software Co., Ltd., which offers a 30-day free trial with full functionality.

    -


    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/teticio/audio-diffusion/audiodiffusion/audio_encoder.py b/spaces/teticio/audio-diffusion/audiodiffusion/audio_encoder.py deleted file mode 100644 index 6561a7231c4d39f7aacdec849d06049c30ddc75e..0000000000000000000000000000000000000000 --- a/spaces/teticio/audio-diffusion/audiodiffusion/audio_encoder.py +++ /dev/null @@ -1,107 +0,0 @@ -import numpy as np -import torch -from diffusers import ConfigMixin, Mel, ModelMixin -from torch import nn - - -class SeparableConv2d(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size): - super(SeparableConv2d, self).__init__() - self.depthwise = nn.Conv2d( - in_channels, - in_channels, - kernel_size=kernel_size, - groups=in_channels, - bias=False, - padding=1, - ) - self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=True) - - def forward(self, x): - out = self.depthwise(x) - out = self.pointwise(out) - return out - - -class ConvBlock(nn.Module): - def __init__(self, in_channels, out_channels, dropout_rate): - super(ConvBlock, self).__init__() - self.sep_conv = SeparableConv2d(in_channels, out_channels, (3, 3)) - self.leaky_relu = nn.LeakyReLU(0.2) - self.batch_norm = nn.BatchNorm2d(out_channels, eps=0.001, momentum=0.01) - self.max_pool = nn.MaxPool2d((2, 2)) - self.dropout = nn.Dropout(dropout_rate) - - def forward(self, x): - x = self.sep_conv(x) - x = self.leaky_relu(x) - x = self.batch_norm(x) - x = self.max_pool(x) - x = self.dropout(x) - return x - - -class DenseBlock(nn.Module): - def __init__(self, in_features, out_features, dropout_rate): - super(DenseBlock, self).__init__() - self.flatten = nn.Flatten() - self.dense = nn.Linear(in_features, out_features) - self.leaky_relu = nn.LeakyReLU(0.2) - self.batch_norm = nn.BatchNorm1d(out_features, eps=0.001, momentum=0.01) - self.dropout = nn.Dropout(dropout_rate) - - def forward(self, x): - x = self.flatten(x.permute(0, 2, 3, 1)) - x = self.dense(x) - x = self.leaky_relu(x) - x = self.batch_norm(x) - x = self.dropout(x) - return x - - -class AudioEncoder(ModelMixin, ConfigMixin): - def __init__(self): - super().__init__() - self.mel = Mel( - x_res=216, - y_res=96, - sample_rate=22050, - n_fft=2048, - hop_length=512, - top_db=80, - ) - self.conv_blocks = nn.ModuleList([ConvBlock(1, 32, 0.2), ConvBlock(32, 64, 0.3), ConvBlock(64, 128, 0.4)]) - self.dense_block = DenseBlock(41472, 1024, 0.5) - self.embedding = nn.Linear(1024, 100) - - def forward(self, x): - for conv_block in self.conv_blocks: - x = conv_block(x) - x = self.dense_block(x) - x = self.embedding(x) - return x - - @torch.no_grad() - def encode(self, audio_files): - self.eval() - y = [] - for audio_file in audio_files: - self.mel.load_audio(audio_file) - x = [ - np.expand_dims( - np.frombuffer(self.mel.audio_slice_to_image(slice).tobytes(), dtype="uint8").reshape( - (self.mel.y_res, self.mel.x_res) - ) - / 255, - axis=0, - ) - for slice in range(self.mel.get_number_of_slices()) - ] - y += [torch.mean(self(torch.Tensor(x)), dim=0)] - return torch.stack(y) - - -# from diffusers import Mel -# from audiodiffusion.audio_encoder import AudioEncoder -# audio_encoder = AudioEncoder.from_pretrained("teticio/audio-encoder") -# audio_encoder.encode(['/home/teticio/Music/liked/Agua Re - Holy Dance - Large Sound Mix.mp3']) diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Auto Clicker Download 2.0 The Best Free Tool for Mouse Automation.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Auto Clicker Download 2.0 The Best Free Tool for 
Mouse Automation.md deleted file mode 100644 index 50867e5d344b22b17cbe30a7f2dfb6c8db9c9edc..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Auto Clicker Download 2.0 The Best Free Tool for Mouse Automation.md +++ /dev/null @@ -1,107 +0,0 @@ -
    -

    Auto Clicker Download 2.0: How to Automate Your Mouse Clicks

    -

Do you want to automate your mouse clicks and save yourself some time and effort? If so, you need an auto clicker: software that simulates mouse clicks automatically according to your settings. In this article, we will show you how to download and install one of the best auto clickers for Windows and Android devices, and how to use it for various tasks.

    -

    What is an auto clicker and why do you need one?

    -

    An auto clicker is a software that simulates mouse clicks automatically

    -

An auto clicker is software that simulates mouse clicks automatically according to your settings. It can perform single or double clicks, left or right clicks, or any combination of clicks at any speed and frequency. You can also assign hotkeys to start or stop the auto clicking with a simple press of a key.
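To make that concrete, here is a rough sketch of what an auto clicker does under the hood. This is not OP Auto Clicker's own code — it is just an illustrative Python script that assumes the third-party pyautogui package — but it shows the same idea of repeated clicks with a chosen button, speed, and duration.

```python
# Minimal auto-clicker sketch (illustrative only, not OP Auto Clicker's source).
# Assumes the third-party "pyautogui" package: pip install pyautogui
import time
import pyautogui

CLICK_INTERVAL = 0.05   # seconds between clicks (about 20 clicks per second)
CLICK_BUTTON = "left"   # "left", "right", or "middle"
DOUBLE_CLICK = False    # set True to send double clicks instead

def auto_click(duration=10.0):
    """Click at the current cursor position for `duration` seconds."""
    end = time.time() + duration
    while time.time() < end:
        if DOUBLE_CLICK:
            pyautogui.doubleClick(button=CLICK_BUTTON)
        else:
            pyautogui.click(button=CLICK_BUTTON)
        time.sleep(CLICK_INTERVAL)

if __name__ == "__main__":
    # pyautogui's fail-safe is on by default: slam the mouse into a screen
    # corner to abort the loop with an exception.
    auto_click(duration=10.0)
```

Dedicated tools like OP Auto Clicker wrap this kind of loop in a friendly interface, so you normally never have to write it yourself.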

    -

    auto clicker download 2.0


    Download ✸✸✸ https://bltlly.com/2uOlSZ



    -

    Auto clickers can be useful for various purposes, such as gaming, testing, or web browsing

    -

    Auto clickers can be useful for various purposes, such as gaming, testing, or web browsing. For example, you can use an auto clicker to:

    -
      -
    • Click faster and more accurately in games that require rapid clicking, such as Cookie Clicker, Minecraft, Roblox, or Runescape
    • -
    • Test the functionality and performance of websites or applications by simulating user clicks
    • -
    • Browse the web more efficiently by skipping ads, closing pop-ups, or filling forms with a single click
    • -
    -

    How to download and install an auto clicker for Windows

    -

    There are many auto clickers available online, but some of them may be unsafe or outdated

    -

    There are many auto clickers available online, but some of them may be unsafe or outdated. Some auto clickers may contain viruses or malware that can harm your computer or steal your personal information. Some auto clickers may not work properly with the latest versions of Windows or Android operating systems. Some auto clickers may have limited features or options that may not suit your needs or preferences.

    -

    One of the best and most reliable auto clickers is OP Auto Clicker, which is compatible with Windows Vista 6.0, Windows 7, Windows 8/8.1, Windows 10, Windows 10S, Windows 11, and Android Version 12.0, 11.0, 10.0, 9.0 Pie, 13.0, and 8.1 Oreo

    -

    One of the best and most reliable auto clickers is OP Auto Clicker, which is compatible with Windows Vista 6.0, Windows 7, Windows 8/8.1, Windows 10, Windows 10S, Windows 11, and Android Version 12.0, 11.0, 10.0, 9.0 Pie, 13.0, and 8.1 Oreo. OP Auto Clicker is a free and open-source software that has been tested and verified by millions of users around the world. It has a simple and user-friendly interface that allows you to customize your auto clicking settings easily and quickly. It also supports scripts and macros that can help you automate complex or repetitive tasks with the auto clicker.

    -

    free auto clicker download for windows 10
    -mouse auto clicker download sourceforge
    -op auto clicker download latest version
    -easy auto clicker download 2.0
    -auto clicker download for android apk
    -auto clicker download for mac os
    -auto clicker download for roblox
    -auto clicker download for minecraft
    -auto clicker download for games
    -auto clicker download no install
    -auto clicker download chrome extension
    -auto clicker download reddit
    -auto clicker download 2.0 with key
    -auto clicker download 2.0 for pc
    -auto clicker download 2.0 for laptop
    -auto clicker download 2.0 for windows 7
    -auto clicker download 2.0 for windows 8
    -auto clicker download 2.0 for windows 11
    -auto clicker download 2.0 for linux
    -auto clicker download 2.0 for ios
    -auto clicker download 2.0 with crack
    -auto clicker download 2.0 with license
    -auto clicker download 2.0 with activation code
    -auto clicker download 2.0 full version
    -auto clicker download 2.0 portable
    -best auto clicker download 2.0
    -fast auto clicker download 2.0
    -advanced auto clicker download 2.0
    -simple auto clicker download 2.0
    -smart auto clicker download 2.0
    -super auto clicker download 2.0
    -ultimate auto clicker download 2.0
    -pro auto clicker download 2.0
    -premium auto clicker download 2.0
    -professional auto clicker download 2.0
    -customisable auto clicker download 2.0
    -dynamic auto clicker download 2.0
    -random auto clicker download 2.0
    -automatic mouse clicker download 2.0
    -mouse and keyboard recorder and repeater software free download

    -

    To download and install OP Auto Clicker, follow these simple steps:

    -

    Step 1: Go to the official website of OP Auto Clicker and click on the download button for your operating system

    -

    To download OP Auto Clicker, go to the official website of OP Auto Clicker at https://www.opautoclicker.com/ and click on the download button for your operating system. You can choose between Windows or Android versions of the software.

    -

    Step 2: Save the file to your computer and run it as an administrator

    -

    Save the file to your computer and run it as an administrator. If you are using Windows, you may need to allow the program to make changes to your device or disable your antivirus software temporarily. If you are using Android, you may need to enable unknown sources in your settings or grant permission to install apps from other sources.

    -

    Step 3: Follow the instructions on the screen and complete the installation process

    -

    Follow the instructions on the screen and complete the installation process. It should take only a few minutes to install OP Auto Clicker on your device. Once the installation is done, you can launch the program and start using it.

    -

    How to use an auto clicker for various tasks

    -

    An auto clicker can be customized to suit your needs and preferences

    -

    An auto clicker can be customized to suit your needs and preferences. You can set the mouse button, click type, interval, frequency, hotkeys, and other options in the settings menu of the auto clicker. You can also create scripts and macros to automate complex or repetitive tasks with the auto clicker.

    -

    You can set the mouse button, click type, interval, frequency, hotkeys, and other options in the settings menu of the auto clicker

    -

    You can set the mouse button, click type, interval, frequency, hotkeys, and other options in the settings menu of the auto clicker. For example, you can choose whether you want to perform left or right clicks, single or double clicks, or any combination of clicks. You can also specify how often and how fast you want the auto clicker to click. You can also assign hotkeys to start or stop the auto clicking with a simple press of a key.
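As a rough illustration of the hotkey idea (a Python sketch using the assumed third-party keyboard and pyautogui packages, not OP Auto Clicker's actual implementation), a start/stop toggle can be wired up like this:

```python
# Hotkey-toggled clicking sketch (illustrative; OP Auto Clicker exposes the
# same options through its settings menu).
# Assumes: pip install keyboard pyautogui
import threading
import time
import keyboard   # global hotkeys (may require admin/root privileges)
import pyautogui

clicking = False          # flipped on/off by the hotkey
interval = 0.1            # seconds between clicks
button = "left"           # mouse button to press

def click_loop():
    while True:
        if clicking:
            pyautogui.click(button=button)
        time.sleep(interval)

def toggle():
    global clicking
    clicking = not clicking

threading.Thread(target=click_loop, daemon=True).start()
keyboard.add_hotkey("f6", toggle)   # F6 starts or stops the clicking
keyboard.wait("esc")                # Esc exits the script
```

In OP Auto Clicker itself you simply pick the hotkey and interval in the settings menu; the sketch only shows what that toggle amounts to.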

    -

    You can also create scripts and macros to automate complex or repetitive tasks with the auto clicker

    -

    You can also create scripts and macros to automate complex or repetitive tasks with the auto clicker. Scripts are sequences of commands that tell the auto clicker what to do in a specific order. Macros are recordings of your mouse movements and clicks that can be played back by the auto clicker. You can use scripts and macros to perform tasks such as:

    -
      -
    • Clicking on multiple locations on the screen in a certain pattern
    • -
    • Entering text or data into forms or fields
    • -
    • Navigating through menus or tabs
    • -
    • Switching between windows or applications
    • -
    • And more!
    • -
    -
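Under the hood, a script or macro for tasks like those listed above is just a timed sequence of moves, clicks, and keystrokes. Here is a hypothetical Python sketch of such a sequence — the coordinates and text are made-up placeholders, and the pyautogui package is an assumption; OP Auto Clicker records and plays back macros through its own interface:

```python
# A tiny "macro" sketch: click through a few screen locations and fill a field.
# Illustrative only -- coordinates and text below are made-up placeholders.
# Assumes: pip install pyautogui
import time
import pyautogui

steps = [
    ("click", (200, 300)),          # e.g. focus a form field
    ("type", "example text"),       # e.g. fill the field
    ("click", (200, 400)),          # e.g. press a button below it
    ("click", (980, 20)),           # e.g. close a pop-up in the corner
]

for action, payload in steps:
    if action == "click":
        x, y = payload
        pyautogui.moveTo(x, y, duration=0.2)     # glide to the target
        pyautogui.click()
    elif action == "type":
        pyautogui.write(payload, interval=0.03)  # type with small delays
    time.sleep(0.5)  # pause between steps, like a playback delay
```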

    Conclusion

    -

    An auto clicker is a handy tool that can save you time and effort by automating your mouse clicks

    -

An auto clicker is a handy tool that can save you time and effort by automating your mouse clicks. It can perform single or double clicks, left or right clicks, or any combination of clicks at any speed and frequency. You can also assign hotkeys to start or stop the auto clicking with a simple press of a key.

    -

    OP Auto Clicker is one of the best auto clickers that you can download and install for free on your Windows or Android device

    -

    OP Auto Clicker is one of the best auto clickers that you can download and install for free on your Windows or Android device. OP Auto Clicker is a free and open-source software that has been tested and verified by millions of users around the world. It has a simple and user-friendly interface that allows you to customize your auto clicking settings easily and quickly. It also supports scripts and macros that can help you automate complex or repetitive tasks with the auto clicker.

    -

    You can use OP Auto Clicker for various purposes, such as gaming, testing, or web browsing, and customize it to your liking

    -

    You can use OP Auto Clicker for various purposes, such as gaming, testing, or web browsing, and customize it to your liking. You can set the mouse button, click type, interval, frequency, hotkeys, and other options in the settings menu of the auto clicker. You can also create scripts and macros to automate complex or repetitive tasks with the auto clicker. Whether you want to click faster and more accurately in games, test the functionality and performance of websites or applications, or browse the web more efficiently, OP Auto Clicker can help you achieve your goals.

    -

    FAQs

    -

    Here are some frequently asked questions about auto clickers and OP Auto Clicker:

    -
      -
    1. Is using an auto clicker safe and legal?
    2. -

      Using an auto clicker is generally safe and legal, as long as you use it for legitimate purposes and do not violate any terms of service or policies of the websites or applications you are using. However, some games or platforms may ban or penalize users who use auto clickers, so you should always check the rules and regulations before using an auto clicker.

      -
    3. Is OP Auto Clicker free and virus-free?
    4. -

      Yes, OP Auto Clicker is free and virus-free. It is a free and open-source software that does not contain any malware or spyware. You can download it from the official website of OP Auto Clicker at https://www.opautoclicker.com/ without any worries.

      -
    5. How do I uninstall OP Auto Clicker?
    6. -

      To uninstall OP Auto Clicker, you can simply go to the Control Panel on your Windows device, select Programs and Features, find OP Auto Clicker in the list of programs, right-click on it, and choose Uninstall. If you are using Android, you can go to the Settings on your device, select Apps, find OP Auto Clicker in the list of apps, tap on it, and choose Uninstall.

      -
    7. How do I contact OP Auto Clicker support?
    8. -

      If you have any questions or issues with OP Auto Clicker, you can contact OP Auto Clicker support by sending an email to opautoclickersupport@gmail.com. You can also visit the official website of OP Auto Clicker at https://www.opautoclicker.com/ for more information and resources.

      -
    9. What are some alternatives to OP Auto Clicker?
    10. -

      If you are looking for some alternatives to OP Auto Clicker, you can try some of these other popular auto clickers:

      -
        -
      • GS Auto Clicker: A simple and easy-to-use auto clicker that allows you to record and replay mouse clicks
      • -
      • Auto Mouse Click: A powerful and versatile auto clicker that lets you create scripts and macros for mouse clicks and movements
      • -
      • Free Mouse Auto Clicker: A lightweight and straightforward auto clicker that supports left, right, middle mouse button clicks
      • -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Chief Almighty Mod APK - Experience the Prehistoric World with Modyolo.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Chief Almighty Mod APK - Experience the Prehistoric World with Modyolo.md deleted file mode 100644 index d4f85ed9cac4b7b0036b588ce21628137373150d..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Chief Almighty Mod APK - Experience the Prehistoric World with Modyolo.md +++ /dev/null @@ -1,12 +0,0 @@ - -

      Chief Almighty Mod APK Modyolo: A Guide for Beginners

      - Are you looking for a fun and exciting strategy game that will take you back to the Stone Age? Do you want to experience the thrill of leading your own tribe and fighting against other clans? Do you want to enjoy unlimited resources and features without spending any money? If you answered yes to any of these questions, then you should try Chief Almighty Mod APK Modyolo. This is a modified version of the popular game Chief Almighty, which offers you a lot of advantages over the original game. In this article, we will tell you everything you need to know about Chief Almighty Mod APK Modyolo, including what it is, how to download and install it, and what are its benefits. Read on to find out more.

      What is Chief Almighty?

      - Before we talk about Chief Almighty Mod APK Modyolo, let's first understand what Chief Almighty is. Chief Almighty is a strategy game that is set in the Stone Age, where you have to build your own tribe, train your warriors, hunt animals, gather resources, and fight against other clans. You can also join alliances with other players, chat with them, trade with them, and participate in clan wars. The game has rich graphics and sound effects that will make you feel like you are living in the prehistoric era. The game also has a lot of characters and items that you can unlock and customize as you progress.

      What is Chief Almighty Mod APK Modyolo?

      - Chief Almighty Mod APK Modyolo is a modified version of the original game that offers you unlimited resources and features that are not available in the original game. For example, with Chief Almighty Mod APK Modyolo, you can get unlimited coins and gems, which are the main currencies in the game. You can use these coins and gems to buy anything you want in the game, such as buildings, weapons, outfits, pets, etc. You can also unlock all the characters and items in the game without having to wait or complete any tasks. Moreover, with Chief Almighty Mod APK Modyolo, you can enjoy faster loading speed, smoother gameplay, and no ads.

      How to download and install Chief Almighty Mod APK Modyolo?

- Downloading and installing Chief Almighty Mod APK Modyolo is very easy and simple. Just follow these steps:
  - Enable the "Unknown Sources" option on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
  - Click the download button at the top of this page to download the Chief Almighty Mod APK Modyolo file. This file is safe and virus-free.
  - Save the file in your device's download folder. You can use any file manager app to locate it.
  - Open the file and follow the instructions to install it. It will take only a few seconds to complete.
  - Enjoy playing Chief Almighty Mod APK Modyolo.

      What are the benefits of playing Chief Almighty Mod APK Modyolo?

      - Playing Chief Almighty Mod APK Modyolo has many benefits over playing the original game. Here are some of them: - You can build your tribe faster and stronger. With unlimited coins and gems, you can upgrade your buildings - You can unlock all the characters and items. With Chief Almighty Mod APK Modyolo, you can access all the characters and items in the game without having to complete any tasks or spend any money. You can customize your tribe and your warriors with different outfits, weapons, pets, etc. - You can enjoy unlimited coins and gems. With Chief Almighty Mod APK Modyolo, you can get unlimited coins and gems that you can use to buy anything you want in the game. You can also use them to speed up your progress and skip the waiting time. - You can have more fun and excitement. With Chief Almighty Mod APK Modyolo, you can enjoy the game without any limitations or restrictions. You can explore the map, hunt animals, gather resources, fight enemies, join alliances, chat with other players, and participate in clan wars. You can also challenge yourself with different modes and levels.

      Conclusion

      - Chief Almighty Mod APK Modyolo is a great way to enjoy the game of Chief Almighty with more features and benefits. It is a modified version of the original game that offers you unlimited resources and features that are not available in the original game. It is easy to download and install, and it is safe and virus-free. If you are a fan of strategy games set in the Stone Age, you should definitely try Chief Almighty Mod APK Modyolo.

      FAQs

- Here are some frequently asked questions about Chief Almighty Mod APK Modyolo:
  - Q: Is Chief Almighty Mod APK Modyolo free?
    A: Yes, Chief Almighty Mod APK Modyolo is free to download and play. You do not need to pay anything to enjoy the game.
  - Q: Is Chief Almighty Mod APK Modyolo compatible with my device?
    A: Chief Almighty Mod APK Modyolo is compatible with most Android devices that have Android 4.1 or higher. However, some devices may not support the game due to hardware or software limitations.
  - Q: Is Chief Almighty Mod APK Modyolo safe and secure?
    A: Yes, Chief Almighty Mod APK Modyolo is safe and secure. It does not contain any malware or viruses that can harm your device or your data. It also does not require any root access or permissions that can compromise your privacy or security.
  - Q: How can I update Chief Almighty Mod APK Modyolo?
    A: To update Chief Almighty Mod APK Modyolo, you need to download the latest version of the file from this page and install it over the existing one. You do not need to uninstall the previous version or lose your progress.
  - Q: How can I contact the developer of Chief Almighty Mod APK Modyolo?
    A: You can contact the developer of Chief Almighty Mod APK Modyolo by visiting their website at https://modyolo.com/chief-almighty-mod-apk/. You can also leave a comment or a review on this page to share your feedback or suggestions.

      -

      chief almighty mod apk modyolo


      Download Zip ○○○ https://bltlly.com/2uOjQe



      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Ghostbusters Answer The Call (English) In Hindi 720p.md b/spaces/tioseFevbu/cartoon-converter/scripts/Ghostbusters Answer The Call (English) In Hindi 720p.md deleted file mode 100644 index ca84eff489f6ebce951f3d19ca416514cd6cb52f..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Ghostbusters Answer The Call (English) In Hindi 720p.md +++ /dev/null @@ -1,21 +0,0 @@ - -

      How to Watch Ghostbusters: Answer the Call (2016) in Hindi 720p

      -

      If you are a fan of the Ghostbusters franchise and want to watch the latest installment in Hindi, you might be wondering how to do that. Ghostbusters: Answer the Call (2016) is a reboot of the classic comedy series, featuring a new team of female ghostbusters who have to save New York City from a paranormal threat. The film stars Melissa McCarthy, Kristen Wiig, Kate McKinnon, Leslie Jones, and Chris Hemsworth.

      -

      Unfortunately, the film was not officially released in Hindi in India or other countries. However, there are some unofficial ways to watch it in Hindi 720p quality online. Here are some of them:

      -

      Ghostbusters: Answer the Call (English) in hindi 720p


      Download Zip >>> https://urlcod.com/2uHvLJ



      -
        -
      • Torrent download: You can use a torrent client like BitTorrent or uTorrent to download the film from a torrent site like The Pirate Bay or Kickass Torrents. You can search for "Ghostbusters: Answer the Call (English) in hindi 720p" or similar keywords and find a torrent file that has both the video and audio tracks in Hindi. However, this method is illegal and risky, as you might download malware or viruses along with the film, or face legal consequences for piracy.
      • -
      • Online streaming: You can also use an online streaming site like Fmovies or 123Movies to watch the film in Hindi 720p quality. These sites usually have multiple links for different languages and resolutions. You can choose the one that has Hindi audio and 720p video quality. However, this method is also illegal and unsafe, as these sites are full of pop-up ads and malware that can harm your device or steal your personal information.
      • -
      • VPN service: A safer and legal way to watch the film in Hindi 720p quality is to use a VPN service like NordVPN or ExpressVPN. A VPN service allows you to change your IP address and location, so you can access geo-restricted content from other countries. For example, you can use a VPN service to connect to a server in Canada or Australia, where the film might be available on Netflix or Amazon Prime Video with Hindi audio and subtitles. You can then watch the film legally and safely on these platforms.
      • -
      -

      These are some of the ways to watch Ghostbusters: Answer the Call (2016) in Hindi 720p quality online. However, we recommend that you always respect the original creators and distributors of the film and watch it legally on official platforms whenever possible.

      - -

      Why Ghostbusters: Answer the Call (2016) is worth watching

      -

      Ghostbusters: Answer the Call (2016) is a film that received mixed reviews from critics and fans alike. Some praised it for its humor, action, and female empowerment, while others criticized it for its lack of originality, weak plot, and unnecessary remake. However, regardless of your opinion on the film, there are some reasons why it is worth watching at least once.

      -

      First of all, the film has a talented and diverse cast of comedians who deliver some hilarious and memorable lines and scenes. Melissa McCarthy, Kristen Wiig, Kate McKinnon, and Leslie Jones have great chemistry and charisma as the new ghostbusters, and they each bring their own style and personality to the roles. Chris Hemsworth also steals the show as the dim-witted and handsome receptionist Kevin, who provides some of the funniest moments in the film.

      -

      Secondly, the film has some impressive and creative visual effects and action sequences that showcase the ghostbusting gadgets and skills of the team. The film features a variety of ghosts and creatures that range from spooky to silly, and the ghostbusters have to use their proton packs, traps, grenades, pistols, and other weapons to fight them off. The climax of the film involves a spectacular battle in Times Square, where the ghostbusters face off against a giant version of their logo.

      -

      Thirdly, the film has some nods and references to the original Ghostbusters films and franchise that fans can appreciate. The film features cameos from most of the original cast members, such as Bill Murray, Dan Aykroyd, Ernie Hudson, Sigourney Weaver, Annie Potts, and even Harold Ramis in a tribute. The film also includes some iconic elements from the original films, such as the Ecto-1 car, the firehouse headquarters, the Stay Puft Marshmallow Man, and the theme song.

      -

      Therefore, Ghostbusters: Answer the Call (2016) is a film that has something for everyone: comedy, action, horror, nostalgia, and feminism. It is a film that celebrates the legacy of the Ghostbusters franchise while also introducing a new generation of fans to it.

      -

      cec2833e83
      -
      -
      \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/tomli/_re.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/tomli/_re.py deleted file mode 100644 index 994bb7493fd92865e6ab87c277ba5741b44c31a9..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/tomli/_re.py +++ /dev/null @@ -1,107 +0,0 @@ -# SPDX-License-Identifier: MIT -# SPDX-FileCopyrightText: 2021 Taneli Hukkinen -# Licensed to PSF under a Contributor Agreement. - -from __future__ import annotations - -from datetime import date, datetime, time, timedelta, timezone, tzinfo -from functools import lru_cache -import re -from typing import Any - -from ._types import ParseFloat - -# E.g. -# - 00:32:00.999999 -# - 00:32:00 -_TIME_RE_STR = r"([01][0-9]|2[0-3]):([0-5][0-9]):([0-5][0-9])(?:\.([0-9]{1,6})[0-9]*)?" - -RE_NUMBER = re.compile( - r""" -0 -(?: - x[0-9A-Fa-f](?:_?[0-9A-Fa-f])* # hex - | - b[01](?:_?[01])* # bin - | - o[0-7](?:_?[0-7])* # oct -) -| -[+-]?(?:0|[1-9](?:_?[0-9])*) # dec, integer part -(?P - (?:\.[0-9](?:_?[0-9])*)? # optional fractional part - (?:[eE][+-]?[0-9](?:_?[0-9])*)? # optional exponent part -) -""", - flags=re.VERBOSE, -) -RE_LOCALTIME = re.compile(_TIME_RE_STR) -RE_DATETIME = re.compile( - rf""" -([0-9]{{4}})-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01]) # date, e.g. 1988-10-27 -(?: - [Tt ] - {_TIME_RE_STR} - (?:([Zz])|([+-])([01][0-9]|2[0-3]):([0-5][0-9]))? # optional time offset -)? -""", - flags=re.VERBOSE, -) - - -def match_to_datetime(match: re.Match) -> datetime | date: - """Convert a `RE_DATETIME` match to `datetime.datetime` or `datetime.date`. - - Raises ValueError if the match does not correspond to a valid date - or datetime. 
- """ - ( - year_str, - month_str, - day_str, - hour_str, - minute_str, - sec_str, - micros_str, - zulu_time, - offset_sign_str, - offset_hour_str, - offset_minute_str, - ) = match.groups() - year, month, day = int(year_str), int(month_str), int(day_str) - if hour_str is None: - return date(year, month, day) - hour, minute, sec = int(hour_str), int(minute_str), int(sec_str) - micros = int(micros_str.ljust(6, "0")) if micros_str else 0 - if offset_sign_str: - tz: tzinfo | None = cached_tz( - offset_hour_str, offset_minute_str, offset_sign_str - ) - elif zulu_time: - tz = timezone.utc - else: # local date-time - tz = None - return datetime(year, month, day, hour, minute, sec, micros, tzinfo=tz) - - -@lru_cache(maxsize=None) -def cached_tz(hour_str: str, minute_str: str, sign_str: str) -> timezone: - sign = 1 if sign_str == "+" else -1 - return timezone( - timedelta( - hours=sign * int(hour_str), - minutes=sign * int(minute_str), - ) - ) - - -def match_to_localtime(match: re.Match) -> time: - hour_str, minute_str, sec_str, micros_str = match.groups() - micros = int(micros_str.ljust(6, "0")) if micros_str else 0 - return time(int(hour_str), int(minute_str), int(sec_str), micros) - - -def match_to_number(match: re.Match, parse_float: ParseFloat) -> Any: - if match.group("floatpart"): - return parse_float(match.group()) - return int(match.group(), 0) diff --git a/spaces/tomandandy/MusicGen3/tests/common_utils/temp_utils.py b/spaces/tomandandy/MusicGen3/tests/common_utils/temp_utils.py deleted file mode 100644 index d1e0367e979c8b9fea65472c373916d956ad5aaa..0000000000000000000000000000000000000000 --- a/spaces/tomandandy/MusicGen3/tests/common_utils/temp_utils.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import os -import tempfile - - -class TempDirMixin: - """Mixin to provide easy access to temp dir. - """ - - temp_dir_ = None - - @classmethod - def get_base_temp_dir(cls): - # If AUDIOCRAFT_TEST_DIR is set, use it instead of temporary directory. - # this is handy for debugging. - key = "AUDIOCRAFT_TEST_DIR" - if key in os.environ: - return os.environ[key] - if cls.temp_dir_ is None: - cls.temp_dir_ = tempfile.TemporaryDirectory() - return cls.temp_dir_.name - - @classmethod - def tearDownClass(cls): - if cls.temp_dir_ is not None: - try: - cls.temp_dir_.cleanup() - cls.temp_dir_ = None - except PermissionError: - # On Windows there is a know issue with `shutil.rmtree`, - # which fails intermittenly. - # https://github.com/python/cpython/issues/74168 - # Following the above thread, we ignore it. 
- pass - super().tearDownClass() - - @property - def id(self): - return self.__class__.__name__ - - def get_temp_path(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(os.path.dirname(path), exist_ok=True) - return path - - def get_temp_dir(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(path, exist_ok=True) - return path diff --git a/spaces/tomofi/MMOCR/configs/textrecog/sar/README.md b/spaces/tomofi/MMOCR/configs/textrecog/sar/README.md deleted file mode 100644 index b7211855b2666e1688a683fbf671b59becfc28ab..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/configs/textrecog/sar/README.md +++ /dev/null @@ -1,84 +0,0 @@ -# SAR -> [Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition](https://arxiv.org/abs/1811.00751) - - - -## Abstract - -Recognizing irregular text in natural scene images is challenging due to the large variance in text appearance, such as curvature, orientation and distortion. Most existing approaches rely heavily on sophisticated model designs and/or extra fine-grained annotations, which, to some extent, increase the difficulty in algorithm implementation and data collection. In this work, we propose an easy-to-implement strong baseline for irregular scene text recognition, using off-the-shelf neural network components and only word-level annotations. It is composed of a 31-layer ResNet, an LSTM-based encoder-decoder framework and a 2-dimensional attention module. Despite its simplicity, the proposed method is robust and achieves state-of-the-art performance on both regular and irregular scene text recognition benchmarks. - -
      - -
      - - - -## Dataset - -### Train Dataset - -| trainset | instance_num | repeat_num | source | -| :--------: | :----------: | :--------: | :----------------------: | -| icdar_2011 | 3567 | 20 | real | -| icdar_2013 | 848 | 20 | real | -| icdar2015 | 4468 | 20 | real | -| coco_text | 42142 | 20 | real | -| IIIT5K | 2000 | 20 | real | -| SynthText | 2400000 | 1 | synth | -| SynthAdd | 1216889 | 1 | synth, 1.6m in [[1]](#1) | -| Syn90k | 2400000 | 1 | synth | - -### Test Dataset - -| testset | instance_num | type | -| :-----: | :----------: | :-------------------------: | -| IIIT5K | 3000 | regular | -| SVT | 647 | regular | -| IC13 | 1015 | regular | -| IC15 | 2077 | irregular | -| SVTP | 645 | irregular, 639 in [[1]](#1) | -| CT80 | 288 | irregular | - -## Results and Models - -| Methods | Backbone | Decoder | | Regular Text | | | | Irregular Text | | download | -| :-----------------------------------------------------------------: | :---------: | :------------------: | :----: | :----------: | :---: | :---: | :---: | :------------: | :---: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| | | | IIIT5K | SVT | IC13 | | IC15 | SVTP | CT80 | -| [SAR](/configs/textrecog/sar/sar_r31_parallel_decoder_academic.py) | R31-1/8-1/4 | ParallelSARDecoder | 95.0 | 89.6 | 93.7 | | 79.0 | 82.2 | 88.9 | [model](https://download.openmmlab.com/mmocr/textrecog/sar/sar_r31_parallel_decoder_academic-dba3a4a3.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/sar/20210327_154129.log.json) | -| [SAR](configs/textrecog/sar/sar_r31_sequential_decoder_academic.py) | R31-1/8-1/4 | SequentialSARDecoder | 95.2 | 88.7 | 92.4 | | 78.2 | 81.9 | 89.6 | [model](https://download.openmmlab.com/mmocr/textrecog/sar/sar_r31_sequential_decoder_academic-d06c9a8e.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/sar/20210330_105728.log.json) | - -## Chinese Dataset - -## Results and Models - -| Methods | Backbone | Decoder | | download | -| :---------------------------------------------------------------: | :---------: | :----------------: | :---: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| [SAR](/configs/textrecog/sar/sar_r31_parallel_decoder_chinese.py) | R31-1/8-1/4 | ParallelSARDecoder | | [model](https://download.openmmlab.com/mmocr/textrecog/sar/sar_r31_parallel_decoder_chineseocr_20210507-b4be8214.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/sar/20210506_225557.log.json) \| [dict](https://download.openmmlab.com/mmocr/textrecog/sar/dict_printed_chinese_english_digits.txt) | - -:::{note} - -- `R31-1/8-1/4` means the height of feature from backbone is 1/8 of input image, where 1/4 for width. -- We did not use beam search during decoding. -- We implemented two kinds of decoder. Namely, `ParallelSARDecoder` and `SequentialSARDecoder`. - - `ParallelSARDecoder`: Parallel decoding during training with `LSTM` layer. It would be faster. - - `SequentialSARDecoder`: Sequential Decoding during training with `LSTMCell`. It would be easier to understand. -- For train dataset. 
- - We did not construct distinct data groups (20 groups in [[1]](#1)) to train the model group-by-group since it would render model training too complicated. - - Instead, we randomly selected `2.4m` patches from `Syn90k`, `2.4m` from `SynthText` and `1.2m` from `SynthAdd`, and grouped all data together. See [config](https://download.openmmlab.com/mmocr/textrecog/sar/sar_r31_academic.py) for details. -- We used 48 GPUs with `total_batch_size = 64 * 48` in the experiment above to speedup training, while keeping the `initial lr = 1e-3` unchanged. -::: - - -## Citation - -```bibtex -@inproceedings{li2019show, - title={Show, attend and read: A simple and strong baseline for irregular text recognition}, - author={Li, Hui and Wang, Peng and Shen, Chunhua and Zhang, Guyu}, - booktitle={Proceedings of the AAAI Conference on Artificial Intelligence}, - volume={33}, - number={01}, - pages={8610--8617}, - year={2019} -} -``` diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/datasets/voc0712.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/datasets/voc0712.py deleted file mode 100644 index ae09acdd5c9580217815300abbad9f08b71b37ed..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/datasets/voc0712.py +++ /dev/null @@ -1,55 +0,0 @@ -# dataset settings -dataset_type = 'VOCDataset' -data_root = 'data/VOCdevkit/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1000, 600), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1000, 600), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=3, - dataset=dict( - type=dataset_type, - ann_file=[ - data_root + 'VOC2007/ImageSets/Main/trainval.txt', - data_root + 'VOC2012/ImageSets/Main/trainval.txt' - ], - img_prefix=[data_root + 'VOC2007/', data_root + 'VOC2012/'], - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - ann_file=data_root + 'VOC2007/ImageSets/Main/test.txt', - img_prefix=data_root + 'VOC2007/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'VOC2007/ImageSets/Main/test.txt', - img_prefix=data_root + 'VOC2007/', - pipeline=test_pipeline)) -evaluation = dict(interval=1, metric='mAP') diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_dense_heads/test_yolact_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_dense_heads/test_yolact_head.py deleted file mode 100644 index aff57c4a97405110b4a94ce515f34a52b867eeb8..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_dense_heads/test_yolact_head.py +++ /dev/null @@ -1,136 +0,0 @@ -import mmcv -import torch - -from mmdet.models.dense_heads import 
YOLACTHead, YOLACTProtonet, YOLACTSegmHead - - -def test_yolact_head_loss(): - """Tests yolact head losses when truth is empty and non-empty.""" - s = 550 - img_metas = [{ - 'img_shape': (s, s, 3), - 'scale_factor': 1, - 'pad_shape': (s, s, 3) - }] - train_cfg = mmcv.Config( - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0., - ignore_iof_thr=-1, - gt_max_assign_all=False), - smoothl1_beta=1., - allowed_border=-1, - pos_weight=-1, - neg_pos_ratio=3, - debug=False, - min_gt_box_wh=[4.0, 4.0])) - bbox_head = YOLACTHead( - num_classes=80, - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=3, - scales_per_octave=1, - base_sizes=[8, 16, 32, 64, 128], - ratios=[0.5, 1.0, 2.0], - strides=[550.0 / x for x in [69, 35, 18, 9, 5]], - centers=[(550 * 0.5 / x, 550 * 0.5 / x) - for x in [69, 35, 18, 9, 5]]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[0.1, 0.1, 0.2, 0.2]), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - reduction='none', - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.5), - num_head_convs=1, - num_protos=32, - use_ohem=True, - train_cfg=train_cfg) - segm_head = YOLACTSegmHead( - in_channels=256, - num_classes=80, - loss_segm=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)) - mask_head = YOLACTProtonet( - num_classes=80, - in_channels=256, - num_protos=32, - max_masks_to_train=100, - loss_mask_weight=6.125) - feat = [ - torch.rand(1, 256, feat_size, feat_size) - for feat_size in [69, 35, 18, 9, 5] - ] - cls_score, bbox_pred, coeff_pred = bbox_head.forward(feat) - # Test that empty ground truth encourages the network to predict background - gt_bboxes = [torch.empty((0, 4))] - gt_labels = [torch.LongTensor([])] - gt_masks = [torch.empty((0, 550, 550))] - gt_bboxes_ignore = None - empty_gt_losses, sampling_results = bbox_head.loss( - cls_score, - bbox_pred, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=gt_bboxes_ignore) - # When there is no truth, the cls loss should be nonzero but there should - # be no box loss. - empty_cls_loss = sum(empty_gt_losses['loss_cls']) - empty_box_loss = sum(empty_gt_losses['loss_bbox']) - assert empty_cls_loss.item() > 0, 'cls loss should be non-zero' - assert empty_box_loss.item() == 0, ( - 'there should be no box loss when there are no true boxes') - - # Test segm head and mask head - segm_head_outs = segm_head(feat[0]) - empty_segm_loss = segm_head.loss(segm_head_outs, gt_masks, gt_labels) - mask_pred = mask_head(feat[0], coeff_pred, gt_bboxes, img_metas, - sampling_results) - empty_mask_loss = mask_head.loss(mask_pred, gt_masks, gt_bboxes, img_metas, - sampling_results) - # When there is no truth, the segm and mask loss should be zero. - empty_segm_loss = sum(empty_segm_loss['loss_segm']) - empty_mask_loss = sum(empty_mask_loss['loss_mask']) - assert empty_segm_loss.item() == 0, ( - 'there should be no segm loss when there are no true boxes') - assert empty_mask_loss == 0, ( - 'there should be no mask loss when there are no true boxes') - - # When truth is non-empty then cls, box, mask, segm loss should be - # nonzero for random inputs. 
- gt_bboxes = [ - torch.Tensor([[23.6667, 23.8757, 238.6326, 151.8874]]), - ] - gt_labels = [torch.LongTensor([2])] - gt_masks = [(torch.rand((1, 550, 550)) > 0.5).float()] - - one_gt_losses, sampling_results = bbox_head.loss( - cls_score, - bbox_pred, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=gt_bboxes_ignore) - one_gt_cls_loss = sum(one_gt_losses['loss_cls']) - one_gt_box_loss = sum(one_gt_losses['loss_bbox']) - assert one_gt_cls_loss.item() > 0, 'cls loss should be non-zero' - assert one_gt_box_loss.item() > 0, 'box loss should be non-zero' - - one_gt_segm_loss = segm_head.loss(segm_head_outs, gt_masks, gt_labels) - mask_pred = mask_head(feat[0], coeff_pred, gt_bboxes, img_metas, - sampling_results) - one_gt_mask_loss = mask_head.loss(mask_pred, gt_masks, gt_bboxes, - img_metas, sampling_results) - one_gt_segm_loss = sum(one_gt_segm_loss['loss_segm']) - one_gt_mask_loss = sum(one_gt_mask_loss['loss_mask']) - assert one_gt_segm_loss.item() > 0, 'segm loss should be non-zero' - assert one_gt_mask_loss.item() > 0, 'mask loss should be non-zero' diff --git a/spaces/typesdigital/telegram-chatbot/app.py b/spaces/typesdigital/telegram-chatbot/app.py deleted file mode 100644 index 68cf80c215f95dd3e57a715e3c96c968686454b7..0000000000000000000000000000000000000000 --- a/spaces/typesdigital/telegram-chatbot/app.py +++ /dev/null @@ -1,20 +0,0 @@ -from telegram.ext import Updater, MessageHandler, Filters -import openai - -openai.api_key = "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" -TELEGRAM_API_TOKEN = "6074730982:AAGKU2_gpogdkTQvmE4Ya63n9ot2dHVzA7I" - -def text_message(update, context): - response = openai.Completion.create( - engine="davinci", - prompt="Hello, world!", - max_tokens=5 - ) - update.message.reply_text(response.choices[0].text) - - -updater = Updater(TELEGRAM_API_TOKEN, use_context=True) -dispatcher = updater.dispatcher -dispatcher.add_handler(MessageHandler(Filters.text & (~Filters.command), text_message)) -updater.start_polling() -updater.idle() \ No newline at end of file diff --git a/spaces/vaibhavarduino/better-autogpt/index.html b/spaces/vaibhavarduino/better-autogpt/index.html deleted file mode 100644 index 528694962a74a758bd548f363e014910f072cb28..0000000000000000000000000000000000000000 --- a/spaces/vaibhavarduino/better-autogpt/index.html +++ /dev/null @@ -1,9 +0,0 @@ - - - - APP - - - - - \ No newline at end of file diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/overrides/partials/source-file.html b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/overrides/partials/source-file.html deleted file mode 100644 index 84e2ab1f7dad99a3c8971dd6cb931b8ad0bb6d5b..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/overrides/partials/source-file.html +++ /dev/null @@ -1,26 +0,0 @@ -{% import "partials/language.html" as lang with context %} - - - -
      -
      - - - - {% if page.meta.git_revision_date_localized %} - 📅 {{ lang.t("source.file.date.updated") }}: - {{ page.meta.git_revision_date_localized }} - {% if page.meta.git_creation_date_localized %} -
      - 🎂 {{ lang.t("source.file.date.created") }}: - {{ page.meta.git_creation_date_localized }} - {% endif %} - - - {% elif page.meta.revision_date %} - 📅 {{ lang.t("source.file.date.updated") }}: - {{ page.meta.revision_date }} - {% endif %} -
      -
      diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/nn/autobackend.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/nn/autobackend.md deleted file mode 100644 index ccd10773e80630b2a68e63a740eb075f054b3f64..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/nn/autobackend.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -description: Ensure class names match filenames for easy imports. Use AutoBackend to automatically rename and refactor model files. -keywords: AutoBackend, ultralytics, nn, autobackend, check class names, neural network ---- - -## AutoBackend ---- -### ::: ultralytics.nn.autobackend.AutoBackend -

      - -## check_class_names ---- -### ::: ultralytics.nn.autobackend.check_class_names -

      diff --git a/spaces/vg055/demo_analisis_de_sentimientos_textos_turisticos_mx_polarity/README.md b/spaces/vg055/demo_analisis_de_sentimientos_textos_turisticos_mx_polarity/README.md deleted file mode 100644 index 8d18f419ee814250880dacfd2e3cd9914aaa619a..0000000000000000000000000000000000000000 --- a/spaces/vg055/demo_analisis_de_sentimientos_textos_turisticos_mx_polarity/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Demo Analisis De Sentimientos Textos Turisticos Mx Polarity -emoji: 🚀 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vialibre/edia/README.md b/spaces/vialibre/edia/README.md deleted file mode 100644 index 16c1659fabd5ccba02d573ccd6bb6ba8d174ca06..0000000000000000000000000000000000000000 --- a/spaces/vialibre/edia/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: EDIA -emoji: 🏢 -colorFrom: yellow -colorTo: yellow -sdk: static -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/video-p2p-library/Video-P2P-Demo/style.css b/spaces/video-p2p-library/Video-P2P-Demo/style.css deleted file mode 100644 index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000 --- a/spaces/video-p2p-library/Video-P2P-Demo/style.css +++ /dev/null @@ -1,3 +0,0 @@ -h1 { - text-align: center; -} diff --git a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/configs/speed.py b/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/configs/speed.py deleted file mode 100644 index 45e95237da65e44f35a172c25ac6dc4e313e4eae..0000000000000000000000000000000000000000 --- a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/configs/speed.py +++ /dev/null @@ -1,23 +0,0 @@ -from easydict import EasyDict as edict - -# configs for test speed - -config = edict() -config.loss = "arcface" -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "synthetic" -config.num_classes = 100 * 10000 -config.num_epoch = 30 -config.warmup_epoch = -1 -config.decay_epoch = [10, 16, 22] -config.val_targets = [] diff --git a/spaces/vishnu0001/text2mesh/shap_e/models/generation/util.py b/spaces/vishnu0001/text2mesh/shap_e/models/generation/util.py deleted file mode 100644 index 9ac30033a5c75b2c917160f661d48c5edad14871..0000000000000000000000000000000000000000 --- a/spaces/vishnu0001/text2mesh/shap_e/models/generation/util.py +++ /dev/null @@ -1,23 +0,0 @@ -import math - -import torch - - -def timestep_embedding(timesteps, dim, max_period=10000): - """ - Create sinusoidal timestep embeddings. - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. 
- """ - half = dim // 2 - freqs = torch.exp( - -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].to(timesteps.dtype) * freqs[None] - embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) - if dim % 2: - embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1) - return embedding diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/cnn/vgg.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/cnn/vgg.py deleted file mode 100644 index 8778b649561a45a9652b1a15a26c2d171e58f3e1..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/cnn/vgg.py +++ /dev/null @@ -1,175 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.nn as nn - -from .utils import constant_init, kaiming_init, normal_init - - -def conv3x3(in_planes, out_planes, dilation=1): - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - padding=dilation, - dilation=dilation) - - -def make_vgg_layer(inplanes, - planes, - num_blocks, - dilation=1, - with_bn=False, - ceil_mode=False): - layers = [] - for _ in range(num_blocks): - layers.append(conv3x3(inplanes, planes, dilation)) - if with_bn: - layers.append(nn.BatchNorm2d(planes)) - layers.append(nn.ReLU(inplace=True)) - inplanes = planes - layers.append(nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=ceil_mode)) - - return layers - - -class VGG(nn.Module): - """VGG backbone. - - Args: - depth (int): Depth of vgg, from {11, 13, 16, 19}. - with_bn (bool): Use BatchNorm or not. - num_classes (int): number of classes for classification. - num_stages (int): VGG stages, normally 5. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze - running stats (mean and var). - bn_frozen (bool): Whether to freeze weight and bias of BN layers. 
- """ - - arch_settings = { - 11: (1, 1, 2, 2, 2), - 13: (2, 2, 2, 2, 2), - 16: (2, 2, 3, 3, 3), - 19: (2, 2, 4, 4, 4) - } - - def __init__(self, - depth, - with_bn=False, - num_classes=-1, - num_stages=5, - dilations=(1, 1, 1, 1, 1), - out_indices=(0, 1, 2, 3, 4), - frozen_stages=-1, - bn_eval=True, - bn_frozen=False, - ceil_mode=False, - with_last_pool=True): - super(VGG, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for vgg') - assert num_stages >= 1 and num_stages <= 5 - stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - assert len(dilations) == num_stages - assert max(out_indices) <= num_stages - - self.num_classes = num_classes - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.bn_eval = bn_eval - self.bn_frozen = bn_frozen - - self.inplanes = 3 - start_idx = 0 - vgg_layers = [] - self.range_sub_modules = [] - for i, num_blocks in enumerate(self.stage_blocks): - num_modules = num_blocks * (2 + with_bn) + 1 - end_idx = start_idx + num_modules - dilation = dilations[i] - planes = 64 * 2**i if i < 4 else 512 - vgg_layer = make_vgg_layer( - self.inplanes, - planes, - num_blocks, - dilation=dilation, - with_bn=with_bn, - ceil_mode=ceil_mode) - vgg_layers.extend(vgg_layer) - self.inplanes = planes - self.range_sub_modules.append([start_idx, end_idx]) - start_idx = end_idx - if not with_last_pool: - vgg_layers.pop(-1) - self.range_sub_modules[-1][1] -= 1 - self.module_name = 'features' - self.add_module(self.module_name, nn.Sequential(*vgg_layers)) - - if self.num_classes > 0: - self.classifier = nn.Sequential( - nn.Linear(512 * 7 * 7, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, num_classes), - ) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - elif isinstance(m, nn.Linear): - normal_init(m, std=0.01) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - outs = [] - vgg_layers = getattr(self, self.module_name) - for i in range(len(self.stage_blocks)): - for j in range(*self.range_sub_modules[i]): - vgg_layer = vgg_layers[j] - x = vgg_layer(x) - if i in self.out_indices: - outs.append(x) - if self.num_classes > 0: - x = x.view(x.size(0), -1) - x = self.classifier(x) - outs.append(x) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def train(self, mode=True): - super(VGG, self).train(mode) - if self.bn_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - if self.bn_frozen: - for params in m.parameters(): - params.requires_grad = False - vgg_layers = getattr(self, self.module_name) - if mode and self.frozen_stages >= 0: - for i in range(self.frozen_stages): - for j in range(*self.range_sub_modules[i]): - mod = vgg_layers[j] - mod.eval() - for param in mod.parameters(): - param.requires_grad = False diff --git a/spaces/weiren119/AudiogramDigitization/src/digitize_report.py b/spaces/weiren119/AudiogramDigitization/src/digitize_report.py deleted file mode 100644 index b5b468f547175a5cf09f460af4776ebf9b153263..0000000000000000000000000000000000000000 --- 
a/spaces/weiren119/AudiogramDigitization/src/digitize_report.py +++ /dev/null @@ -1,55 +0,0 @@ -#!/usr/bin/env python3 -""" -Copyright (c) 2020, Carleton University Biomedical Informatics Collaboratory - -This source code is licensed under the MIT license found in the -LICENSE file in the root directory of this source tree. -""" - -import json -import os - -from tqdm import tqdm - -from digitizer.digitization import generate_partial_annotation, extract_thresholds - -if __name__ == "__main__": - import argparse - parser = argparse.ArgumentParser(description=("Digitization script " - "for the conversion of audiology reports to JSON documents.")) - parser.add_argument("-i", "--input", type=str, required=True, - help=("Path to the audiology report (or directory) to be digitized.")) - parser.add_argument("-o", "--output_dir", type=str, required=False, - help="Path to the directory in which the result is to be saved (file will have same base name as input file, but with .json extension). If not provided, result is printed to the console.") - parser.add_argument("-a", "--annotation_mode", action="store_true", - help="Whether the script should be run in `annotation mode`, i.e. return results similar in format to those of a human-made annotation. If not given, a list of thresholds is computed.") - parser.add_argument("-g", "--gpu", action="store_true", - help="Use the GPU.") - args = parser.parse_args() - - input_files = [] - if os.path.isfile(args.input): - input_files += [os.path.abspath(args.input)] - else: - input_files += [os.path.join(args.input, filename) for filename in os.listdir(args.input)] - - with tqdm(total=len(input_files)) as pbar: - for input_file in input_files: - pbar.set_description(f"{os.path.basename(input_file)}") - - result = None - if args.annotation_mode: - result = generate_partial_annotation(input_file, gpu=args.gpu) - else: - result = extract_thresholds(input_file, gpu=args.gpu) - - result_as_string = json.dumps(result, indent=4, separators=(',', ': ')) - - if args.output_dir: - predictions_filename = os.path.basename(input_file).split(".")[0] + ".json" - with open(os.path.join(args.output_dir, predictions_filename), "w") as ofile: - ofile.write(result_as_string) - else: - print(result_as_string) - - pbar.update(1) # increment the progress bar diff --git a/spaces/wilson1/bingo/src/pages/api/healthz.ts b/spaces/wilson1/bingo/src/pages/api/healthz.ts deleted file mode 100644 index f6ae44ff0fd66ccd3f7feaa550025fbf2a83bf77..0000000000000000000000000000000000000000 --- a/spaces/wilson1/bingo/src/pages/api/healthz.ts +++ /dev/null @@ -1,7 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - res.status(200).end('ok') -} diff --git a/spaces/wolfpackhnu/web_hosting/lotto.py b/spaces/wolfpackhnu/web_hosting/lotto.py deleted file mode 100644 index bcfb0d36f210a083f7e3d5d2bbcdc36d618f4658..0000000000000000000000000000000000000000 --- a/spaces/wolfpackhnu/web_hosting/lotto.py +++ /dev/null @@ -1,21 +0,0 @@ -import gradio as gr -from numpy.random.mtrand import RandomState -import numpy as np - -def lotto_generator(howmany): - x=[] - for k in range(1,int(howmany)+1): - lotto_no=np.arange(1,46) - lotto_prior=[182,171,173,177,162,172,171,160,140,172, - 172,183,180,177,168,170,184,182,165,176, - 170,144,151,174,155,174,186,152,150,163, - 173,158,181,186,171,170,172,174,175,172,149,164,186,168,175] - x.append(np.random.choice(lotto_no, size=6, 
replace=False,p=np.divide(lotto_prior,7630))) - return np.asmatrix(x) - -demo = gr.Interface( - fn=lotto_generator, - inputs=[gr.Slider(1,5)], - outputs=["text"], -) -demo.launch() \ No newline at end of file diff --git a/spaces/wydgg/bingo-wyd-ai/src/lib/hooks/use-bing.ts b/spaces/wydgg/bingo-wyd-ai/src/lib/hooks/use-bing.ts deleted file mode 100644 index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000 --- a/spaces/wydgg/bingo-wyd-ai/src/lib/hooks/use-bing.ts +++ /dev/null @@ -1,173 +0,0 @@ -'use client' - -import { useState, useCallback, useEffect, useMemo } from 'react' -import { useAtom, useAtomValue } from 'jotai' -import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state' -import { setConversationMessages } from './chat-history' -import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types' -import { nanoid } from '../utils' -import { TTS } from '../bots/bing/tts' - -export function useBing(botId: BotId = 'bing') { - const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId]) - const [enableTTS] = useAtom(voiceAtom) - const speaker = useMemo(() => new TTS(), []) - const [hash, setHash] = useAtom(hashAtom) - const bingConversationStyle = useAtomValue(bingConversationStyleAtom) - const [chatState, setChatState] = useAtom(chatAtom) - const [input, setInput] = useState('') - const [attachmentList, setAttachmentList] = useState([]) - - const updateMessage = useCallback( - (messageId: string, updater: (message: ChatMessageModel) => void) => { - setChatState((draft) => { - const message = draft.messages.find((m) => m.id === messageId) - if (message) { - updater(message) - } - }) - }, - [setChatState], - ) - - const sendMessage = useCallback( - async (input: string, options = {}) => { - const botMessageId = nanoid() - const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined - setChatState((draft) => { - const text = imageUrl ? `${input}\n\n![image](${imageUrl})` : input - draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' }) - setAttachmentList([]) - }) - const abortController = new AbortController() - setChatState((draft) => { - draft.generatingMessageId = botMessageId - draft.abortController = abortController - }) - speaker.reset() - await chatState.bot.sendMessage({ - prompt: input, - imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? 
`https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl, - options: { - ...options, - bingConversationStyle, - }, - signal: abortController.signal, - onEvent(event) { - if (event.type === 'UPDATE_ANSWER') { - updateMessage(botMessageId, (message) => { - if (event.data.text.length > message.text.length) { - message.text = event.data.text - } - - if (event.data.spokenText && enableTTS) { - speaker.speak(event.data.spokenText) - } - - message.throttling = event.data.throttling || message.throttling - message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions - message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses - }) - } else if (event.type === 'ERROR') { - updateMessage(botMessageId, (message) => { - message.error = event.error - }) - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } else if (event.type === 'DONE') { - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } - }, - }) - }, - [botId, attachmentList, chatState.bot, setChatState, updateMessage], - ) - - const uploadImage = useCallback(async (imgUrl: string) => { - setAttachmentList([{ url: imgUrl, status: 'loading' }]) - const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle) - if (response?.blobId) { - setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }]) - } else { - setAttachmentList([{ url: imgUrl, status: 'error' }]) - } - }, [chatState.bot]) - - const resetConversation = useCallback(() => { - chatState.bot.resetConversation() - speaker.abort() - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }] - draft.conversationId = nanoid() - }) - }, [chatState.bot, setChatState]) - - const stopGenerating = useCallback(() => { - chatState.abortController?.abort() - if (chatState.generatingMessageId) { - updateMessage(chatState.generatingMessageId, (message) => { - if (!message.text && !message.error) { - message.text = 'Cancelled' - } - }) - } - setChatState((draft) => { - draft.generatingMessageId = '' - }) - }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage]) - - useEffect(() => { - if (chatState.messages.length) { - setConversationMessages(botId, chatState.conversationId, chatState.messages) - } - }, [botId, chatState.conversationId, chatState.messages]) - - useEffect(() => { - if (hash === 'reset') { - resetConversation() - setHash('') - } - }, [hash, setHash]) - - const chat = useMemo( - () => ({ - botId, - bot: chatState.bot, - isSpeaking: speaker.isSpeaking, - messages: chatState.messages, - sendMessage, - setInput, - input, - resetConversation, - generating: !!chatState.generatingMessageId, - stopGenerating, - uploadImage, - setAttachmentList, - attachmentList, - }), - [ - botId, - bingConversationStyle, - chatState.bot, - chatState.generatingMessageId, - chatState.messages, - speaker.isSpeaking, - setInput, - input, - setAttachmentList, - attachmentList, - resetConversation, - sendMessage, - stopGenerating, - ], - ) - - return chat -} diff --git a/spaces/wz758727829/ChuanhuChatGPT/llama_func.py b/spaces/wz758727829/ChuanhuChatGPT/llama_func.py deleted file mode 100644 index c71027dd4e6f99c0c12626cbbf276f407877be04..0000000000000000000000000000000000000000 --- 
a/spaces/wz758727829/ChuanhuChatGPT/llama_func.py +++ /dev/null @@ -1,192 +0,0 @@ -import os -import logging - -from llama_index import GPTSimpleVectorIndex -from llama_index import download_loader -from llama_index import ( - Document, - LLMPredictor, - PromptHelper, - QuestionAnswerPrompt, - RefinePrompt, -) -from langchain.llms import OpenAI -import colorama - - -from presets import * -from utils import * - - -def get_documents(file_src): - documents = [] - index_name = "" - logging.debug("Loading documents...") - logging.debug(f"file_src: {file_src}") - for file in file_src: - logging.debug(f"file: {file.name}") - index_name += file.name - if os.path.splitext(file.name)[1] == ".pdf": - logging.debug("Loading PDF...") - CJKPDFReader = download_loader("CJKPDFReader") - loader = CJKPDFReader() - documents += loader.load_data(file=file.name) - elif os.path.splitext(file.name)[1] == ".docx": - logging.debug("Loading DOCX...") - DocxReader = download_loader("DocxReader") - loader = DocxReader() - documents += loader.load_data(file=file.name) - elif os.path.splitext(file.name)[1] == ".epub": - logging.debug("Loading EPUB...") - EpubReader = download_loader("EpubReader") - loader = EpubReader() - documents += loader.load_data(file=file.name) - else: - logging.debug("Loading text file...") - with open(file.name, "r", encoding="utf-8") as f: - text = add_space(f.read()) - documents += [Document(text)] - index_name = sha1sum(index_name) - return documents, index_name - - -def construct_index( - api_key, - file_src, - max_input_size=4096, - num_outputs=1, - max_chunk_overlap=20, - chunk_size_limit=600, - embedding_limit=None, - separator=" ", - num_children=10, - max_keywords_per_chunk=10, -): - os.environ["OPENAI_API_KEY"] = api_key - chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit - embedding_limit = None if embedding_limit == 0 else embedding_limit - separator = " " if separator == "" else separator - - llm_predictor = LLMPredictor( - llm=OpenAI(model_name="gpt-3.5-turbo-0301", openai_api_key=api_key) - ) - prompt_helper = PromptHelper( - max_input_size, - num_outputs, - max_chunk_overlap, - embedding_limit, - chunk_size_limit, - separator=separator, - ) - documents, index_name = get_documents(file_src) - if os.path.exists(f"./index/{index_name}.json"): - logging.info("找到了缓存的索引文件,加载中……") - return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json") - else: - try: - logging.debug("构建索引中……") - index = GPTSimpleVectorIndex( - documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper - ) - os.makedirs("./index", exist_ok=True) - index.save_to_disk(f"./index/{index_name}.json") - return index - except Exception as e: - print(e) - return None - - -def chat_ai( - api_key, - index, - question, - context, - chatbot, -): - os.environ["OPENAI_API_KEY"] = api_key - - logging.info(f"Question: {question}") - - response, chatbot_display, status_text = ask_ai( - api_key, - index, - question, - replace_today(PROMPT_TEMPLATE), - REFINE_TEMPLATE, - SIM_K, - INDEX_QUERY_TEMPRATURE, - context, - ) - if response is None: - status_text = "查询失败,请换个问法试试" - return context, chatbot - response = response - - context.append({"role": "user", "content": question}) - context.append({"role": "assistant", "content": response}) - chatbot.append((question, chatbot_display)) - - os.environ["OPENAI_API_KEY"] = "" - return context, chatbot, status_text - - -def ask_ai( - api_key, - index, - question, - prompt_tmpl, - refine_tmpl, - sim_k=1, - temprature=0, - prefix_messages=[], -): - 
os.environ["OPENAI_API_KEY"] = api_key - - logging.debug("Index file found") - logging.debug("Querying index...") - llm_predictor = LLMPredictor( - llm=OpenAI( - temperature=temprature, - model_name="gpt-3.5-turbo-0301", - prefix_messages=prefix_messages, - ) - ) - - response = None # Initialize response variable to avoid UnboundLocalError - qa_prompt = QuestionAnswerPrompt(prompt_tmpl) - rf_prompt = RefinePrompt(refine_tmpl) - response = index.query( - question, - llm_predictor=llm_predictor, - similarity_top_k=sim_k, - text_qa_template=qa_prompt, - refine_template=rf_prompt, - response_mode="compact", - ) - - if response is not None: - logging.info(f"Response: {response}") - ret_text = response.response - nodes = [] - for index, node in enumerate(response.source_nodes): - brief = node.source_text[:25].replace("\n", "") - nodes.append( - f"
      [{index+1}]\t{brief}...

      {node.source_text}

      " - ) - new_response = ret_text + "\n----------\n" + "\n\n".join(nodes) - logging.info( - f"Response: {colorama.Fore.BLUE}{ret_text}{colorama.Style.RESET_ALL}" - ) - os.environ["OPENAI_API_KEY"] = "" - return ret_text, new_response, f"查询消耗了{llm_predictor.last_token_usage} tokens" - else: - logging.warning("No response found, returning None") - os.environ["OPENAI_API_KEY"] = "" - return None - - -def add_space(text): - punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "} - for cn_punc, en_punc in punctuations.items(): - text = text.replace(cn_punc, en_punc) - return text diff --git a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/transforms.py b/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, 
- min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + 
((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/xiaozhong/Real-CUGAN/README.md b/spaces/xiaozhong/Real-CUGAN/README.md deleted file mode 100644 index d673114edadba73e80f33a3c71bc0dbee8758cc8..0000000000000000000000000000000000000000 --- a/spaces/xiaozhong/Real-CUGAN/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Real CUGAN -emoji: 🐢 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: DianXian/Real-CUGAN ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/xuetao/bingo3/tests/parse.ts b/spaces/xuetao/bingo3/tests/parse.ts deleted file mode 100644 index 92940fe6315f1d7cb2b267ba5e5a7e26460a1de3..0000000000000000000000000000000000000000 --- a/spaces/xuetao/bingo3/tests/parse.ts +++ /dev/null @@ -1,13 +0,0 @@ -import { promises as fs } from 'fs' -import { join } from 'path' -import { parseHeadersFromCurl } from '@/lib/utils' - -(async () => { - const content = await fs.readFile(join(__dirname, './fixtures/curl.txt'), 'utf-8') - const headers = parseHeadersFromCurl(content) - console.log(headers) - - const cmdContent = await fs.readFile(join(__dirname, './fixtures/cmd.txt'), 'utf-8') - const cmdHeaders = parseHeadersFromCurl(cmdContent) - console.log(cmdHeaders) -})() diff --git a/spaces/xxbb/VITS-Umamusume-voice-synthesizer/text/__init__.py b/spaces/xxbb/VITS-Umamusume-voice-synthesizer/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/xxbb/VITS-Umamusume-voice-synthesizer/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/ybelkada/interfacegan_pp/torch_utils/ops/bias_act.py b/spaces/ybelkada/interfacegan_pp/torch_utils/ops/bias_act.py deleted file mode 100644 index 5c485c0027570decab26f0b6602a363a432b851f..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/interfacegan_pp/torch_utils/ops/bias_act.py +++ /dev/null @@ -1,209 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
-# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom PyTorch ops for efficient bias and activation.""" - -import os -import numpy as np -import torch -import dnnlib - -from .. import custom_ops -from .. import misc - -#---------------------------------------------------------------------------- - -activation_funcs = { - 'linear': dnnlib.EasyDict(func=lambda x, **_: x, def_alpha=0, def_gain=1, cuda_idx=1, ref='', has_2nd_grad=False), - 'relu': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.relu(x), def_alpha=0, def_gain=np.sqrt(2), cuda_idx=2, ref='y', has_2nd_grad=False), - 'lrelu': dnnlib.EasyDict(func=lambda x, alpha, **_: torch.nn.functional.leaky_relu(x, alpha), def_alpha=0.2, def_gain=np.sqrt(2), cuda_idx=3, ref='y', has_2nd_grad=False), - 'tanh': dnnlib.EasyDict(func=lambda x, **_: torch.tanh(x), def_alpha=0, def_gain=1, cuda_idx=4, ref='y', has_2nd_grad=True), - 'sigmoid': dnnlib.EasyDict(func=lambda x, **_: torch.sigmoid(x), def_alpha=0, def_gain=1, cuda_idx=5, ref='y', has_2nd_grad=True), - 'elu': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.elu(x), def_alpha=0, def_gain=1, cuda_idx=6, ref='y', has_2nd_grad=True), - 'selu': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.selu(x), def_alpha=0, def_gain=1, cuda_idx=7, ref='y', has_2nd_grad=True), - 'softplus': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.softplus(x), def_alpha=0, def_gain=1, cuda_idx=8, ref='y', has_2nd_grad=True), - 'swish': dnnlib.EasyDict(func=lambda x, **_: torch.sigmoid(x) * x, def_alpha=0, def_gain=np.sqrt(2), cuda_idx=9, ref='x', has_2nd_grad=True), -} - -#---------------------------------------------------------------------------- - -_plugin = None -_null_tensor = torch.empty([0]) - -def _init(): - global _plugin - if _plugin is None: - _plugin = custom_ops.get_plugin( - module_name='bias_act_plugin', - sources=['bias_act.cpp', 'bias_act.cu'], - headers=['bias_act.h'], - source_dir=os.path.dirname(__file__), - extra_cuda_cflags=['--use_fast_math'], - ) - return True - -#---------------------------------------------------------------------------- - -def bias_act(x, b=None, dim=1, act='linear', alpha=None, gain=None, clamp=None, impl='cuda'): - r"""Fused bias and activation function. - - Adds bias `b` to activation tensor `x`, evaluates activation function `act`, - and scales the result by `gain`. Each of the steps is optional. In most cases, - the fused op is considerably more efficient than performing the same calculation - using standard PyTorch ops. It supports first and second order gradients, - but not third order gradients. - - Args: - x: Input activation tensor. Can be of any shape. - b: Bias vector, or `None` to disable. Must be a 1D tensor of the same type - as `x`. The shape must be known, and it must match the dimension of `x` - corresponding to `dim`. - dim: The dimension in `x` corresponding to the elements of `b`. - The value of `dim` is ignored if `b` is not specified. - act: Name of the activation function to evaluate, or `"linear"` to disable. - Can be e.g. `"relu"`, `"lrelu"`, `"tanh"`, `"sigmoid"`, `"swish"`, etc. - See `activation_funcs` for a full list. `None` is not allowed. 
- alpha: Shape parameter for the activation function, or `None` to use the default. - gain: Scaling factor for the output tensor, or `None` to use default. - See `activation_funcs` for the default scaling of each activation function. - If unsure, consider specifying 1. - clamp: Clamp the output values to `[-clamp, +clamp]`, or `None` to disable - the clamping (default). - impl: Name of the implementation to use. Can be `"ref"` or `"cuda"` (default). - - Returns: - Tensor of the same shape and datatype as `x`. - """ - assert isinstance(x, torch.Tensor) - assert impl in ['ref', 'cuda'] - if impl == 'cuda' and x.device.type == 'cuda' and _init(): - return _bias_act_cuda(dim=dim, act=act, alpha=alpha, gain=gain, clamp=clamp).apply(x, b) - return _bias_act_ref(x=x, b=b, dim=dim, act=act, alpha=alpha, gain=gain, clamp=clamp) - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def _bias_act_ref(x, b=None, dim=1, act='linear', alpha=None, gain=None, clamp=None): - """Slow reference implementation of `bias_act()` using standard TensorFlow ops. - """ - assert isinstance(x, torch.Tensor) - assert clamp is None or clamp >= 0 - spec = activation_funcs[act] - alpha = float(alpha if alpha is not None else spec.def_alpha) - gain = float(gain if gain is not None else spec.def_gain) - clamp = float(clamp if clamp is not None else -1) - - # Add bias. - if b is not None: - assert isinstance(b, torch.Tensor) and b.ndim == 1 - assert 0 <= dim < x.ndim - assert b.shape[0] == x.shape[dim] - x = x + b.reshape([-1 if i == dim else 1 for i in range(x.ndim)]) - - # Evaluate activation function. - alpha = float(alpha) - x = spec.func(x, alpha=alpha) - - # Scale by gain. - gain = float(gain) - if gain != 1: - x = x * gain - - # Clamp. - if clamp >= 0: - x = x.clamp(-clamp, clamp) # pylint: disable=invalid-unary-operand-type - return x - -#---------------------------------------------------------------------------- - -_bias_act_cuda_cache = dict() - -def _bias_act_cuda(dim=1, act='linear', alpha=None, gain=None, clamp=None): - """Fast CUDA implementation of `bias_act()` using custom ops. - """ - # Parse arguments. - assert clamp is None or clamp >= 0 - spec = activation_funcs[act] - alpha = float(alpha if alpha is not None else spec.def_alpha) - gain = float(gain if gain is not None else spec.def_gain) - clamp = float(clamp if clamp is not None else -1) - - # Lookup from cache. - key = (dim, act, alpha, gain, clamp) - if key in _bias_act_cuda_cache: - return _bias_act_cuda_cache[key] - - # Forward op. 
- class BiasActCuda(torch.autograd.Function): - @staticmethod - def forward(ctx, x, b): # pylint: disable=arguments-differ - ctx.memory_format = torch.channels_last if x.ndim > 2 and x.stride(1) == 1 else torch.contiguous_format - x = x.contiguous(memory_format=ctx.memory_format) - b = b.contiguous() if b is not None else _null_tensor - y = x - if act != 'linear' or gain != 1 or clamp >= 0 or b is not _null_tensor: - y = _plugin.bias_act(x, b, _null_tensor, _null_tensor, _null_tensor, 0, dim, spec.cuda_idx, alpha, gain, clamp) - ctx.save_for_backward( - x if 'x' in spec.ref or spec.has_2nd_grad else _null_tensor, - b if 'x' in spec.ref or spec.has_2nd_grad else _null_tensor, - y if 'y' in spec.ref else _null_tensor) - return y - - @staticmethod - def backward(ctx, dy): # pylint: disable=arguments-differ - dy = dy.contiguous(memory_format=ctx.memory_format) - x, b, y = ctx.saved_tensors - dx = None - db = None - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - dx = dy - if act != 'linear' or gain != 1 or clamp >= 0: - dx = BiasActCudaGrad.apply(dy, x, b, y) - - if ctx.needs_input_grad[1]: - db = dx.sum([i for i in range(dx.ndim) if i != dim]) - - return dx, db - - # Backward op. - class BiasActCudaGrad(torch.autograd.Function): - @staticmethod - def forward(ctx, dy, x, b, y): # pylint: disable=arguments-differ - ctx.memory_format = torch.channels_last if dy.ndim > 2 and dy.stride(1) == 1 else torch.contiguous_format - dx = _plugin.bias_act(dy, b, x, y, _null_tensor, 1, dim, spec.cuda_idx, alpha, gain, clamp) - ctx.save_for_backward( - dy if spec.has_2nd_grad else _null_tensor, - x, b, y) - return dx - - @staticmethod - def backward(ctx, d_dx): # pylint: disable=arguments-differ - d_dx = d_dx.contiguous(memory_format=ctx.memory_format) - dy, x, b, y = ctx.saved_tensors - d_dy = None - d_x = None - d_b = None - d_y = None - - if ctx.needs_input_grad[0]: - d_dy = BiasActCudaGrad.apply(d_dx, x, b, y) - - if spec.has_2nd_grad and (ctx.needs_input_grad[1] or ctx.needs_input_grad[2]): - d_x = _plugin.bias_act(d_dx, b, x, y, dy, 2, dim, spec.cuda_idx, alpha, gain, clamp) - - if spec.has_2nd_grad and ctx.needs_input_grad[2]: - d_b = d_x.sum([i for i in range(d_x.ndim) if i != dim]) - - return d_dy, d_x, d_b, d_y - - # Add to cache. - _bias_act_cuda_cache[key] = BiasActCuda - return BiasActCuda - -#---------------------------------------------------------------------------- diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/beit/image_processing_beit.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/beit/image_processing_beit.py deleted file mode 100644 index 6f8ce403e0a59ce7ba52f70c695097e113bc0698..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/beit/image_processing_beit.py +++ /dev/null @@ -1,505 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
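The `bias_act.py` module above exposes a single public entry point, `bias_act()`, which fuses bias addition, the chosen activation, gain scaling, and optional clamping. Before moving on to the BEiT image processor below, here is a minimal, hedged usage sketch; the `torch_utils.ops` import path is an assumption based on the usual StyleGAN-style repository layout and should be adjusted to wherever this module actually lives.

```python
import torch
# Import path is an assumption: this file typically lives at torch_utils/ops/bias_act.py
# in StyleGAN-style repositories; adjust it to the actual package location.
from torch_utils.ops import bias_act

x = torch.randn(4, 512, 16, 16)   # activations, channels along dim=1
b = torch.randn(512)              # per-channel bias, must match x.shape[1]

# Fused bias + leaky ReLU with the activation's default gain (sqrt(2)).
# impl='ref' forces the pure-PyTorch reference path, which also runs on CPU;
# on a CUDA tensor, impl='cuda' (the default) uses the compiled plugin instead.
y = bias_act.bias_act(x, b, dim=1, act='lrelu', impl='ref')
print(y.shape)                    # torch.Size([4, 512, 16, 16])
```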
-"""Image processor class for Beit.""" - -import warnings -from typing import Any, Dict, List, Optional, Tuple, Union - -import numpy as np - -from ...image_processing_utils import BaseImageProcessor, BatchFeature, get_size_dict -from ...image_transforms import resize, to_channel_dimension_format -from ...image_utils import ( - IMAGENET_STANDARD_MEAN, - IMAGENET_STANDARD_STD, - ChannelDimension, - ImageInput, - PILImageResampling, - infer_channel_dimension_format, - is_scaled_image, - make_list_of_images, - to_numpy_array, - valid_images, -) -from ...utils import TensorType, is_torch_available, is_torch_tensor, is_vision_available, logging - - -if is_vision_available(): - import PIL - -if is_torch_available(): - import torch - - -logger = logging.get_logger(__name__) - - -class BeitImageProcessor(BaseImageProcessor): - r""" - Constructs a BEiT image processor. - - Args: - do_resize (`bool`, *optional*, defaults to `True`): - Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the - `do_resize` parameter in the `preprocess` method. - size (`Dict[str, int]` *optional*, defaults to `{"height": 256, "width": 256}`): - Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess` - method. - resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`): - Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the - `preprocess` method. - do_center_crop (`bool`, *optional*, defaults to `True`): - Whether to center crop the image. If the input size is smaller than `crop_size` along any edge, the image - is padded with 0's and then center cropped. Can be overridden by the `do_center_crop` parameter in the - `preprocess` method. - crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`): - Desired output size when applying center-cropping. Only has an effect if `do_center_crop` is set to `True`. - Can be overridden by the `crop_size` parameter in the `preprocess` method. - rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): - Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the - `preprocess` method. - do_rescale (`bool`, *optional*, defaults to `True`): - Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` - parameter in the `preprocess` method. - do_normalize (`bool`, *optional*, defaults to `True`): - Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` - method. - image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): - The mean to use if normalizing the image. This is a float or list of floats of length of the number of - channels of the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. - image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): - The standard deviation to use if normalizing the image. This is a float or list of floats of length of the - number of channels of the image. Can be overridden by the `image_std` parameter in the `preprocess` method. - do_reduce_labels (`bool`, *optional*, defaults to `False`): - Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 is - used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). 
The - background label will be replaced by 255. Can be overridden by the `do_reduce_labels` parameter in the - `preprocess` method. - """ - - model_input_names = ["pixel_values"] - - def __init__( - self, - do_resize: bool = True, - size: Dict[str, int] = None, - resample: PILImageResampling = PILImageResampling.BICUBIC, - do_center_crop: bool = True, - crop_size: Dict[str, int] = None, - rescale_factor: Union[int, float] = 1 / 255, - do_rescale: bool = True, - do_normalize: bool = True, - image_mean: Optional[Union[float, List[float]]] = None, - image_std: Optional[Union[float, List[float]]] = None, - do_reduce_labels: bool = False, - **kwargs, - ) -> None: - if "reduce_labels" in kwargs: - warnings.warn( - "The `reduce_labels` parameter is deprecated and will be removed in a future version. Please use" - " `do_reduce_labels` instead.", - FutureWarning, - ) - do_reduce_labels = kwargs.pop("reduce_labels") - super().__init__(**kwargs) - size = size if size is not None else {"height": 256, "width": 256} - size = get_size_dict(size) - crop_size = crop_size if crop_size is not None else {"height": 224, "width": 224} - crop_size = get_size_dict(crop_size, param_name="crop_size") - self.do_resize = do_resize - self.size = size - self.resample = resample - self.do_center_crop = do_center_crop - self.crop_size = crop_size - self.do_rescale = do_rescale - self.rescale_factor = rescale_factor - self.do_normalize = do_normalize - self.image_mean = image_mean if image_mean is not None else IMAGENET_STANDARD_MEAN - self.image_std = image_std if image_std is not None else IMAGENET_STANDARD_STD - self.do_reduce_labels = do_reduce_labels - - @classmethod - def from_dict(cls, image_processor_dict: Dict[str, Any], **kwargs): - """ - Overrides the `from_dict` method from the base class to make sure `reduce_labels` is updated if image processor - is created using from_dict and kwargs e.g. `BeitImageProcessor.from_pretrained(checkpoint, reduce_labels=True)` - """ - image_processor_dict = image_processor_dict.copy() - if "reduce_labels" in kwargs: - image_processor_dict["reduce_labels"] = kwargs.pop("reduce_labels") - return super().from_dict(image_processor_dict, **kwargs) - - def resize( - self, - image: np.ndarray, - size: Dict[str, int], - resample: PILImageResampling = PILImageResampling.BICUBIC, - data_format: Optional[Union[str, ChannelDimension]] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - **kwargs, - ) -> np.ndarray: - """ - Resize an image to (size["height"], size["width"]). - - Args: - image (`np.ndarray`): - Image to resize. - size (`Dict[str, int]`): - Size of the output image. - resample (`PILImageResampling`, *optional*, defaults to `PIL.Image.BICUBIC`): - Resampling filter to use when resiizing the image. - data_format (`str` or `ChannelDimension`, *optional*): - The channel dimension format of the image. If not provided, it will be the same as the input image. - input_data_format (`str` or `ChannelDimension`, *optional*): - The channel dimension format of the input image. If not provided, it will be inferred. - """ - size = get_size_dict(size, default_to_square=True, param_name="size") - if "height" not in size or "width" not in size: - raise ValueError(f"The `size` argument must contain `height` and `width` keys. 
Got {size.keys()}") - return resize( - image, - size=(size["height"], size["width"]), - resample=resample, - data_format=data_format, - input_data_format=input_data_format, - **kwargs, - ) - - def reduce_label(self, label: ImageInput) -> np.ndarray: - label = to_numpy_array(label) - # Avoid using underflow conversion - label[label == 0] = 255 - label = label - 1 - label[label == 254] = 255 - return label - - def _preprocess( - self, - image: ImageInput, - do_reduce_labels: bool = None, - do_resize: bool = None, - size: Dict[str, int] = None, - resample: PILImageResampling = None, - do_center_crop: bool = None, - crop_size: Dict[str, int] = None, - do_rescale: bool = None, - rescale_factor: float = None, - do_normalize: bool = None, - image_mean: Optional[Union[float, List[float]]] = None, - image_std: Optional[Union[float, List[float]]] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - ): - if do_reduce_labels: - image = self.reduce_label(image) - - if do_resize: - image = self.resize(image=image, size=size, resample=resample, input_data_format=input_data_format) - - if do_center_crop: - image = self.center_crop(image=image, size=crop_size, input_data_format=input_data_format) - - if do_rescale: - image = self.rescale(image=image, scale=rescale_factor, input_data_format=input_data_format) - - if do_normalize: - image = self.normalize(image=image, mean=image_mean, std=image_std, input_data_format=input_data_format) - - return image - - def _preprocess_image( - self, - image: ImageInput, - do_resize: bool = None, - size: Dict[str, int] = None, - resample: PILImageResampling = None, - do_center_crop: bool = None, - crop_size: Dict[str, int] = None, - do_rescale: bool = None, - rescale_factor: float = None, - do_normalize: bool = None, - image_mean: Optional[Union[float, List[float]]] = None, - image_std: Optional[Union[float, List[float]]] = None, - data_format: Optional[Union[str, ChannelDimension]] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - ) -> np.ndarray: - """Preprocesses a single image.""" - # All transformations expect numpy arrays. - image = to_numpy_array(image) - if is_scaled_image(image) and do_rescale: - logger.warning_once( - "It looks like you are trying to rescale already rescaled images. If the input" - " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again." - ) - if input_data_format is None: - input_data_format = infer_channel_dimension_format(image) - image = self._preprocess( - image, - do_reduce_labels=False, - do_resize=do_resize, - size=size, - resample=resample, - do_center_crop=do_center_crop, - crop_size=crop_size, - do_rescale=do_rescale, - rescale_factor=rescale_factor, - do_normalize=do_normalize, - image_mean=image_mean, - image_std=image_std, - input_data_format=input_data_format, - ) - if data_format is not None: - image = to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) - return image - - def _preprocess_segmentation_map( - self, - segmentation_map: ImageInput, - do_resize: bool = None, - size: Dict[str, int] = None, - resample: PILImageResampling = None, - do_center_crop: bool = None, - crop_size: Dict[str, int] = None, - do_reduce_labels: bool = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - ): - """Preprocesses a single segmentation map.""" - # All transformations expect numpy arrays. 
- segmentation_map = to_numpy_array(segmentation_map) - # Add an axis to the segmentation maps for transformations. - if segmentation_map.ndim == 2: - segmentation_map = segmentation_map[None, ...] - added_dimension = True - input_data_format = ChannelDimension.FIRST - else: - added_dimension = False - if input_data_format is None: - input_data_format = infer_channel_dimension_format(segmentation_map, num_channels=1) - segmentation_map = self._preprocess( - image=segmentation_map, - do_reduce_labels=do_reduce_labels, - do_resize=do_resize, - resample=resample, - size=size, - do_center_crop=do_center_crop, - crop_size=crop_size, - do_normalize=False, - do_rescale=False, - input_data_format=ChannelDimension.FIRST, - ) - # Remove extra axis if added - if added_dimension: - segmentation_map = np.squeeze(segmentation_map, axis=0) - segmentation_map = segmentation_map.astype(np.int64) - return segmentation_map - - def __call__(self, images, segmentation_maps=None, **kwargs): - # Overrides the `__call__` method of the `Preprocessor` class such that the images and segmentation maps can both - # be passed in as positional arguments. - return super().__call__(images, segmentation_maps=segmentation_maps, **kwargs) - - def preprocess( - self, - images: ImageInput, - segmentation_maps: Optional[ImageInput] = None, - do_resize: bool = None, - size: Dict[str, int] = None, - resample: PILImageResampling = None, - do_center_crop: bool = None, - crop_size: Dict[str, int] = None, - do_rescale: bool = None, - rescale_factor: float = None, - do_normalize: bool = None, - image_mean: Optional[Union[float, List[float]]] = None, - image_std: Optional[Union[float, List[float]]] = None, - do_reduce_labels: Optional[bool] = None, - return_tensors: Optional[Union[str, TensorType]] = None, - data_format: ChannelDimension = ChannelDimension.FIRST, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - **kwargs, - ) -> PIL.Image.Image: - """ - Preprocess an image or batch of images. - - Args: - images (`ImageInput`): - Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If - passing in images with pixel values between 0 and 1, set `do_rescale=False`. - do_resize (`bool`, *optional*, defaults to `self.do_resize`): - Whether to resize the image. - size (`Dict[str, int]`, *optional*, defaults to `self.size`): - Size of the image after resizing. - resample (`int`, *optional*, defaults to `self.resample`): - Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`, Only - has an effect if `do_resize` is set to `True`. - do_center_crop (`bool`, *optional*, defaults to `self.do_center_crop`): - Whether to center crop the image. - crop_size (`Dict[str, int]`, *optional*, defaults to `self.crop_size`): - Size of the image after center crop. If one edge the image is smaller than `crop_size`, it will be - padded with zeros and then cropped - do_rescale (`bool`, *optional*, defaults to `self.do_rescale`): - Whether to rescale the image values between [0 - 1]. - rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`): - Rescale factor to rescale the image by if `do_rescale` is set to `True`. - do_normalize (`bool`, *optional*, defaults to `self.do_normalize`): - Whether to normalize the image. - image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`): - Image mean. - image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`): - Image standard deviation. 
- do_reduce_labels (`bool`, *optional*, defaults to `self.do_reduce_labels`): - Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 - is used for background, and background itself is not included in all classes of a dataset (e.g. - ADE20k). The background label will be replaced by 255. - return_tensors (`str` or `TensorType`, *optional*): - The type of tensors to return. Can be one of: - - Unset: Return a list of `np.ndarray`. - - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`. - - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`. - - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`. - - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`. - data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`): - The channel dimension format for the output image. Can be one of: - - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - - Unset: Use the channel dimension format of the input image. - input_data_format (`ChannelDimension` or `str`, *optional*): - The channel dimension format for the input image. If unset, the channel dimension format is inferred - from the input image. Can be one of: - - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - - `"none"` or `ChannelDimension.NONE`: image in (height, width) format. - """ - do_resize = do_resize if do_resize is not None else self.do_resize - size = size if size is not None else self.size - size = get_size_dict(size, default_to_square=True, param_name="size") - resample = resample if resample is not None else self.resample - do_center_crop = do_center_crop if do_center_crop is not None else self.do_center_crop - crop_size = crop_size if crop_size is not None else self.crop_size - crop_size = get_size_dict(crop_size, default_to_square=True, param_name="crop_size") - do_rescale = do_rescale if do_rescale is not None else self.do_rescale - rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor - do_normalize = do_normalize if do_normalize is not None else self.do_normalize - image_mean = image_mean if image_mean is not None else self.image_mean - image_std = image_std if image_std is not None else self.image_std - do_reduce_labels = do_reduce_labels if do_reduce_labels is not None else self.do_reduce_labels - - images = make_list_of_images(images) - if segmentation_maps is not None: - segmentation_maps = make_list_of_images(segmentation_maps, expected_ndims=2) - - if not valid_images(images): - raise ValueError( - "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, " - "torch.Tensor, tf.Tensor or jax.ndarray." - ) - - if segmentation_maps is not None and not valid_images(segmentation_maps): - raise ValueError( - "Invalid segmentation map type. Must be of type PIL.Image.Image, numpy.ndarray, " - "torch.Tensor, tf.Tensor or jax.ndarray." 
- ) - - if do_resize and size is None or resample is None: - raise ValueError("Size and resample must be specified if do_resize is True.") - - if do_center_crop and crop_size is None: - raise ValueError("Crop size must be specified if do_center_crop is True.") - - if do_rescale and rescale_factor is None: - raise ValueError("Rescale factor must be specified if do_rescale is True.") - - if do_normalize and (image_mean is None or image_std is None): - raise ValueError("Image mean and std must be specified if do_normalize is True.") - - images = [ - self._preprocess_image( - image=img, - do_resize=do_resize, - do_center_crop=do_center_crop, - do_rescale=do_rescale, - do_normalize=do_normalize, - resample=resample, - size=size, - rescale_factor=rescale_factor, - crop_size=crop_size, - image_mean=image_mean, - image_std=image_std, - data_format=data_format, - input_data_format=input_data_format, - ) - for img in images - ] - - data = {"pixel_values": images} - - if segmentation_maps is not None: - segmentation_maps = [ - self._preprocess_segmentation_map( - segmentation_map=segmentation_map, - do_reduce_labels=do_reduce_labels, - do_resize=do_resize, - resample=resample, - size=size, - do_center_crop=do_center_crop, - crop_size=crop_size, - ) - for segmentation_map in segmentation_maps - ] - data["labels"] = segmentation_maps - - return BatchFeature(data=data, tensor_type=return_tensors) - - def post_process_semantic_segmentation(self, outputs, target_sizes: List[Tuple] = None): - """ - Converts the output of [`BeitForSemanticSegmentation`] into semantic segmentation maps. Only supports PyTorch. - - Args: - outputs ([`BeitForSemanticSegmentation`]): - Raw outputs of the model. - target_sizes (`List[Tuple]` of length `batch_size`, *optional*): - List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, - predictions will not be resized. - - Returns: - semantic_segmentation: `List[torch.Tensor]` of length `batch_size`, where each item is a semantic - segmentation map of shape (height, width) corresponding to the target_sizes entry (if `target_sizes` is - specified). Each entry of each `torch.Tensor` correspond to a semantic class id. 
- """ - # TODO: add support for other frameworks - logits = outputs.logits - - # Resize logits and compute semantic segmentation maps - if target_sizes is not None: - if len(logits) != len(target_sizes): - raise ValueError( - "Make sure that you pass in as many target sizes as the batch dimension of the logits" - ) - - if is_torch_tensor(target_sizes): - target_sizes = target_sizes.numpy() - - semantic_segmentation = [] - - for idx in range(len(logits)): - resized_logits = torch.nn.functional.interpolate( - logits[idx].unsqueeze(dim=0), size=target_sizes[idx], mode="bilinear", align_corners=False - ) - semantic_map = resized_logits[0].argmax(dim=0) - semantic_segmentation.append(semantic_map) - else: - semantic_segmentation = logits.argmax(dim=1) - semantic_segmentation = [semantic_segmentation[i] for i in range(semantic_segmentation.shape[0])] - - return semantic_segmentation diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/lilt/modeling_lilt.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/lilt/modeling_lilt.py deleted file mode 100644 index 46fe2d3e9cd7794696a4780001e3e725cfaeb27c..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/lilt/modeling_lilt.py +++ /dev/null @@ -1,1198 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""PyTorch LiLT model.""" - -import math -from typing import Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...activations import ACT2FN -from ...modeling_outputs import ( - BaseModelOutput, - BaseModelOutputWithPooling, - QuestionAnsweringModelOutput, - SequenceClassifierOutput, - TokenClassifierOutput, -) -from ...modeling_utils import PreTrainedModel -from ...pytorch_utils import apply_chunking_to_forward, find_pruneable_heads_and_indices, prune_linear_layer -from ...utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings -from .configuration_lilt import LiltConfig - - -logger = logging.get_logger(__name__) - -_CONFIG_FOR_DOC = "LiltConfig" - -LILT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "SCUT-DLVCLab/lilt-roberta-en-base", - # See all LiLT models at https://huggingface.co/models?filter=lilt -] - - -class LiltTextEmbeddings(nn.Module): - def __init__(self, config): - super().__init__() - self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) - self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) - self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.register_buffer( - "position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)), persistent=False - ) - self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") - - # End copy - self.padding_idx = config.pad_token_id - self.position_embeddings = nn.Embedding( - config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx - ) - - def forward( - self, - input_ids=None, - token_type_ids=None, - position_ids=None, - inputs_embeds=None, - ): - if position_ids is None: - if input_ids is not None: - # Create the position ids from the input token ids. Any padded tokens remain padded. - position_ids = self.create_position_ids_from_input_ids(input_ids, self.padding_idx).to( - input_ids.device - ) - else: - position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds) - - if input_ids is not None: - input_shape = input_ids.size() - else: - input_shape = inputs_embeds.size()[:-1] - - if token_type_ids is None: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device) - - if inputs_embeds is None: - inputs_embeds = self.word_embeddings(input_ids) - token_type_embeddings = self.token_type_embeddings(token_type_ids) - - embeddings = inputs_embeds + token_type_embeddings - if self.position_embedding_type == "absolute": - position_embeddings = self.position_embeddings(position_ids) - embeddings += position_embeddings - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings, position_ids - - def create_position_ids_from_input_ids(self, input_ids, padding_idx): - """ - Args: - Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding - symbols are ignored. 
This is modified from fairseq's `utils.make_positions`. - x: torch.Tensor x: - Returns: torch.Tensor - """ - # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA. - mask = input_ids.ne(padding_idx).int() - incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask)) * mask - return incremental_indices.long() + padding_idx - - def create_position_ids_from_inputs_embeds(self, inputs_embeds): - """ - Args: - We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.: - inputs_embeds: torch.Tensor - Returns: torch.Tensor - """ - input_shape = inputs_embeds.size()[:-1] - sequence_length = input_shape[1] - - position_ids = torch.arange( - self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device - ) - return position_ids.unsqueeze(0).expand(input_shape) - - -class LiltLayoutEmbeddings(nn.Module): - def __init__(self, config): - super().__init__() - # we divide the hidden_size by 6 here as there are 6 different layout embeddings, - # namely left_position, upper_position, right_position, lower_position, height, width - self.x_position_embeddings = nn.Embedding(config.max_2d_position_embeddings, config.hidden_size // 6) - self.y_position_embeddings = nn.Embedding(config.max_2d_position_embeddings, config.hidden_size // 6) - self.h_position_embeddings = nn.Embedding(config.max_2d_position_embeddings, config.hidden_size // 6) - self.w_position_embeddings = nn.Embedding(config.max_2d_position_embeddings, config.hidden_size // 6) - - self.padding_idx = config.pad_token_id - self.box_position_embeddings = nn.Embedding( - config.max_position_embeddings, - config.hidden_size // config.channel_shrink_ratio, - padding_idx=self.padding_idx, - ) - self.box_linear_embeddings = nn.Linear( - in_features=config.hidden_size, out_features=config.hidden_size // config.channel_shrink_ratio - ) - self.LayerNorm = nn.LayerNorm(config.hidden_size // config.channel_shrink_ratio, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, bbox=None, position_ids=None): - try: - left_position_embeddings = self.x_position_embeddings(bbox[:, :, 0]) - upper_position_embeddings = self.y_position_embeddings(bbox[:, :, 1]) - right_position_embeddings = self.x_position_embeddings(bbox[:, :, 2]) - lower_position_embeddings = self.y_position_embeddings(bbox[:, :, 3]) - except IndexError as e: - raise IndexError("The `bbox` coordinate values should be within 0-1000 range.") from e - - h_position_embeddings = self.h_position_embeddings(bbox[:, :, 3] - bbox[:, :, 1]) - w_position_embeddings = self.w_position_embeddings(bbox[:, :, 2] - bbox[:, :, 0]) - - spatial_position_embeddings = torch.cat( - [ - left_position_embeddings, - upper_position_embeddings, - right_position_embeddings, - lower_position_embeddings, - h_position_embeddings, - w_position_embeddings, - ], - dim=-1, - ) - spatial_position_embeddings = self.box_linear_embeddings(spatial_position_embeddings) - box_position_embeddings = self.box_position_embeddings(position_ids) - - spatial_position_embeddings = spatial_position_embeddings + box_position_embeddings - - spatial_position_embeddings = self.LayerNorm(spatial_position_embeddings) - spatial_position_embeddings = self.dropout(spatial_position_embeddings) - - return spatial_position_embeddings - - -class LiltSelfAttention(nn.Module): - def __init__(self, config, position_embedding_type=None): - super().__init__() - if 
config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): - raise ValueError( - f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention " - f"heads ({config.num_attention_heads})" - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size) - self.key = nn.Linear(config.hidden_size, self.all_head_size) - self.value = nn.Linear(config.hidden_size, self.all_head_size) - - self.layout_query = nn.Linear( - config.hidden_size // config.channel_shrink_ratio, self.all_head_size // config.channel_shrink_ratio - ) - self.layout_key = nn.Linear( - config.hidden_size // config.channel_shrink_ratio, self.all_head_size // config.channel_shrink_ratio - ) - self.layout_value = nn.Linear( - config.hidden_size // config.channel_shrink_ratio, self.all_head_size // config.channel_shrink_ratio - ) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.position_embedding_type = position_embedding_type or getattr( - config, "position_embedding_type", "absolute" - ) - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - self.max_position_embeddings = config.max_position_embeddings - self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size) - - self.channel_shrink_ratio = config.channel_shrink_ratio - - def transpose_for_scores(self, x, r=1): - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size // r) - x = x.view(*new_x_shape) - return x.permute(0, 2, 1, 3) - - def forward( - self, - hidden_states, - layout_inputs, - attention_mask=None, - head_mask=None, - output_attentions=False, - ): - layout_value_layer = self.transpose_for_scores(self.layout_value(layout_inputs), r=self.channel_shrink_ratio) - layout_key_layer = self.transpose_for_scores(self.layout_key(layout_inputs), r=self.channel_shrink_ratio) - layout_query_layer = self.transpose_for_scores(self.layout_query(layout_inputs), r=self.channel_shrink_ratio) - - mixed_query_layer = self.query(hidden_states) - - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - query_layer = self.transpose_for_scores(mixed_query_layer) - - attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - layout_attention_scores = torch.matmul(layout_query_layer, layout_key_layer.transpose(-1, -2)) - - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - seq_length = hidden_states.size()[1] - position_ids_l = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(-1, 1) - position_ids_r = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(1, -1) - distance = position_ids_l - position_ids_r - positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1) - positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility - - if self.position_embedding_type == "relative_key": - relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores - elif self.position_embedding_type == "relative_key_query": 
- relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key - - tmp_attention_scores = attention_scores / math.sqrt(self.attention_head_size) - tmp_layout_attention_scores = layout_attention_scores / math.sqrt( - self.attention_head_size // self.channel_shrink_ratio - ) - attention_scores = tmp_attention_scores + tmp_layout_attention_scores - layout_attention_scores = tmp_layout_attention_scores + tmp_attention_scores - - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in BertModel forward() function) - layout_attention_scores = layout_attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - layout_attention_probs = nn.Softmax(dim=-1)(layout_attention_scores) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. - layout_attention_probs = self.dropout(layout_attention_probs) - - # Mask heads if we want to - if head_mask is not None: - layout_attention_probs = layout_attention_probs * head_mask - - layout_context_layer = torch.matmul(layout_attention_probs, layout_value_layer) - - layout_context_layer = layout_context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = layout_context_layer.size()[:-2] + (self.all_head_size // self.channel_shrink_ratio,) - layout_context_layer = layout_context_layer.view(*new_context_layer_shape) - - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in RobertaModel forward() function) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - attention_probs = nn.Softmax(dim=-1)(attention_scores) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. 
- attention_probs = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - - context_layer = torch.matmul(attention_probs, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(*new_context_layer_shape) - - outputs = ( - ((context_layer, layout_context_layer), attention_probs) - if output_attentions - else ((context_layer, layout_context_layer),) - ) - - return outputs - - -# Copied from transformers.models.bert.modeling_bert.BertSelfOutput -class LiltSelfOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -class LiltAttention(nn.Module): - def __init__(self, config, position_embedding_type=None): - super().__init__() - self.self = LiltSelfAttention(config, position_embedding_type=position_embedding_type) - self.output = LiltSelfOutput(config) - self.pruned_heads = set() - - ori_hidden_size = config.hidden_size - config.hidden_size = config.hidden_size // config.channel_shrink_ratio - self.layout_output = LiltSelfOutput(config) - config.hidden_size = ori_hidden_size - - # Copied from transformers.models.bert.modeling_bert.BertAttention.prune_heads - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads - ) - - # Prune linear layers - self.self.query = prune_linear_layer(self.self.query, index) - self.self.key = prune_linear_layer(self.self.key, index) - self.self.value = prune_linear_layer(self.self.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.self.num_attention_heads = self.self.num_attention_heads - len(heads) - self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads - self.pruned_heads = self.pruned_heads.union(heads) - - def forward( - self, - hidden_states: torch.Tensor, - layout_inputs: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = False, - ) -> Tuple[torch.Tensor]: - self_outputs = self.self( - hidden_states, - layout_inputs, - attention_mask, - head_mask, - output_attentions, - ) - attention_output = self.output(self_outputs[0][0], hidden_states) - layout_attention_output = self.layout_output(self_outputs[0][1], layout_inputs) - outputs = ((attention_output, layout_attention_output),) + self_outputs[1:] # add attentions if we output them - return outputs - - -# Copied from transformers.models.bert.modeling_bert.BertIntermediate -class LiltIntermediate(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - 
self.intermediate_act_fn = config.hidden_act - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - return hidden_states - - -# Copied from transformers.models.bert.modeling_bert.BertOutput -class LiltOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -class LiltLayer(nn.Module): - def __init__(self, config): - super().__init__() - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - self.attention = LiltAttention(config) - self.intermediate = LiltIntermediate(config) - self.output = LiltOutput(config) - - ori_hidden_size = config.hidden_size - ori_intermediate_size = config.intermediate_size - config.hidden_size = config.hidden_size // config.channel_shrink_ratio - config.intermediate_size = config.intermediate_size // config.channel_shrink_ratio - self.layout_intermediate = LiltIntermediate(config) - self.layout_output = LiltOutput(config) - config.hidden_size = ori_hidden_size - config.intermediate_size = ori_intermediate_size - - def forward( - self, - hidden_states: torch.Tensor, - layout_inputs: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = False, - ) -> Tuple[torch.Tensor]: - self_attention_outputs = self.attention( - hidden_states, - layout_inputs, - attention_mask, - head_mask, - output_attentions=output_attentions, - ) - attention_output = self_attention_outputs[0][0] - layout_attention_output = self_attention_outputs[0][1] - - outputs = self_attention_outputs[1:] # add self attentions if we output attention weights - - layer_output = apply_chunking_to_forward( - self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output - ) - layout_layer_output = apply_chunking_to_forward( - self.layout_feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, layout_attention_output - ) - outputs = ((layer_output, layout_layer_output),) + outputs - - return outputs - - # Copied from transformers.models.bert.modeling_bert.BertLayer.feed_forward_chunk - def feed_forward_chunk(self, attention_output): - intermediate_output = self.intermediate(attention_output) - layer_output = self.output(intermediate_output, attention_output) - return layer_output - - def layout_feed_forward_chunk(self, attention_output): - intermediate_output = self.layout_intermediate(attention_output) - layer_output = self.layout_output(intermediate_output, attention_output) - return layer_output - - -class LiltEncoder(nn.Module): - # Copied from transformers.models.bert.modeling_bert.BertEncoder.__init__ with Bert->Lilt - def __init__(self, config): - super().__init__() - self.config = config - self.layer = nn.ModuleList([LiltLayer(config) for _ in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.Tensor, - layout_inputs: torch.Tensor, - attention_mask: 
Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = False, - output_hidden_states: Optional[bool] = False, - return_dict: Optional[bool] = True, - ) -> Union[Tuple[torch.Tensor], BaseModelOutput]: - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - - for i, layer_module in enumerate(self.layer): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - layout_inputs, - attention_mask, - layer_head_mask, - ) - else: - layer_outputs = layer_module( - hidden_states, - layout_inputs, - attention_mask, - layer_head_mask, - output_attentions, - ) - - hidden_states = layer_outputs[0][0] - layout_inputs = layer_outputs[0][1] - - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple( - v - for v in [ - hidden_states, - all_hidden_states, - all_self_attentions, - ] - if v is not None - ) - return BaseModelOutput( - last_hidden_state=hidden_states, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - ) - - -# Copied from transformers.models.bert.modeling_bert.BertPooler -class LiltPooler(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.activation = nn.Tanh() - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. - first_token_tensor = hidden_states[:, 0] - pooled_output = self.dense(first_token_tensor) - pooled_output = self.activation(pooled_output) - return pooled_output - - -class LiltPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = LiltConfig - base_model_prefix = "lilt" - supports_gradient_checkpointing = True - _no_split_modules = [] - - # Copied from transformers.models.bert.modeling_bert.BertPreTrainedModel._init_weights - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, nn.Linear): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, LiltEncoder): - module.gradient_checkpointing = value - - -LILT_START_DOCSTRING = r""" - This model inherits from [`PreTrainedModel`]. 
Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`LiltConfig`]): Model configuration class with all the parameters of the - model. Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -LILT_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - - bbox (`torch.LongTensor` of shape `({0}, 4)`, *optional*): - Bounding boxes of each input sequence tokens. Selected in the range `[0, - config.max_2d_position_embeddings-1]`. Each bounding box should be a normalized version in (x0, y0, x1, y1) - format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1, - y1) represents the position of the lower right corner. See [Overview](#Overview) for normalization. - - attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
-""" - - -@add_start_docstrings( - "The bare LiLT Model transformer outputting raw hidden-states without any specific head on top.", - LILT_START_DOCSTRING, -) -class LiltModel(LiltPreTrainedModel): - def __init__(self, config, add_pooling_layer=True): - super().__init__(config) - self.config = config - - self.embeddings = LiltTextEmbeddings(config) - self.layout_embeddings = LiltLayoutEmbeddings(config) - self.encoder = LiltEncoder(config) - - self.pooler = LiltPooler(config) if add_pooling_layer else None - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value): - self.embeddings.word_embeddings = value - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - @add_start_docstrings_to_model_forward(LILT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - bbox: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPooling]: - r""" - - Returns: - - Examples: - - ```python - >>> from transformers import AutoTokenizer, AutoModel - >>> from datasets import load_dataset - - >>> tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base") - >>> model = AutoModel.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base") - - >>> dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train") - >>> example = dataset[0] - >>> words = example["tokens"] - >>> boxes = example["bboxes"] - - >>> encoding = tokenizer(words, boxes=boxes, return_tensors="pt") - - >>> outputs = model(**encoding) - >>> last_hidden_states = outputs.last_hidden_state - ```""" - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask) - input_shape = input_ids.size() - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - batch_size, seq_length = input_shape - device = input_ids.device if input_ids is not None else inputs_embeds.device - - if bbox is None: - bbox = torch.zeros(input_shape + (4,), dtype=torch.long, device=device) - - if attention_mask is None: - attention_mask = torch.ones(((batch_size, seq_length)), device=device) - - if token_type_ids is None: - if 
hasattr(self.embeddings, "token_type_ids"): - buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. - extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape) - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - embedding_output, position_ids = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - ) - - layout_embedding_output = self.layout_embeddings(bbox=bbox, position_ids=position_ids) - - encoder_outputs = self.encoder( - embedding_output, - layout_embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = encoder_outputs[0] - pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPooling( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - -@add_start_docstrings( - """ - LiLT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled - output) e.g. for GLUE tasks. 
- """, - LILT_START_DOCSTRING, -) -class LiltForSequenceClassification(LiltPreTrainedModel): - # Copied from transformers.models.roberta.modeling_roberta.RobertaForSequenceClassification.__init__ with Roberta->Lilt, roberta->lilt - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.config = config - - self.lilt = LiltModel(config, add_pooling_layer=False) - self.classifier = LiltClassificationHead(config) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(LILT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=SequenceClassifierOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - bbox: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], SequenceClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - - Returns: - - Examples: - - ```python - >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification - >>> from datasets import load_dataset - - >>> tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base") - >>> model = AutoModelForSequenceClassification.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base") - - >>> dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train") - >>> example = dataset[0] - >>> words = example["tokens"] - >>> boxes = example["bboxes"] - - >>> encoding = tokenizer(words, boxes=boxes, return_tensors="pt") - - >>> outputs = model(**encoding) - >>> predicted_class_idx = outputs.logits.argmax(-1).item() - >>> predicted_class = model.config.id2label[predicted_class_idx] - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.lilt( - input_ids, - bbox=bbox, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = outputs[0] - logits = self.classifier(sequence_output) - - loss = None - if labels is not None: - # move labels to correct device to enable model parallelism - labels = labels.to(logits.device) - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - 
loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Lilt Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for - Named-Entity-Recognition (NER) tasks. - """, - LILT_START_DOCSTRING, -) -class LiltForTokenClassification(LiltPreTrainedModel): - # Copied from transformers.models.roberta.modeling_roberta.RobertaForTokenClassification.__init__ with Roberta->Lilt, roberta->lilt - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.lilt = LiltModel(config, add_pooling_layer=False) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.classifier = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(LILT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=TokenClassifierOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - bbox: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], TokenClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. 
- - Returns: - - Examples: - - ```python - >>> from transformers import AutoTokenizer, AutoModelForTokenClassification - >>> from datasets import load_dataset - - >>> tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base") - >>> model = AutoModelForTokenClassification.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base") - - >>> dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train") - >>> example = dataset[0] - >>> words = example["tokens"] - >>> boxes = example["bboxes"] - - >>> encoding = tokenizer(words, boxes=boxes, return_tensors="pt") - - >>> outputs = model(**encoding) - >>> predicted_class_indices = outputs.logits.argmax(-1) - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.lilt( - input_ids, - bbox=bbox, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - sequence_output = self.dropout(sequence_output) - logits = self.classifier(sequence_output) - - loss = None - if labels is not None: - # move labels to correct device to enable model parallelism - labels = labels.to(logits.device) - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -# Copied from transformers.models.roberta.modeling_roberta.RobertaClassificationHead with Roberta->Lilt -class LiltClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.out_proj = nn.Linear(config.hidden_size, config.num_labels) - - def forward(self, features, **kwargs): - x = features[:, 0, :] # take token (equiv. to [CLS]) - x = self.dropout(x) - x = self.dense(x) - x = torch.tanh(x) - x = self.dropout(x) - x = self.out_proj(x) - return x - - -@add_start_docstrings( - """ - Lilt Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear - layers on top of the hidden-states output to compute `span start logits` and `span end logits`). 
- """, - LILT_START_DOCSTRING, -) -class LiltForQuestionAnswering(LiltPreTrainedModel): - # Copied from transformers.models.roberta.modeling_roberta.RobertaForQuestionAnswering.__init__ with Roberta->Lilt, roberta->lilt - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.lilt = LiltModel(config, add_pooling_layer=False) - self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(LILT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=QuestionAnsweringModelOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - bbox: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - start_positions: Optional[torch.LongTensor] = None, - end_positions: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], QuestionAnsweringModelOutput]: - r""" - start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. 
- - Returns: - - Examples: - - ```python - >>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering - >>> from datasets import load_dataset - - >>> tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base") - >>> model = AutoModelForQuestionAnswering.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base") - - >>> dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train") - >>> example = dataset[0] - >>> words = example["tokens"] - >>> boxes = example["bboxes"] - - >>> encoding = tokenizer(words, boxes=boxes, return_tensors="pt") - - >>> outputs = model(**encoding) - - >>> answer_start_index = outputs.start_logits.argmax() - >>> answer_end_index = outputs.end_logits.argmax() - - >>> predict_answer_tokens = encoding.input_ids[0, answer_start_index : answer_end_index + 1] - >>> predicted_answer = tokenizer.decode(predict_answer_tokens) - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.lilt( - input_ids, - bbox=bbox, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = logits.split(1, dim=-1) - start_logits = start_logits.squeeze(-1).contiguous() - end_logits = end_logits.squeeze(-1).contiguous() - - total_loss = None - if start_positions is not None and end_positions is not None: - # If we are on multi-GPU, split add a dimension - if len(start_positions.size()) > 1: - start_positions = start_positions.squeeze(-1) - if len(end_positions.size()) > 1: - end_positions = end_positions.squeeze(-1) - # sometimes the start/end positions are outside our model inputs, we ignore these terms - ignored_index = start_logits.size(1) - start_positions = start_positions.clamp(0, ignored_index) - end_positions = end_positions.clamp(0, ignored_index) - - loss_fct = CrossEntropyLoss(ignore_index=ignored_index) - start_loss = loss_fct(start_logits, start_positions) - end_loss = loss_fct(end_logits, end_positions) - total_loss = (start_loss + end_loss) / 2 - - if not return_dict: - output = (start_logits, end_logits) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return QuestionAnsweringModelOutput( - loss=total_loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/luke/configuration_luke.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/luke/configuration_luke.py deleted file mode 100644 index 6e5c99900bbdf51864dced99adf3160361e27d40..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/luke/configuration_luke.py +++ /dev/null @@ -1,137 +0,0 @@ -# coding=utf-8 -# Copyright Studio Ousia and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" LUKE configuration""" - -from ...configuration_utils import PretrainedConfig -from ...utils import logging - - -logger = logging.get_logger(__name__) - -LUKE_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "studio-ousia/luke-base": "https://huggingface.co/studio-ousia/luke-base/resolve/main/config.json", - "studio-ousia/luke-large": "https://huggingface.co/studio-ousia/luke-large/resolve/main/config.json", -} - - -class LukeConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`LukeModel`]. It is used to instantiate a LUKE - model according to the specified arguments, defining the model architecture. Instantiating a configuration with the - defaults will yield a similar configuration to that of the LUKE - [studio-ousia/luke-base](https://huggingface.co/studio-ousia/luke-base) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - - Args: - vocab_size (`int`, *optional*, defaults to 30522): - Vocabulary size of the LUKE model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`LukeModel`]. - entity_vocab_size (`int`, *optional*, defaults to 500000): - Entity vocabulary size of the LUKE model. Defines the number of different entities that can be represented - by the `entity_ids` passed when calling [`LukeModel`]. - hidden_size (`int`, *optional*, defaults to 768): - Dimensionality of the encoder layers and the pooler layer. - entity_emb_size (`int`, *optional*, defaults to 256): - The number of dimensions of the entity embedding. - num_hidden_layers (`int`, *optional*, defaults to 12): - Number of hidden layers in the Transformer encoder. - num_attention_heads (`int`, *optional*, defaults to 12): - Number of attention heads for each attention layer in the Transformer encoder. - intermediate_size (`int`, *optional*, defaults to 3072): - Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. - hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): - The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, - `"relu"`, `"silu"` and `"gelu_new"` are supported. - hidden_dropout_prob (`float`, *optional*, defaults to 0.1): - The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): - The dropout ratio for the attention probabilities. - max_position_embeddings (`int`, *optional*, defaults to 512): - The maximum sequence length that this model might ever be used with. Typically set this to something large - just in case (e.g., 512 or 1024 or 2048). - type_vocab_size (`int`, *optional*, defaults to 2): - The vocabulary size of the `token_type_ids` passed when calling [`LukeModel`]. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. 
- layer_norm_eps (`float`, *optional*, defaults to 1e-12): - The epsilon used by the layer normalization layers. - use_entity_aware_attention (`bool`, defaults to `True`): - Whether or not the model should use the entity-aware self-attention mechanism proposed in [LUKE: Deep - Contextualized Entity Representations with Entity-aware Self-attention (Yamada et - al.)](https://arxiv.org/abs/2010.01057). - classifier_dropout (`float`, *optional*): - The dropout ratio for the classification head. - - Examples: - - ```python - >>> from transformers import LukeConfig, LukeModel - - >>> # Initializing a LUKE configuration - >>> configuration = LukeConfig() - - >>> # Initializing a model from the configuration - >>> model = LukeModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - model_type = "luke" - - def __init__( - self, - vocab_size=50267, - entity_vocab_size=500000, - hidden_size=768, - entity_emb_size=256, - num_hidden_layers=12, - num_attention_heads=12, - intermediate_size=3072, - hidden_act="gelu", - hidden_dropout_prob=0.1, - attention_probs_dropout_prob=0.1, - max_position_embeddings=512, - type_vocab_size=2, - initializer_range=0.02, - layer_norm_eps=1e-12, - use_entity_aware_attention=True, - classifier_dropout=None, - pad_token_id=1, - bos_token_id=0, - eos_token_id=2, - **kwargs, - ): - """Constructs LukeConfig.""" - super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs) - - self.vocab_size = vocab_size - self.entity_vocab_size = entity_vocab_size - self.hidden_size = hidden_size - self.entity_emb_size = entity_emb_size - self.num_hidden_layers = num_hidden_layers - self.num_attention_heads = num_attention_heads - self.hidden_act = hidden_act - self.intermediate_size = intermediate_size - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.max_position_embeddings = max_position_embeddings - self.type_vocab_size = type_vocab_size - self.initializer_range = initializer_range - self.layer_norm_eps = layer_norm_eps - self.use_entity_aware_attention = use_entity_aware_attention - self.classifier_dropout = classifier_dropout diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/cluster/kmeans.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/cluster/kmeans.py deleted file mode 100644 index 6111ea45e66a15d41b5b904be6f75affd3c4369f..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/cluster/kmeans.py +++ /dev/null @@ -1,201 +0,0 @@ -import math,pdb -import torch,pynvml -from torch.nn.functional import normalize -from time import time -import numpy as np -# device=torch.device("cuda:0") -def _kpp(data: torch.Tensor, k: int, sample_size: int = -1): - """ Picks k points in the data based on the kmeans++ method. - - Parameters - ---------- - data : torch.Tensor - Expect a rank 1 or 2 array. Rank 1 is assumed to describe 1-D - data, rank 2 multidimensional data, in which case one - row is one observation. - k : int - Number of samples to generate. - sample_size : int - sample data to avoid memory overflow during calculation - - Returns - ------- - init : ndarray - A 'k' by 'N' containing the initial centroids. - - References - ---------- - .. [1] D. Arthur and S. Vassilvitskii, "k-means++: the advantages of - careful seeding", Proceedings of the Eighteenth Annual ACM-SIAM Symposium - on Discrete Algorithms, 2007. - .. 
[2] scipy/cluster/vq.py: _kpp - """ - batch_size=data.shape[0] - if batch_size>sample_size: - data = data[torch.randint(0, batch_size,[sample_size], device=data.device)] - dims = data.shape[1] if len(data.shape) > 1 else 1 - init = torch.zeros((k, dims)).to(data.device) - r = torch.distributions.uniform.Uniform(0, 1) - for i in range(k): - if i == 0: - init[i, :] = data[torch.randint(data.shape[0], [1])] - else: - D2 = torch.cdist(init[:i, :][None, :], data[None, :], p=2)[0].amin(dim=0) - probs = D2 / torch.sum(D2) - cumprobs = torch.cumsum(probs, dim=0) - init[i, :] = data[torch.searchsorted(cumprobs, r.sample([1]).to(data.device))] - return init -class KMeansGPU: - ''' - Kmeans clustering algorithm implemented with PyTorch - - Parameters: - n_clusters: int, - Number of clusters - - max_iter: int, default: 100 - Maximum number of iterations - - tol: float, default: 0.0001 - Tolerance - - verbose: int, default: 0 - Verbosity - - mode: {'euclidean', 'cosine'}, default: 'euclidean' - Type of distance measure - - init_method: {'random', 'point', '++'} - Type of initialization - - minibatch: {None, int}, default: None - Batch size of MinibatchKmeans algorithm - if None perform full KMeans algorithm - - Attributes: - centroids: torch.Tensor, shape: [n_clusters, n_features] - cluster centroids - ''' - def __init__(self, n_clusters, max_iter=200, tol=1e-4, verbose=0, mode="euclidean",device=torch.device("cuda:0")): - self.n_clusters = n_clusters - self.max_iter = max_iter - self.tol = tol - self.verbose = verbose - self.mode = mode - self.device=device - pynvml.nvmlInit() - gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(device.index) - info = pynvml.nvmlDeviceGetMemoryInfo(gpu_handle) - self.minibatch=int(33e6/self.n_clusters*info.free/ 1024 / 1024 / 1024) - print("free_mem/GB:",info.free/ 1024 / 1024 / 1024,"minibatch:",self.minibatch) - - @staticmethod - def cos_sim(a, b): - """ - Compute cosine similarity of 2 sets of vectors - - Parameters: - a: torch.Tensor, shape: [m, n_features] - - b: torch.Tensor, shape: [n, n_features] - """ - return normalize(a, dim=-1) @ normalize(b, dim=-1).transpose(-2, -1) - - @staticmethod - def euc_sim(a, b): - """ - Compute euclidean similarity of 2 sets of vectors - Parameters: - a: torch.Tensor, shape: [m, n_features] - b: torch.Tensor, shape: [n, n_features] - """ - return 2 * a @ b.transpose(-2, -1) -(a**2).sum(dim=1)[..., :, None] - (b**2).sum(dim=1)[..., None, :] - - def max_sim(self, a, b): - """ - Compute maximum similarity (or minimum distance) of each vector - in a with all of the vectors in b - Parameters: - a: torch.Tensor, shape: [m, n_features] - b: torch.Tensor, shape: [n, n_features] - """ - if self.mode == 'cosine': - sim_func = self.cos_sim - elif self.mode == 'euclidean': - sim_func = self.euc_sim - sim = sim_func(a, b) - max_sim_v, max_sim_i = sim.max(dim=-1) - return max_sim_v, max_sim_i - - def fit_predict(self, X): - """ - Combination of fit() and predict() methods. - This is faster than calling fit() and predict() seperately. 
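        A minimal usage sketch for this class. The feature tensor, feature dimension, and cluster
        count below are illustrative assumptions rather than values taken from this repository, and a
        CUDA-capable GPU is required because the constructor queries pynvml for free device memory:

        ```python
        import torch

        # hypothetical content features: [n_samples, n_features] float tensor kept on the CPU
        features = torch.randn(100_000, 256)

        kmeans = KMeansGPU(n_clusters=10_000, max_iter=200, tol=1e-4, verbose=1,
                           mode="euclidean", device=torch.device("cuda:0"))
        labels = kmeans.fit_predict(features)   # cluster index per sample of the last processed batch
        centroids = kmeans.centroids            # [n_clusters, n_features]
        ```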
-        Parameters:
-        X: torch.Tensor, shape: [n_samples, n_features]
-        centroids: {torch.Tensor, None}, default: None
-            if given, centroids will be initialized with given tensor
-            if None, centroids will be randomly chosen from X
-        Return:
-        labels: torch.Tensor, shape: [n_samples]
-
-        mini_=33kk/k*remain
-        mini=min(mini_,fea_shape)
-        offset=log2(k/1000)*1.5
-        kpp_all=min(mini_*10/offset,fea_shape)
-        kpp_sample=min(mini_/12/offset,fea_shape)
-        """
-        assert isinstance(X, torch.Tensor), "input must be torch.Tensor"
-        assert X.dtype in [torch.half, torch.float, torch.double], "input must be floating point"
-        assert X.ndim == 2, "input must be a 2d tensor with shape: [n_samples, n_features] "
-        # print("verbose:%s"%self.verbose)
-
-        offset = np.power(1.5,np.log(self.n_clusters / 1000))/np.log(2)
-        with torch.no_grad():
-            batch_size= X.shape[0]
-            # print(self.minibatch, int(self.minibatch * 10 / offset), batch_size)
-            start_time = time()
-            if (self.minibatch*10//offset< batch_size):
-                x = X[torch.randint(0, batch_size,[int(self.minibatch*10/offset)])].to(self.device)
-            else:
-                x = X.to(self.device)
-            # print(x.device)
-            self.centroids = _kpp(x, self.n_clusters, min(int(self.minibatch/12/offset),batch_size))
-            del x
-            torch.cuda.empty_cache()
-            # self.centroids = self.centroids.to(self.device)
-            num_points_in_clusters = torch.ones(self.n_clusters, device=self.device, dtype=X.dtype)  # all ones
-            closest = None  # [3098036] int64
-            if (self.minibatch >= batch_size // 2 and self.minibatch < batch_size):
-                # the minibatch covers at least half the data: move one random subset to the GPU up front
-                X = X[torch.randint(0, batch_size, [self.minibatch])].to(self.device)
-            elif self.minibatch >= batch_size:
-                X = X.to(self.device)
-            for i in range(self.max_iter):
-                iter_time = time()
-                if self.minibatch < batch_size // 2:
-                    # small minibatch: resample from host memory on every iteration
-                    x = X[torch.randint(0, batch_size, [self.minibatch])].to(self.device)
-                else:
-                    x = X
-                closest = self.max_sim(a=x, b=self.centroids)[1]
-                matched_clusters, counts = closest.unique(return_counts=True)
-                expanded_closest = closest[None].expand(self.n_clusters, -1)
-                mask = (expanded_closest == torch.arange(self.n_clusters, device=self.device)[:, None]).to(x.dtype)
-                c_grad = mask @ x / mask.sum(-1)[..., :, None]
-                c_grad[c_grad != c_grad] = 0  # empty clusters produce NaN (0/0); reset them to zero
-                error = (c_grad - self.centroids).pow(2).sum()
-                lr = 1 / num_points_in_clusters[:, None] * 0.9 + 0.1
-                num_points_in_clusters[matched_clusters] += counts
-                self.centroids = self.centroids * (1 - lr) + c_grad * lr
-                if self.verbose >= 2:
-                    print('iter:', i, 'error:', error.item(), 'time spent:', round(time()-iter_time, 4))
-                if error <= self.tol:
-                    break
-
-            if self.verbose >= 1:
-                print(f'used {i+1} iterations ({round(time()-start_time, 4)}s) to cluster {batch_size} items into {self.n_clusters} clusters')
-        return closest
diff --git a/spaces/youngtsai/Mandarin-TTS/bert/ProsodyModel.py b/spaces/youngtsai/Mandarin-TTS/bert/ProsodyModel.py
deleted file mode 100644
index 5f305b41894a4a8cec05c23dcdd29a9b939b748b..0000000000000000000000000000000000000000
--- a/spaces/youngtsai/Mandarin-TTS/bert/ProsodyModel.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import os
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from transformers import BertModel, BertConfig, BertTokenizer
-
-
-class CharEmbedding(nn.Module):
-    def __init__(self, model_dir):
-        super().__init__()
-        self.tokenizer = BertTokenizer.from_pretrained(model_dir)
-        self.bert_config = BertConfig.from_pretrained(model_dir)
-        self.hidden_size = self.bert_config.hidden_size
-        self.bert = BertModel(self.bert_config)
-        self.proj = nn.Linear(self.hidden_size, 256)
-        self.linear = nn.Linear(256, 3)
-
-    def text2Token(self, text):
-        token = self.tokenizer.tokenize(text)
-        txtid = self.tokenizer.convert_tokens_to_ids(token)
-        return txtid
-
-    def forward(self, inputs_ids, inputs_masks, tokens_type_ids):
-        out_seq = self.bert(input_ids=inputs_ids,
-                            attention_mask=inputs_masks,
-                            token_type_ids=tokens_type_ids)[0]
-        out_seq = self.proj(out_seq)
-        return out_seq
-
-
-class TTSProsody(object):
-    def __init__(self, path, device):
-        self.device = device
-        self.char_model = CharEmbedding(path)
-        self.char_model.load_state_dict(
-            torch.load(
-                os.path.join(path, 'prosody_model.pt'),
-                map_location="cpu"
-            ),
-            strict=False
-        )
-        self.char_model.eval()
-        self.char_model.to(self.device)
-
-    def get_char_embeds(self, text):
-        input_ids = self.char_model.text2Token(text)
-        input_masks = [1] * len(input_ids)
-        
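# single-example BERT inputs: all-zeros token-type ids to pair with the all-ones attention mask above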
type_ids = [0] * len(input_ids) - input_ids = torch.LongTensor([input_ids]).to(self.device) - input_masks = torch.LongTensor([input_masks]).to(self.device) - type_ids = torch.LongTensor([type_ids]).to(self.device) - - with torch.no_grad(): - char_embeds = self.char_model( - input_ids, input_masks, type_ids).squeeze(0).cpu() - return char_embeds - - def expand_for_phone(self, char_embeds, length): # length of phones for char - assert char_embeds.size(0) == len(length) - expand_vecs = list() - for vec, leng in zip(char_embeds, length): - vec = vec.expand(leng, -1) - expand_vecs.append(vec) - expand_embeds = torch.cat(expand_vecs, 0) - assert expand_embeds.size(0) == sum(length) - return expand_embeds.numpy() - - -if __name__ == "__main__": - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - prosody = TTSProsody('./bert/', device) - while True: - text = input("请输入文本:") - prosody.get_char_embeds(text) diff --git a/spaces/ysharma/text-to-image-to-video/util2.py b/spaces/ysharma/text-to-image-to-video/util2.py deleted file mode 100644 index 5f08466822f4f500d35593d5f0688a929fb3ef82..0000000000000000000000000000000000000000 --- a/spaces/ysharma/text-to-image-to-video/util2.py +++ /dev/null @@ -1,267 +0,0 @@ -# adopted from -# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py -# and -# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -# and -# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py -# -# thanks! - - -import os -import math -import torch -import torch.nn as nn -import numpy as np -from einops import repeat - -from util import instantiate_from_config - - -def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if schedule == "linear": - betas = ( - torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2 - ) - - elif schedule == "cosine": - timesteps = ( - torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s - ) - alphas = timesteps / (1 + cosine_s) * np.pi / 2 - alphas = torch.cos(alphas).pow(2) - alphas = alphas / alphas[0] - betas = 1 - alphas[1:] / alphas[:-1] - betas = np.clip(betas, a_min=0, a_max=0.999) - - elif schedule == "sqrt_linear": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) - elif schedule == "sqrt": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5 - else: - raise ValueError(f"schedule '{schedule}' unknown.") - return betas.numpy() - - -def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True): - if ddim_discr_method == 'uniform': - c = num_ddpm_timesteps // num_ddim_timesteps - ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c))) - elif ddim_discr_method == 'quad': - ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int) - else: - raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"') - - # assert ddim_timesteps.shape[0] == num_ddim_timesteps - # add one to get the final alpha values right (the ones from first scale to data during sampling) - steps_out = ddim_timesteps + 1 - if verbose: - print(f'Selected timesteps for ddim sampler: {steps_out}') - return steps_out - - -def 
make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True): - # select alphas for computing the variance schedule - alphas = alphacums[ddim_timesteps] - alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist()) - - # according the the formula provided in https://arxiv.org/abs/2010.02502 - sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev)) - if verbose: - print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}') - print(f'For the chosen value of eta, which is {eta}, ' - f'this results in the following sigma_t schedule for ddim sampler {sigmas}') - return sigmas, alphas, alphas_prev - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. - """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -def extract_into_tensor(a, t, x_shape): - b, *_ = t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - -def checkpoint(func, inputs, params, flag): - """ - Evaluate a function without caching intermediate activations, allowing for - reduced memory at the expense of extra compute in the backward pass. - :param func: the function to evaluate. - :param inputs: the argument sequence to pass to `func`. - :param params: a sequence of parameters `func` depends on but does not - explicitly take as arguments. - :param flag: if False, disable gradient checkpointing. - """ - if flag: - args = tuple(inputs) + tuple(params) - return CheckpointFunction.apply(func, len(inputs), *args) - else: - return func(*inputs) - - -class CheckpointFunction(torch.autograd.Function): - @staticmethod - def forward(ctx, run_function, length, *args): - ctx.run_function = run_function - ctx.input_tensors = list(args[:length]) - ctx.input_params = list(args[length:]) - - with torch.no_grad(): - output_tensors = ctx.run_function(*ctx.input_tensors) - return output_tensors - - @staticmethod - def backward(ctx, *output_grads): - ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors] - with torch.enable_grad(): - # Fixes a bug where the first op in run_function modifies the - # Tensor storage in place, which is not allowed for detach()'d - # Tensors. - shallow_copies = [x.view_as(x) for x in ctx.input_tensors] - output_tensors = ctx.run_function(*shallow_copies) - input_grads = torch.autograd.grad( - output_tensors, - ctx.input_tensors + ctx.input_params, - output_grads, - allow_unused=True, - ) - del ctx.input_tensors - del ctx.input_params - del output_tensors - return (None, None) + input_grads - - -def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False): - """ - Create sinusoidal timestep embeddings. - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. 
- :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. - """ - if not repeat_only: - half = dim // 2 - freqs = torch.exp( - -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].float() * freqs[None] - embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) - if dim % 2: - embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1) - else: - embedding = repeat(timesteps, 'b -> b d', d=dim) - return embedding - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def scale_module(module, scale): - """ - Scale the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().mul_(scale) - return module - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. - """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def normalization(channels): - """ - Make a standard normalization layer. - :param channels: number of input channels. - :return: an nn.Module for normalization. - """ - return GroupNorm32(32, channels) - - -# PyTorch 1.7 has SiLU, but we support PyTorch 1.5. -class SiLU(nn.Module): - def forward(self, x): - return x * torch.sigmoid(x) - - -class GroupNorm32(nn.GroupNorm): - def forward(self, x): - return super().forward(x.float()).type(x.dtype) - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module. - """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def linear(*args, **kwargs): - """ - Create a linear module. - """ - return nn.Linear(*args, **kwargs) - - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. 
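    For instance, these dimension-dispatch helpers behave like the corresponding torch.nn modules;
    a small illustrative sketch (the channel sizes and input shape are made up for the example):

    ```python
    import torch
    import torch.nn as nn

    # conv_nd(2, ...) builds an nn.Conv2d, avg_pool_nd(2, ...) builds an nn.AvgPool2d
    block = nn.Sequential(
        conv_nd(2, 64, 128, 3, padding=1),   # 3x3 conv: 64 -> 128 channels, spatial size preserved
        avg_pool_nd(2, kernel_size=2),       # 2x2 average pooling: halves height and width
    )
    x = torch.randn(1, 64, 32, 32)
    print(block(x).shape)                    # torch.Size([1, 128, 16, 16])
    ```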
- """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -class HybridConditioner(nn.Module): - - def __init__(self, c_concat_config, c_crossattn_config): - super().__init__() - self.concat_conditioner = instantiate_from_config(c_concat_config) - self.crossattn_conditioner = instantiate_from_config(c_crossattn_config) - - def forward(self, c_concat, c_crossattn): - c_concat = self.concat_conditioner(c_concat) - c_crossattn = self.crossattn_conditioner(c_crossattn) - return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]} - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() \ No newline at end of file diff --git a/spaces/ywqisok/ysyy/mel_processing.py b/spaces/ywqisok/ysyy/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/ywqisok/ysyy/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < 
-1.:
-        print('min value is ', torch.min(y))
-    if torch.max(y) > 1.:
-        print('max value is ', torch.max(y))
-
-    global mel_basis, hann_window
-    dtype_device = str(y.dtype) + '_' + str(y.device)
-    fmax_dtype_device = str(fmax) + '_' + dtype_device
-    wnsize_dtype_device = str(win_size) + '_' + dtype_device
-    if fmax_dtype_device not in mel_basis:
-        mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
-        mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
-    if wnsize_dtype_device not in hann_window:
-        hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
-    y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
-    y = y.squeeze(1)
-
-    spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
-                      center=center, pad_mode='reflect', normalized=False, onesided=True)
-
-    spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
-    spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
-    spec = spectral_normalize_torch(spec)
-
-    return spec
diff --git a/spaces/zhang-wei-jian/docker/Dockerfile b/spaces/zhang-wei-jian/docker/Dockerfile
deleted file mode 100644
index e1b6754a9207dff594d069c108edeff7ae004208..0000000000000000000000000000000000000000
--- a/spaces/zhang-wei-jian/docker/Dockerfile
+++ /dev/null
@@ -1,22 +0,0 @@
-# Base image
-# FROM node:18-alpine
-FROM node:20-alpine3.16
-
-# Set the working directory
-WORKDIR /app
-
-# Copy the package.json and package-lock.json files
-COPY package*.json ./
-
-
-# Install dependencies
-RUN npm install
-
-# Copy the source code
-COPY . .
-
-# Expose the app port (informational only; it just documents the port the app inside the project starts on)
-EXPOSE 8888
-
-# Run the app
-CMD [ "npm", "run", "dev" ]
diff --git a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/src/cache.spec.ts b/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/src/cache.spec.ts
deleted file mode 100644
index 49e1cb276993af3991196dca44a43f9f14da17a9..0000000000000000000000000000000000000000
--- a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/src/cache.spec.ts
+++ /dev/null
@@ -1,17 +0,0 @@
-import { createConfigDir, getLastUpdate, saveLastUpdate } from './cache';
-
-createConfigDir();
-
-jest.useFakeTimers().setSystemTime(new Date('2022-01-01'));
-
-const fakeTime = new Date('2022-01-01').getTime();
-
-test('can save update then get the update details', () => {
-  saveLastUpdate('test');
-  expect(getLastUpdate('test')).toBe(fakeTime);
-});
-
-test('prefixed module can save update then get the update details', () => {
-  saveLastUpdate('@alexbrazier/test');
-  expect(getLastUpdate('@alexbrazier/test')).toBe(fakeTime);
-});
diff --git a/spaces/zhoupin30/zhoupin30/src/components/chat-panel.tsx b/spaces/zhoupin30/zhoupin30/src/components/chat-panel.tsx
deleted file mode 100644
index 56b2112bd75ba08134383871177851fa2e3f43a4..0000000000000000000000000000000000000000
--- a/spaces/zhoupin30/zhoupin30/src/components/chat-panel.tsx
+++ /dev/null
@@ -1,153 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import Image from 'next/image'
-import Textarea from 'react-textarea-autosize'
-import { useAtomValue } from 'jotai'
-import { useEnterSubmit } from '@/lib/hooks/use-enter-submit'
-import { cn } from '@/lib/utils'
-
-import BrushIcon from '@/assets/images/brush.svg'
-import ChatIcon from '@/assets/images/chat.svg'
-import VisualSearchIcon from '@/assets/images/visual-search.svg'
-import SendIcon from '@/assets/images/send.svg'
-import PinIcon from '@/assets/images/pin.svg'
-import PinFillIcon from 
'@/assets/images/pin-fill.svg' - -import { useBing } from '@/lib/hooks/use-bing' -import { voiceListenAtom } from '@/state' -import Voice from './voice' -import { ChatImage } from './chat-image' -import { ChatAttachments } from './chat-attachments' - -export interface ChatPanelProps - extends Pick< - ReturnType, - | 'generating' - | 'input' - | 'setInput' - | 'sendMessage' - | 'resetConversation' - | 'isSpeaking' - | 'attachmentList' - | 'uploadImage' - | 'setAttachmentList' - > { - id?: string - className?: string -} - -export function ChatPanel({ - isSpeaking, - generating, - input, - setInput, - className, - sendMessage, - resetConversation, - attachmentList, - uploadImage, - setAttachmentList -}: ChatPanelProps) { - const inputRef = React.useRef(null) - const {formRef, onKeyDown} = useEnterSubmit() - const [focused, setFocused] = React.useState(false) - const [active, setActive] = React.useState(false) - const [pin, setPin] = React.useState(false) - const [tid, setTid] = React.useState() - const voiceListening = useAtomValue(voiceListenAtom) - - const setBlur = React.useCallback(() => { - clearTimeout(tid) - setActive(false) - const _tid = setTimeout(() => setFocused(false), 2000); - setTid(_tid) - }, [tid]) - - const setFocus = React.useCallback(() => { - setFocused(true) - setActive(true) - clearTimeout(tid) - inputRef.current?.focus() - }, [tid]) - - React.useEffect(() => { - if (input) { - setFocus() - } - }, [input, setFocus]) - - return ( -
      { - e.preventDefault() - if (generating) { - return; - } - if (!input?.trim()) { - return - } - setInput('') - setPin(false) - await sendMessage(input) - }} - ref={formRef} - > -
      -
      -
      -
      -
      -
      -
      - -
      -
      -
      -
      - -